EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS
NASA Technical Reports Server (NTRS)
Jayroe, R. R.
1994-01-01
Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the tasks of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes, the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K 8-bit bytes. This collection of image processing and evaluation routines was developed in 1979.
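The evaluation criteria named above lend themselves to compact implementations. A minimal sketch, in MATLAB rather than the package's FORTRAN IV, of two of them: a contingency matrix and average information transfer (mutual information) between a classified map and ground truth. The function name and the assumption of 1-based integer class labels are ours, not the package's.

function [C, I] = classAgreement(predicted, truth, nClasses)
    % Contingency matrix: C(i,j) = number of pixels with truth label i
    % that were classified as label j (labels assumed to be 1..nClasses).
    C  = accumarray([truth(:), predicted(:)], 1, [nClasses, nClasses]);
    P  = C / sum(C(:));                        % joint probability estimate
    px = sum(P, 2);                            % marginal of truth labels
    py = sum(P, 1);                            % marginal of predicted labels
    PI = px * py;                              % product of marginals (outer product)
    nz = P > 0;                                % avoid log2(0)
    I  = sum(P(nz) .* log2(P(nz) ./ PI(nz)));  % average information transfer, bits
end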
An invertebrate embryologist's guide to routine processing of confocal images.
von Dassow, George
2014-01-01
It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
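The operations IDAMS chains together through its task cards are standard image manipulations. A sketch of the same chain in modern MATLAB terms (Image Processing Toolbox assumed; IDAMS itself long predates MATLAB), purely to illustrate the steps:

img      = im2double(imread('cameraman.tif'));   % grayscale test image shipped with the toolbox
kernel   = fspecial('average', 5);               % 5x5 smoothing kernel
filtered = conv2(img, kernel, 'same');           % convolution filtering
expanded = imresize(filtered, 2, 'bilinear');    % image expansion (2x)
spectrum = fftshift(fft2(expanded));             % centered fast Fourier transform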
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoaf, S.; APS Engineering Support Division
A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
Fission gas bubble identification using MATLAB's image processing toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.; King, J.; Keiser, Jr., D.
2016-06-08
Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
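A minimal MATLAB re-creation of the described pipeline, assuming the Image Processing Toolbox. The built-in adaptthresh is a local adaptive threshold standing in for the Sauvola method (which is not built in), the file name is hypothetical, and voids are assumed darker than the fuel matrix:

img    = im2double(imread('micrograph.png'));   % hypothetical micrograph file
smooth = imbilatfilt(img);                      % edge-preserving bilateral filter
T      = adaptthresh(smooth, 0.5);              % local adaptive threshold (Sauvola-like)
voids  = ~imbinarize(smooth, T);                % voids assumed darker than the matrix
stats  = regionprops(voids, 'Area', 'EquivDiameter');
voidCount = numel(stats);                       % total fission gas void count
meanSize  = mean([stats.EquivDiameter]);        % mean void size, pixels
porosity  = sum([stats.Area]) / numel(voids);   % average porosity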
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Skala, Melissa C.
2014-02-01
The heterogeneity of genotypes and phenotypes within cancers is correlated with disease progression and drug-resistant cellular sub-populations. Therefore, robust techniques capable of probing majority and minority cell populations are important both for cancer diagnostics and therapy monitoring. Herein, we present a modified CellProfiler routine to isolate cytoplasmic fluorescence signal on a single cell level from high resolution auto-fluorescence microscopic images.
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present in an image or images vary in their characteristics due to different acquisition conditions. Parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing increasingly challenging image distortions, which enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
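The feedback idea reduces to a loop that scores candidate segmentations and adjusts parameters accordingly. A deliberately simplified sketch, using an abstract expected object count as the feedback criterion (our simplification, not the paper's exact quality measure):

function bestT = adaptThreshold(img, expectedCount)
    % Sweep a threshold parameter and keep the value whose segmentation
    % best matches the abstract ground truth (an expected object count).
    bestErr = inf;
    bestT   = 0.5;
    for T = 0.05:0.05:0.95
        bw  = imbinarize(img, T);                  % candidate segmentation
        cc  = bwconncomp(bw);                      % connected components
        err = abs(cc.NumObjects - expectedCount);  % feedback signal
        if err < bestErr
            bestErr = err;
            bestT   = T;
        end
    end
end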
Subband/Transform MATLAB Functions For Processing Images
NASA Technical Reports Server (NTRS)
Glover, D.
1995-01-01
SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB (TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions can be cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems; for example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
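A sketch of block-transform subband generation in the spirit of SUBTRANS (our reconstruction, not the shipped routines): apply a 2D DCT to each 8x8 block, then regroup same-index coefficients across blocks into spatial-frequency subbands. Image Processing Toolbox assumed.

img   = im2double(imread('cameraman.tif'));        % image size divisible by 8
coeff = blockproc(img, [8 8], @(b) dct2(b.data));  % per-block 2D DCT
% Subband (u,v) collects coefficient (u,v) from every 8x8 block
subband = @(u, v) coeff(u:8:end, v:8:end);
dc      = subband(1, 1);                           % lowest-frequency subband
% Cascading: apply blockproc to dc again for a further decomposition
% level, or quantize individual subbands for lossy compression.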
NASA Technical Reports Server (NTRS)
1986-01-01
Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.
An automated dose tracking system for adaptive radiation therapy.
Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J
2018-02-01
The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Data from patient images were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was considered an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging-terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
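The Multiscale Retinex at the core of the VS process has a compact standard form: the log image minus the log of a Gaussian-blurred surround, averaged over several scales. A MATLAB sketch with illustrative weights and scales (the file name and parameter values are our assumptions, not FORESITE settings):

img    = im2double(imread('hazy_aerial.png')) + eps;  % hypothetical aerial frame; eps keeps log finite
scales = [15 80 250];                      % small, medium, large surrounds
msr    = zeros(size(img));
for s = scales
    surround = imgaussfilt(img, s);        % Gaussian surround estimate
    msr = msr + (log(img) - log(surround)) / numel(scales);
end
enhanced = mat2gray(msr);                  % rescale to [0,1] for display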
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally with the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging--terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
Applications of Subsurface Radar for Mine Detection
1990-12-31
software routines for signal/image processing and image display, which are included in the Appendix along with examples of recent images obtained of the... maxima and minima. The case of the M19 showed a main backscattering lobe only 5° wide. These results demonstrate the reliability and consistency of
NASA Technical Reports Server (NTRS)
1982-01-01
A gallery of what might be called the "Best of HCMM" imagery is presented. These 100 images, consisting mainly of Day-VIS, Day-IR, and Night-IR scenes plus a few thermal inertia images, were selected from the collection accrued in the Missions Utilization Office (Code 902) at the Goddard Space Flight Center. They were selected for both their pictorial quality and their information or interest content. Nearly all the images are the computer processed and contrast stretched products routinely produced by the image processing facility at GSFC. Several LANDSAT images, special HCMM images made by HCMM investigators, and maps round out the input.
NASA Technical Reports Server (NTRS)
Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.
1986-01-01
A complete documentation of the software developed in the Communication and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech type signals are included. Also, programs for the zero-memory and block transform quantization in the presence of channel noise are described. Finally, several routines for simulating the performance of image compression algorithms are included.
MMX-I: A data-processing software for multi-modal X-ray imaging and tomography
NASA Astrophysics Data System (ADS)
Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.
2017-06-01
Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabyte) multimodal data sets generated. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of Biology, Life Sciences, Geology, Geobiology), much of which has no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore we have developed a multi-platform (Mac, Windows and Linux 64bit) data processing tool, which is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.
Image-based information, communication, and retrieval
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A. L.
1980-01-01
IBIS/VICAR system combines video image processing and information management. Flexible programs require user to supply only parameters specific to particular application. Special-purpose input/output routines transfer image data with reduced memory requirements. New application programs are easily incorporated. Program is written in FORTRAN IV, Assembler, and OS JCL for batch execution and has been implemented on IBM 360.
Rudmik, Luke; Smith, Kristine A; Soler, Zachary M; Schlosser, Rodney J; Smith, Timothy L
2014-10-01
Idiopathic olfactory loss is a common clinical scenario encountered by otolaryngologists. While trying to allocate limited health care resources appropriately, the decision to obtain a magnetic resonance imaging (MRI) scan to investigate for a rare intracranial abnormality can be difficult. To evaluate the cost-effectiveness of ordering routine MRI in patients with idiopathic olfactory loss. We performed a modeling-based economic evaluation with a time horizon of less than 1 year. Patients included in the analysis had idiopathic olfactory loss defined by no preceding viral illness or head trauma and negative findings of a physical examination and nasal endoscopy. Routine MRI vs no-imaging strategies. We developed a decision tree economic model from the societal perspective. Effectiveness, probability, and cost data were obtained from the published literature. Litigation rates and costs related to a missed diagnosis were obtained from the Physicians Insurers Association of America. A univariate threshold analysis and multivariate probabilistic sensitivity analysis were performed to quantify the degree of certainty in the economic conclusion of the reference case. The comparative groups included those who underwent routine MRI of the brain with contrast alone and those who underwent no brain imaging. The primary outcome was the cost per correct diagnosis of idiopathic olfactory loss. The mean (SD) cost for the MRI strategy totaled $2400.00 ($1717.54) and was effective 100% of the time, whereas the mean (SD) cost for the no-imaging strategy totaled $86.61 ($107.40) and was effective 98% of the time. The incremental cost-effectiveness ratio for the MRI strategy compared with the no-imaging strategy was $115,669.50, which is higher than most acceptable willingness-to-pay thresholds. The threshold analysis demonstrated that when the probability of having a treatable intracranial disease process reached 7.9%, the incremental cost-effectiveness ratio for MRI vs no imaging was $24,654.38. The probabilistic sensitivity analysis demonstrated that the no-imaging strategy was the cost-effective decision with 81% certainty at a willingness-to-pay threshold of $50,000. This economic evaluation suggests that the most cost-effective decision is to not obtain a routine MRI scan of the brain in patients with idiopathic olfactory loss. Outcomes from this study may be used to counsel patients and aid in the decision-making process.
Going fully digital: Perspective of a Dutch academic pathology lab
Stathonikos, Nikolas; Veta, Mitko; Huisman, André; van Diest, Paul J.
2013-01-01
During the last years, whole slide imaging has become more affordable and widely accepted in pathology labs. Digital slides are increasingly being used for digital archiving of routinely produced clinical slides, remote consultation and tumor boards, and quantitative image analysis for research purposes and in education. However, the implementation of a fully digital Pathology Department requires an in-depth look into the suitability of digital slides for routine clinical use (the image quality of the produced digital slides and the factors that affect it) and the required infrastructure to support such use (the storage requirements and integration with lab management and hospital information systems). Optimization of digital pathology workflow requires communication between several systems, which can be facilitated by the use of open standards for digital slide storage and scanner management. Consideration of these aspects along with appropriate validation of the use of digital slides for routine pathology can pave the way for pathology departments to go “fully digital.” In this paper, we summarize our experiences so far in the process of implementing a fully digital workflow at our Pathology Department and the steps that are needed to complete this process. PMID:23858390
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-03-01
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit, DICOM IQSC, has been developed to implement the SC-centered information integration of quantitative analysis for routine practice of nuclear medicine. Preliminary experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
The Radon cumulative distribution transform and its application to image classification
Kolouri, Soheil; Park, Se Rim; Rohde, Gustavo K.
2016-01-01
Invertible image representation methods (transforms) are routinely employed as low-level image processing operations based on which feature extraction and recognition algorithms are developed. Most transforms in current use (e.g. Fourier, Wavelet, etc.) are linear transforms, and, by themselves, are unable to substantially simplify the representation of image classes for classification. Here we describe a nonlinear, invertible, low-level image processing transform based on combining the well known Radon transform for image data, and the 1D Cumulative Distribution Transform proposed earlier. We describe a few of the properties of this new transform, and with both theoretical and experimental results show that it can often render certain problems linearly separable in transform space. PMID:26685245
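Under our reading of the construction, each Radon projection is treated as a 1D density and passed through the cumulative distribution transform against a uniform reference. A MATLAB sketch (Image Processing Toolbox radon assumed; the uniform reference and the interpolation details are our simplifications of the paper's definition):

img   = im2double(imread('shape.png'));   % hypothetical input image
theta = 0:179;
R     = radon(img, theta);                % Radon projections, one per column
Rcdt  = zeros(size(R));
x     = linspace(0, 1, size(R, 1))';      % common sample grid on [0,1]
for k = 1:numel(theta)
    p = R(:, k) / sum(R(:, k));           % normalize projection to a density
    F = cumsum(p);                        % its cumulative distribution
    [Fu, iu] = unique(F);                 % make the CDF invertible
    % CDT sample: where this CDF reaches each uniform-reference level
    Rcdt(:, k) = interp1(Fu, x(iu), x, 'linear', 'extrap');
end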
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption; for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A
2011-01-01
Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms, to provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
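DIRART ships its own DIR algorithm classes; as a stand-in, MATLAB's built-in demons registration illustrates the elementary DIR step such toolkits wrap: estimating a displacement vector field and warping the moving image (file names hypothetical):

fixed  = im2double(imread('ct_planning.png'));   % hypothetical planning image
moving = im2double(imread('ct_daily.png'));      % hypothetical daily image
[dvf, warped] = imregdemons(moving, fixed, 100, ...
    'AccumulatedFieldSmoothing', 1.5);           % displacement field + warped image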
Generative Adversarial Networks for Noise Reduction in Low-Dose CT.
Wolterink, Jelmer M; Leiner, Tim; Viergever, Max A; Isgum, Ivana
2017-12-01
Noise is inherent to low-dose CT acquisition. We propose to train a convolutional neural network (CNN) jointly with an adversarial CNN to estimate routine-dose CT images from low-dose CT images and hence reduce noise. A generator CNN was trained to transform low-dose CT images into routine-dose CT images using voxelwise loss minimization. An adversarial discriminator CNN was simultaneously trained to distinguish the output of the generator from routine-dose CT images. The performance of this discriminator was used as an adversarial loss for the generator. Experiments were performed using CT images of an anthropomorphic phantom containing calcium inserts, as well as patient non-contrast-enhanced cardiac CT images. The phantom and patients were scanned at 20% and 100% routine clinical dose. Three training strategies were compared: the first used only voxelwise loss, the second combined voxelwise loss and adversarial loss, and the third used only adversarial loss. The results showed that training with only voxelwise loss resulted in the highest peak signal-to-noise ratio with respect to reference routine-dose images. However, CNNs trained with adversarial loss captured image statistics of routine-dose images better. Noise reduction improved quantification of low-density calcified inserts in phantom CT images and allowed coronary calcium scoring in low-dose patient CT images with high noise levels. Testing took less than 10 s per CT volume. CNN-based low-dose CT noise reduction in the image domain is feasible. Training with an adversarial network improves the CNN's ability to generate images with an appearance similar to that of reference routine-dose CT images.
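In the usual GAN notation, the three training strategies correspond to different generator objectives (our transcription, with G the generator, D the discriminator, and lambda a weighting factor; the paper's exact formulation may differ):

L_1 = ||G(x_low) - x_routine||^2                                (voxelwise only)
L_2 = lambda * ||G(x_low) - x_routine||^2 - log D(G(x_low))     (combined)
L_3 = -log D(G(x_low))                                          (adversarial only)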
Nonoperative management of blunt renal trauma: Is routine early follow-up imaging necessary?
Malcolm, John B; Derweesh, Ithaar H; Mehrazin, Reza; DiBlasio, Christopher J; Vance, David D; Joshi, Salil; Wake, Robert W; Gold, Robert
2008-01-01
Background: There is no consensus on the role of routine follow-up imaging during nonoperative management of blunt renal trauma. We reviewed our experience with nonoperative management of blunt renal injuries in order to evaluate the utility of routine early follow-up imaging. Methods: We reviewed all cases of blunt renal injury admitted for nonoperative management at our institution between 1/2002 and 1/2006. Data were compiled from chart review, and clinical outcomes were correlated with CT imaging results. Results: 207 patients were identified (210 renal units). American Association for the Surgery of Trauma (AAST) grades I, II, III, IV, and V were assigned to 35 (16%), 66 (31%), 81 (39%), 26 (13%), and 2 (1%) renal units, respectively. 177 (84%) renal units underwent routine follow-up imaging 24–48 hours after admission. In three cases of grade IV renal injury, a ureteral stent was placed after serial imaging demonstrated persistent extravasation. In no other cases did follow-up imaging independently alter clinical management. There were no urologic complications among cases for which follow-up imaging was not obtained. Conclusion: Routine follow-up imaging is unnecessary for blunt renal injuries of grades I–III. Grade IV renovascular injuries can be followed clinically without routine early follow-up imaging, but urine extravasation necessitates serial imaging to guide management decisions. The volume of grade V renal injuries in this study is not sufficient to support or contest the need for routine follow-up imaging. PMID:18768088
Verma, Nishant; Cowperthwaite, Matthew C.; Burnett, Mark G.; Markey, Mia K.
2013-01-01
Differentiating treatment-induced necrosis from tumor recurrence is a central challenge in neuro-oncology. These 2 very different outcomes after brain tumor treatment often appear similarly on routine follow-up imaging studies. They may even manifest with similar clinical symptoms, further confounding an already difficult process for physicians attempting to characterize a new contrast-enhancing lesion appearing on a patient's follow-up imaging. Distinguishing treatment necrosis from tumor recurrence is crucial for diagnosis and treatment planning, and therefore, much effort has been put forth to develop noninvasive methods to differentiate between these disparate outcomes. In this article, we review the latest developments and key findings from research studies exploring the efficacy of structural and functional imaging modalities for differentiating treatment necrosis from tumor recurrence. We discuss the possibility of computational approaches to investigate the usefulness of fine-grained imaging characteristics that are difficult to observe through visual inspection of images. We also propose a flexible treatment-planning algorithm that incorporates advanced functional imaging techniques when indicated by the patient's routine follow-up images and clinical condition. PMID:23325863
PDT - PARTICLE DISPLACEMENT TRACKING SOFTWARE
NASA Technical Reports Server (NTRS)
Wernet, M. P.
1994-01-01
Particle Imaging Velocimetry (PIV) is a quantitative velocity measurement technique for measuring instantaneous planar cross sections of a flow field. The technique offers very high precision (1%) directionally resolved velocity vector estimates, but its use has been limited by high equipment costs and complexity of operation. Particle Displacement Tracking (PDT) is an all-electronic PIV data acquisition and reduction procedure which is simple, fast, and easily implemented. The procedure uses a low power, continuous wave laser and a Charge-Coupled Device (CCD) camera to electronically record the particle images. A frame grabber board in a PC is used for data acquisition and reduction processing. PDT eliminates the need for photographic processing, system costs are moderately low, and reduced data are available within seconds of acquisition. The technique results in velocity estimate accuracies on the order of 5%. The software is fully menu-driven from the acquisition to the reduction and analysis of the data. Options are available to acquire a single image or 5- or 25-field series of images separated in time by multiples of 1/60 second. The user may process each image, specifying its boundaries to remove unwanted glare from the periphery and adjusting its background level to clearly resolve the particle images. Data reduction routines determine the particle image centroids and create time history files. PDT then identifies the velocity vectors which describe the particle movement in the flow field. Graphical data analysis routines are included which allow the user to graph the time history files and display the velocity vector maps, interpolated velocity vector grids, iso-velocity vector contours, and flow streamlines. The PDT data processing software is written in FORTRAN 77 and the data acquisition routine is written in C-Language for 80386-based IBM PC compatibles running MS-DOS v3.0 or higher. Machine requirements include 4 MB RAM (3 MB Extended), a single or multiple frequency RGB monitor (EGA or better), a math co-processor, and a pointing device. The printers supported by the graphical analysis routines are the HP Laserjet+, Series II, and Series III with at least 1.5 MB memory. The data acquisition routines require the EPIX 4-MEG video board and optional 12.5MHz oscillator, and associated EPIX software. Data can be acquired from any CCD or RS-170 compatible video camera with pixel resolution of 600hX400v or better. PDT is distributed on one 5.25 inch 360K MS-DOS format diskette. Due to the use of required proprietary software, executable code is not provided on the distribution media. Compiling the source code requires the Microsoft C v5.1 compiler, Microsoft QuickC v2.0, the Microsoft Mouse Library, EPIX Image Processing Libraries, the Microway NDP-Fortran-386 v2.1 compiler, and the Media Cybernetics HALO Professional Graphics Kernel System. Due to the complexities of the machine requirements, COSMIC strongly recommends the purchase and review of the documentation prior to the purchase of the program. The source code, and sample input and output files are provided in PKZIP format; the PKUNZIP utility is included. PDT was developed in 1990. All trade names used are the property of their respective corporate owners.
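The reduction steps PDT automates are conceptually simple. A MATLAB sketch (the original is FORTRAN/C; file names hypothetical): find particle-image centroids in two frames, pair each with its nearest neighbor in the next frame, and convert displacements to velocity vectors.

dt  = 1/60;                                           % frame separation, seconds
bw1 = imbinarize(im2double(imread('frame1.png')));    % hypothetical frame pair
bw2 = imbinarize(im2double(imread('frame2.png')));
s1  = regionprops(bw1, 'Centroid');  c1 = vertcat(s1.Centroid);
s2  = regionprops(bw2, 'Centroid');  c2 = vertcat(s2.Centroid);
% Squared distances between all centroid pairs (implicit expansion)
d2 = sum(c1.^2, 2) + sum(c2.^2, 2)' - 2 * (c1 * c2');
[~, nn] = min(d2, [], 2);                             % nearest neighbor in frame 2
vel = (c2(nn, :) - c1) / dt;                          % velocity vectors, px/s
quiver(c1(:,1), c1(:,2), vel(:,1), vel(:,2));         % velocity vector map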
Microvax-based data management and reduction system for the regional planetary image facilities
NASA Technical Reports Server (NTRS)
Arvidson, R.; Guinness, E.; Slavney, S.; Weiss, B.
1987-01-01
Presented is a progress report for the Regional Planetary Image Facilities (RPIF) prototype image data management and reduction system being jointly implemented by Washington University and the USGS, Flagstaff. The system will consist of a MicroVAX with a high capacity (approx 300 megabyte) disk drive, a compact disk player, an image display buffer, a videodisk player, USGS image processing software, and SYSTEM 1032 - a commercial relational database management package. The USGS, Flagstaff, will transfer their image processing software including radiometric and geometric calibration routines, to the MicroVAX environment. Washington University will have primary responsibility for developing the database management aspects of the system and for integrating the various aspects into a working system.
Image analysis library software development
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Bryant, J.
1977-01-01
The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.
DHMI: dynamic holographic microscopy interface
NASA Astrophysics Data System (ADS)
He, Xuefei; Zheng, Yujie; Lee, Woei Ming
2016-12-01
Digital holographic microscopy (DHM) is a powerful in-vitro biological imaging tool. In this paper, we report a fully automated off-axis digital holographic microscopy system complete with a graphical user interface in the Matlab environment. The interface primarily includes Fourier domain processing, phase reconstruction, aberration compensation and autofocusing. A variety of imaging operations such as region of interest selection, de-noising modes (filtering and averaging), low frame rate imaging for immediate reconstruction, and a high frame rate imaging routine (27 fps) are implemented to facilitate ease of use.
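The reconstruction chain such an interface wraps follows the standard off-axis recipe: Fourier transform the hologram, isolate one sideband, recenter it, and take the angle of the inverse transform. A sketch with an assumed sideband location and crop size (both are instrument-dependent, here purely illustrative):

holo = im2double(imread('hologram.png'));   % hypothetical off-axis hologram
F    = fftshift(fft2(holo));                % centered spectrum
r = 96; c = 160; w = 40;                    % assumed sideband center and half-width
side = F(r-w:r+w, c-w:c+w);                 % isolate the +1 order
S    = zeros(size(F));
ctr  = floor(size(F) / 2) + 1;
S(ctr(1)-w:ctr(1)+w, ctr(2)-w:ctr(2)+w) = side;   % recenter the sideband
field = ifft2(ifftshift(S));                % complex object field
phase = angle(field);                       % wrapped phase map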
NASA Technical Reports Server (NTRS)
Rawson, R. F.; Hamilton, R. E.; Liskow, C. L.; Dias, A. R.; Jackson, P. L.
1981-01-01
An analysis of synthetic aperture radar data of SP Mountain was undertaken to demonstrate the use of digital image processing techniques to aid in geologic interpretation of SAR data. These data were collected with the ERIM X- and L-band airborne SAR using like- and cross-polarizations. The resulting signal films were used to produce computer compatible tapes, from which four-channel imagery was generated. Slant range-to-ground range and range-azimuth-scale corrections were made in order to facilitate image registration; intensity corrections were also made. Manual interpretation of the imagery showed that L-band represented the geology of the area better than X-band. Several differences between the various images were also noted. Further digital analysis of the corrected data was done for enhancement purposes. This analysis included application of an MSS differencing routine and development of a routine for removal of relief displacement. It was found that accurate registration of the SAR channels is critical to the effectiveness of the differencing routine. Use of the relief displacement algorithm on the SP Mountain data demonstrated the feasibility of the technique.
Increasing the speed of medical image processing in MatLab®
Bister, M; Yap, CS; Ng, KH; Tok, CH
2007-01-01
MatLab® has often been considered an excellent environment for fast algorithm development but is generally perceived as slow and hence not fit for routine medical image processing, where large data sets are now available, e.g., high-resolution CT image sets with typically hundreds of 512x512 slices. Yet, with proper programming practices – vectorization, pre-allocation and specialization – applications in MatLab® can run as fast as in C language. In this article, this point is illustrated with fast implementations of bilinear interpolation, watershed segmentation and volume rendering. PMID:21614269
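Two of the article's practices in miniature, on a toy CT-like volume; both versions compute per-slice means, but the vectorized form avoids the loop and the repeated reallocation:

vol = rand(512, 512, 100);                  % stand-in for a CT series

% Slow: array grown inside a loop, one slice at a time
sliceMean = [];
for k = 1:size(vol, 3)
    s = vol(:, :, k);
    sliceMean(end+1) = mean(s(:));          %#ok<SAGROW> (reallocates every pass)
end

% Fast: reshape once, one vectorized mean, output allocated in one step
sliceMeanFast = mean(reshape(vol, [], size(vol, 3)), 1)';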
A computer system for processing data from routine pulmonary function tests.
Pack, A I; McCusker, R; Moran, F
1977-01-01
In larger pulmonary function laboratories there is a need for computerised techniques of data processing. A flexible computer system, which is used routinely, is described. The system processes data from a relatively large range of tests. Two types of output are produced – one for laboratory purposes, and one for return to the referring physician. The system adds an automatic interpretative report for each set of results. In developing the interpretative system it has been necessary to utilise a number of arbitrary definitions. The present terminology for reporting pulmonary function tests has limitations. The computer interpretation system affords the opportunity to take account of known interaction between measurements of function and different pathological states. PMID:329462
NASA Astrophysics Data System (ADS)
Welter, Petra; Deserno, Thomas M.; Gülpers, Ralph; Wein, Berthold B.; Grouls, Christoph; Günther, Rolf W.
2010-03-01
The large and continuously growing amount of medical image data demands access methods based on content rather than simple text-based queries. The potential benefits of content-based image retrieval (CBIR) systems for computer-aided diagnosis (CAD) are evident and have been proven. Still, CBIR is not a well-established part of the daily routine of radiologists. We have already presented a concept of CBIR integration for the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. The retrieval result is composed as a Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) document. The use of DICOM SR provides interchange with the PACS archive and image viewer. It offers the possibility of further data mining and automatic interpretation of CBIR results. However, existing standard templates do not address the domain of CBIR. We present a design of an SR template customized for CBIR. Our approach is based on the DICOM standard templates and makes use of the mammography and chest CAD SR templates. Reuse of approved SR sub-trees promises a reliable design, which is further adapted to the CBIR domain. We analyze the special CBIR requirements and integrate the new concept of similar images into our template. Our approach also includes the new concept of a set of selected images for defining the processed images for CBIR. A commonly accepted pre-defined template for the presentation and exchange of results in a standardized format promotes the widespread application of CBIR in radiological routine.
TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, H; Jenkins, C; Yu, S
Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1 mm/1 pixel precision. The program to correct for image alignment and determine leaf positions requires a runtime of 21–25 seconds for a single picket, and 44–46 seconds for a group of three pickets, on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
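The logistic-fit step can be reproduced with base MATLAB: fit a sigmoid to a cross-leaf intensity profile and read the leaf-edge position from its midpoint parameter. A self-contained sketch on a synthetic profile (the measured profile, the parameterization, and the use of fminsearch are our assumptions; fminsearch keeps it dependency-free):

% Synthetic stand-in for a measured cross-leaf intensity profile
x = (1:200)';
profile = 1 ./ (1 + exp(-(x - 87.3) / 2)) + 0.02 * randn(size(x));
% Four-parameter sigmoid: amplitude, midpoint, slope, baseline
sigmoid = @(p, xx) p(1) ./ (1 + exp(-(xx - p(2)) / p(3))) + p(4);
cost = @(p) sum((sigmoid(p, x) - profile).^2);      % least-squares objective
p0   = [max(profile) - min(profile), numel(x)/2, 1, min(profile)];
pFit = fminsearch(cost, p0);
edgePx = pFit(2);   % recovered leaf-edge location (~87.3), sub-pixel precision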
Integration of CBIR in radiological routine in accordance with IHE
NASA Astrophysics Data System (ADS)
Welter, Petra; Deserno, Thomas M.; Fischer, Benedikt; Wein, Berthold B.; Ott, Bastian; Günther, Rolf W.
2009-02-01
Increasing use of digital image processing leads to an enormous amount of imaging data. Access to picture archiving and communication systems (PACS), however, is solely textual, leading to sparse retrieval results because of ambiguous or missing image descriptions. Content-based image retrieval (CBIR) systems can improve the clinical diagnostic outcome significantly. However, current CBIR systems are not able to integrate their results with clinical workflow and PACS. Existing communication standards like DICOM and HL7 leave many options for implementation and do not ensure full interoperability. We present a concept of the standardized integration of a CBIR system for the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. This is based on the IHE integration profile 'Post-Processing Workflow' (PPW), defining responsibilities as well as standardized communication and utilizing the DICOM Structured Report (DICOM SR). Because nowadays most PACS and RIS systems are not yet fully IHE compliant to PPW, we also suggest an intermediate approach with the concepts of the CAD-PACS Toolkit. The integration is independent of the particular PACS and RIS system. Therefore, it supports the widespread application of CBIR in radiological routine. As a result, the approach is exemplarily applied to the Image Retrieval in Medical Applications (IRMA) framework.
Document Examination: Applications of Image Processing Systems.
Kopainsky, B
1989-12-01
Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
Cowley, M J; Mantle, J A; Rogers, W J; Russell, R O; Rackley, C E; Logic, J R
1979-06-01
It has been suggested that diffuse Tc-99m pyrophosphate precordial activity may be due to persistent blood-pool activity in routine delayed views during myocardial imaging. To answer this question, we reviewed myocardial scintigrams recorded 60–90 min following the injection of 12–15 mCi of Tc-99m pyrophosphate for the presence of diffuse precordial activity, and compared these with early images of the blood pool in 265 patients. Diffuse activity in the delayed images was identified in 48 patients: in 20 with acute myocardial infarction and in 28 with no evidence of it. Comparison of these routine delayed images with early views of the blood pool revealed two types of patterns. In patients with acute infarction, 95% had delayed images that were distinguishable from blood pool either because the activity was smaller than the early blood pool, or by the presence of localized activity superimposed on diffuse activity identical to blood pool. In those without infarction, 93% had activity distribution in routine delayed views matching that in the early blood-pool images. The usefulness of the diffuse TcPPi precordial activity in myocardial infarction is improved when early blood-pool imaging is used to exclude persistence of blood-pool activity as its cause. Moreover, it does not require additional amounts of radioactivity nor complex computer processing, a feature that may be of value in the community hospital using the technique to "rule out" infarction 24–72 hr after onset of suggestive symptoms.
Characterizing probe performance in the aberration corrected STEM.
Batson, P E
2006-01-01
Sub-Angstrom imaging using the 120 kV IBM STEM is now routine if the probe optics is carefully controlled and fully characterized. However, multislice simulation using at least a frozen phonon approximation is required to understand the Annular Dark Field image contrast. Analysis of silicon dumbbell structures in the [110] and [211] projections illustrates this finding. Using fast image acquisition, atomic movement appears ubiquitous under the electron beam, and may be useful to illuminate atomic-level processes.
Yield of Routine Image-Guided Biopsy of Renal Mass Thermal Ablation Zones: 11-Year Experience.
Wasnik, Ashish P; Higgins, Ellen J; Fox, Giovanna A; Caoili, Elaine M; Davenport, Matthew S
2018-06-19
To determine the yield of routine image-guided core biopsy of renal cell carcinoma (RCC) thermal ablation zones. Institutional review board approval was obtained for this Health Insurance Portability and Accountability Act-compliant quality improvement effort. Routine core biopsy of RCC ablation zones was performed 2 months postablation from July 2003 to December 2014. Routine nicotinamide adenine dinucleotide staining was performed by specialized genitourinary pathologists to assess cell viability. The original purpose of performing routine postablation biopsy was to verify, in addition to imaging, whether the mass was completely treated. Imaging was stratified as negative, indeterminate, or positive for viable malignancy. Histology was stratified as negative, indeterminate, positive, or nondiagnostic for viable malignancy. Histology results were compared to prebiopsy imaging findings. Routine ablation zone biopsy was performed after 50% (146/292) of index ablations (24 cryoablations, 122 radiofrequency ablations), and postablation imaging was performed more often with multiphasic computed tomography than magnetic resonance imaging (100 vs 46, p < 0.0001). When imaging was negative (n = 117), biopsy added no additional information (92% [n = 108] negative, 0.9% [n = 1] indeterminate, 7% [n = 8] nondiagnostic). When imaging was indeterminate (n = 19), 11% (n = 2) of biopsies had viable RCC and 89% (n = 17) were negative. When imaging was positive, biopsy detected viable neoplasm in only 10% (1/10) of cases; 80% (8/10) were negative and 10% (1/10) were nondiagnostic. Routine biopsy of renal ablation zones to validate postablation imaging results was not value-added and therefore was discontinued at the study institution. Copyright © 2018. Published by Elsevier Inc.
An advanced software suite for the processing and analysis of silicon luminescence images
NASA Astrophysics Data System (ADS)
Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.
2017-06-01
Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly for the field of photovoltaics where photoluminescence and electroluminescence imaging is routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in extensive literature, yet these techniques are often only used qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data to more meaningful results. In this work, a custom built, Matlab based software suite is presented which aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to, point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration and iron impurity concentration mapping.
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
Pingali, Sai Ravi; Jewell, Sarah W; Havlat, Luiza; Bast, Martin A; Thompson, Jonathan R; Eastwood, Daniel C; Bartlett, Nancy L; Armitage, James O; Wagner-Johnston, Nina D; Vose, Julie M; Fenske, Timothy S
2014-07-15
The objective of this study was to compare the outcomes of patients with classical Hodgkin lymphoma (cHL) who achieved complete remission with frontline therapy and then underwent either clinical surveillance or routine surveillance imaging. In total, 241 patients who were newly diagnosed with cHL between January 2000 and December 2010 at 3 participating tertiary care centers and achieved complete remission after first-line therapy were retrospectively analyzed. Of these, there were 174 patients in the routine surveillance imaging group and 67 patients in the clinical surveillance group, based on the intended mode of surveillance. In the routine surveillance imaging group, the intended plan of surveillance included computed tomography and/or positron emission tomography scans; whereas, in the clinical surveillance group, the intended plan of surveillance was clinical examination and laboratory studies, and scans were obtained only to evaluate concerning signs or symptoms. Baseline patient characteristics, prognostic features, treatment records, and outcomes were collected. The primary objective was to compare overall survival for patients in both groups. For secondary objectives, we compared the success of second-line therapy and estimated the costs of imaging for each group. After 5 years of follow-up, the overall survival rate was 97% (95% confidence interval, 92%-99%) in the routine surveillance imaging group and 96% (95% confidence interval, 87%-99%) in the clinical surveillance group (P = .41). There were few relapses in each group, and all patients who relapsed in both groups achieved complete remission with second-line therapy. The charges associated with routine surveillance imaging were significantly higher than those for the clinical surveillance strategy, with no apparent clinical benefit. Clinical surveillance was not inferior to routine surveillance imaging in patients with cHL who achieved complete remission with frontline therapy. Routine surveillance imaging was associated with significantly increased estimated imaging charges. © 2014 American Cancer Society.
Performance of the SIR-B digital image processing subsystem
NASA Technical Reports Server (NTRS)
Curlander, J. C.
1986-01-01
A ground-based system to generate digital SAR image products has been developed and implemented in support of the SIR-B mission. This system is designed to achieve the maximum throughput while meeting strict image fidelity criteria. Its capabilities include: automated radiometric and geometric correction of the output imagery; high-precision absolute location without tiepoint registration; filtering of the raw data to remove spurious signals from alien radars; and automated cataloging to maintain a full set of radar and image parameters. The image production facility, in support of the SIR-B science investigators, routinely produces over 80 image frames per week.
Advances in medical image computing.
Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P
2009-01-01
Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years significant progress has been made in the field, both at the methodological and at the application level. Despite this progress, there are still big challenges to meet in order to establish image processing routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of the scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration, and modeling of organs are presented together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.
Enhancement of chest radiographs using eigenimage processing
NASA Astrophysics Data System (ADS)
Bones, Philip J.; Butler, Anthony P. H.; Hurrell, Michael
2006-08-01
Frontal chest radiographs ("chest X-rays") are routinely used by medical personnel to assess patients for a wide range of suspected disorders. Often large numbers of images need to be analyzed. Furthermore, at times the images need to be analyzed ("reported") when no radiological expert is available. A system which enhances the images in such a way that abnormalities are more obvious is likely to reduce the chance that an abnormality goes unnoticed. The authors previously reported the use of principal components analysis to derive a basis set of eigenimages from a training set made up of images from normal subjects. The work is here extended to investigate how best to emphasize the abnormalities in chest radiographs. Results are also reported for various forms of image normalizing transformations used in performing the eigenimage processing.
Marcinková, Mária; Straka, Ľubomír; Novomeský, František; Janík, Martin; Štuller, František; Krajčovič, Jozef
2018-01-01
Massive progress in the development of ever more precise imaging modalities has influenced all medical branches, including forensic medicine. In forensic anthropology, an integral part of forensic medicine itself, the use of all imaging modalities is becoming even more important. Apart from acquiring more accurate information about the deceased, all of these modalities can be used in the process of identification and/or age estimation. X-ray imaging is most commonly used for detecting foreign bodies or various pathological changes in the deceased. Computed tomography, on the other hand, can be very helpful in the process of identification, as the outcomes of this examination can be used for virtual reconstruction. Magnetic resonance imaging offers new opportunities for detecting cardiovascular pathological processes or developmental anomalies. Ultrasonography provides promising results in age estimation of living subjects without excessive radiation doses. Drawing on the latest available information sources, the authors introduce application examples of X-ray imaging, computed tomography, magnetic resonance imaging, and ultrasonography in the everyday forensic medicine routine, with particular focus on forensic anthropology.
Exploration of Mars by Mariner 9 - Television sensors and image processing.
NASA Technical Reports Server (NTRS)
Cutts, J. A.
1973-01-01
Two cameras equipped with selenium sulfur slow scan vidicons were used in the orbital reconnaissance of Mars by the U.S. spacecraft Mariner 9, and the performance characteristics of these devices are presented. Digital image processing techniques have been widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement, and transformation to standard map projections have been routinely employed. More specialized applications included picture differencing, limb profiling, solar lighting corrections, noise removal, line plots, and computer mosaics. Information on enhancements, as well as important picture geometric information, was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data management computer program.
Nakada, Tsutomu; Matsuzawa, Hitoshi; Fujii, Yukihiko; Takahashi, Hitoshi; Nishizawa, Masatoyo; Kwee, Ingrid L
2006-07-01
Clinical magnetic resonance imaging (MRI) has recently entered the "high-field" era, and systems equipped with 3.0-4.0T superconductive magnets are becoming the gold standard for diagnostic imaging. While higher signal-to-noise ratio (S/N) is a definite advantage of higher field systems, the higher susceptibility effect remains a significant trade-off. To take advantage of a higher field system in performing routine clinical imaging of higher anatomical resolution, we implemented a vector contrast imaging technique, three-dimensional anisotropy contrast (3DAC), on a 3.0T system with a PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) sequence, a method capable of effectively eliminating undesired artifacts in rapid diffusion imaging sequences. One hundred subjects (20 normal volunteers and 80 volunteers with various central nervous system diseases) participated in the study. Anisotropic diffusion-weighted PROPELLER images were obtained on a General Electric (Waukesha, WI, USA) Signa 3.0T system for each axis, with a b-value of 1100 sec/mm(2). Subsequently, 3DAC images were constructed using in-house software written in MATLAB (MathWorks, Natick, MA, USA). The vector contrast provides exquisite anatomical detail, illustrated by the clear identification of all major tracts through the entire brain. 3DAC images provide better anatomical resolution for brainstem glioma than higher-resolution T2 reversed images. Degenerative processes of disease-specific tracts were clearly identified, as illustrated in cases of multiple system atrophy and Machado-Joseph disease. Anatomical images of significantly higher resolution than the best current standard, T2 reversed images, were successfully obtained. As a technique readily applicable in a routine clinical setting, 3DAC PROPELLER on a 3.0T system will be a powerful addition to diagnostic imaging.
Automated detection of changes in sequential color ocular fundus images
NASA Astrophysics Data System (ADS)
Sakuma, Satoshi; Nakanishi, Tadashi; Takahashi, Yasuko; Fujino, Yuichi; Tsubouchi, Tetsuro; Nakanishi, Norimasa
1998-06-01
A recent trend is the automatic screening of color ocular fundus images. The examination of such images is used in the early detection of several adult diseases such as hypertension and diabetes. Since this type of examination is easier than CT, costs less, and has no harmful side effects, it will become a routine medical examination. Normal ocular fundus images are found in more than 90% of all people. To deal with the increasing number of such images, this paper proposes a new approach to process them automatically and accurately. Our approach, based on individual comparison, identifies changes in sequential images: a previously diagnosed normal reference image is compared to a non-diagnosed image.
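At its core, the individual-comparison step above is registered image differencing. A minimal Python sketch under that assumption (registered, grayscale inputs; the function name and threshold value are illustrative, not from the paper):

    import numpy as np

    def detect_changes(reference, follow_up, threshold=25):
        """Flag pixels whose intensity changed markedly between two visits.

        Assumes both images are registered, grayscale, and the same shape;
        the intensity threshold is an illustrative value.
        """
        diff = follow_up.astype(np.int32) - reference.astype(np.int32)
        return np.abs(diff) > threshold

    # Toy usage: a diagnosed-normal reference versus a new examination.
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, (512, 512))
    follow_up = reference.copy()
    follow_up[100:120, 200:220] += 60          # simulated new lesion
    mask = detect_changes(reference, follow_up)
    print(mask.sum(), "changed pixels")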
Imaging and Analytics: The changing face of Medical Imaging
NASA Astrophysics Data System (ADS)
Foo, Thomas
There have been significant technological advances in imaging capability over the past 40 years. Medical imaging capabilities have developed rapidly, along with technology development in computational processing speed and miniaturization. With the move to all-digital systems, the number of images acquired in a routine clinical examination has increased dramatically, from under 50 images in the early days of CT and MRI to more than 500-1000 images today. The staggering number of images that are routinely acquired poses significant challenges for clinicians to interpret the data and to correctly identify the clinical problem. Although the time provided to render a clinical finding has not substantially changed, the amount of data available for interpretation has grown exponentially. In addition, image quality (spatial resolution) and information content (physiologically dependent image contrast) have also increased significantly with advances in medical imaging technology. On its current trajectory, medical imaging in the traditional sense is unsustainable. To assist in filtering and extracting the most relevant data elements from medical imaging, image analytics will have a much larger role. Automated image segmentation, generation of parametric image maps, and clinical decision support tools will be needed and developed apace to allow the clinician to manage, extract and utilize only the information that will help improve diagnostic accuracy and sensitivity. As medical imaging devices continue to improve in spatial resolution and in functional and anatomical information content, image/data analytics will become more ubiquitous and integral to medical imaging capability.
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI-configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
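The core OIS operation is a convolution of the reference image with a PSF-matching kernel followed by subtraction. A minimal Python sketch using a single constant kernel in place of the 2nd-order spatially-varying kernel with Dirac delta basis described above; scipy's FFT convolution stands in for the GPU implementation, and all data are invented:

    import numpy as np
    from scipy.signal import fftconvolve

    def ois_difference(reference, science, kernel):
        """Match the reference PSF to the science image, then subtract.

        Full OIS solves for a 2nd-order spatially-varying kernel by least
        squares; a single constant kernel is assumed here for brevity.
        """
        return science - fftconvolve(reference, kernel, mode="same")

    # Toy usage: a Gaussian stands in for the PSF-matching kernel.
    y, x = np.mgrid[-7:8, -7:8]
    kernel = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
    kernel /= kernel.sum()

    rng = np.random.default_rng(1)
    ref = rng.normal(100.0, 5.0, (256, 256))
    sci = fftconvolve(ref, kernel, mode="same")
    sci[128, 128] += 500.0                     # injected transient
    diff = ois_difference(ref, sci, kernel)
    print("peak residual at", np.unravel_index(np.argmax(diff), diff.shape))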
Comparison of breast percent density estimation from raw versus processed digital mammograms
NASA Astrophysics Data System (ADS)
Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina
2011-03-01
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
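Area-based percent density reduces to a pixel-count ratio once a threshold is chosen, and the raw-versus-processed comparison is a correlation of paired measurements. A minimal Python sketch; the fixed threshold and the toy data are assumptions, whereas Cumulus uses reader-selected thresholds per image:

    import numpy as np
    from scipy.stats import pearsonr

    def percent_density(image, breast_mask, dense_threshold):
        """Area-based PD%: dense pixels as a percentage of breast pixels."""
        breast = image[breast_mask]
        return 100.0 * (breast > dense_threshold).sum() / breast.size

    # Toy usage: correlate PD% from paired raw and processed measurements.
    rng = np.random.default_rng(2)
    pd_raw = rng.uniform(5.0, 60.0, 81)               # 81 studies, as above
    pd_processed = pd_raw + rng.normal(1.2, 2.0, 81)  # small systematic offset
    r, p = pearsonr(pd_raw, pd_processed)
    print(f"r = {r:.2f}, p = {p:.3g}")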
NASA Astrophysics Data System (ADS)
Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.
2018-03-01
This study demonstrates the feasibility of a view-based method, the motion history image (MHI), to map biospeckle activity around the scar region in a green orange fruit. The comparison of MHI with the routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image-processing tool in biospeckle analysis.
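A minimal numpy sketch of a motion history image over a speckle frame stack: each above-threshold inter-frame difference stamps the current frame index, and motion older than a set duration expires. The threshold and duration values are assumptions, not the paper's parameters:

    import numpy as np

    def motion_history_image(frames, diff_threshold=10.0, duration=20):
        """Build an MHI from a (T, H, W) stack of grayscale speckle frames."""
        frames = frames.astype(np.float32)
        mhi = np.zeros(frames.shape[1:], dtype=np.float32)
        for t in range(1, frames.shape[0]):
            moving = np.abs(frames[t] - frames[t - 1]) > diff_threshold
            mhi[moving] = t                              # stamp latest motion
            mhi[~moving & (mhi < t - duration)] = 0.0    # expire old motion
        return mhi

    # Toy usage: one active (scar-like) region in an otherwise static field.
    rng = np.random.default_rng(3)
    stack = rng.integers(100, 110, (50, 64, 64)).astype(np.float32)
    stack[:, 20:30, 20:30] += rng.integers(0, 80, (50, 10, 10))
    print("active pixels:", (motion_history_image(stack) > 0).sum())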
Tomek, Jakub; Novak, Ondrej; Syka, Josef
2013-07-01
Two-Photon Processor (TPP) is a versatile, ready-to-use, and freely available software package in MATLAB to process data from in vivo two-photon calcium imaging. TPP includes routines to search for cell bodies in full-frame (Search for Neural Cells Accelerated; SeNeCA) and line-scan acquisition, routines for calcium signal calculations, filtering, spike-mining, and routines to construct parametric fields. Searching for somata in artificial in vivo data, our algorithm achieved better performance than human annotators. SeNeCA copes well with uneven background brightness and in-plane motion artifacts, the major problems in simple segmentation methods. In the fast mode, artificial in vivo images with a resolution of 256 × 256 pixels containing ≈ 100 neurons can be processed at a rate up to 175 frames per second (tested on Intel i7, 8 threads, magnetic hard disk drive). This speed of a segmentation algorithm could bring new possibilities into the field of in vivo optophysiology. With such a short latency (down to 5-6 ms on an ordinary personal computer) and using some contemporary optogenetic tools, it will allow experiments in which a control program can continuously evaluate the occurrence of a particular spatial pattern of activity (a possible correlate of memory or cognition) and subsequently inhibit/stimulate the entire area of the circuit or inhibit/stimulate a different part of the neuronal system. TPP will be freely available on our public web site. Similar all-in-one and freely available software has not yet been published.
TESS Data Processing and Quick-look Pipeline
NASA Astrophysics Data System (ADS)
Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office
2018-01-01
We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.
NASA Technical Reports Server (NTRS)
1980-01-01
The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.
CT radiation profile width measurement using CR imaging plate raw data
Yang, Chang‐Ying Joseph
2015-01-01
This technical note demonstrates computed tomography (CT) radiation profile measurement using computed radiography (CR) imaging plate raw data, showing that it is possible to perform the CT collimation width measurement with a single scan without saturating the imaging plate. Previously described methods require careful adjustments to the CR reader settings in order to avoid signal clipping in the CR processed image. CT radiation profile measurements were taken as part of routine quality control on 14 CT scanners from four vendors. CR cassettes were placed on the CT scanner bed, raised to isocenter, and leveled. Axial scans were taken at all available collimations, advancing the cassette for each scan. The CR plates were processed and raw CR data were analyzed using MATLAB scripts to measure collimation widths. The raw-data approach was compared with previously established methodology. The quality control analysis scripts are released as open source under Creative Commons licensing. A log-linear relationship was found between raw pixel value and air kerma, and raw-data collimation width measurements were in agreement with CR-processed, bit-reduced data using the previously described methodology. The raw-data approach, with its intrinsically wider dynamic range, allows improved measurement flexibility and precision. As a result, we demonstrate a methodology for CT collimation width measurements that uses a single CT scan without the need for CR scanning parameter adjustments, which is more convenient for routine quality control work. PACS numbers: 87.57.Q‐, 87.59.bd, 87.57.uq PMID:26699559
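The two steps of the method, converting raw CR pixel values through a log-linear response and measuring the collimation width from the resulting profile, can be sketched as follows in Python; the calibration constants, pixel pitch, and the half-maximum width criterion are placeholders rather than the note's actual values:

    import numpy as np

    def raw_to_kerma(raw, slope=1.0e-4, intercept=-2.5):
        """Log-linear CR response: log10(kerma) = slope * raw + intercept.

        The constants are placeholders; real values come from calibration.
        """
        return 10.0 ** (slope * np.asarray(raw, dtype=np.float64) + intercept)

    def fwhm_mm(profile, pixel_pitch_mm=0.1):
        """Full width at half maximum of a 1D radiation profile."""
        above = np.where(profile >= profile.max() / 2.0)[0]
        return (above[-1] - above[0] + 1) * pixel_pitch_mm

    # Toy usage: a synthetic collimation profile in raw counts.
    x = np.arange(2000)
    raw = 4000.0 * np.exp(-(((x - 1000) / 40.0) ** 2))
    print(f"collimation width ~ {fwhm_mm(raw_to_kerma(raw)):.1f} mm")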
Pratt, Harry; Hassanin, Kareem; Troughton, Lee D; Czanner, Gabriela; Zheng, Yalin; McCormick, Austin G; Hamill, Kevin J
2017-01-01
Application of sunscreen is a widely used mechanism for protecting skin from the harmful effects of UV light. However, protection can only be achieved through effective application, and areas that are routinely missed are likely at increased risk of UV damage. Here we sought to determine whether specific areas of the face are missed during routine sunscreen application, and whether provision of public health information is sufficient to improve coverage. To investigate this, 57 participants were imaged with a UV-sensitive camera before and after sunscreen application: at the first visit with minimal pre-instruction, and at the second visit after being provided with a public health information statement. Images were scored using a custom automated image analysis process designed to identify areas of high UV reflectance, i.e. those missed during sunscreen application, and analysed at the 5% significance level. Analyses revealed eyelid and periorbital regions to be disproportionately missed during routine sunscreen application (median 14% missed in the eyelid region vs 7% in the rest of the face, p<0.01). Provision of health information caused a significant improvement in coverage of eyelid areas in general; however, the medial canthal area was still frequently missed. These data reveal that a public health announcement-type intervention could be effective at improving coverage of high-risk areas of the face; however, high-risk areas are likely to remain unprotected, so other mechanisms of sun protection, such as UV-blocking sunglasses, should be widely promoted.
Buckler, Andrew J; Bresolin, Linda; Dunnick, N Reed; Sullivan, Daniel C; Aerts, Hugo J W L; Bendriem, Bernard; Bendtsen, Claus; Boellaard, Ronald; Boone, John M; Cole, Patricia E; Conklin, James J; Dorfman, Gary S; Douglas, Pamela S; Eidsaunet, Willy; Elsinger, Cathy; Frank, Richard A; Gatsonis, Constantine; Giger, Maryellen L; Gupta, Sandeep N; Gustafson, David; Hoekstra, Otto S; Jackson, Edward F; Karam, Lisa; Kelloff, Gary J; Kinahan, Paul E; McLennan, Geoffrey; Miller, Colin G; Mozley, P David; Muller, Keith E; Patt, Rick; Raunig, David; Rosen, Mark; Rupani, Haren; Schwartz, Lawrence H; Siegel, Barry A; Sorensen, A Gregory; Wahl, Richard L; Waterton, John C; Wolf, Walter; Zahlmann, Gudrun; Zimmerman, Brian
2011-06-01
Quantitative imaging biomarkers could speed the development of new treatments for unmet medical needs and improve routine clinical care. However, it is not clear how the various regulatory and nonregulatory (eg, reimbursement) processes (often referred to as pathways) relate, nor is it clear which data need to be collected to support these different pathways most efficiently, given the time- and cost-intensive nature of doing so. The purpose of this article is to describe current thinking regarding these pathways emerging from diverse stakeholders interested and active in the definition, validation, and qualification of quantitative imaging biomarkers and to propose processes to facilitate the development and use of quantitative imaging biomarkers. A flexible framework is described that may be adapted for each imaging application, providing mechanisms that can be used to develop, assess, and evaluate relevant biomarkers. From this framework, processes can be mapped that would be applicable to both imaging product development and to quantitative imaging biomarker development aimed at increasing the effectiveness and availability of quantitative imaging. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100800/-/DC1. RSNA, 2011
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.
2011-01-01
Purpose: Recent years have witnessed tremendous progress in image guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms, to provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176
Functional Imaging Biomarkers: Potential to Guide an Individualised Approach to Radiotherapy.
Prestwich, R J D; Vaidyanathan, S; Scarsbrook, A F
2015-10-01
The identification of robust prognostic and predictive biomarkers would transform the ability to implement an individualised approach to radiotherapy. In this regard, there has been a surge of interest in the use of functional imaging to assess key underlying biological processes within tumours and their response to therapy. Importantly, functional imaging biomarkers hold the potential to evaluate tumour heterogeneity/biology both spatially and temporally. An ever-increasing range of functional imaging techniques is now available primarily involving positron emission tomography and magnetic resonance imaging. Small-scale studies across multiple tumour types have consistently been able to correlate changes in functional imaging parameters during radiotherapy with disease outcomes. Considerable challenges remain before the implementation of functional imaging biomarkers into routine clinical practice, including the inherent temporal variability of biological processes within tumours, reproducibility of imaging, determination of optimal imaging technique/combinations, timing during treatment and design of appropriate validation studies. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Kocna, P
1995-01-01
GastroBase, a clinical information system, incorporates patient identification, medical records, images, laboratory data, patient history, physical examination, and other patient-related information. Program modules are written in C; all data are processed using the Novell-Btrieve data manager. The patient identification database represents the core of the information system. A graphic library developed in the past year, together with graphic modules and a special video card, enables the storing, archiving, and linking of different images to the electronic patient medical record. GastroBase has been running in daily routine for more than four years, and the database contains more than 25,000 medical records and 1,500 images. This new version of GastroBase is now incorporated into the clinical information system of the University Clinic in Prague.
[Imaging center - optimization of the imaging process].
Busch, H-P
2013-04-01
Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success, but also of the costs, of treatment. In routine work, an excessive supply of imaging methods leads, without critical reflection, to an "as well as" strategy that runs up to the limit of capacity. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center), and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.
GAP: yet another image processing system for solar observations.
NASA Astrophysics Data System (ADS)
Keller, C. U.
GAP is a versatile, interactive image processing system for analyzing solar observations, in particular extended time sequences, and for preparing publication quality figures. It consists of an interpreter that is based on a language with a control flow similar to PASCAL and C. The interpreter may be accessed from a command line editor and from user-supplied functions, procedures, and command scripts. GAP is easily expandable via external FORTRAN programs that are linked to the GAP interface routines. The current version of GAP runs on VAX, DECstation, Sun, and Apollo computers. Versions for MS-DOS and OS/2 are in preparation.
Reducing uncertainty in wind turbine blade health inspection with image processing techniques
NASA Astrophysics Data System (ADS)
Zhang, Huiyi
Structural health inspection has been widely applied in the operation of wind farms to find early cracks in wind turbine blades (WTBs). Increased numbers of turbines and expanded rotor diameters are driving up the workloads and safety risks for site employees. Therefore, it is important to automate the inspection process as well as minimize the uncertainties involved in routine blade health inspection. In addition, crack documentation and trending are vital to assessing rotor blade and turbine reliability over the 20-year design life span. A new crack recognition and classification algorithm is described that can support automated structural health inspection of the surface of large composite WTBs. The first part of the study investigated the feasibility of digital image processing in WTB health inspection and defined the capability of numerically detecting cracks as small as hairline thickness. The second part of the study identified and analyzed the uncertainty of the digital image processing method. A self-learning algorithm was proposed to recognize and classify cracks without comparing a blade image to a library of crack images. The last part of the research quantified the uncertainty in the field conditions and the image processing methods.
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.
2002-05-01
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. Other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to ground the statistical analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
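The classification rule for reference standards is mechanical enough to state as code. A small Python sketch under the stated definitions (gold requires Criteria 1-3; silver requires Criterion 1 together with Criterion 2 or 3; everything else is plastic):

    def classify_reference_standard(satisfied):
        """Map the set of satisfied criteria {1..5} to a standard class.

        Gold requires Criteria 1-3; silver requires Criterion 1 plus
        Criterion 2 or Criterion 3; everything else is plastic.
        """
        s = set(satisfied)
        if {1, 2, 3} <= s:
            return "gold"
        if 1 in s and (2 in s or 3 in s):
            return "silver"
        return "plastic"

    assert classify_reference_standard({1, 2, 3, 5}) == "gold"
    assert classify_reference_standard({1, 3}) == "silver"
    assert classify_reference_standard({2, 3}) == "plastic"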
Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata
2014-09-01
Spatial cross-talk has been discovered in the visible channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface caused either by flawed polishing or a dust contaminant. An image processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on the Gaussian profile. The PSF is described by 4 parameters, which are solved using a maximum likelihood estimator using coincident collocated MTSAT-2 images as truth. A subpixel image matching technique is used to align the MTSAT-2 pixels into the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing geometry differences between the two satellite observations. An optimal set of the PSF parameters is derived by an iterative routine based on the 4-dimensional Powell's conjugate direction method that minimizes the difference between PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language incorporating parallel processing. The PSF parameters were found to be consistent over the 5 days of available daytime coincident MTSAT-1R and MTSAT-2 images, and can easily be applied to the MTSAT-1R imager pixel level counts to restore the original quality of the entire MTSAT-1R record.
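A minimal Python sketch of the two ingredients described above: a parametric Gaussian PSF with an undispersed core, removed by regularized Fourier-domain deconvolution, and Powell's method (via scipy) minimizing the mismatch against a reference scene. The real pipeline adds subpixel registration, navigation correction, a 4-parameter PSF, and assembly-level optimization; the toy data and 2-parameter PSF here are assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def gaussian_psf(shape, sigma, amplitude):
        """Delta core plus a Gaussian halo holding `amplitude` of the energy."""
        y, x = np.indices(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        halo = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma**2))
        psf = amplitude * halo / halo.sum()
        psf[cy, cx] += 1.0 - amplitude            # undispersed fraction
        return np.fft.ifftshift(psf)              # center at the origin

    def deconvolve(image, psf, eps=1e-3):
        """Regularized Fourier-domain inversion of the PSF."""
        otf = np.fft.fft2(psf)
        return np.real(np.fft.ifft2(
            np.fft.fft2(image) * np.conj(otf) / (np.abs(otf) ** 2 + eps)))

    def mismatch(params, blurred, truth):
        sigma, amplitude = params
        psf = gaussian_psf(blurred.shape, sigma, amplitude)
        return np.mean((deconvolve(blurred, psf) - truth) ** 2)

    # Toy scenes standing in for collocated MTSAT-1R / MTSAT-2 images.
    rng = np.random.default_rng(4)
    truth = rng.normal(100.0, 20.0, (64, 64))
    blurred = np.real(np.fft.ifft2(
        np.fft.fft2(truth) * np.fft.fft2(gaussian_psf(truth.shape, 5.0, 0.2))))
    res = minimize(mismatch, x0=[3.0, 0.1], args=(blurred, truth),
                   method="Powell")
    print("recovered (sigma, amplitude):", res.x)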
Susceptibility weighted imaging: differentiating between calcification and hemosiderin*
Barbosa, Jeam Haroldo Oliveira; Santos, Antonio Carlos; Salmon, Carlos Ernesto Garrido
2015-01-01
Objective To present a detailed explanation of the processing of magnetic susceptibility weighted imaging (SWI), demonstrating the effects of the echo time and of the sensitive mask on the differentiation between calcification and hemosiderin. Materials and Methods Computed tomography and magnetic resonance (magnitude and phase) images of six patients (age range, 41-54 years; four men) were retrospectively selected. SWI image processing was performed using in-house MATLAB routines. Results Four of the six patients showed calcifications on computed tomography images, and their SWI images demonstrated hyperintense signal in the calcification regions. The other patients did not show any calcifications on computed tomography, and SWI revealed the presence of hemosiderin deposits with hypointense signal. Conclusion The selection of the echo time and of the mask may change all the information on SWI images and compromise diagnostic reliability. Among the possible masks, the authors highlight that the sigmoid mask allows for contrasting calcifications and hemosiderin on a single SWI image. PMID:25987750
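A minimal numpy sketch of the sigmoid-mask idea: the mask maps one sign of the (high-pass-filtered) phase above unity and the other below, so that multiplying the magnitude image by a power of the mask renders calcification and hemosiderin with opposite contrast. The slope, power, and sign convention here are assumptions, not the paper's values:

    import numpy as np

    def sigmoid_phase_mask(phase, slope=4.0):
        """Map phase in [-pi, pi] to ~(0, 2): one sign amplified, one suppressed."""
        return 2.0 / (1.0 + np.exp(-slope * phase / np.pi))

    def swi(magnitude, phase, power=4, slope=4.0):
        """Susceptibility weighted image: magnitude times mask**power."""
        return magnitude * sigmoid_phase_mask(phase, slope) ** power

    # Toy usage: opposite phase shifts for calcification vs. hemosiderin
    # (the sign convention depends on the scanner and is assumed here).
    rng = np.random.default_rng(5)
    mag = rng.uniform(0.5, 1.0, (128, 128))
    phs = np.zeros((128, 128))
    phs[40:60, 40:60] = 0.8        # diamagnetic, e.g. calcification
    phs[80:100, 80:100] = -0.8     # paramagnetic, e.g. hemosiderin
    img = swi(mag, phs)
    print("calcification mean:", img[40:60, 40:60].mean())
    print("hemosiderin mean:  ", img[80:100, 80:100].mean())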
Kapke, Jonathan T; Epperla, Narendranath; Shah, Namrata; Richardson, Kristin; Carrum, George; Hari, Parameswaran N; Pingali, Sai R; Hamadani, Mehdi; Karmali, Reem; Fenske, Timothy S
2017-07-01
Patients with relapsed and refractory classical Hodgkin lymphoma (cHL) are often treated with autologous hematopoietic cell transplantation (auto-HCT). After auto-HCT, most transplant centers implement routine surveillance imaging to monitor for disease relapse; however, there is limited evidence to support this practice. In this multicenter, retrospective study, we identified cHL patients (n = 128) who received auto-HCT, achieved complete remission (CR) after transplantation, and were then followed with routine surveillance imaging. Of these, 29 (23%) relapsed after day 100 after auto-HCT. Relapse was detected clinically in 14 patients and with routine surveillance imaging in 15 patients. When clinically detected relapse was compared with radiographically detected relapse, the median overall survival (2084 days [range, 225-4161] vs. 2737 days [range, 172-2750]; P = .51), the median time to relapse (247 days [range, 141-3974] vs. 814 days [range, 96-1682]; P = .30), and the median postrelapse survival (674 days [range, 13-1883] vs. 1146 days [range, 4-2548]; P = .52) were not statistically different. In patients who never relapsed after auto-HCT, a median of 4 (range, 1-25) surveillance imaging studies were performed over a median follow-up period of 3.5 years. A minority of patients with cHL who achieve CR after auto-HCT will ultimately relapse. Surveillance imaging detected approximately half of relapses; however, outcomes were similar whether relapse was detected by routine surveillance imaging or clinically in between surveillance imaging studies. There appears to be limited utility for routine surveillance imaging in cHL patients who achieve CR after auto-HCT. Copyright © 2017 Elsevier Inc. All rights reserved.
Wang, Jing; Wu, Yue; Yao, Zhenwei; Yang, Zhong
2014-12-01
The aim of this study was to explore the value of the three-dimensional sampling perfection with application-optimized contrasts using different flip-angle evolutions (3D-SPACE) sequence in the assessment of pituitary micro-lesions. Coronal 3D-SPACE as well as routine T1- and dynamic contrast-enhanced (DCE) T1-weighted images of the pituitary gland, acquired at 3.0 T in 52 patients (48 women and four men; mean age, 32 years; age range, 17-50 years) with clinically suspected pituitary abnormality, were retrospectively reviewed. The interobserver agreement of the assessment results was analyzed with κ statistics. Qualitative analyses were compared using the Wilcoxon signed-rank test. Interobserver agreement of the independent evaluations was good for 3D-SPACE images (κ = 0.892) and fair for routine MR images (κ = 0.649). At 3.0 T, 3D-SPACE provided significantly better images than routine MR imaging in terms of the boundary of the pituitary gland, the definition of pituitary lesions, and overall image quality. The evaluation of pituitary micro-lesions using combined routine and 3D-SPACE MR imaging was superior to that using routine or 3D-SPACE imaging alone. The 3D-SPACE sequence can be used for appropriate and successful evaluation of the pituitary gland. We suggest the 3D-SPACE sequence as a powerful supplemental sequence in MR examinations for suspected pituitary micro-lesions.
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field segmentation are often limited in scope to the particular images they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
NASA Technical Reports Server (NTRS)
Masuoka, E.
1985-01-01
Systematic noise is present in Airborne Imaging Spectrometer (AIS) data collected on October 26, 1983 and May 5, 1984 in grating position 0 (1.2 to 1.5 microns). In the October data set the noise occurs as 135 scan lines of low DNs every 270 scan lines. The noise is particularly bad in bands nine through thirty, restricting effective analysis to at best ten of the 32 bands. In the May data the regions of severe noise have been eliminated, but systematic noise is present at three frequencies (periods of 3, 106, and 200 scan lines) in all thirty-two bands. The periodic nature of the noise in both data sets suggests that it could be removed as part of routine processing. This is necessary before classification routines or statistical analyses are used with these data.
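Because the noise periods are known, a routine correction could notch the corresponding frequency bins along the scan-line axis of each band. A minimal numpy sketch of that idea with synthetic data; a production version would interpolate rather than zero the bins and handle harmonics:

    import numpy as np

    def notch_periodic_noise(band, periods, width=1):
        """Zero the FFT bins of known noise periods along the scan-line axis."""
        n_lines = band.shape[0]
        spec = np.fft.rfft(band, axis=0)
        for period in periods:
            k = int(round(n_lines / period))                 # bin of this period
            spec[max(k - width, 1):k + width + 1, :] = 0.0   # keep DC intact
        return np.fft.irfft(spec, n=n_lines, axis=0)

    # Toy usage: synthetic band with the May-data periods injected.
    rng = np.random.default_rng(6)
    lines = np.arange(1200)
    clean = rng.normal(50.0, 3.0, (1200, 64))
    noise = (5.0 * np.sin(2 * np.pi * lines / 106) +
             4.0 * np.sin(2 * np.pi * lines / 200))[:, None]
    fixed = notch_periodic_noise(clean + noise, periods=[3, 106, 200])
    print("residual noise std:", (fixed - clean).std())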
Slonecker, E. Terrence; Fisher, Gary B.
2014-01-01
This evaluation was conducted to assess the potential for using both traditional remote sensing, such as aerial imagery, and emerging remote sensing technology, such as hyperspectral imaging, as tools for postclosure monitoring of selected hazardous waste sites. Sixteen deleted Superfund (SF) National Priorities List (NPL) sites in Pennsylvania were imaged with a Civil Air Patrol (CAP) Airborne Real-Time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor between 2009 and 2012. Deleted sites are those sites that have been remediated and removed from the NPL. The imagery was processed to radiance and atmospherically corrected to relative reflectance with standard routines in the Environment for Visualizing Images (ENVI, ITT-VIS, Boulder, Colorado) software. Standard routines for anomaly detection, endmember collection, vegetation stress, and spectral analysis were applied.
Kokaly, Raymond F.
2011-01-01
This report describes procedures for installing and using the U.S. Geological Survey Processing Routines in IDL for Spectroscopic Measurements (PRISM) software. PRISM provides a framework to conduct spectroscopic analysis of measurements made using laboratory, field, airborne, and space-based spectrometers. Using PRISM functions, the user can compare the spectra of materials of unknown composition with reference spectra of known materials. This spectroscopic analysis allows the composition of the material to be identified and characterized. Among its other functions, PRISM contains routines for the storage of spectra in database files, import/export of ENVI spectral libraries, importation of field spectra, correction of spectra to absolute reflectance, arithmetic operations on spectra, interactive continuum removal and comparison of spectral features, correction of imaging spectrometer data to ground-calibrated reflectance, and identification and mapping of materials using spectral feature-based analysis of reflectance data. This report provides step-by-step instructions for installing the PRISM software and running its functions.
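One of the PRISM operations named above, continuum removal, divides the reflectance spectrum by a straight-line continuum fitted between the shoulders of an absorption feature. A minimal Python sketch with placeholder shoulder wavelengths and synthetic data (PRISM itself provides interactive shoulder selection):

    import numpy as np

    def continuum_removed(wavelengths, reflectance, left, right):
        """Divide reflectance by a linear continuum between two shoulders."""
        sel = (wavelengths >= left) & (wavelengths <= right)
        w, r = wavelengths[sel], reflectance[sel]
        continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
        return w, r / continuum

    # Toy usage: a Gaussian absorption feature near 2.2 micrometers.
    wl = np.linspace(2.0, 2.4, 200)
    refl = 0.6 + 0.1 * (wl - 2.0) - 0.15 * np.exp(-(((wl - 2.2) / 0.02) ** 2))
    w, cr = continuum_removed(wl, refl, left=2.1, right=2.3)
    print(f"band depth: {1.0 - cr.min():.3f}")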
Rahman, Zia Ur; Sethi, Pooja; Murtaza, Ghulam; Virk, Hafeez Ul Hassan; Rai, Aitzaz; Mahmod, Masliza; Schoondyke, Jeffrey; Albalbissi, Kais
2017-01-01
Cardiovascular disease is a leading cause of morbidity and mortality globally. Early diagnostic markers are gaining popularity for better patient care disease outcomes. There is an increasing interest in noninvasive cardiac imaging biomarkers to diagnose subclinical cardiac disease. Feature tracking cardiac magnetic resonance imaging is a novel post-processing technique that is increasingly being employed to assess global and regional myocardial function. This technique has numerous applications in structural and functional diagnostics. It has been validated in multiple studies, although there is still a long way to go for it to become routine standard of care. PMID:28515849
Diazo processing of LANDSAT imagery: A low-cost instructional technique
NASA Technical Reports Server (NTRS)
Lusch, D. P.
1981-01-01
Diazo processing of LANDSAT imagery is a relatively simple and cost-effective method of producing enhanced renditions of the visual LANDSAT products. This technique is capable of producing a variety of image enhancements which have value in a teaching laboratory environment. Additionally, with the appropriate equipment, applications research that relies on accurate and repeatable results is possible. Exposure and development equipment options, diazo materials, and enhancement routines are discussed.
Kijowski, Richard; Blankenbaker, Donna G; Munoz Del Rio, Alejandro; Baer, Geoffrey S; Graf, Ben K
2013-05-01
To determine whether the addition of a T2 mapping sequence to a routine magnetic resonance (MR) imaging protocol could improve diagnostic performance in the detection of surgically confirmed cartilage lesions within the knee joint at 3.0 T. This prospective study was approved by the institutional review board, and the requirement to obtain informed consent was waived. The study group consisted of 150 patients (76 male and 74 female patients with an average age of 41.2 and 41.5 years, respectively) who underwent MR imaging and arthroscopy of the knee joint. MR imaging was performed at 3.0 T by using a routine protocol with the addition of a sagittal T2 mapping sequence. Images from all MR examinations were reviewed in consensus by two radiologists before surgery to determine the presence or absence of cartilage lesions on each articular surface, first by using the routine MR protocol alone and then by using the routine MR protocol with T2 maps. Each articular surface was then evaluated at arthroscopy. Generalized estimating equation models were used to compare the sensitivity and specificity of the routine MR imaging protocol with and without T2 maps in the detection of surgically confirmed cartilage lesions. The sensitivity and specificity in the detection of 351 cartilage lesions were 74.6% and 97.8%, respectively, for the routine MR protocol alone and 88.9% and 93.1% for the routine MR protocol with T2 maps. Differences in sensitivity and specificity were statistically significant (P < .001). The addition of T2 maps to the routine MR imaging protocol significantly improved the sensitivity in the detection of 24 areas of cartilage softening (from 4.2% to 62%, P < .001), 41 areas of cartilage fibrillation (from 20% to 66%, P < .001), and 96 superficial partial-thickness cartilage defects (from 71% to 88%, P = .004). The addition of a T2 mapping sequence to a routine MR protocol at 3.0 T improved sensitivity in the detection of cartilage lesions within the knee joint from 74.6% to 88.9%, with only a small reduction in specificity. The greatest improvement in sensitivity with use of the T2 maps was in the identification of early cartilage degeneration. © RSNA, 2013.
Desai, Atman; Pendharkar, Arjun V; Swienckowski, Jessica G; Ball, Perry A; Lollis, Scott; Simmons, Nathan E
2015-11-23
Construct failure is an uncommon but well-recognized complication following anterior cervical corpectomy and fusion (ACCF). In order to screen for these complications, many centers routinely image patients at outpatient visits following surgery. There remains, however, little data on the utility of such imaging. The electronic medical records of all patients undergoing anterior cervical corpectomy and fusion at Dartmouth-Hitchcock Medical Center between 2004 and 2009 were reviewed. All patients had routine cervical spine radiographs performed perioperatively. Follow-up visits up to two years postoperatively were analyzed. Sixty-five patients (mean age 52.2) underwent surgery during the time period. Eighteen patients were female. Forty patients had surgery performed for spondylosis, 20 for trauma, three for tumor, and two for infection. Forty-three patients underwent one-level corpectomy, 20 underwent two-level corpectomy, and two underwent three-level corpectomy, using an allograft, autograft, or both. Sixty-two of the fusions were instrumented using a plate and 13 had posterior augmentation. Fifty-seven patients had follow-up with imaging at four to 12 weeks following surgery, 54 with plain radiographs, two with CT scans, and one with an MRI scan. Unexpected findings were noted in six cases. One of those patients, found to have asymptomatic recurrent kyphosis following a two-level corpectomy, had repeat surgery because of those findings. Only one further patient was found to have abnormal imaging up to two years, and this patient required no further intervention. Routine imaging after ACCF can demonstrate asymptomatic occurrences of clinically significant instrument failure. In the 43 consecutive single-level ACCFs, however, routine imaging did not change management, even when an abnormality was discovered. This may suggest that routine imaging after ACCF has a limited role, except perhaps in longer constructs involving multiple levels.
Routine Digital Pathology Workflow: The Catania Experience
Fraggetta, Filippo; Garozzo, Salvatore; Zannoni, Gian Franco; Pantanowitz, Liron; Rossi, Esther Diana
2017-01-01
Introduction: Successful implementation of whole slide imaging (WSI) for routine clinical practice has been accomplished in only a few pathology laboratories worldwide. We report the transition to an effective and complete digital surgical pathology workflow in the pathology laboratory at Cannizzaro Hospital in Catania, Italy. Methods: All (100%) permanent histopathology glass slides were digitized at ×20 using Aperio AT2 scanners. Compatible stain and scanning slide racks were employed to streamline operations. eSlide Manager software was bidirectionally interfaced with the anatomic pathology laboratory information system. Virtual slide trays connected to the two-dimensional (2D) barcode tracking system allowed pathologists to confirm that they were correctly assigned slides and that all tissues on these glass slides were scanned. Results: Over 115,000 glass slides were digitized with a scan fail rate of around 1%. Drying glass slides before scanning minimized them sticking to scanner racks. Implementation required introduction of a 2D barcode tracking system and modification of histology workflow processes. Conclusion: Our experience indicates that effective adoption of WSI for primary diagnostic use was more dependent on optimizing preimaging variables and integration with the laboratory information system than on information technology infrastructure and ensuring pathologist buy-in. Implementation of digital pathology for routine practice not only leveraged the benefits of digital imaging but also creates an opportunity for establishing standardization of workflow processes in the pathology laboratory. PMID:29416914
NASA Astrophysics Data System (ADS)
Taha, Z.; Razman, M. A. M.; Ghani, A. S. Abdul; Majeed, A. P. P. Abdul; Musa, R. M.; Adnan, F. A.; Sallehudin, M. F.; Mukai, Y.
2018-04-01
Fish hunger behaviour is essential in determining the fish feeding routine, particularly for fish farmers. An inaccurate feeding routine (under-feeding or over-feeding) may lead to the death of the fish and consequently reduce the quantity of fish produced. Moreover, excess food that is not consumed by the fish dissolves in the water and reduces water quality by depleting oxygen; this, too, can kill the fish or spur fish diseases. In the present study, a correlation between Barramundi fish-school behaviour and hunger condition is established through the hybrid data integration of image processing techniques. The behaviour is clustered with respect to the position, size, and density of the fish school before, during, and after feeding. The clustered fish behaviour is then classified with the k-Nearest Neighbour (k-NN) learning algorithm. Three variations of the algorithm, namely cosine, cubic, and weighted, are assessed on their ability to classify the aforementioned fish hunger behaviour. It was found that the weighted k-NN variation provides the best classification, with an accuracy of 86.5%. It could therefore be concluded that the proposed integration technique may assist fish farmers in ascertaining the fish feeding routine.
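A minimal sketch of the classification stage, assuming the clustered school position, size, and density measurements have been collected into a feature matrix (all data and feature choices here are hypothetical, and scikit-learn's distance weighting stands in for the paper's weighted k-NN variant):

    # Minimal sketch: weighted k-NN over hypothetical school-behaviour features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.random((300, 3))                     # [school position, size, density]
    y = rng.choice(["hungry", "satiated"], 300)  # placeholder hunger labels

    # weights="distance" corresponds to the weighted k-NN variant.
    clf = KNeighborsClassifier(n_neighbors=5, weights="distance")
    print(cross_val_score(clf, X, y, cv=5).mean())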
Uterus segmentation in dynamic MRI using LBP texture descriptors
NASA Astrophysics Data System (ADS)
Namias, R.; Bellemare, M.-E.; Rahim, M.; Pirró, N.
2014-03-01
Pelvic floor disorders cover pathologies whose physiopathology is not well understood, yet cases are becoming prevalent with an ageing population. Within the context of a project aiming at modelling the dynamics of pelvic organs, we have developed an efficient segmentation process that relieves the radiologist of a tedious image-by-image analysis. From a first contour delineating the uterus-vagina set, the organ border is tracked along a dynamic MRI sequence. The process combines movement prediction, local intensity and texture analysis, and active contour geometry control. Movement prediction provides a contour initialization for the next image in the sequence. Intensity analysis provides image-based local contour detection, enhanced by local binary pattern (LBP) texture descriptors. Geometry control prohibits self-intersections and smoothes the contour. Results show the efficiency of the method on images produced in clinical routine.
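A minimal sketch of the LBP texture step, assuming a single dynamic MRI frame loaded as a NumPy array (the frame and parameter choices here are illustrative):

    # Minimal sketch: LBP texture descriptor for local contour analysis.
    import numpy as np
    from skimage.feature import local_binary_pattern

    frame = np.random.rand(256, 256)     # placeholder for one dynamic MRI frame
    P, R = 8, 1                          # 8 neighbours on a radius-1 circle
    lbp = local_binary_pattern(frame, P, R, method="uniform")

    # A normalized LBP histogram serves as the texture descriptor that can be
    # compared along the tracked uterus-vagina contour.
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    print(hist)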
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years, more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files, and digital images. Not only does patient management become easier; it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Although images are available at the finger tip, difficulties arise when image data needs to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as processed in our lab on a daily basis.
Troughton, Lee D.; Czanner, Gabriela; Zheng, Yalin; McCormick, Austin G.
2017-01-01
Application of sunscreen is a widely used mechanism for protecting skin from the harmful effects of UV light. However, protection can only be achieved through effective application, and areas that are routinely missed are likely at increased risk of UV damage. Here we sought to determine whether specific areas of the face are missed during routine sunscreen application, and whether provision of public health information is sufficient to improve coverage. To investigate this, 57 participants were imaged with a UV-sensitive camera before and after sunscreen application: at the first visit with minimal pre-instruction, and at the second visit after being provided with a public health information statement. Images were scored using a custom automated image analysis process designed to identify areas of high UV reflectance, i.e. areas missed during sunscreen application, and analysed at the 5% significance level. Analyses revealed the eyelid and periorbital regions to be disproportionately missed during routine sunscreen application (median 14% missed in the eyelid region vs 7% in the rest of the face, p<0.01). Provision of health information produced a significant improvement in coverage of the eyelid areas in general; however, the medial canthal area was still frequently missed. These data reveal that a public health announcement-type intervention could be effective at improving coverage of high-risk areas of the face. Because high-risk areas are nevertheless likely to remain unprotected, other mechanisms of sun protection, such as UV-blocking sunglasses, should also be widely promoted. PMID:28968413
Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.
2017-01-01
Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging.
Carasso, Alfred S; Vladár, András E
2014-01-01
This paper discusses a two step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by 'slow motion' low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible, and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected 'fast scan' frames. The paper includes software routines, written in Interactive Data Language (IDL), that can perform the above image processing tasks. PMID:26601050
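A minimal sketch of the two enhancement steps, assuming a noisy single-channel image in [0, 1]: scikit-image's CLAHE stands in for the adaptive histogram equalization, and the Lévy fractional diffusion step is realized as the Fourier multiplier exp(-t|k|^(2s)) with a low exponent s < 1 (parameter values are illustrative, not the paper's):

    # Minimal sketch: adaptive histogram equalization followed by
    # low-exponent fractional diffusion smoothing in the Fourier domain.
    import numpy as np
    from skimage import exposure

    img = np.random.rand(512, 512)        # placeholder noisy microscope image

    # Step 1: adaptive histogram equalization reveals weak background detail.
    eq = exposure.equalize_adapthist(img, clip_limit=0.02)

    # Step 2: damp each Fourier mode by exp(-t * |k|^(2s)), s < 1;
    # 'slow motion' corresponds to taking small time steps t.
    s, t = 0.4, 0.05
    ky = np.fft.fftfreq(eq.shape[0])[:, None]
    kx = np.fft.fftfreq(eq.shape[1])[None, :]
    damp = np.exp(-t * (kx**2 + ky**2) ** s)
    smooth = np.real(np.fft.ifft2(np.fft.fft2(eq) * damp))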
Mori, Shinichiro; Inaniwa, Taku; Kumagai, Motoki; Kuwae, Tsunekazu; Matsuzaki, Yuka; Furukawa, Takuji; Shirai, Toshiyuki; Noda, Koji
2012-06-01
To increase the accuracy of carbon ion beam scanning therapy, we have developed a graphical user interface-based digitally-reconstructed radiograph (DRR) software system for use in routine clinical practice at our center. The DRR software is used in particular scenarios in the new treatment facility to achieve the same level of geometrical accuracy at treatment as at the imaging session. DRR calculation is implemented simply as the summation of CT image voxel values along the X-ray projection ray. Because we implemented graphics processing unit-based computation, the DRR images are calculated with a speed sufficient for clinical practice requirements. Since high spatial resolution flat panel detector (FPD) images must be registered to the reference DRR images during the patient setup process in all scenarios, the DRR images also need a spatial resolution close to that of the FPD images. To overcome the limitation imposed by the CT voxel size, we applied image processing to improve the spatial resolution of the calculated DRRs. The DRR software introduced here enabled patient positioning with sufficient accuracy for the implementation of carbon-ion beam scanning therapy at our center.
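Since the abstract describes DRR calculation as a summation of CT voxel values along the X-ray projection ray, a parallel-beam version reduces to summing the volume along one axis. A minimal sketch with synthetic data follows; a real implementation would trace diverging ray paths and, as the authors note, run on the GPU:

    # Minimal sketch: parallel-beam DRR as a ray sum through a CT volume.
    import numpy as np

    ct = np.random.randint(-1000, 2000, size=(128, 256, 256))  # placeholder HU volume

    # Map HU to relative attenuation so air contributes ~0 along the ray.
    mu = np.clip((ct + 1000.0) / 1000.0, 0.0, None)

    # Summing along the anterior-posterior axis approximates the projection;
    # an exposure/log mapping is typically applied afterwards for display.
    drr = mu.sum(axis=1)
    print(drr.shape)   # (128, 256) projection image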
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
Professional Risk: Sex, Lies, and Violence in the Films about Teachers
ERIC Educational Resources Information Center
Fedorov, Alexander; Levitskaya, Anastasia; Gorbatkova, Olga; Mikhaleva, Galina
2018-01-01
Pedagogical issues are rather popular in the world's cinematography. Images of school and university teachers occupy a special place in it. Hoping to attract as many viewers as possible the cinematography prefers to refer not to everyday routine education process but to "hot spots" of teaching associated mainly with sex, lies and…
Including the Child with Special Needs: Learning from Reggio Emilia
ERIC Educational Resources Information Center
Gilman, Sheryl
2007-01-01
Inclusive education aims toward integrating special needs students into all events of the typical classroom. For North American educators, the process of inclusion does not unfold naturally as in the routines of the Reggio Emilia approach. Reggio's powerful image of the child nourishes the authentic practice of maximizing each child's…
Software organization for a prolog-based prototyping system for machine vision
NASA Astrophysics Data System (ADS)
Jones, Andrew C.; Hack, Ralf; Batchelor, Bruce G.
1996-11-01
We describe PIP (Prolog image processing), a prototype system for interactive image processing using Prolog, implemented on an Apple Macintosh computer. PIP is the latest in a series of systems, developed under the collective title Prolog+, whose implementation the third author has been involved in. PIP differs from our previous systems in two particularly important respects. The first is that whereas we previously required dedicated image processing hardware, the present system implements image processing routines in software. The second is that our present system is hierarchical in structure: the top level of the hierarchy emulates Prolog+, but a flexible infrastructure supports more sophisticated image manipulation which we will be able to exploit in due course. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image processing functions, and the interface between these functions and the Prolog system. We also explain how the existing set of Prolog+ commands has been implemented. PIP is now nearing maturity, and we will make a version of it generally available in the near future. However, although the present version of PIP constitutes a complete image processing tool, there are a number of ways in which we intend to enhance future versions with a view to added flexibility and efficiency; we discuss these ideas briefly near the end of the paper.
Computer system for scanning tunneling microscope automation
NASA Astrophysics Data System (ADS)
Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.
1987-03-01
A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC), either an XT or an AT, which performs the control, data acquisition and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements, as well as the inclusion of user routines for data analysis.
Mutual information based feature selection for medical image retrieval
NASA Astrophysics Data System (ADS)
Zhi, Lijia; Zhang, Shaomin; Li, Yan
2018-04-01
In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical applicability, it avoids using a large amount of training data; instead, with a well-designed training process and robust fundamental features and measurements, it achieves promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for clinical routine application.
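A minimal sketch of the mutual information ranking idea, with synthetic features and relevance labels standing in for the paper's lung CT features:

    # Minimal sketch: ranking candidate features by mutual information.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    X = np.random.rand(200, 32)          # placeholder: 32 candidate CT features
    y = np.random.randint(0, 2, 200)     # placeholder retrieval-relevance labels

    mi = mutual_info_classif(X, y, random_state=0)
    top = np.argsort(mi)[::-1][:8]       # keep the 8 most informative features
    print(top, mi[top])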
Image Segmentation Using Affine Wavelets
1991-12-12
accomplished by the matrixtoascii.c program. The image file is then processed by the wave2 program, which utilizes Mallat's algorithm. [Figure 5.3: Frequency Content of Multiresolution Levels] Details of the wave2 program can be found in the Appendix. One of the resulting ... which comprise the wave2 program: 1. mainswave.c - The main driver program for wave. 2. loadimage.c - A routine to load the input image from an ascii
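A minimal sketch of the multiresolution decomposition the wave2 program performs, using PyWavelets' implementation of Mallat's fast wavelet transform on a synthetic image (the wavelet and level count are illustrative):

    # Minimal sketch: 2D multiresolution decomposition via Mallat's algorithm.
    import numpy as np
    import pywt

    img = np.random.rand(256, 256)       # placeholder input image

    # Each level yields horizontal/vertical/diagonal detail subbands plus a
    # coarse approximation, the pyramid that Mallat's algorithm produces.
    coeffs = pywt.wavedec2(img, wavelet="db2", level=3)
    approx = coeffs[0]
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        print("detail level", lvl, "subband shape", cH.shape)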
An open architecture for medical image workstation
NASA Astrophysics Data System (ADS)
Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun
2005-04-01
To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems, while overcoming the performance constraints in transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieving, viewing, and post-processing. The architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service, through which these image applications can communicate with each other or with third party applications. We applied this architecture in developing our product line and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasound, Surgery, Clinics, and the Consultation Center. Given that each department has its particular requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.
Automated processing for proton spectroscopic imaging using water reference deconvolution.
Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W
1994-06-01
Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
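A minimal sketch of the water reference deconvolution idea, under simplifying assumptions (synthetic FIDs, a Gaussian target lineshape, no noise regularization); this illustrates the principle rather than the authors' exact processing:

    # Minimal sketch: lineshape correction via water reference deconvolution.
    import numpy as np

    n = 1024
    t = np.arange(n) * 1e-3                                    # time axis (s)

    metab_fid = np.exp(-t * 30) + 0.01 * np.random.randn(n)    # placeholder data
    water_fid = np.exp(-t * 25) * np.exp(2j * np.pi * 5 * t)   # distorted water ref

    # Ideal target lineshape: a pure Gaussian decay with no frequency error.
    ideal = np.exp(-(t * 20) ** 2)

    # The correction function derived from the water reference removes shared
    # instrumental distortions when applied to the metabolite FID.
    eps = 1e-6                               # avoids blow-up where water decays
    corrected = metab_fid * ideal / (water_fid + eps)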
Automated image analysis for quantification of reactive oxygen species in plant leaves.
Sekulska-Nalewajko, Joanna; Gocławski, Jarosław; Chojak-Koźniewska, Joanna; Kuźniak, Elżbieta
2016-10-15
The paper presents an image processing method for the quantitative assessment of ROS accumulation areas in leaves stained with DAB or NBT for H₂O₂ and O₂⁻ detection, respectively. Three types of images, determined by the combination of staining method and background color, are considered. The method is based on the principle of supervised machine learning, with manually labeled image patterns used for training. The method's algorithm is developed as a JavaScript macro in the public domain Fiji (ImageJ) environment. It allows selection of the stained regions of ROS-mediated histochemical reactions, subsequently fractionated into weak, medium and intense staining intensity and thus ROS accumulation. It also evaluates total leaf blade area. The precision of ROS accumulation area detection is validated against manually labeled patterns using the Dice Similarity Coefficient. The proposed framework reduces computation complexity, requires less image processing expertise than competing methods once prepared, and represents a routine quantitative imaging assay for general histochemical image classification. Copyright © 2016 Elsevier Inc. All rights reserved.
Smooth 2D manifold extraction from 3D image stack
Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste
2017-01-01
Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, obliviously creating important artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
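For contrast with the proposed smooth manifold extraction, the conventional maximum intensity projection reduces to a per-pixel maximum over z; a minimal sketch on a synthetic stack, where the argmax depth map exposes the discontinuity the authors criticize:

    # Minimal sketch: maximum intensity projection (MIP) of a 3D stack.
    import numpy as np

    stack = np.random.rand(40, 512, 512)   # placeholder (z, y, x) image stack

    mip = stack.max(axis=0)                # conventional MIP rendering
    depth = stack.argmax(axis=0)           # z-origin of each projected pixel;
                                           # discontinuities here are the source
                                           # of the artifacts SME avoids
    print(mip.shape, depth.min(), depth.max())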
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.
Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan
2016-08-01
In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, band detection accuracy is limited by a detection algorithm that cannot adapt to variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp - GelApp 2.0 - is freely available on both the Google Play Store (Android) and the Apple App Store (iOS). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
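A minimal sketch of the UCB selection rule at the heart of MCTS-UCB, used here to pick which candidate processing pipeline to evaluate next (the pipeline names and reward model are placeholders, not GelApp's actual scoring):

    # Minimal sketch: UCB1 selection over candidate processing pipelines.
    import math
    import random

    pipelines = ["median+otsu", "clahe+watershed", "tophat+canny"]  # hypothetical
    counts = {p: 0 for p in pipelines}
    totals = {p: 0.0 for p in pipelines}

    def ucb_pick(step, c=1.4):
        # Try unexplored arms first, then mean reward plus exploration bonus.
        for p in pipelines:
            if counts[p] == 0:
                return p
        return max(pipelines, key=lambda p: totals[p] / counts[p]
                   + c * math.sqrt(math.log(step) / counts[p]))

    for step in range(1, 101):
        p = ucb_pick(step)
        reward = random.random()     # placeholder band-detection score
        counts[p] += 1
        totals[p] += reward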
Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.
Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas
2009-03-01
Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.
Komeda, Yoriaki; Handa, Hisashi; Watanabe, Tomohiro; Nomura, Takanobu; Kitahashi, Misaki; Sakurai, Toshiharu; Okamoto, Ayana; Minami, Tomohiro; Kono, Masashi; Arizumi, Tadaaki; Takenaka, Mamoru; Hagiwara, Satoru; Matsui, Shigenaga; Nishida, Naoshi; Kashida, Hiroshi; Kudo, Masatoshi
2017-01-01
Computer-aided diagnosis (CAD) is becoming a next-generation tool for the diagnosis of human disease. CAD for colon polyps has been suggested as a particularly useful tool for trainee colonoscopists, as the use of a CAD system avoids the complications associated with endoscopic resections. In addition to conventional CAD, convolutional neural network (CNN) systems utilizing artificial intelligence (AI) have developed rapidly over the past 5 years. We attempted to generate a unique CNN-CAD system with an AI function that studied endoscopic images extracted from movies obtained with colonoscopes used in routine examinations. Here, we report our preliminary results of this novel CNN-CAD system for the diagnosis of colon polyps. A total of 1,200 images from cases of colonoscopy performed between January 2010 and December 2016 at Kindai University Hospital were used. These images were extracted from the video of actual endoscopic examinations. Additional video images from 10 unlearned cases were retrospectively assessed in a pilot study. Each image was simply diagnosed as either an adenomatous or a nonadenomatous polyp. The ratio of images used by the AI to learn to distinguish adenomatous from nonadenomatous was 1,200:600. The size of each image was adjusted to 256 × 256 pixels. A 10-fold cross-validation was carried out; its accuracy was 0.751, where accuracy is the ratio of the number of correct answers over the number of all answers produced by the CNN. In the pilot study, the decisions of the CNN were correct in 7 of 10 cases. A CNN-CAD system using routine colonoscopy might be useful for the rapid diagnosis of colorectal polyp classification. Further prospective studies in an in vivo setting are required to confirm the effectiveness of a CNN-CAD system in routine colonoscopy. © 2017 S. Karger AG, Basel.
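A minimal sketch of the 10-fold cross-validation protocol used to score the system, with a placeholder classifier standing in for the CNN; accuracy is, as in the abstract, correct answers over all answers:

    # Minimal sketch: 10-fold cross-validated accuracy for a polyp classifier.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression  # stand-in for the CNN

    X = np.random.rand(1200, 32 * 32)     # placeholder downsampled images
    y = np.random.randint(0, 2, 1200)     # adenomatous vs nonadenomatous

    acc = cross_val_score(LogisticRegression(max_iter=500), X, y,
                          cv=10, scoring="accuracy")
    print(acc.mean())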
CERES: A Set of Automated Routines for Echelle Spectra
NASA Astrophysics Data System (ADS)
Brahm, Rafael; Jordán, Andrés; Espinoza, Néstor
2017-03-01
We present the Collection of Elemental Routines for Echelle Spectra (CERES). These routines were developed for the construction of automated pipelines for the reduction, extraction, and analysis of spectra acquired with different instruments, allowing homogeneous and standardized results to be obtained. This modular code includes tools for handling the different steps of the processing: CCD image reductions; identification and tracing of the echelle orders; optimal and rectangular extraction; computation of the wavelength solution; estimation of radial velocities; and rough and fast estimation of the atmospheric parameters. Currently, CERES has been used to develop automated pipelines for 13 different spectrographs, namely CORALIE, FEROS, HARPS, ESPaDOnS, FIES, PUCHEROS, FIDEOS, CAFE, DuPont/Echelle, Magellan/Mike, Keck/HIRES, Magellan/PFS, and APO/ARCES, but the routines can be easily used to deal with data coming from other spectrographs. We show the high precision in radial velocity that CERES achieves for some of these instruments, and we briefly summarize some results that have already been obtained using the CERES pipelines.
Image processing for improved eye-tracking accuracy
NASA Technical Reports Server (NTRS)
Mulligan, J. B.; Watson, A. B. (Principal Investigator)
1997-01-01
Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
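As one illustration of the off-line toolbox the authors mention (filtering, correlation, thresholding), the dark pupil can be localized to sub-pixel precision by thresholding and taking the centroid of the resulting mask; a minimal sketch on synthetic data, not the authors' actual pipeline:

    # Minimal sketch: sub-pixel pupil center via thresholding + centroid.
    import numpy as np
    from scipy import ndimage

    frame = np.random.rand(240, 320)        # placeholder video frame in [0, 1]
    frame[100:140, 150:190] = 0.05          # synthetic dark pupil region

    mask = frame < 0.2                      # threshold isolates the dark pupil
    cy, cx = ndimage.center_of_mass(mask)   # mask centroid: sub-pixel estimate
    print(cx, cy)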
NASA Astrophysics Data System (ADS)
Knuth, F.; Crone, T. J.; Marburg, A.
2017-12-01
The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.
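A minimal sketch of the kind of programmatic coverage measurement described for the bacterial mat scene, using Otsu thresholding to separate bright mat from darker seafloor in a grayscale frame (the threshold rule is an assumption, not the project's exact method):

    # Minimal sketch: bacterial mat coverage fraction from one video frame.
    import numpy as np
    from skimage.filters import threshold_otsu

    frame = np.random.rand(1080, 1920)      # placeholder grayscale CAMHD frame

    t = threshold_otsu(frame)               # global mat/seafloor threshold
    mat_mask = frame > t                    # assume the mat is the brighter class
    print("mat coverage: {:.1%}".format(mat_mask.mean()))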
Thorstenson, Sten; Molin, Jesper; Lundström, Claes
2014-01-01
Recent technological advances have improved the whole slide imaging (WSI) scanner quality and reduced the cost of storage, thereby enabling the deployment of digital pathology for routine diagnostics. In this paper we present the experiences from two Swedish sites having deployed routine large-scale WSI for primary review. At Kalmar County Hospital, the digitization process started in 2006 to reduce the time spent at the microscope in order to improve the ergonomics. Since 2008, more than 500,000 glass slides have been scanned in the routine operations of Kalmar and the neighboring Linköping University Hospital. All glass slides are digitally scanned yet they are also physically delivered to the consulting pathologist who can choose to review the slides on screen, in the microscope, or both. The digital operations include regular remote case reporting by a few hospital pathologists, as well as around 150 cases per week where primary review is outsourced to a private clinic. To investigate how the pathologists choose to use the digital slides, a web-based questionnaire was designed and sent out to the pathologists in Kalmar and Linköping. The responses showed that almost all pathologists think that ergonomics have improved and that image quality was sufficient for most histopathologic diagnostic work. 38 ± 28% of the cases were diagnosed digitally, but the survey also revealed that the pathologists commonly switch back and forth between digital and conventional microscopy within the same case. The fact that two full-scale digital systems have been implemented and that a large portion of the primary reporting is voluntarily performed digitally shows that large-scale digitization is possible today. PMID:24843825
Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images
NASA Technical Reports Server (NTRS)
Sams, Clarence F.
2016-01-01
The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.
High-performance floating-point image computing workstation for medical applications
NASA Astrophysics Data System (ADS)
Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin
1990-07-01
The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D Inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.
Development of imaging biomarkers and generation of big data.
Alberich-Bayarri, Ángel; Hernández-Navarro, Rafael; Ruiz-Martínez, Enrique; García-Castro, Fabio; García-Juan, David; Martí-Bonmatí, Luis
2017-06-01
Several image processing algorithms have emerged to cover unmet clinical needs, but their application to the radiological routine with a clear clinical impact is still not straightforward. Moving from local to big infrastructures, such as Medical Imaging Biobanks (millions of studies), or even Federations of Medical Imaging Biobanks (in some cases totaling hundreds of millions of studies), requires the integration of automated pipelines for fast analysis of pooled data to extract clinically relevant conclusions, linked not only to medical imaging but also to other information such as genetic profiling. A general strategy for the development of imaging biomarkers and their integration in the cloud for quantitative management and exploitation in large databases is herein presented. The proposed platform has been successfully launched and is currently being validated among the early adopters' community of radiologists, clinicians, and medical imaging researchers.
Nitrosi, Andrea; Bertolini, Marco; Borasi, Giovanni; Botti, Andrea; Barani, Adriana; Rivetti, Stefano; Pierotti, Luisa
2009-12-01
Ideally, medical x-ray imaging systems should be designed to deliver maximum image quality at an acceptable radiation risk to the patient. Quality assurance procedures are employed to ensure that these standards are maintained. A quality control protocol for direct digital radiography (DDR) systems is described and discussed. Software to automatically process and analyze the required images was developed. In this paper, initial results obtained on equipment from different DDR manufacturers are reported. The protocol was developed to highlight even small discrepancies in standard operating performance.
Ex-vivo imaging of excised tissue using vital dyes and confocal microscopy
Johnson, Simon; Rabinovitch, Peter
2012-01-01
Vital dyes routinely used for staining cultured cells can also be used to stain and image live tissue slices ex-vivo. Staining tissue with vital dyes allows researchers to collect structural and functional data simultaneously and can be used for qualitative or quantitative fluorescent image collection. The protocols presented here are useful for structural and functional analysis of viable properties of cells in intact tissue slices, allowing for the collection of data in a structurally relevant environment. With these protocols, vital dyes can be applied as a research tool to disease processes and properties of tissue not amenable to cell culture based studies. PMID:22752953
Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.
Fromm, S A; Sachse, C
2016-01-01
Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of latest hardware developments with optimized image processing routines have led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement by the introduction of direct electron detectors has significantly enhanced the image quality and together with improved image processing procedures has made segmented helical reconstruction a very productive cryo-EM structure determination method. © 2016 Elsevier Inc. All rights reserved.
The Next Generation of HLA Image Products
NASA Astrophysics Data System (ADS)
Gaffney, N. I.; Casertano, S.; Ferguson, B.
2012-09-01
We present the re-engineered pipeline, based on existing and improved algorithms, with the aim of improving processing quality, cross-instrument portability, data flow management, and software maintenance. The Hubble Legacy Archive (HLA) is a project to add value to the Hubble Space Telescope data archive by producing and delivering science-ready drizzled data products and source lists derived from these products. Initially, ACS, NICMOS, and WFPC2 data were combined using instrument-specific pipelines based on scripts developed to process the ACS GOODS data, together with a separate set of scripts to generate source extractor and DAOPhot source lists. The new pipeline, initially designed for WFC3 data, isolates instrument-specific processing and is easily extendable to other instruments and to generating wide-area mosaics. Significant improvements have been made in image combination using improved alignment, source detection, and background equalization routines. The pipeline integrates improved alignment procedures, a better noise model, and source list generation within a single code base. Wherever practical, PyRAF-based routines have been replaced with non-IRAF Python libraries (e.g. NumPy and PyFITS). The data formats have been modified to allow better and more consistent propagation of information from individual exposures to the combined products. A new exposure layer stores the effective exposure time for each pixel on the sky, which is key to properly interpreting combined images built from diverse data that were not initially planned to be mosaicked. We worked to improve the validity of the metadata within our FITS headers for these products relative to standard IRAF/PyRAF processing. Any keywords that pertain to individual exposures have been removed from the primary and extension headers and placed in a table extension for more direct and efficient perusal. This mechanism also allows more detailed information on the processing of individual images to be stored and propagated, providing a more hierarchical metadata storage system than key-value-pair FITS headers. In this poster we discuss the changes to the pipeline processing and source list generation, the lessons learned that may be applicable to other archive projects, and our new metadata curation and preservation process.
NASA Technical Reports Server (NTRS)
Jones, J. R.; Bodenheimer, R. E.
1976-01-01
A simple programmable Tse processor organization and arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.
dada - a web-based 2D detector analysis tool
NASA Astrophysics Data System (ADS)
Osterhoff, Markus
2017-06-01
The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored with different detectors, file formats and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines, from pixel binning through azimuthal integration to raster scan processing. Users commonly interact with dada through a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI), which can also be written by hand or by scripts for batch processing.
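A minimal sketch of one of the named routines, azimuthal integration: the 2D detector image is reduced to a 1D radial profile by binning pixels on their distance from the beam center (the center coordinates are hypothetical):

    # Minimal sketch: azimuthal integration of a 2D detector image.
    import numpy as np

    img = np.random.rand(512, 512)              # placeholder detector frame
    cy, cx = 256.0, 256.0                       # assumed beam center (pixels)

    y, x = np.indices(img.shape)
    r = np.hypot(y - cy, x - cx).astype(int)    # integer radial bin per pixel

    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    profile = sums / np.maximum(counts, 1)      # mean intensity per radius
    print(profile[:10])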
Digital images in the map revision process
NASA Astrophysics Data System (ADS)
Newby, P. R. T.
Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.
VizieR Online Data Catalog: Palomar Transient Factory photometric observations (Arcavi+, 2014)
NASA Astrophysics Data System (ADS)
Arcavi, I.; Gal-Yam, A.; Sullivan, M.; Pan, Y.-C.; Cenko, S. B.; Horesh, A.; Ofek, E. O.; De Cia, A.; Yan, L.; Yang, C.-W.; Howell, D. A.; Tal, D.; Kulkarni, S. R.; Tendulkar, S. P.; Tang, S.; Xu, D.; Sternberg, A.; Cohen, J. G.; Bloom, J. S.; Nugent, P. E.; Kasliwal, M. M.; Perley, D. A.; Quimby, R. M.; Miller, A. A.; Theissen, C. A.; Laher, R. R.
2017-04-01
All the events from our archival search were discovered by the Palomar 48 inch Oschin Schmidt Telescope (P48) as part of the PTF survey using the Mould R-band filter. We obtained photometric observations in the R and g bands using P48, and in g, r, and i bands with the Palomar 60 inch telescope (P60; Cenko et al. 2006PASP..118.1396C). Initial processing of the P48 images was conducted by the Infrared Processing and Analysis Center (IPAC; Laher et al. 2014PASP..126..674L). Photometry was extracted using a custom PSF fitting routine (e.g., Sullivan et al. 2006AJ....131..960S), which measures the transient flux after image subtraction (using template images taken before the outburst or long after it faded). (1 data file).
Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.
1997-12-01
Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficient combination of parallel storage access routines and sequential image processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and of pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, A. L.; Biedron, S. G.; Milton, S. V.
At present, a variety of image-based diagnostics are used in particle accelerator systems. Often these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.
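A minimal sketch of the kind of CNN regression described, mapping a virtual cathode image plus two scalar machine settings to downstream beam parameters; the architecture, shapes, and data are illustrative, not the model used at FAST:

    # Minimal sketch: CNN predicting beam parameters from images + settings.
    import numpy as np
    import tensorflow as tf

    img_in = tf.keras.Input(shape=(64, 64, 1))   # virtual cathode laser image
    set_in = tf.keras.Input(shape=(2,))          # gun phase, solenoid strength

    x = tf.keras.layers.Conv2D(8, 3, activation="relu")(img_in)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Concatenate()([x, set_in])
    out = tf.keras.layers.Dense(4)(x)            # several downstream parameters

    model = tf.keras.Model([img_in, set_in], out)
    model.compile(optimizer="adam", loss="mse")

    # Placeholder data standing in for simulation results.
    imgs = np.random.rand(256, 64, 64, 1)
    sets = np.random.rand(256, 2)
    beams = np.random.rand(256, 4)
    model.fit([imgs, sets], beams, epochs=2, batch_size=32, verbose=0)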
Computerized detection of leukocytes in microscopic leukorrhea images.
Zhang, Jing; Zhong, Ya; Wang, Xiangzhou; Ni, Guangming; Du, Xiaohui; Liu, Juanxiu; Liu, Lin; Liu, Yong
2017-09-01
Detection of leukocytes is critical for the routine leukorrhea exam, which is widely used in gynecological examinations. An elevated vaginal leukocyte count in women with bacterial vaginosis is a strong predictor of vaginal or cervical infections. In the routine leukorrhea exam, leukocyte counting is primarily performed by manual techniques; however, viewing and counting leukocytes in multiple high-power fields on a glass slide under a microscope leads to subjectivity, low efficiency, and low accuracy. To date, computerized detection has been studied for many biological cells in stool, blood, and breast cancer, but the detection of leukocytes in microscopic leukorrhea images has not been studied. Thus, there is an increasing need for computerized detection of leukocytes. There are two key processes in the computerized detection of leukocytes in digital image processing: segmentation and intelligent classification. In this paper, we propose a combined ensemble to detect leukocytes in the microscopic leukorrhea image. After image segmentation and selection of likely leukocyte subimages, we obtain the leukocyte candidates. For intelligent classification, we adopt two methods: feature extraction followed by classification with a support vector machine (SVM), and a modified convolutional neural network (CNN) applied to the larger subimages. If the two methods classify a candidate in the same category, the process is finished; if not, their outputs are provided to a further classifier to decide the candidate's category. After acquiring leukocyte candidates, we evaluated three classification approaches. The first, using features and an SVM, achieved 88% sensitivity, 97% specificity, and 92.5% accuracy. The second, using the CNN, achieved 95% sensitivity, 84% specificity, and 89.5% accuracy. The combination approach achieved 92% sensitivity, 95% specificity, and 93.5% accuracy. Finally, images with marked and counted leukocytes were obtained. A novel computerized detection system was developed for automated detection of leukocytes in microscopic images; the individual methods yielded comparable overall quality, and the proposed combined approach further improved performance. This preliminary study proves the feasibility of computerized detection of leukocytes in clinical use. © 2017 American Association of Physicists in Medicine.
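A minimal sketch of the combination rule described above: accept when the SVM and CNN agree, otherwise hand both probability outputs to an arbitrating classifier (all models and features here are placeholders; a logistic regression stands in for the CNN):

    # Minimal sketch: SVM/CNN agreement with an arbitrating second stage.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.random((400, 16))                 # placeholder candidate features
    y = rng.integers(0, 2, 400)               # leukocyte / non-leukocyte labels

    svm = SVC(probability=True).fit(X, y)
    cnn = LogisticRegression(max_iter=500).fit(X, y)   # stand-in for the CNN
    arbiter = LogisticRegression(max_iter=500).fit(
        np.c_[svm.predict_proba(X), cnn.predict_proba(X)], y)

    p_svm, p_cnn = svm.predict_proba(X), cnn.predict_proba(X)
    agree = p_svm.argmax(1) == p_cnn.argmax(1)
    # Agreeing candidates keep the shared label; the rest go to the arbiter.
    labels = np.where(agree, p_svm.argmax(1),
                      arbiter.predict(np.c_[p_svm, p_cnn]))
    print(labels[:10])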
ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox
NASA Astrophysics Data System (ADS)
Mede, Kyle; Brandt, Timothy D.
2017-08-01
ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.
Radiomics: Images Are More than Pictures, They Are Data
Kinahan, Paul E.; Hricak, Hedvig
2016-01-01
In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer. PMID:26579733
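As a concrete illustration of the "first-order statistics" mentioned above, the sketch below computes common histogram-based radiomic descriptors with NumPy/SciPy; the bin count and feature set are illustrative choices, not a radiomics standard.

```python
# Hedged sketch of first-order radiomic features: descriptors computed from the
# intensity distribution of a region of interest, ignoring spatial arrangement.
import numpy as np
from scipy import stats

def first_order_features(roi):
    """roi: 1-D array of voxel intensities inside a segmented lesion."""
    hist, _ = np.histogram(roi, bins=64)
    p = hist[hist > 0] / hist.sum()          # histogram probabilities
    return {
        "mean": roi.mean(),
        "variance": roi.var(),
        "skewness": stats.skew(roi),
        "kurtosis": stats.kurtosis(roi),
        "entropy": -np.sum(p * np.log2(p)),  # Shannon entropy of the histogram
    }

roi = np.random.default_rng(1).normal(100.0, 15.0, 5000)
print(first_order_features(roi))
```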
Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan
2012-01-01
Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
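The adaptive weighting idea can be sketched as follows. This is not the published MDP definition: the per-voxel weight below is a stand-in margin-based contrast measure, and the class models are simple Gaussians with assumed means and widths.

```python
# Hedged sketch of voxel-wise adaptive weighting of PET and CT evidence; the
# "discriminatory power" here is an illustrative stand-in for the paper's MDP.
import numpy as np

def gaussian_likelihoods(img, means, sigmas):
    # per-voxel likelihood of each class (GM, WM, CSF), shape (..., 3)
    return np.exp(-0.5 * ((img[..., None] - means) / sigmas) ** 2) / sigmas

pet = np.random.rand(32, 32)
ct = np.random.rand(32, 32)
L_pet = gaussian_likelihoods(pet, np.array([0.2, 0.5, 0.8]), np.array([0.1] * 3))
L_ct = gaussian_likelihoods(ct, np.array([0.3, 0.6, 0.9]), np.array([0.1] * 3))

def discriminatory_power(L):
    # stand-in: margin between best and second-best class likelihood
    s = np.sort(L, axis=-1)
    return s[..., -1] - s[..., -2]

w = discriminatory_power(L_pet) / (
    discriminatory_power(L_pet) + discriminatory_power(L_ct) + 1e-12)
combined = w[..., None] * L_pet + (1 - w[..., None]) * L_ct
labels = combined.argmax(axis=-1)     # 0=GM, 1=WM, 2=CSF (illustrative)
```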
Automated management for pavement inspection system (AMPIS)
NASA Astrophysics Data System (ADS)
Chung, Hung Chi; Girardello, Roberto; Soeller, Tony; Shinozuka, Masanobu
2003-08-01
An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results, not only for transportation engineers to manage road surveying documentation, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of a Pavement Management System (PMS) and its implementation indicates that the proposed approach of using the GIS concept and its tools for PMS applications will reshape PMS into a new information technology-based system providing convenient and efficient pavement inspection and management.
GIS-based automated management of highway surface crack inspection system
NASA Astrophysics Data System (ADS)
Chung, Hung-Chi; Shinozuka, Masanobu; Soeller, Tony; Girardello, Roberto
2004-07-01
An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results, not only for transportation engineers to manage road surveying documentation, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of a Pavement Management System (PMS) and its implementation indicates that the proposed approach of using the GIS concept and its tools for PMS applications will reshape PMS into a new information technology-based system that can provide convenient and efficient pavement inspection and management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Yuanyuan; Browning, Nigel D.
As gas-solid heterogeneous catalytic reactions are molecular in nature, a full mechanistic understanding of the process requires atomic scale characterization under realistic operating conditions. While atomic resolution imaging has become routine in modern high-vacuum (scanning) transmission electron microscopy ((S)TEM), both image quality and resolution nominally degrade when reaction gases are introduced. In this work, we systematically assess the effects of different gases at various pressures on the quality and resolution of images obtained at room temperature in the annular dark field STEM imaging mode using a differentially pumped (DP) gas cell. This imaging mode is largely free from inelastic scattering effects induced by the presence of gases and retains good imaging properties over a wide range of gas masses/pressures. We demonstrate the application of the ESTEM with atomic resolution images of a complex oxide alkane oxidation catalyst MoVNbTeOx (M1) immersed in light and heavy gas environments.
Epperla, Narendranath; Shah, Namrata; Hamadani, Mehdi; Richardson, Kristin; Kapke, Jonathan T; Patel, Asmita; Teegavarapu, Sravanthi P; Carrum, George; Hari, Parameswaran N; Pingali, Sai R; Karmali, Reem; Fenske, Timothy S
2016-12-01
For patients with relapsed or refractory diffuse large B-cell lymphoma (DLBCL), autologous hematopoietic cell transplantation (auto-HCT) is commonly used. After auto-HCT, DLBCL patients are often monitored with surveillance imaging; however, there is little evidence to support this practice. We performed a multicenter retrospective study of DLBCL patients who underwent auto-HCT (n = 160), who experienced complete remission after transplantation, and who then underwent surveillance imaging. Of these, only 45 patients experienced relapse after day +100 after auto-HCT, with relapse detected by routine imaging in 32 (71%) and detected clinically in 13 (29%). Baseline patient characteristics were similar between the 2 groups. Comparing the radiographically and clinically detected relapse groups, the median time from diagnosis to auto-HCT (389 days vs. 621 days, P = .06) and the median follow-up after auto-HCT (2464 days vs. 1593 days, P = .60) were similar. The median time to relapse after auto-HCT was 191 days for radiographically detected relapses compared to 492 days for clinically detected relapses (P = .35), and median postrelapse survival was 359 days in patients with radiographically detected relapse compared to 123 days in patients with clinically detected relapse (P = .36). However, the median posttransplantation overall survival was not significantly different for patients with relapse detected by routine imaging versus relapse detected clinically (643 vs. 586 days, P = .68). A majority (71%) of DLBCL relapses after auto-HCT are detected by routine surveillance imaging. Overall, there appears to be limited utility for routine imaging after auto-HCT except in select cases where earlier detection and salvage therapy with allogeneic HCT is a potential option. Copyright © 2016 Elsevier Inc. All rights reserved.
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
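The notion of flagging slices by intensity discontinuity can be illustrated with a toy check. The sketch below omits the corrections that distinguish the published cISID criterion; the neighbour-interpolation rule and the 3-sigma cutoff are assumptions.

```python
# Hedged sketch of an inter-slice intensity-discontinuity check in the spirit
# of cISID: a slice whose mean intensity departs sharply from its neighbours
# is flagged as motion-corrupted.
import numpy as np

def slice_discontinuity(volume, n_sigma=3.0):
    """volume: (n_slices, H, W) diffusion-weighted image; returns flagged slices."""
    m = volume.reshape(volume.shape[0], -1).mean(axis=1)
    # discontinuity: deviation of each slice from the mean of its two neighbours
    d = np.abs(m[1:-1] - 0.5 * (m[:-2] + m[2:]))
    thresh = d.mean() + n_sigma * d.std()
    return 1 + np.where(d > thresh)[0]    # interior slice indices flagged

vol = np.random.rand(40, 64, 64)
vol[17] *= 0.3                            # simulate a signal-dropout slice
print(slice_discontinuity(vol))           # -> [17]
```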
Embedded real-time image processing hardware for feature extraction and clustering
NASA Astrophysics Data System (ADS)
Chiu, Lihu; Chang, Grant
2003-08-01
Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and "c" code. Rapidly prototyping the system using DSP-based "c" code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task between the FPGA and the DSP implementation. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed, and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each algorithm should be executed. This involves prototyping all the code in the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if a further reduction in time is needed, partitioned into tasks that the FPGA can perform.
MARS-MD: rejection based image domain material decomposition
NASA Astrophysics Data System (ADS)
Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.
2018-05-01
This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
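A minimal sketch of the three-part heuristic follows, with SciPy's non-negative least squares as the constrained sub-problem solver and lowest residual as a simplified stand-in for the Segmentation/Angular Rejection criteria; the 8-bin, 5-material system is synthetic.

```python
# Hedged sketch of the MARS-MD heuristic: enumerate material sub-problems,
# solve each with a non-negativity constraint, and keep the single solution
# that survives a (here: lowest-residual) rejection criterion.
import itertools
import numpy as np
from scipy.optimize import nnls

A = np.random.rand(8, 5)                   # 8 energy bins x 5 candidate materials
y = A[:, [1, 3]] @ np.array([0.7, 0.2])    # attenuation from 2 true materials

best = None
for subset in itertools.combinations(range(5), 2):   # sub-problems: 2 materials
    x, resid = nnls(A[:, subset], y)                 # constrained solve
    if best is None or resid < best[2]:              # rejection: keep the
        best = (subset, x, resid)                    # lowest-residual sub-problem

subset, x, _ = best
dens = np.zeros(5)
dens[list(subset)] = x        # sparse material solution for this voxel
print(dens)                   # non-zero only for the selected materials
```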
NASA Astrophysics Data System (ADS)
Weihusen, Andreas; Ritter, Felix; Kröger, Tim; Preusser, Tobias; Zidowitz, Stephan; Peitgen, Heinz-Otto
2007-03-01
Image-guided radiofrequency (RF) ablation has become a significant part of clinical routine as a minimally invasive method for the treatment of focal liver malignancies. Medical imaging is used in all parts of the clinical workflow of an RF ablation, incorporating treatment planning, interventional targeting, and result assessment. This paper describes a software application designed to support the RF ablation workflow under the constraints of clinical routine, such as easy user interaction and a high degree of robust and fast automatic procedures, in order to keep the physician from spending too much time at the computer. The application therefore provides a collection of specialized image processing and visualization methods for treatment planning and result assessment. The algorithms are adapted to CT as well as to MR imaging. The planning support contains semi-automatic methods for the segmentation of liver tumors and the surrounding vascular system, an interactive virtual positioning of RF applicators, and a concluding numerical estimation of the achievable heat distribution. The assessment of the ablation result is supported by segmentation of the coagulative necrosis and an interactive registration of pre- and post-interventional image data for the comparison of tumor and necrosis segmentation masks. An automatic quantification of surface distances is performed to verify the embedding of the tumor area within the thermal lesion area. The visualization methods support representations in the commonly used orthogonal 2D view as well as in 3D scenes.
Shahzad, Khalid; Menon, Ashok; Turner, Paul; Ward, Jeremy; Pursnani, Kishore; Alkhaffaf, Bilal
2015-08-01
The prompt recognition of complications is essential in reducing morbidity following anti-reflux surgery. Consequently, many centres employ a policy of routine post-operative contrast studies. The study aimed to examine whether routine contrast studies recognised early post-operative complications following anti-reflux surgery more effectively than selective use. This was a retrospective analysis of 240 adults who had undergone primary anti-reflux surgery. Selective use of water-soluble contrast swallows was employed for 115 patients (Group 1), while 125 patients (Group 2) had routine studies. Ten patients (8.7%) from Group 1 underwent contrast studies, four (40%) of which were abnormal. Routine studies in Group 2 identified thirty-two abnormalities (27%); however, the inter-group difference was not significant (p = 0.32). Only one case from Group 2 required immediate re-intervention, which was not statistically significant (p = 0.78). Multivariate analysis found no significant association between selective or routine imaging and re-intervention rates. One patient from Group 2 presented three days following discharge with wrap migration requiring reoperation despite a normal post-operative study. Routine use of contrast imaging following anti-reflux and hiatus hernia surgery is not necessary: it does not identify a significantly greater number of post-operative complications than selective use, nor does it ensure the diagnosis of all complications in the post-operative period. Copyright © 2015 IJS Publishing Group Limited. Published by Elsevier Ltd. All rights reserved.
Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas
2011-01-01
In order to improve the reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation methods' parameters, which is time consuming and challenging for biologists with no knowledge of image processing. To avoid this, the parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed transform based segmentation routine is replaced by a fast marching level set based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
Open source bioimage informatics for cell biology.
Swedlow, Jason R; Eliceiri, Kevin W
2009-11-01
Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.
Automatic Camera Orientation and Structure Recovery with Samantha
NASA Astrophysics Data System (ADS)
Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.
2011-09-01
SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process both calibrated and uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, comparing them with laser scans serving as ground truth.
Astronomical Image Processing with Hadoop
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-07-01
In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
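The map-reduce structure of coaddition can be shown without any Hadoop plumbing. In the sketch below, plain Python stands in for the Hadoop runtime, the images are assumed to be already registered to sky cells, and per-pixel averaging stands in for the pipeline's stacking step.

```python
# Hedged sketch of map-reduce image coaddition: map emits (sky cell, image)
# pairs, reduce stacks all contributions per cell. Dataflow only; no Hadoop.
import collections
import numpy as np

def map_phase(images):
    """images: iterable of (sky_cell_id, pixel_array) after registration."""
    for cell, pixels in images:
        yield cell, pixels                 # emit (key, value) pairs

def reduce_phase(pairs):
    groups = collections.defaultdict(list)
    for cell, pixels in pairs:
        groups[cell].append(pixels)
    # coadd: per-pixel mean of all overlapping exposures in each cell
    return {cell: np.mean(stack, axis=0) for cell, stack in groups.items()}

exposures = [("cell_42", np.random.rand(16, 16)) for _ in range(5)]
coadds = reduce_phase(map_phase(exposures))
print(coadds["cell_42"].shape)             # one coadded image per sky cell
```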
Vision based tunnel inspection using non-rigid registration
NASA Astrophysics Data System (ADS)
Badshah, Amir; Ullah, Shan; Shahzad, Danish
2015-04-01
The growing number of long tunnels across the globe has increased the need for safety measurements and regular inspection. To avoid serious damage, tunnel inspection at regular intervals is recommended so that deformations or cracks are found in time. Conventional geodetic surveying and other manual or mechanical methods satisfy stringent safety and tunnel accessibility standards but are time consuming and disrupt routine operation. An automatic tunnel inspection based on image processing with non-rigid registration is therefore proposed. Many image processing methods are used for image registration. Most operate on images in the spatial domain, for example finding edges and corners with the Harris detector; such methods are quite time consuming and can fail for blurred or noisy images. Because they use image features directly, they are collectively known as feature-based correlation. The alternative is featureless correlation, in which images are transformed into the frequency domain and correlated there; a shift in the spatial domain corresponds to a shift in the frequency domain, but the processing is an order of magnitude faster. In the proposed method, a modified normalized phase correlation is used to find the shift between two images. As pre-processing, the reference and template tunnel images are divided into small patches, and corresponding patches are registered with the proposed modified normalized phase correlation. The algorithm yields the pixel displacement between the images, which is then converted into measurement units such as millimetres or centimetres, so that any shift of the tunnel at the inspected points can be located.
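A minimal sketch of the frequency-domain registration step follows: plain normalized phase correlation recovering an integer pixel shift between two patches, with an assumed millimetre-per-pixel calibration standing in for the paper's unit conversion. The "modified" aspects of the published method are not reproduced.

```python
# Hedged sketch of normalized phase correlation: the peak of the inverse FFT of
# the normalized cross-power spectrum gives the integer pixel shift.
import numpy as np

def phase_correlation_shift(ref, tmpl):
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(tmpl)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real  # normalized
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half-range to negative shifts (circular wrap)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

ref = np.random.rand(64, 64)
tmpl = np.roll(ref, shift=(3, -5), axis=(0, 1))   # known displacement
dy, dx = phase_correlation_shift(ref, tmpl)       # shift re-aligning tmpl to ref
print(dy, dx)                                     # -> -3 5
mm_per_pixel = 0.8                                # assumed calibration factor
print(dx * mm_per_pixel, "mm")
```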
Automatic analysis of microscopic images of red blood cell aggregates
NASA Astrophysics Data System (ADS)
Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.
2015-06-01
Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adapted as a routine in hemorheological and clinical biochemistry laboratories because this automatic method is rapid, efficient, and economical, and at the same time independent of the user performing the analysis, ensuring repeatability.
Plant features measurements for robotics
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1989-01-01
Initial studies of the technical feasibility of using machine vision and color image processing to measure plant health were performed. Wheat plants were grown in nutrient solutions deficient in nitrogen, potassium, and iron. An additional treatment imposed water stress on wheat plants which received a full complement of nutrients. The results for juvenile (less than 2 weeks old) wheat plants show that imaging technology can be used to detect nutrient deficiencies. The relative amount of green color in a leaf declined with increased water stress. The absolute amount of green was higher for nitrogen deficient leaves compared to the control plants. Relative greenness was lower for iron deficient leaves, but the absolute green values were higher. The data showed patterns across the leaf consistent with visual symptoms. The development of additional color image processing routines to recognize these patterns would improve the performance of this sensor of plant health.
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three-dimensions are demonstrated.
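For readers unfamiliar with the descriptor, the sketch below computes a HOG vector for a toy precipitate patch using scikit-image; the patch, cell, and block sizes are illustrative choices, not the paper's settings.

```python
# Hedged sketch: describing a micrograph patch with the histogram of oriented
# gradients (HOG), so shape statistics can be compared across patches without
# ever computing a segmentation.
import numpy as np
from skimage.feature import hog
from skimage.draw import disk

patch = np.zeros((64, 64))
rr, cc = disk((32, 32), 14)
patch[rr, cc] = 1.0                  # toy "precipitate" in the patch

descriptor = hog(
    patch,
    orientations=9,                  # gradient-direction bins
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print(descriptor.shape)              # one fixed-length shape descriptor
```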
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barty, Christopher P.J.
Lasers and laser-based sources are now routinely used to control and manipulate nuclear processes, e.g. fusion, fission and resonant nuclear excitation. Two such “nuclear photonics” activities with the potential for profound societal impact will be reviewed in this presentation: the pursuit of laser-driven inertial confinement fusion at the National Ignition Facility and the development of laser-based, mono-energetic gamma-rays for isotope-specific detection, assay and imaging of materials.
MARS spectral molecular imaging of lamb tissue: data collection and image analysis
NASA Astrophysics Data System (ADS)
Aamir, R.; Chernoglazov, A.; Bateman, C. J.; Butler, A. P. H.; Butler, P. H.; Anderson, N. G.; Bell, S. T.; Panta, R. K.; Healy, J. L.; Mohr, J. L.; Rajendran, K.; Walsh, M. F.; de Ruiter, N.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Brooke, L.; Abdul-Majid, S.; Clyne, M.; Glendenning, R.; Bones, P. J.; Billinghurst, M.; Bartneck, C.; Mandalika, H.; Grasset, R.; Schleich, N.; Scott, N.; Nik, S. J.; Opie, A.; Janmale, T.; Tang, D. N.; Kim, D.; Doesburg, R. M.; Zainon, R.; Ronaldson, J. P.; Cook, N. J.; Smithies, D. J.; Hodge, K.
2014-02-01
Spectral molecular imaging is a new imaging technique able to discriminate and quantify different components of tissue simultaneously at high spatial and high energy resolution. Our MARS scanner is an x-ray based small animal CT system designed to be used in the diagnostic energy range (20-140 keV). In this paper, we demonstrate the use of the MARS scanner, equipped with the Medipix3RX spectroscopic photon-processing detector, to discriminate fat, calcium, and water in tissue. We present data collected from a sample of lamb meat including bone as an illustrative example of human tissue imaging. The data is analyzed using our 3D Algebraic Reconstruction Algorithm (MARS-ART) and by material decomposition based on a constrained linear least squares algorithm. The results presented here clearly show the quantification of lipid-like, water-like and bone-like components of tissue. However, it is also clear to us that better algorithms could extract more information of clinical interest from our data. Because we are one of the first to present data from multi-energy photon-processing small animal CT systems, we make the raw, partially and fully processed data available with the intention that others can analyze it using their familiar routines. The raw, partially processed and fully processed data of lamb tissue along with the phantom calibration data can be found at http://hdl.handle.net/10092/8531.
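The constrained linear least squares step can be sketched per voxel. The basis matrix below is invented for illustration (it is not MARS calibration data); the non-negativity bounds are the constraint.

```python
# Hedged sketch of per-voxel material decomposition by constrained linear least
# squares: solve for non-negative amounts of water-, lipid- and bone-like basis
# materials given the reconstructed attenuation in each energy bin.
import numpy as np
from scipy.optimize import lsq_linear

# rows: energy bins, columns: water / lipid / bone effective attenuation
M = np.array([[1.00, 0.90, 2.80],
              [0.80, 0.70, 1.90],
              [0.60, 0.55, 1.20],
              [0.45, 0.40, 0.80]])

voxel = M @ np.array([0.6, 0.3, 0.1])              # synthetic measurement
res = lsq_linear(M, voxel, bounds=(0.0, np.inf))   # non-negativity constraint
print(res.x)                                       # recovered material fractions
```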
Using Cell-ID 1.4 with R for Microscope-Based Cytometry
Bush, Alan; Chernomoretz, Ariel; Yu, Richard; Gordon, Andrew
2012-01-01
This unit describes a method for quantifying various cellular features (e.g., volume, total and subcellular fluorescence localization) from sets of microscope images of individual cells. It includes procedures for tracking cells over time. One purposefully defocused transmission image (sometimes referred to as bright-field or BF) is acquired to segment the image and locate each cell. Fluorescent images (one for each of the color channels to be analyzed) are then acquired by conventional wide-field epifluorescence or confocal microscopy. This method uses the image processing capabilities of Cell-ID (Gordon et al., 2007, as updated here) and data analysis by the statistical programming framework R (R-Development-Team, 2008), which we have supplemented with a package of routines for analyzing Cell-ID output. Both Cell-ID and the analysis package are open-source. PMID:23026908
Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays
Galati, Domenico F.; Abuin, David S.; Tauber, Gabriel A.; Pham, Andrew T.; Pearson, Chad G.
2016-01-01
Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional geometry (3D). Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs. PMID:26700722
NASA Astrophysics Data System (ADS)
Doin, Marie-Pierre; Lodge, Felicity; Guillaso, Stephane; Jolivet, Romain; Lasserre, Cecile; Ducret, Gabriel; Grandin, Raphael; Pathier, Erwan; Pinel, Virginie
2012-01-01
We assemble a processing chain that handles InSAR computation from raw data to time series analysis. A large part of the chain (from raw data to geocoded unwrapped interferograms) is based on ROI PAC modules (Rosen et al., 2004), with original routines rearranged and combined with new routines to process in series and in a common radar geometry all SAR images and interferograms. A new feature of the software is the range-dependent spectral filtering to improve coherence in interferograms with long spatial baselines. Additional components include a module to estimate and remove digital elevation model errors before unwrapping, a module to mitigate the effects of the atmospheric phase delay and remove residual orbit errors, and a module to construct the phase change time series from small baseline interferograms (Berardino et al. 2002). This paper describes the main elements of the processing chain and presents an example of application of the software using a data set from the ENVISAT mission covering the Etna volcano.
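The small-baseline time-series construction reduces, at each pixel, to a linear inversion over the network of date pairs. A minimal sketch, assuming noise-free unwrapped phases and a fixed reference date:

```python
# Hedged sketch of the small-baseline inversion: each interferogram measures a
# phase difference between two dates, so per-date phase is recovered (up to a
# reference) by least squares on the date-pair network.
import numpy as np

dates = 5
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]   # small baselines
true = np.array([0.0, 0.4, 1.1, 1.5, 2.2])                 # phase per date

A = np.zeros((len(pairs), dates))
for row, (i, j) in enumerate(pairs):
    A[row, j], A[row, i] = 1.0, -1.0     # interferogram = phase(j) - phase(i)
obs = A @ true                           # synthetic unwrapped interferograms

A_ref = A[:, 1:]                         # fix date 0 as reference (phase 0)
est, *_ = np.linalg.lstsq(A_ref, obs, rcond=None)
print(np.r_[0.0, est])                   # recovered phase-change time series
```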
On the Implementation of a Land Cover Classification System for SAR Images Using Khoros
NASA Technical Reports Server (NTRS)
Medina Revera, Edwin J.; Espinosa, Ramon Vasquez
1997-01-01
The Synthetic Aperture Radar (SAR) sensor is widely used to record data about the ground under all atmospheric conditions. SAR images have very good resolution, which necessitates the development of a classification system that processes them to extract useful information for different applications. In this work, a complete system for land cover classification was designed and programmed using Khoros, a data flow visual language environment, taking full advantage of the polymorphic data services that it provides. Image analysis was applied to SAR images to improve and automate the recognition and classification of different regions such as mountains and lakes. Both unsupervised and supervised classification utilities were used. The unsupervised classification routines included several classification/clustering algorithms such as K-means, ISO2, Weighted Minimum Distance, and the Localized Receptive Field (LRF) training/classifier. Different texture analysis approaches such as Invariant Moments, Fractal Dimension, and Second Order statistics were implemented for supervised classification of the images. The results and conclusions for SAR image classification using the various unsupervised and supervised procedures are presented based on their accuracy and performance.
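Of the clustering algorithms listed, K-means is the simplest to illustrate. The sketch below clusters per-pixel vectors of a synthetic multi-band image with scikit-learn; the band count and four classes are assumptions, and real SAR work would typically use texture features rather than raw bands.

```python
# Hedged sketch of unsupervised land-cover classification: K-means clustering
# of per-pixel feature vectors yields a class map.
import numpy as np
from sklearn.cluster import KMeans

bands, h, w = 3, 128, 128
img = np.random.rand(bands, h, w)          # stand-in SAR bands / texture maps
pixels = img.reshape(bands, -1).T          # (h*w, bands) feature vectors

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
class_map = labels.reshape(h, w)           # e.g. mountains, lakes, ...
print(np.bincount(labels))                 # pixels assigned per class
```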
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykac, Deniz; Chaum, Edward; Fox, Karen
A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.
Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i
Patrick, Matthew R.; Swanson, Don; Orr, Tim R.
2016-01-01
Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
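One of the three segmentation approaches, temperature thresholding, is easy to sketch. The example below thresholds a synthetic thermal frame, keeps the largest hot component as the lake, and uses its bounding box as a crude level proxy; the threshold value and the level proxy are assumptions, not the HVO routine.

```python
# Hedged sketch of the thresholding branch of lake tracking: binarize the
# thermal image, keep the largest hot region, and extract its margin.
import numpy as np
from skimage.measure import label, regionprops

thermal = np.random.rand(240, 320) * 50 + 10    # stand-in thermal frame (deg C)
thermal[140:200, 100:220] += 400                # synthetic "lava lake"

hot = thermal > 200.0                           # temperature threshold
regions = label(hot)                            # connected components
props = regionprops(regions)
lake = max(props, key=lambda r: r.area)         # largest hot component
min_row = lake.bbox[0]                          # top of lake in image coords
print("lake top row (proxy for relative lava level):", min_row)
```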
Practical considerations of image analysis and quantification of signal transduction IHC staining.
Grunkin, Michael; Raundahl, Jakob; Foged, Niels T
2011-01-01
The dramatic increase in computer processing power, in combination with the availability of high-quality digital cameras during the last 10 years, has fertilized the grounds for quantitative microscopy based on digital image analysis. With the present introduction of robust scanners for whole slide imaging in both research and routine, the benefits of automation and objectivity in the analysis of tissue sections will be even more obvious. For in situ studies of signal transduction, the combination of tissue microarrays, immunohistochemistry, digital imaging, and quantitative image analysis will be central operations. However, immunohistochemistry is a multistep procedure with many technical pitfalls that lead to intra- and interlaboratory variability of its outcome. The resulting variations in staining intensity and disruption of original morphology are an extra challenge for the image analysis software, which therefore should preferably be dedicated to the detection and quantification of histomorphometrical end points.
Panretinal, high-resolution color photography of the mouse fundus.
Paques, Michel; Guyomard, Jean-Laurent; Simonutti, Manuel; Roux, Michel J; Picaud, Serge; Legargasson, Jean-François; Sahel, José-Alain
2007-06-01
To analyze high-resolution color photographs of the mouse fundus. A contact fundus camera based on topical endoscopy fundus imaging (TEFI) was built. Fundus photographs of C57 and Balb/c mice obtained by TEFI were qualitatively analyzed. High-resolution digital imaging of the fundus, including the ciliary body, was routinely obtained. The reflectance and contrast of retinal vessels varied significantly with the amount of incident and reflected light and, thus, with the degree of fundus pigmentation. The combination of chromatic and spherical aberration favored blue light imaging, in terms of both field and contrast. TEFI is a small, low-cost system that allows high-resolution color fundus imaging and fluorescein angiography in conscious mice. Panretinal imaging is facilitated by the presence of the large rounded lens. TEFI significantly improves the quality of in vivo photography of the retina and ciliary processes of mice. Resolution is, however, affected by chromatic aberration, and should be improved by monochromatic imaging.
The path to COVIS: A review of acoustic imaging of hydrothermal flow regimes
NASA Astrophysics Data System (ADS)
Bemis, Karen G.; Silver, Deborah; Xu, Guangyu; Light, Russ; Jackson, Darrell; Jones, Christopher; Ozer, Sedat; Liu, Li
2015-11-01
Acoustic imaging of hydrothermal flow regimes started with the incidental recognition of a plume on a routine sonar scan for obstacles in the path of the human-occupied submersible ALVIN. Developments in sonar engineering, acoustic data processing and scientific visualization have been combined to develop technology which can effectively capture the behavior of focused and diffuse hydrothermal discharge. This paper traces the development of these acoustic imaging techniques for hydrothermal flow regimes from their conception through to the development of the Cabled Observatory Vent Imaging Sonar (COVIS). COVIS has monitored such flow eight times a day for several years. Successful acoustic techniques for estimating plume entrainment, bending, vertical rise, volume flux, and heat flux are presented as is the state-of-the-art in diffuse flow detection.
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm - mask generation. Its main goal is to handle specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for these specific nodule cases. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release - the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang
2015-04-01
Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
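The multi-tiered classification idea can be sketched as a two-stage cascade. The sketch below uses two generic scikit-learn SVMs on synthetic features; the tier-1 threshold and the feature construction are assumptions, not the authors' C. elegans classifiers.

```python
# Hedged sketch of a multi-tiered classifier cascade: a cheap first-tier model
# discards most candidate regions, and a second tier classifies the survivors.
import numpy as np
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))                # stand-in image descriptors
y = (X[:, :2].sum(axis=1) > 0).astype(int)    # 1 = structure of interest

tier1 = LinearSVC().fit(X, y)                 # fast screen over all candidates
keep = tier1.decision_function(X) > -0.5      # permissive threshold (assumed)
tier2 = SVC(kernel="rbf").fit(X[keep], y[keep])   # refined decision

final = np.zeros_like(y)
final[keep] = tier2.predict(X[keep])          # survivors get tier-2 labels
print((final == y).mean())
```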
Guidance for Efficient Small Animal Imaging Quality Control.
Osborne, Dustin R; Kuntner, Claudia; Berr, Stuart; Stout, David
2017-08-01
Routine quality control is a critical aspect of properly maintaining high-performance small animal imaging instrumentation. A robust quality control program helps produce more reliable data both for academic purposes and as proof of system performance for contract imaging work. For preclinical imaging laboratories, the combination of costs and available resources often limits their ability to produce efficient and effective quality control programs. This work presents a series of simplified quality control procedures that are accessible to a wide range of preclinical imaging laboratories. Our intent is to provide minimum guidelines for routine quality control that can assist preclinical imaging specialists in setting up an appropriate quality control program for their facility.
Practical Considerations for Clinical PET/MR Imaging.
Galgano, Samuel; Viets, Zachary; Fowler, Kathryn; Gore, Lael; Thomas, John V; McNamara, Michelle; McConathy, Jonathan
2018-01-01
Clinical PET/MR imaging is currently performed at a number of centers around the world as part of routine standard of care. This article focuses on issues and considerations for a clinical PET/MR imaging program, focusing on routine standard-of-care studies. Although local factors influence how clinical PET/MR imaging is implemented, the approaches and considerations described here intend to apply to most clinical programs. PET/MR imaging provides many more options than PET/computed tomography with diagnostic advantages for certain clinical applications but with added complexity. A recurring theme is matching the PET/MR imaging protocol to the clinical application to balance diagnostic accuracy with efficiency. Copyright © 2017 Elsevier Inc. All rights reserved.
Individualized radiotherapy by combining high-end irradiation and magnetic resonance imaging.
Combs, Stephanie E; Nüsslin, Fridtjof; Wilkens, Jan J
2016-04-01
Image-guided radiotherapy (IGRT) has been integrated into daily clinical routine and can today be considered the standard, especially with high-dose radiotherapy. Currently imaging is based on MV- or kV-CT, which has clear limitations, especially in soft-tissue contrast. Thus, the combination of magnetic resonance (MR) imaging and high-end radiotherapy opens a new horizon. The intricate technical properties of MR imagers pose a challenge when combined with radiation technology. Several solutions that are almost ready for routine clinical application have been developed. The clinical questions include dose-escalation strategies, monitoring of changes during treatment, as well as imaging without additional radiation exposure during treatment.
Imaging of gaseous oxygen through DFB laser illumination
NASA Astrophysics Data System (ADS)
Cocola, L.; Fedel, M.; Tondello, G.; Poletto, L.
2016-05-01
A Tunable Diode Laser Absorption Spectroscopy (TDLAS) setup with Wavelength Modulation has been used together with a synchronously sampled imaging sensor to obtain two-dimensional transmission-mode images of oxygen content. Modulated laser light from a 760 nm DFB source was used to illuminate a scene from the back while image frames were acquired with a high dynamic range camera. Thanks to synchronous timing between the imaging device and the laser modulation, the traditional lock-in approach used in Wavelength Modulation Spectroscopy (WMS) was replaced by image processing techniques, and many scanning periods were averaged together to resolve small intensity variations in the already weak absorption signals of the oxygen absorption band. After proper binning and filtering, the time-domain waveform obtained from each pixel in a set of frames representing the wavelength scan was treated as the single-detector signal of a traditional TDLAS-WMS setup, and processed through a software-defined digital lock-in demodulation and a second-harmonic signal fitting routine. In this way, the WMS artifacts of a gas absorption feature were obtained for each pixel together with an intensity normalization parameter, allowing reconstruction of the oxygen distribution in a two-dimensional scene independently of the broadband transmitted intensity. As a first demonstration of the effectiveness of this setup, oxygen absorption images of similar containers filled with either oxygen or nitrogen were acquired and processed.
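The software lock-in at the second harmonic can be sketched for a single pixel's time series. The sampling rate, modulation frequency, and injected 2f amplitude below are arbitrary illustration values.

```python
# Hedged sketch of software lock-in demodulation at the second harmonic (2f):
# multiply by quadrature references at 2f and low-pass by averaging over whole
# modulation periods, yielding the WMS-2f magnitude for one pixel.
import numpy as np

fs, f_mod, n = 200_000.0, 10_000.0, 4000   # sample rate, modulation freq, samples
t = np.arange(n) / fs

# stand-in pixel signal: weak absorption-induced 2f component plus noise
signal = 1.0 + 1e-3 * np.cos(2 * np.pi * 2 * f_mod * t + 0.3)
signal += 1e-4 * np.random.default_rng(2).normal(size=n)

ref_i = np.cos(2 * np.pi * 2 * f_mod * t)  # in-phase reference at 2f
ref_q = np.sin(2 * np.pi * 2 * f_mod * t)  # quadrature reference at 2f
X = 2 * np.mean(signal * ref_i)            # low-pass = mean over full periods
Y = 2 * np.mean(signal * ref_q)
print(np.hypot(X, Y))                      # ~1e-3: recovered 2f amplitude
```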
Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues
NASA Astrophysics Data System (ADS)
Lazaridou, M. A.; Karagianni, A. Ch.
2016-06-01
Scientific and professional interests of civil engineering mainly include structures, hydraulics, geotechnical engineering, environment, and transportation issues. Topics in this context may concern urban environment issues, urban planning, hydrological modelling, the study of hazards, and road construction. Land cover information contributes significantly to the study of the above subjects. It can be acquired effectively by visual interpretation of satellite imagery, after applying enhancement routines, or by image classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium spatial resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, radiometric quantization of 12 bits, the capability of merging the 15 m panchromatic band with the 30 m multispectral imagery, and the free data policy. In this paper, Landsat 8 multispectral and panchromatic imagery is used, concerning the surroundings of a lake in north-western Greece. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image by applying image fusion (pansharpening), thereby facilitating visual image interpretation to delineate land cover. Further processing concerns supervised image classification. The classification of the pansharpened image preceded the multispectral image classification. Corresponding comparative considerations are also presented.
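A simple ratio-based (Brovey-type) pansharpening step illustrates the fusion idea; actual Landsat 8 processing in commercial software typically uses more refined algorithms, and the band data here are synthetic.

```python
# Hedged sketch of Brovey-type pansharpening: scale each upsampled multispectral
# band by the ratio of the panchromatic band to the multispectral intensity.
import numpy as np

def brovey(ms_up, pan, eps=1e-6):
    """ms_up: (bands, H, W) multispectral resampled to pan grid; pan: (H, W)."""
    intensity = ms_up.mean(axis=0)
    return ms_up * (pan / (intensity + eps))[None, :, :]

ms = np.random.rand(3, 128, 128)    # stand-in 30 m bands, upsampled to pan grid
pan = np.random.rand(128, 128)      # stand-in 15 m panchromatic band
sharp = brovey(ms, pan)
print(sharp.shape)                  # multispectral colors with pan detail
```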
NASA Astrophysics Data System (ADS)
Taha, Z.; Razman, M. A. M.; Adnan, F. A.; Ghani, A. S. Abdul; Majeed, A. P. P. Abdul; Musa, R. M.; Sallehudin, M. F.; Mukai, Y.
2018-03-01
Fish hunger behaviour is an important element in determining the feeding routine of fish, especially farmed fish. Inaccurate feeding routines (under-feeding or over-feeding) cause fish to die and thus reduce total production. Excess food that is not eaten dissolves in the water and degrades water quality by depleting its oxygen content; this oxygen reduction kills fish and in some cases leads to fish diseases. This study correlates Barramundi fish-school behaviour with hunger condition through hybrid data integration of image processing techniques. The behaviour is clustered with respect to the position of the centre of gravity of the school of fish prior to feeding, during feeding, and after feeding. The clustered fish behaviour is then classified by means of a machine learning technique, namely the support vector machine (SVM). The study shows that the fine-Gaussian variant of the SVM provides a reasonably accurate classification of fish feeding behaviour, with a classification accuracy of 79.7%. The proposed integration technique may increase the usefulness of the captured data and thus better differentiate the various behaviours of farmed fish.
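In scikit-learn terms, a "fine Gaussian" SVM corresponds roughly to an RBF-kernel SVC with a small kernel scale (large gamma). A minimal sketch on synthetic stand-ins for the centre-of-gravity features:

```python
# Hedged sketch of the classification step: RBF-kernel SVM with a small kernel
# scale on synthetic features standing in for the school's centre-of-gravity
# descriptors. Labels and gamma are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))                   # centre-of-gravity features
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)  # hungry / sated

clf = SVC(kernel="rbf", gamma=4.0)              # small kernel scale = "fine" RBF
print(cross_val_score(clf, X, y, cv=5).mean())  # estimated accuracy
```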
DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure
NASA Astrophysics Data System (ADS)
Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.
2010-03-01
Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution has not yet been explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data on a large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques, using the computational power offered by grid technology. A prototype of DicomGrid is under evaluation and permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of the grid operation.
NASA Astrophysics Data System (ADS)
Boxx, Isaac; Carter, Campbell D.; Stöhr, Michael; Meier, Wolfgang
2013-05-01
An image-processing routine was developed to autonomously identify and statistically characterize flame-kernel events, wherein OH (from a planar laser-induced fluorescence, PLIF, measurement) appears in the probe region away from the contiguous OH layer. This routine was applied to datasets from two gas turbine model combustors, each consisting of thousands of joint OH-velocity images from kHz-framerate OH-PLIF and particle image velocimetry (PIV). Phase sorting of the kernel centroids with respect to the dominant fluid-dynamic structure of the combustors (a helical precessing vortex core, PVC) indicates that through-plane transport of reacting fluid best explains their sudden appearance in the PLIF images. The concentration of flame-kernel events around the periphery of the mean location of the PVC indicates they are likely the result of wrinkling and/or breakup of the primary flame sheet associated with the passage of the PVC as it circumscribes the burner centerline. The prevailing through-plane velocity of the swirling flow field transports these fragments into the imaging plane of the OH-PLIF system. The lack of flame-kernel events near the center of the PVC (where there is lower strain and longer fluid-dynamic residence times) indicates that auto-ignition is not a likely explanation for these flame kernels in the majority of cases. The lack of flame-kernel centroid variation in one flame in which there is no PVC further supports this explanation.
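A minimal sketch of one plausible way to detect such detached OH regions in a single PLIF frame (the paper's actual routine is not reproduced here): threshold the image, label connected regions, treat the largest region as the contiguous OH layer, and report the centroids of the remaining blobs as candidate kernels. The threshold and minimum blob size are hypothetical parameters.

    import numpy as np
    from scipy import ndimage

    def find_kernels(oh_image, threshold, min_pixels=5):
        # Label connected OH regions; return centroids of regions detached
        # from the largest (contiguous) OH layer.
        binary = oh_image > threshold
        labels, n = ndimage.label(binary)
        if n == 0:
            return []
        sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
        main = int(np.argmax(sizes)) + 1  # largest blob = contiguous OH layer
        kernels = [i for i in range(1, n + 1)
                   if i != main and sizes[i - 1] >= min_pixels]
        return ndimage.center_of_mass(binary, labels, kernels)

    frame = np.zeros((64, 64))
    frame[10:40, 5:15] = 1.0   # synthetic main OH layer
    frame[50:53, 50:53] = 1.0  # synthetic detached kernel
    print(find_kernels(frame, 0.5))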
Open source bioimage informatics for cell biology
Swedlow, Jason R.; Eliceiri, Kevin W.
2009-01-01
Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes that make an open source imaging application successful, and point to opportunities for further interoperability that should greatly accelerate future cell biology discovery. PMID:19833518
Time Lapse Photography From Arctic Buoys
NASA Astrophysics Data System (ADS)
Valentic, T. A.; Matrai, P.; Woods, J. E.
2013-12-01
We have equipped a number of buoys with cameras that have been deployed throughout the Arctic. These systems need to be simple, reliable and low power. The images are transmitted over an Iridium satellite link and assembled into long-running movies. We have captured a number of interesting events, observed the ice dynamics through the year and recorded visits by local wildlife. Each of the systems has been deployed for periods of up to a year, with images taken every hour. The cameras have proved to be a great outreach tool and are routinely watched by a number of people on our websites. This talk will present the techniques used in developing these camera systems, the methods used for reliably transmitting the images and the process for generating the movies.
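As an illustrative sketch only (the buoy project's actual pipeline is not described in detail in the abstract), hourly frames can be assembled into a movie with OpenCV; the directory and file names below are hypothetical.

    import glob
    import cv2

    frames = sorted(glob.glob("buoy_images/*.jpg"))  # hypothetical frame directory
    first = cv2.imread(frames[0])
    h, w = first.shape[:2]
    writer = cv2.VideoWriter("buoy_year.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"), 24, (w, h))
    for name in frames:
        img = cv2.imread(name)
        if img is not None and img.shape[:2] == (h, w):  # skip corrupt/odd frames
            writer.write(img)
    writer.release()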
Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K
2016-11-01
In magnetic resonance (MR), hardware limitations, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desired but has been very challenging in medical image processing. Super-resolution reconstruction based on sparse representation and an overcomplete dictionary has lately been employed to address this problem; however, these methods require extra training sets, which may not always be available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and an overcomplete dictionary that is trained from in-plane high-resolution slices to upsample in the out-of-plane dimensions. The proposed method therefore does not require extra training sets. Extensive experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. Therefore, the proposed approach can be efficiently implemented and routinely used to upsample MR images in the out-of-plane views for radiologic assessment and postacquisition processing.
Gorgolewski, Krzysztof J; Auer, Tibor; Calhoun, Vince D; Craddock, R Cameron; Das, Samir; Duff, Eugene P; Flandin, Guillaume; Ghosh, Satrajit S; Glatard, Tristan; Halchenko, Yaroslav O; Handwerker, Daniel A; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B Nolan; Nichols, Thomas E; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A; Varoquaux, Gaël; Poldrack, Russell A
2016-06-21
The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
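For illustration, a minimal BIDS-style layout for a single subject with one anatomical scan and one functional run might look like the following (the subject and task names are hypothetical):

    dataset_description.json
    participants.tsv
    sub-01/
        anat/
            sub-01_T1w.nii.gz
            sub-01_T1w.json
        func/
            sub-01_task-rest_bold.nii.gz
            sub-01_task-rest_bold.json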
Digital pathology: The time has come!
Grobholz, R
2018-05-01
Digital pathology (DP) and whole-slide imaging (WSI) technology have matured substantially over the last few years, and commercial systems are now available that can be used in routine practice. This article illustrates experiences with DP in a routine diagnostic setting. A DP system offers several advantages: 1) glass slides are no longer unique; 2) access to cases is possible from any location; 3) digital image analysis can be applied; and 4) archived WSI can be easily accessed. From this, several secondary advantages arise: a) slide compilation and case assignment are fast and safe; b) carrying cases to the pathologist is obsolete and paperless work is possible; c) WSI can be used for a second opinion and be accessible in remote locations; d) WSI of referred cases are still accessible after returning the slides; e) histological images can easily be provided in tumor boards; f) the office desk stays clean; and g) a "home office" is possible. To introduce a DP system, a comprehensive workflow analysis is needed that clarifies the needs and wishes of the respective institute. In order to optimally meet these requirements, open DP platforms are of particular advantage because they enable the integration of scanners from various manufacturers. Further developments in image analysis, such as virtual tissue reconstruction, could enrich the diagnostic process in the future and improve treatment quality.
Large Data at Small Universities: Astronomical processing using a computer classroom
NASA Astrophysics Data System (ADS)
Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen
2016-06-01
The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an "embarrassingly parallel" manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to 1. photometry of large-format images and 2. statistical significance tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
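A minimal sketch of the "embarrassingly parallel" pattern described above, using Python's standard multiprocessing pool; the per-image routine and file names are hypothetical stand-ins for the existing serial analysis code.

    from multiprocessing import Pool

    def measure_image(path):
        # Hypothetical stand-in for the existing serial photometry routine,
        # which runs unchanged on each image file.
        return path, 0.0

    if __name__ == "__main__":
        image_list = ["field_%03d.fits" % i for i in range(100)]  # hypothetical names
        with Pool() as pool:  # one worker process per available core
            results = pool.map(measure_image, image_list)
        print(len(results), "images measured")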
Quantitative imaging features: extension of the oncology medical image database
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.
2015-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, and annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important in determining whether a disease is present or a therapy is effective by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high-throughput approach. The ability to calculate multiple imaging features and data from the acquired images is valuable and facilitates further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of the imaging, the image workflow and post-processing, and a lack of algorithmic standards that hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to uncertainty among community physicians about how to integrate fMRI into practice. In addition, training of physicians with fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions that perform fMRI have a team of basic researchers and physicians to operate fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool for the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and made available at institutions that do not have these resources. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus makes standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.
A Procedure for High Resolution Satellite Imagery Quality Assessment
Crespi, Mattia; De Vendictis, Laura
2009-01-01
Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify whether their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality at the final-user level as well. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312
High-speed AFM and the reduction of tip-sample forces
NASA Astrophysics Data System (ADS)
Miles, Mervyn; Sharma, Ravi; Picco, Loren
High-speed DC-mode AFM has been shown to be routinely capable of imaging at video rate and, if required, at over 1000 frames per second. At sufficiently high tip-sample velocities in ambient conditions, the tip lifts off the sample surface in a superlubricity process which reduces the level of shear forces imposed on the sample by the tip and therefore reduces the potential damage and distortion of the sample being imaged. High-frequency mechanical oscillations, both lateral and vertical, have been reported to reduce tip-sample frictional forces. We have investigated the effect of combining linear high-speed scanning with these small-amplitude high-frequency oscillations, with the aim of further reducing the force interaction in high-speed imaging. Examples of this new version of high-speed AFM imaging will be presented for biological samples.
Routine Cross-Sectional Head Imaging Before Electroconvulsive Therapy: A Tertiary Center Experience.
Sajedi, Payam I; Mitchell, Jason; Herskovits, Edward H; Raghavan, Prashant
2016-04-01
Electroconvulsive therapy (ECT) is generally contraindicated in patients with intracranial mass lesions or in the presence of increased intracranial pressure. The purpose of this study was to determine the prevalence of incidental abnormalities on routine cross-sectional head imaging, including CT and MRI, that would preclude subsequent ECT. This retrospective study involved a review of the electronic medical records of 105 patients (totaling 108 imaging studies) between April 27, 2007, and March 20, 2015, referred for cranial CT or MRI with the primary indication of pre-ECT evaluation. The probability of occurrence of imaging findings that would preclude ECT was computed. A cost analysis was also performed on the practice of routine pre-ECT imaging. Of the 105 patients who presented with the primary indication of ECT clearance (totaling 108 scans), 1 scan (0.93%) revealed findings that precluded ECT. None of the studies demonstrated findings indicating increased intracranial pressure. A cost analysis revealed that at least $18,662.70 and 521.97 relative value units must be expended to identify one patient with intracranial pathology precluding ECT. The findings of this study demonstrate an extremely low prevalence of findings that preclude ECT on routine cross-sectional head imaging, while the costs incurred in identifying a potential contraindication are high. The authors suggest that the performance of pre-ECT neuroimaging be driven by the clinical examination.
TU-H-206-01: An Automated Approach for Identifying Geometric Distortions in Gamma Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, S; Nelson, J; Samei, E
2016-06-15
Purpose: To develop a clinically-deployable, automated process for detecting artifacts in routine nuclear medicine (NM) quality assurance (QA) bar phantom images. Methods: An artifact detection algorithm was created to analyze bar phantom images as part of an ongoing QA program. A low noise, high resolution reference image was acquired from an x-ray of the bar phantom with a Philips Digital Diagnost system utilizing image stitching. NM bar images, acquired for 5 million counts over a 512×512 matrix, were registered to the template image by maximizing mutual information (MI). The MI index was used as an initial test for artifacts; low values indicate an overall presence of distortions regardless of their spatial location. Images with low MI scores were further analyzed for bar linearity, periodicity, alignment, and compression to locate differences with respect to the template. Findings from each test were spatially correlated, and locations failing multiple tests were flagged as potential artifacts requiring additional visual analysis. The algorithm was initially deployed for GE Discovery 670 and Infinia Hawkeye gamma cameras. Results: The algorithm successfully identified clinically relevant artifacts from both systems that had previously gone unnoticed by technologists performing the QA. Average MI indices for artifact-free images are 0.55. Images with MI indices < 0.50 have shown 100% sensitivity and specificity for artifact detection when compared with a thorough visual analysis. Correlation of geometric tests confirms the ability to spatially locate the most likely image regions containing an artifact regardless of initial phantom orientation. Conclusion: The algorithm shows the potential to detect gamma camera artifacts that may be missed by routine technologist inspections. Detection and subsequent correction of artifacts ensures maximum image quality and may help to identify failing hardware before it impacts clinical workflow. Going forward, the algorithm is being deployed to monitor data from all gamma cameras within our health system.
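For illustration, a mutual information index of the kind used for the initial artifact test can be estimated from the joint histogram of the registered bar image and the reference template; this is a generic MI estimate, not the authors' exact implementation, and the image arrays below are placeholders.

    import numpy as np

    def mutual_information(a, b, bins=64):
        # MI of two equally sized images, estimated from their joint histogram.
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    template = np.random.rand(512, 512)   # placeholder reference image
    bar_scan = np.random.rand(512, 512)   # placeholder registered NM image
    print("MI index:", mutual_information(template, bar_scan))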
Storage and retrieval of large digital images
Bradley, J.N.
1998-01-20
Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y), comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.
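A simplified sketch of the tile-by-tile DWT pass using PyWavelets. Note that the patented method additionally shares boundary data between tiles so the coefficients equal those of a seamless whole-image DWT; that detail is omitted here, and the tile size and wavelet are illustrative choices.

    import numpy as np
    import pywt

    def tile_dwt(image, tile=256, wavelet="db2"):
        # Run a single-level 2-D DWT tile by tile and collect the coefficients.
        # (The patented method also shares boundary data between tiles so the
        # result equals a seamless whole-image DWT; omitted in this sketch.)
        coeffs = {}
        for i in range(0, image.shape[0], tile):
            for j in range(0, image.shape[1], tile):
                coeffs[(i, j)] = pywt.dwt2(image[i:i + tile, j:j + tile], wavelet)
        return coeffs

    coeffs = tile_dwt(np.random.rand(512, 512))
    print(len(coeffs), "tiles transformed")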
Applying machine learning classification techniques to automate sky object cataloguing
NASA Astrophysics Data System (ADS)
Fayyad, Usama M.; Doyle, Richard J.; Weir, W. Nick; Djorgovski, Stanislav
1993-08-01
We describe the application of Artificial Intelligence machine learning techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Mt. Palomar Northern Sky Survey is nearly complete. This survey provides comprehensive coverage of the northern celestial hemisphere in the form of photographic plates. The plates are being transformed into digitized images whose quality will probably not be surpassed in the next ten to twenty years. The images are expected to contain on the order of 10^7 galaxies and 10^8 stars. Astronomers wish to determine which of these sky objects belong to various classes of galaxies and stars. Unfortunately, the size of this data set precludes analysis in an exclusively manual fashion. Our approach is to develop a software system which integrates the functions of independently developed techniques for image processing and data classification. Digitized sky images are passed through image processing routines to identify sky objects and to extract a set of features for each object. These routines are used to help select a useful set of attributes for classifying sky objects. Then GID3 (Generalized ID3) and O-B Tree, two inductive learning techniques, learn classification decision trees from examples. These classifiers are then applied to new data. The development process is highly interactive, with astronomer input playing a vital role. Astronomers refine the feature set used to construct sky object descriptions, and evaluate the performance of the automated classification technique on new data. This paper gives an overview of the machine learning techniques with an emphasis on their general applicability, describes the details of our specific application, and reports the initial encouraging results. The results indicate that our machine learning approach is well suited to the problem. The primary benefit of the approach is increased data reduction throughput. Another benefit is consistency of classification. The classification rules which are the product of the inductive learning techniques will form an objective, examinable basis for classifying sky objects. A final, not to be underestimated benefit is that astronomers will be freed from the tedium of an intensely visual task to pursue more challenging analysis and interpretation problems based on automatically catalogued data.
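GID3 and O-B Tree are not available in standard libraries, but the flavor of learning a decision-tree classifier from labeled sky-object features can be sketched with scikit-learn's CART implementation; the feature names and labels below are hypothetical.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical per-object features (e.g. magnitude, ellipticity, area)
    # with labels 0 = star, 1 = galaxy.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    y = (X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

    tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
    print(export_text(tree, feature_names=["mag", "ellip", "area"]))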
Yu, Lifeng; Li, Zhoubo; Manduca, Armando; Blezek, Daniel J.; Hough, David M.; Venkatesh, Sudhakar K.; Brickner, Gregory C.; Cernigliaro, Joseph C.; Hara, Amy K.; Fidler, Jeff L.; Lake, David S.; Shiung, Maria; Lewis, David; Leng, Shuai; Augustine, Kurt E.; Carter, Rickey E.; Holmes, David R.; McCollough, Cynthia H.
2015-01-01
Purpose To determine if lower-dose computed tomographic (CT) scans obtained with adaptive image-based noise reduction (adaptive nonlocal means [ANLM]) or iterative reconstruction (sinogram-affirmed iterative reconstruction [SAFIRE]) result in reduced observer performance in the detection of malignant hepatic nodules and masses compared with routine-dose scans obtained with filtered back projection (FBP). Materials and Methods This study was approved by the institutional review board and was compliant with HIPAA. Informed consent was obtained from patients for the retrospective use of medical records for research purposes. CT projection data from 33 abdominal and 27 liver or pancreas CT examinations were collected (median volume CT dose index, 13.8 and 24.0 mGy, respectively). Hepatic malignancy was defined by progression or regression or with histopathologic findings. Lower-dose data were created by using a validated noise insertion method (10.4 mGy for abdominal CT and 14.6 mGy for liver or pancreas CT) and images reconstructed with FBP, ANLM, and SAFIRE. Four readers evaluated routine-dose FBP images and all lower-dose images, circumscribing liver lesions and selecting diagnosis. The jackknife free-response receiver operating characteristic figure of merit (FOM) was calculated on a per–malignant nodule or per-mass basis. Noninferiority was defined by the lower limit of the 95% confidence interval (CI) of the difference between lower-dose and routine-dose FOMs being less than −0.10. Results Twenty-nine patients had 62 malignant hepatic nodules and masses. Estimated FOM differences between lower-dose FBP and lower-dose ANLM versus routine-dose FBP were noninferior (difference: −0.041 [95% CI: −0.090, 0.009] and −0.003 [95% CI: −0.052, 0.047], respectively). In patients with dedicated liver scans, lower-dose ANLM images were noninferior (difference: +0.015 [95% CI: −0.077, 0.106]), whereas lower-dose FBP images were not (difference −0.049 [95% CI: −0.140, 0.043]). In 37 patients with SAFIRE reconstructions, the three lower-dose alternatives were found to be noninferior to the routine-dose FBP. Conclusion At moderate levels of dose reduction, lower-dose FBP images without ANLM or SAFIRE were noninferior to routine-dose images for abdominal CT but not for liver or pancreas CT. © RSNA, 2015 Online supplemental material is available for this article. PMID:26020436
Hoover, Andrew J; Lazari, Mark; Ren, Hong; Narayanam, Maruthi Kumar; Murphy, Jennifer M; van Dam, R Michael; Hooker, Jacob M; Ritter, Tobias
2016-04-11
Translation of new 18F-fluorination reactions to produce radiotracers for human positron emission tomography (PET) imaging is rare because the chemistry must have useful scope and the process for 18F-labeled tracer production must be robust and simple to execute. The application of transition metal mediators has enabled impactful 18F-fluorination methods, but to date none of these reactions have been applied to produce a human-injectable PET tracer. In this article we present chemistry and process innovations that culminate in the first production from [18F]fluoride of human doses of [18F]5-fluorouracil, a PET tracer for cancer imaging in humans. The first preparation of nickel σ-aryl complexes by transmetalation from arylboronic acids or esters was developed and enabled the synthesis of the [18F]5-fluorouracil precursor. Routine production of >10 mCi doses of [18F]5-fluorouracil was accomplished with a new instrument for azeotrope-free [18F]fluoride concentration in a process that leverages the tolerance of water in nickel-mediated 18F-fluorination.
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis, an application to cochlear implants, where time-frequency analysis is applied for controlling the replacement system, recent trends in the fusion of different modalities, and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are considered. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration into routine applications are future challenges in the field of sensor, signal and image informatics.
Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.
Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz
2017-06-01
Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If the movement of vascular structures could be detected during these procedures, the risk of vascular injury and conversion to open surgery could be reduced. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has potential clinical importance as a video-optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.
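The core temporal step of Eulerian-style magnification can be sketched as follows: bandpass-filter each pixel's time series and add an amplified copy back to the video. Full EVM also performs a spatial pyramid decomposition first, which this sketch omits; the passband and amplification factor below are illustrative, not the published settings.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def magnify_motion(frames, fps, lo=0.8, hi=3.0, alpha=20.0):
        # Amplify temporal variations in a (T, H, W) video array by adding
        # back an amplified temporally bandpassed copy of each pixel series.
        b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="band")
        bandpassed = filtfilt(b, a, frames.astype(float), axis=0)
        return frames + alpha * bandpassed

    video = np.random.rand(120, 64, 64)  # placeholder clip, 4 s at 30 fps
    out = magnify_motion(video, fps=30)
    print(out.shape)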
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayer, Andrew M.; Hsu, C.; Bettenhausen, Corey
Cases of absorbing aerosols above clouds (AAC), such as smoke or mineral dust, are omitted from most routinely-processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar
Medverd, Jonathan R; Cross, Nathan M; Font, Frank; Casertano, Andrew
2013-08-01
Radiologists routinely make decisions with only limited information when assigning protocol instructions for the performance of advanced medical imaging examinations. Opportunity exists to simultaneously improve the safety, quality and efficiency of this workflow through the application of an electronic solution leveraging health system resources to provide concise, tailored information and decision support in real-time. Such a system has been developed using an open source, open standards design for use within the Veterans Health Administration. The Radiology Protocol Tool Recorder (RAPTOR) project identified key process attributes as well as inherent weaknesses of paper processes and electronic emulators of paper processes to guide the development of its optimized electronic solution. The design provides a kernel that can be expanded to create an integrated radiology environment. RAPTOR has implications relevant to the greater health care community, and serves as a case model for modernization of legacy government health information systems.
IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING
NASA Technical Reports Server (NTRS)
Roth, D. J.
1994-01-01
IMAGEP is a FORTRAN computer program containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This program has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1998-01-01
The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina, focusing or not, reflects off and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.
Chen, Li-Hong; Jin, Chao; Li, Jian-Ying; Wang, Ge-Liang; Jia, Yong-Jun; Duan, Hai-Feng; Pan, Ning; Guo, Jianxin
2018-06-06
To compare the image quality of two adaptive statistical iterative reconstruction algorithms (ASiR and ASiR-V) using objective and subjective metrics for routine liver CT, with conventional filtered back projection (FBP) reconstructions as the reference standard. This institutional review board-approved study included 52 patients with clinically suspected hepatic metastases. Patients were divided equally into ASiR and ASiR-V groups with the same scan parameters. Images were reconstructed with ASiR and ASiR-V from 0 (FBP) to 100% blending percentages at 10% intervals in the respective groups. The mean and standard deviation of CT numbers for liver parenchyma were recorded. Two experienced radiologists reviewed all images for image quality blindly and independently. Data were statistically analyzed. There was no difference in CT dose index between the ASiR and ASiR-V groups. As the percentage of ASiR and ASiR-V increased from 10 to 100%, image noise was reduced by 8.6-57.9% and 8.9-81.6%, respectively, compared with FBP. There was substantial interobserver agreement in image quality assessment for ASiR and ASiR-V images. Compared with FBP reconstruction, subjective image quality scores of ASiR and ASiR-V improved significantly as the percentage increased from 10 to 80% for ASiR (peaking at 50% with 32.2% noise reduction) and from 10 to 90% for ASiR-V (peaking at 60% with 51.5% noise reduction). Both ASiR and ASiR-V improved the objective and subjective image quality for routine liver CT compared with FBP. ASiR-V provided further image quality improvement with a higher acceptable percentage than ASiR, and ASiR-V at 60% had the highest image quality score. Advances in knowledge: (1) Both ASiR and ASiR-V significantly reduce image noise compared with conventional FBP reconstruction. (2) ASiR-V with a 60% blending percentage provides the highest image quality score in routine liver CT.
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternate machines on Lisp and/or Ada. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 were developed or translated at Ames; the others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
Symmetrization for redundant channels
NASA Technical Reports Server (NTRS)
Tulplue, Bhalchandra R. (Inventor); Collins, Robert E. (Inventor)
1988-01-01
A plurality of redundant channels in a system each contain a global image of all the configuration data bases in each of the channels in the system. Each global image is updated periodically from each of the other channels via cross channel data links. The global images of the local configuration data bases in each channel are separately symmetrized using a voting process to generate a system signal configuration data base which is not written into by any other routine and is available for indicating the status of the system within each channel. Equalization may be imposed on a suspect signal and a number of chances for that signal to heal itself are provided before excluding it from future votes. Reconfiguration is accomplished upon detecting a channel which is deemed invalid. A reset function is provided which permits an externally generated reset signal to permit a previously excluded channel to be reincluded within the system. The updating of global images and/or the symmetrization process may be accomplished at substantially the same time within a synchronized time frame common to all channels.
Extraction of edge-based and region-based features for object recognition
NASA Astrophysics Data System (ADS)
Coutts, Benjamin; Ravi, Srinivas; Hu, Gongzhu; Shrikhande, Neelima
1993-08-01
One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from scenes are often noisy and imperfect. In this paper, algorithms are described for improving low-level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following the directions of the current contour. These are improved by using corresponding depth and intensity information for decision making at branch points. Surface fitting routines are applied to the range image to obtain planar surface patches. A region-growing algorithm is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. The surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic (with Gaussian noise) and real images containing multiple-object scenes have been tested using the merging criteria. Results appear quite encouraging.
Nap, Marius
2016-01-01
Digital pathology is indisputably connected with high demands on data traffic and storage. As a consequence, control of the logistic process and insight into the management of both traffic and storage are essential. We monitored data traffic from scanners to server and from server to workstation, and registered storage needs for diagnostic images and additional projects. The results showed that data traffic inside the hospital network (1 Gbps) never exceeded 80 Mbps for scanner-to-server activity, and activity from the server to the workstation took at most 5 Mbps. Data storage per image increased from 300 MB to an average of 600 MB as a result of camera and software updates, and, due to the increased scanning speed, the scanning time was reduced by almost 8 h/day. Introduction of a storage policy of only 12 months for diagnostic images, with rescanning if needed, resulted in a manageable storage window of 45 TB for the period of 1 year. Using simple registration tools turned the transition to digital pathology into a concise, manageable process that allows planning and control. Incorporating the retrieval of such information from scanning and storage devices will reduce management's fear of losing control when introducing digital pathology into the daily routine.
Optimized imaging of the midface and orbits
Langner, Sönke
2015-01-01
A variety of imaging techniques are available for imaging the midface and orbits. This review article describes the different imaging techniques based on the recent literature and discusses their impact on clinical routine imaging. Imaging protocols are presented for different diseases and the different imaging modalities. PMID:26770279
Monitoring radiation use in cardiac fluoroscopy imaging procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Nathaniel T.; Steiner, Stefan H.; Smith, Ian R.
2011-01-15
Purpose: Timely identification of systematic changes in the radiation delivery of an imaging system can lead to a reduction in risk for the patients involved. However, existing quality assurance programs involving the routine testing of equipment performance using phantoms are limited in their ability to effectively carry out this task. To address this issue, the authors propose the implementation of an ongoing monitoring process that utilizes procedural data to identify unexpectedly large or small radiation exposures for individual patients, as well as to detect persistent changes in the radiation output of imaging platforms. Methods: Data used in this study were obtained from records routinely collected during procedures performed in the cardiac catheterization imaging facility at St. Andrew's War Memorial Hospital, Brisbane, Australia, over the period January 2008-March 2010. A two-stage monitoring process employing individual and exponentially weighted moving average (EWMA) control charts was developed and used to identify unexpectedly high or low radiation exposure levels for individual patients, as well as to detect persistent changes in the radiation output delivered by the imaging systems. To increase the sensitivity of the charts, variation in dose area product (DAP) values due to other measured factors (patient weight, fluoroscopy time, and digital acquisition frame count) is accounted for using multiple linear regression. Control charts are then constructed using the residual values from this linear regression. The proposed monitoring process was evaluated using simulation to model its performance under known conditions. Results: Retrospective application of this technique to actual clinical data identified a number of cases in which the DAP result could be considered unexpected. Most of these, upon review, were attributed to data entry errors. The charts monitoring the overall system radiation output trends demonstrated changes in equipment performance associated with relocation of the equipment to a new department. When tested under simulated conditions, the EWMA chart was capable of detecting a sustained 15% increase in average radiation output within 60 cases (<1 month of operation), while a 33% increase would be signaled within 20 cases. Conclusions: This technique offers a valuable enhancement to existing quality assurance programs in radiology that rely upon the testing of equipment radiation output at discrete time frames to ensure performance security.
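A minimal sketch of the two ingredients named in the Methods: residuals from a multiple linear regression of DAP on the measured factors, charted with an EWMA statistic and time-varying control limits. The smoothing constant, control-limit multiplier, and data below are illustrative, not the authors' fitted values.

    import numpy as np

    def ewma_chart(residuals, lam=0.2, L=3.0):
        # EWMA statistic with time-varying control limits for a residual series.
        sigma = residuals.std(ddof=1)
        z, points = 0.0, []
        for t in range(1, len(residuals) + 1):
            z = lam * residuals[t - 1] + (1 - lam) * z
            half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
            points.append((z, -half, half))
        return points

    # Illustrative data: regress DAP on weight, fluoroscopy time and frame
    # count, then chart the residuals.
    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    dap = X @ np.array([2.0, 1.0, 0.5]) + 0.1 * rng.standard_normal(200)
    A = np.c_[np.ones(len(dap)), X]
    coef, *_ = np.linalg.lstsq(A, dap, rcond=None)
    chart = ewma_chart(dap - A @ coef)
    print(chart[0], chart[-1])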
Support Routines for In Situ Image Processing
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean
2013-01-01
This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: generates a linearized, epi-polar aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations; (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground; (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera; (4) marscoordtrans: translates mosaic coordinates from one form into another; (5) marsdispcompare: checks a Left-Right stereo disparity image against a Right-Left disparity image to ensure they are consistent with each other; (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look like it was taken from the left eye; (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface, and this helps verify, or improve, the pointing of in situ cameras; (8) marsinvrange: inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original; (9) marsproj: projects an XYZ coordinate through the camera model and reports the line/sample coordinates of the point in the image; (10) marsprojfid: given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image; (11) marsrad: radiometrically corrects an image; (12) marsrelabel: updates coordinate system or camera model labels in an image; (13) marstiexyz: given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations; (14) marsunmosaic: extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic; this is useful for creating simulated input frames using different camera models than the original mosaic used; and (15) merinverter: uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; it can be used in other missions despite the name.
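As an illustration of the merinverter-style decompanding step, a table lookup maps each 8-bit telemetered code back to a 12-bit value; the square-law table below is hypothetical, not the actual mission lookup table.

    import numpy as np

    # Hypothetical 256-entry inverse table: 8-bit telemetered code -> 12-bit DN
    # (an inverse square-root companding curve; max value 4064 fits in 12 bits).
    inverse_lut = (np.arange(256, dtype=np.uint32) ** 2 // 16).astype(np.uint16)

    def decompand(img8):
        # Map an 8-bit image back to its 12-bit form via table lookup.
        return inverse_lut[img8]

    img8 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    img12 = decompand(img8)
    print(img12.dtype, img12.max())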
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
2014-07-08
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model for drug screens in industry and academia.
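SpheroidSizer itself segments with an active contour; as a much simpler stand-in for the measurement step, the sketch below thresholds a bright-field image, takes the largest connected region, and derives axial lengths plus a prolate-ellipsoid volume estimate. The function name, the darker-than-background assumption, and the volume formula are illustrative choices, not the tool's code.

    import numpy as np
    from skimage import filters, measure, morphology

    def spheroid_size(gray):
        """Measure the largest dark object in a bright-field image and
        return major/minor axial lengths (pixels) and a prolate-ellipsoid
        volume estimate. Assumes the spheroid is darker than background."""
        mask = gray < filters.threshold_otsu(gray)
        mask = morphology.remove_small_objects(mask, min_size=500)
        labels = measure.label(mask)
        props = max(measure.regionprops(labels), key=lambda p: p.area)
        L, W = props.major_axis_length, props.minor_axis_length
        volume = np.pi / 6.0 * L * W ** 2   # one common spheroid approximation
        return L, W, volume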
Sodium 3D COncentration MApping (COMA 3D) using 23Na and proton MRI
NASA Astrophysics Data System (ADS)
Truong, Milton L.; Harrington, Michael G.; Schepkin, Victor D.; Chekmenev, Eduard Y.
2014-10-01
Functional changes of sodium 3D MRI signals were converted into millimolar concentration changes using an open-source, fully automated MATLAB toolbox. These concentration changes are visualized via 3D sodium concentration maps, and they are overlaid on conventional 3D proton images to provide high-resolution co-registration for easy correlation of functional changes to anatomical regions. Nearly 5000 concentration maps per hour were generated on a personal computer (ca. 2012) using 21.1 T 3D sodium MRI brain images of live rats with a spatial resolution of 0.8 × 0.8 × 0.8 mm3 and imaging matrices of 60 × 60 × 60. The produced concentration maps allowed for non-invasive quantitative measurement of in vivo sodium concentration in the normal rat brain as a functional response to migraine-like conditions. The presented work can also be applied to sodium-associated changes in migraine, cancer, and other metabolic abnormalities that can be sensed by molecular imaging. The MATLAB toolbox allows for automated image analysis of 3D images acquired on the Bruker platform and can be extended to other imaging platforms. The resulting images are presented in the form of a series of 2D slices in all three dimensions in native MATLAB and PDF formats. The following is provided: (a) MATLAB source code for image processing, (b) the detailed processing procedures, (c) description of the code and all sub-routines, (d) example data sets of initial and processed data. The toolbox can be downloaded at: http://www.vuiis.vanderbilt.edu/truongm/COMA3D/.
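Two generic steps underlie a pipeline of this kind: a linear signal-to-concentration calibration against references of known concentration, and resampling of the coarse sodium grid onto the proton grid for overlay. Here is a minimal Python sketch of those steps under that assumption; it illustrates the general approach, not the toolbox's actual MATLAB code (see the source release above for that), and all names are mine.

    import numpy as np
    from scipy.ndimage import zoom

    def signal_to_concentration(sodium_img, ref_signals, ref_concs):
        """Linear calibration: fit concentration vs. signal through
        reference measurements of known concentration, then map the
        whole 23Na image to millimolar units."""
        slope, intercept = np.polyfit(ref_signals, ref_concs, 1)
        return slope * sodium_img + intercept

    def resample_to_proton(conc_map, proton_shape):
        """Trilinearly upsample the coarse (e.g., 60x60x60) concentration
        map onto the higher-resolution proton grid for overlay."""
        factors = [t / s for t, s in zip(proton_shape, conc_map.shape)]
        return zoom(conc_map, factors, order=1)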
Exploratory analysis of TOF-SIMS data from biological surfaces
NASA Astrophysics Data System (ADS)
Vaidyanathan, Seetharaman; Fletcher, John S.; Henderson, Alex; Lockyer, Nicholas P.; Vickerman, John C.
2008-12-01
The application of multivariate analytical tools enables simplification of TOF-SIMS datasets so that useful information can be extracted from complex spectra and images, especially those that do not give readily interpretable results. There is, however, a challenge in understanding the outputs from such analyses. The problem is complicated when analysing images, given the additional dimensions in the dataset. Here we demonstrate how the application of simple pre-processing routines can enable the interpretation of TOF-SIMS spectra and images. For the spectral data, TOF-SIMS spectra used to discriminate bacterial isolates associated with urinary tract infection were studied. Using different criteria for picking peaks before carrying out PC-DFA enabled identification of the discriminatory information with greater certainty. For the image data, an air-dried salt-stressed bacterial sample, discussed in another paper by us in this issue, was studied. Exploration of the image datasets with and without normalisation prior to multivariate analysis by PCA or MAF resulted in different regions of the image being highlighted by the techniques.
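To make the normalisation point concrete, here is a small Python sketch that runs PCA on a stack of peak-intensity images with and without per-pixel total-ion normalisation; comparing the two sets of score images reproduces the kind of difference the abstract describes. The data layout and function name are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    def image_pca(stack, normalise=True, n_components=3):
        """stack: (n_peaks, H, W) array of peak-intensity images.
        Reshape to pixels x peaks, optionally normalise each pixel
        spectrum to its total counts, then compute PCA score images."""
        p, h, w = stack.shape
        X = stack.reshape(p, -1).T.astype(float)       # pixels x peaks
        if normalise:
            tot = X.sum(axis=1, keepdims=True)
            X = np.divide(X, tot, out=np.zeros_like(X), where=tot > 0)
        scores = PCA(n_components=n_components).fit_transform(X)
        return scores.T.reshape(n_components, h, w)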
NASA Astrophysics Data System (ADS)
Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting
2017-12-01
Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu's method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method can perform well and has potential for clinical applications.
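The binarise-clean-count tail of such a pipeline is easy to sketch in Python with scikit-image; the unmixing stage that produces the abundance image is assumed to have run already, and the minimum-area parameter stands in for the paper's magnification-based settings.

    from skimage import filters, measure, morphology

    def count_cells(abundance, min_area=50):
        """Binarise an endmember-abundance image (Otsu), clean it with
        morphological opening and small-object removal, then count
        connected components as cells."""
        binary = abundance > filters.threshold_otsu(abundance)
        binary = morphology.binary_opening(binary, morphology.disk(2))
        binary = morphology.remove_small_objects(binary, min_size=min_area)
        labels = measure.label(binary)
        return labels.max(), labels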
Harnessing the power of multimedia in offender-based law enforcement information systems
NASA Astrophysics Data System (ADS)
Zimmerman, Alan P.
1997-02-01
Criminal offenders are increasingly administratively processed by automated multimedia information systems. During this processing, case and offender biographical data, mugshot photos, fingerprints, and other valuable information and media are collected by law enforcement officers. As part of their criminal investigations, law enforcement officers are routinely called upon to solve criminal cases based upon limited evidence: evidence increasingly composed of human DNA, ballistic casings and projectiles, chemical residues, latent fingerprints, surveillance camera facial images, and voices. As multimedia systems receive greater use in law enforcement, traditional approaches used to index text data are not appropriate for the images and signal data which comprise a multimedia database. Multimedia systems with integrated advanced pattern matching tools will provide law enforcement the ability to effectively locate multimedia information based upon content, without reliance upon the accuracy or completeness of text-based indexing.
Unsupervised color normalisation for H and E stained histopathology image analysis
NASA Astrophysics Data System (ADS)
Celis, Raúl; Romero, Eduardo
2015-12-01
In histology, each dye component attempts to specifically characterise different microscopic structures. In the case of the Hematoxylin-Eosin (H&E) stain, universally used for routine examination, quantitative analysis may often require the inspection of different morphological signatures related mainly to nuclei patterns, but also to stroma distribution. Nevertheless, computer systems for automatic diagnosis are often fraught with colour variations ranging from the capturing device to the laboratory-specific staining protocol and stains. This paper presents a novel colour normalisation method for H&E stained histopathology images. This method is based upon the opponent process theory and blindly estimates the best colour basis for the Hematoxylin and Eosin stains without relying on prior knowledge. Stain normalisation and colour separation are transversal to any framework of histopathology image analysis.
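For contrast with the blind estimation above, the classical colour-deconvolution baseline (Ruifrok-style, not this paper's opponent-process method) assumes the stain basis is already known: convert RGB to optical density via Beer-Lambert and unmix by least squares. A minimal Python sketch, with the stain matrix supplied by the caller:

    import numpy as np

    def od_unmix(rgb, stain_matrix):
        """Beer-Lambert unmixing: convert RGB to optical density and solve
        for per-pixel stain concentrations by least squares.
        stain_matrix: (2, 3) array whose rows are H and E OD colour vectors
        (assumed known here; estimating such a basis blindly is the paper's
        contribution)."""
        od = -np.log10(np.clip(rgb.astype(float) / 255.0, 1e-6, 1.0))
        h, w, _ = od.shape
        conc, *_ = np.linalg.lstsq(stain_matrix.T, od.reshape(-1, 3).T, rcond=None)
        return conc.T.reshape(h, w, -1)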
On-orbit Performance and Calibration of the HMI Instrument
NASA Astrophysics Data System (ADS)
Hoeksema, J. Todd; Bush, Rock; HMI Calibration Team
2016-10-01
The Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) has observed the Sun almost continuously since the completion of commissioning in May 2010, returning more than 100,000,000 filtergrams from geosynchronous orbit. Diligent and exhaustive monitoring of the instrument's performance ensures that HMI functions properly and allows proper calibration of the full-disk images and processing of the HMI observables. We constantly monitor trends in temperature, pointing, mechanism behavior, and software errors. Cosmic ray contamination is detected and bad pixels are removed from each image. Routine calibration sequences and occasional special observing programs are used to measure the instrument focus, distortion, scattered light, filter profiles, throughput, and detector characteristics. That information is used to optimize instrument performance and adjust calibration of filtergrams and observables.
Will the future of knowledge work automation transform personalized medicine?
Naik, Gauri; Bhide, Sanika S
2014-09-01
Today, we live in a world of 'information overload' which demands a high level of knowledge-based work. However, advances in computer hardware and software have opened possibilities to automate 'routine cognitive tasks' for knowledge processing. Engineering intelligent software systems that can process large data sets using unstructured commands and subtle judgments, and that have the ability to learn 'on the fly', is a significant step towards automation of knowledge work. The applications of this technology to high-throughput genomic analysis, database updating, reporting of clinically significant variants, and diagnostic imaging purposes are explored using case studies.
Berberich, Gabriele; Berberich, Martin; Grumpe, Arne; Wöhler, Christian; Schreiber, Ulrich
2013-01-01
Simple Summary For three years (2009–2012), two red wood ant mounds (Formica rufa-group), located at the seismically active Neuwied Basin (Eifel, Germany), have been monitored 24/7 by high-resolution cameras. Early results show that ants have a well-identifiable standard daily routine. Correlation with local seismic events suggests changes in the ants’ behavior hours before the earthquake: the nocturnal rest phase and daily activity are suppressed, and standard daily routine does not resume until the next day. At present, an automated image evaluation routine is being applied to the video streams. Based on this automated approach, a statistical analysis of the ant behavior will be carried out. Abstract Short-term earthquake predictions with an advance warning of several hours or days are currently not possible due to both incomplete understanding of the complex tectonic processes and inadequate observations. Abnormal animal behaviors before earthquakes have been reported previously, but create problems in monitoring and reliability. The situation is different with red wood ants (RWA; Formica rufa-group (Hymenoptera: Formicidae)). They have stationary mounds on tectonically active, gas-bearing fault systems. These faults may be potential earthquake areas. For three years (2009–2012), two red wood ant mounds (Formica rufa-group), located at the seismically active Neuwied Basin (Eifel, Germany), have been monitored 24/7 by high-resolution cameras with both a color and an infrared sensor. Early results show that ants have a well-identifiable standard daily routine. Correlation with local seismic events suggests changes in the ants’ behavior hours before the earthquake: the nocturnal rest phase and daily activity are suppressed, and standard daily routine does not resume until the next day. At present, an automated image evaluation routine is being applied to the more than 45,000 hours of video streams. Based on this automated approach, a statistical analysis of the ants’ behavior will be carried out. In addition, other parameters (climate, geotectonic and biological), which may influence behavior, will be included in the analysis. PMID:26487310
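An automated evaluation routine for footage like this typically starts from a per-frame motion statistic; a crude example is the mean absolute difference between consecutive frames. The Python sketch below computes such an activity index under that assumption; the study's actual image evaluation routine is not public here.

    import numpy as np

    def activity_index(frames):
        """Mean absolute difference between consecutive grayscale frames,
        a crude per-frame proxy for overall ant movement."""
        prev, idx = None, []
        for f in frames:
            f = np.asarray(f, dtype=float)
            if prev is not None:
                idx.append(np.abs(f - prev).mean())
            prev = f
        return np.array(idx)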
Cost-effectiveness of routine imaging of suspected appendicitis.
D'Souza, N; Marsden, M; Bottomley, S; Nagarajah, N; Scutt, F; Toh, S
2018-01-01
Introduction The misdiagnosis of appendicitis and consequent removal of a normal appendix occurs in one in five patients in the UK. On the contrary, in healthcare systems with routine cross-sectional imaging of suspected appendicitis, the negative appendicectomy rate is around 5%. If we could reduce the rate in the UK to similar numbers, would this be cost effective? This study aimed to calculate the financial impact of negative appendicectomy at the Queen Alexandra Hospital and to explore whether a policy of routine imaging of such patients could reduce hospital costs. Materials and methods We performed a retrospective analysis of all appendicectomies over a 1-year period at our institution. Data were extracted on outcomes including appendix histology, operative time and length of stay to calculate the negative appendicectomy rate and to analyse costs. Results A total of 531 patients over 5 years of age had an appendicectomy. The negative appendicectomy rate was 22% (115/531). The additional financial costs of negative appendicectomy to the hospital during this period were £270,861. Universal imaging of all patients with right iliac fossa pain that could result in a 5% negative appendicectomy rate would cost between £67,200 and £165,600 per year but could save £33,896 (magnetic resonance imaging), £105,896 (computed tomography) or £132,296 (ultrasound) depending on imaging modality used. Conclusions Negative appendicectomy is still too frequent and results in additional financial burden to the health service. Routine imaging of patients with suspected appendicitis would not only reduce the negative appendicectomy rate but could lead to cost savings and a better service for our patients.
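The reported savings are consistent with a simple subtraction of per-modality imaging costs from the cost of avoided negative appendicectomies. In the worked sketch below, the avoided-cost total and the per-modality imaging costs are back-calculated from the abstract's figures (the abstract itself states only the £67,200-£165,600 range), so treat them as illustrative.

    # Back-calculated: each reported saving plus the corresponding imaging
    # cost sums to the same avoided-cost total of 199,496 GBP.
    avoided_cost = 199_496  # GBP, implied; not stated directly in the abstract
    imaging_cost = {"ultrasound": 67_200, "CT": 93_600, "MRI": 165_600}  # assumed split
    for modality, cost in imaging_cost.items():
        print(f"{modality}: saving = {avoided_cost - cost} GBP")
    # prints 132296 (ultrasound), 105896 (CT), 33896 (MRI), matching the abstract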
Removal of intensity bias in magnitude spin-echo MRI images by nonlinear diffusion filtering
NASA Astrophysics Data System (ADS)
Samsonov, Alexei A.; Johnson, Chris R.
2004-05-01
MRI data analysis is routinely done on the magnitude part of complex images. While both real and imaginary image channels contain Gaussian noise, magnitude MRI data are characterized by a Rician distribution. However, conventional filtering methods often assume image noise to be zero-mean and Gaussian distributed. Estimation of an underlying image using magnitude data therefore produces a biased result. The bias may lead to significant image errors, especially in areas of low signal-to-noise ratio (SNR). The incorporation of the Rician PDF into a noise filtering procedure can significantly complicate the method both algorithmically and computationally. In this paper, we demonstrate that the inherent image phase smoothness of spin-echo MRI images can be utilized for separate filtering of the real and imaginary complex image channels to achieve unbiased image denoising. The concept is demonstrated with a novel nonlinear diffusion filtering scheme developed for complex image filtering. In our proposed method, the separate diffusion processes are coupled through combined diffusion coefficients determined from the image magnitude. The new method has been validated with simulated and real MRI data. It provided efficient denoising and bias removal in conventional and black-blood angiography MRI images obtained using fast spin echo acquisition protocols.
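The coupling idea can be sketched compactly: diffuse the real and imaginary channels separately, but compute one shared conductance field from the current magnitude image. The Python sketch below uses a Perona-Malik-type coefficient; parameter values and discretization are illustrative, not the paper's exact scheme.

    import numpy as np

    def coupled_diffusion(re, im, n_iter=50, kappa=30.0, dt=0.15):
        """Diffuse the real and imaginary channels separately, coupled
        through one conductance field computed from the magnitude."""
        re = re.astype(float).copy()
        im = im.astype(float).copy()
        for _ in range(n_iter):
            mag = np.hypot(re, im)
            gy, gx = np.gradient(mag)
            c = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / kappa ** 2)  # shared conductance
            for ch in (re, im):
                dy, dx = np.gradient(ch)
                div = np.gradient(c * dx, axis=1) + np.gradient(c * dy, axis=0)
                ch += dt * div                                   # in-place update
        return re, im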
Integration of High-resolution Data for Temporal Bone Surgical Simulations
Wiet, Gregory J.; Stredney, Don; Powell, Kimerly; Hittle, Brad; Kerwin, Thomas
2016-01-01
Purpose To report on the state of the art in obtaining high-resolution 3D data of the microanatomy of the temporal bone and to process that data for integration into a surgical simulator. Specifically, we report on our experience in this area and discuss the issues involved to further the field. Data Sources Current temporal bone image acquisition and image processing established in the literature, as well as in-house methodological development. Review Methods We reviewed the current English literature for the techniques used in computer-based temporal bone simulation systems to obtain and process anatomical data for use within the simulation. Search terms included “temporal bone simulation, surgical simulation, temporal bone.” Articles were chosen and reviewed that directly addressed data acquisition and processing/segmentation and enhancement, with emphasis given to computer-based systems. We present the results from this review in relationship to our approach. Conclusions High-resolution CT imaging (≤100 μm voxel resolution), along with unique image processing and rendering algorithms and structure-specific enhancement, is needed for high-level training and assessment using temporal bone surgical simulators. Higher-resolution clinical scanning and automated processes that run in efficient time frames are needed before these systems can routinely support pre-surgical planning. Additionally, protocols such as that provided in this manuscript need to be disseminated to increase the number and variety of virtual temporal bones available for training and performance assessment. PMID:26762105
McNamara, Paula; Humphry, Ruth
2008-05-01
This study obtains a deeper understanding of the processes supporting the formation of young children's routines in a child care classroom. Eight infants and toddlers and their teachers from two child care classrooms were observed for 4 to 6 months during periods of regularly occurring activities. Detailed, moment-to-moment descriptions of their behaviors and interactions were analyzed. Eleven processes supported the development of children's routines. Teachers structured and guided the children's experiences in learning routines, and children initiated requests to do routines. The study also identified three processes where children invited, coached, and modeled, supporting one another in learning routines. Finally, familiar objects used in routines elicited the children's engagement.
Impact of audit of routine second-trimester cardiac images using a novel image-scoring method.
Sairam, S; Awadh, A M A; Cook, K; Papageorghiou, A T; Carvalho, J S
2009-05-01
To assess the impact of using an objective scoring method to audit cardiac images obtained as part of the routine 21-23-week anomaly scan. A prospective audit and re-audit (6 months later) were conducted on cardiac images obtained by sonographers during the routine anomaly scan. A new image-scoring method was devised based on expected features in the four-chamber and outflow tract views. For each patient, scores were awarded for documentation and quality of individual views. These were called 'Documentation Scores' and 'View Scores' and were added to give a 'Patient Score' which represented the quality of screening provided by the sonographer for that particular patient (maximum score, 15). In order to assess the overall performance of sonographers, an 'Audit Score' was calculated for each by averaging his or her Patient Scores. In addition, to assess each sonographer's performance in relation to particular aspects of the various views, each was given their own 'Sonographer View Scores', derived from image documentation and details of four-chamber view (magnification, valve offset and septum) and left and right outflow tract views. All images were scored by two reviewers, jointly in the primary audit and independently in the re-audit. The scores from primary and re-audit were compared to assess the impact of feedback from the primary audit. Eight sonographers participated in the study. The median Audit Score increased significantly (P < 0.01), from 10.8 (range, 9.8-12.4) in the primary audit to 12.4 (range, 10.4-13.6) in the re-audit. Scores allocated by the two reviewers in the re-audit were not significantly different (P = 0.08). Objective scoring of fetal heart images is feasible and has a positive impact on the quality of cardiac images acquired at the time of the routine anomaly scan. This audit tool has the potential to be applied in every obstetric scanning unit and may improve the effectiveness of screening for congenital heart defects.
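The scoring arithmetic lends itself to a tiny sketch: a Patient Score sums documentation and view-quality points (maximum 15 in the scheme described), and a sonographer's Audit Score is the mean of their Patient Scores. The Python below uses made-up point values; the per-view point breakdown is an assumption.

    def patient_score(doc_points, view_points):
        """Patient Score = Documentation Scores + View Scores (max 15)."""
        return sum(doc_points) + sum(view_points)

    def audit_score(patient_scores):
        """A sonographer's Audit Score is the mean of their Patient Scores."""
        return sum(patient_scores) / len(patient_scores)

    # toy example: three audited patients for one sonographer
    scores = [patient_score([4], [3, 2, 2]),
              patient_score([5], [4, 3, 1]),
              patient_score([4], [4, 2, 3])]
    print(audit_score(scores))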
Pothuaud, L; Benhamou, C L; Porion, P; Lespessailles, E; Harba, R; Levitz, P
2000-04-01
The purpose of this work was to understand how fractal dimension of two-dimensional (2D) trabecular bone projection images could be related to three-dimensional (3D) trabecular bone properties such as porosity or connectivity. Two alteration processes were applied to trabecular bone images obtained by magnetic resonance imaging: a trabeculae dilation process and a trabeculae removal process. The trabeculae dilation process was applied from the 3D skeleton graph to the 3D initial structure with constant connectivity. The trabeculae removal process was applied from the initial structure to an altered structure having 99% of porosity, in which both porosity and connectivity were modified during this second process. Gray-level projection images of each of the altered structures were simply obtained by summation of voxels, and fractal dimension (Df) was calculated. Porosity (phi) and connectivity per unit volume (Cv) were calculated from the 3D structure. Significant relationships were found between Df, phi, and Cv. Df values increased when porosity increased (dilation and removal processes) and when connectivity decreased (only removal process). These variations were in accordance with all previous clinical studies, suggesting that fractal evaluation of trabecular bone projection has real meaning in terms of porosity and connectivity of the 3D architecture. Furthermore, there was a statistically significant linear dependence between Df and Cv when phi remained constant. Porosity is directly related to bone mineral density and fractal dimension can be easily evaluated in clinical routine. These two parameters could be associated to evaluate the connectivity of the structure.
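For readers who want the flavor of the measurement, below is a standard box-counting estimator of fractal dimension for a binarised 2D projection, in Python. The study worked with gray-level projections and its exact estimator may differ; this generic sketch assumes a square, power-of-two-sized, non-empty binary pattern.

    import numpy as np

    def box_counting_dimension(binary):
        """Estimate Df of a square binary image whose side is a power of
        two: count occupied boxes at dyadic scales, then fit the slope of
        log N(s) against log(1/s)."""
        n = binary.shape[0]
        sizes, counts = [], []
        s = n // 2
        while s >= 2:
            view = binary.reshape(n // s, s, n // s, s)
            counts.append(view.any(axis=(1, 3)).sum())
            sizes.append(s)
            s //= 2
        coeffs = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return coeffs[0]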
Inter-laboratory comparison of the in vivo comet assay including three image analysis systems.
Plappert-Helbig, Ulla; Guérard, Melanie
2015-12-01
To compare the extent of potential inter-laboratory variability and the influence of different comet image analysis systems, in vivo comet experiments were conducted using the genotoxicants ethyl methanesulfonate and methyl methanesulfonate. Tissue samples from the same animals were processed and analyzed, including independent slide evaluation by image analysis, in two laboratories with extensive experience in performing the comet assay. The analysis revealed low inter-laboratory experimental variability. Neither the use of different image analysis systems nor the staining procedure of DNA (propidium iodide vs. SYBR® Gold) considerably impacted the results or sensitivity of the assay. In addition, relatively high stability of the staining intensity of propidium iodide-stained slides was found in slides that were refrigerated for over 3 months. In conclusion, following a thoroughly defined protocol and standardized routine procedures ensures that the comet assay is robust and generates comparable results between different laboratories. © 2015 Wiley Periodicals, Inc.
Application of optical character recognition in thermal image processing
NASA Astrophysics Data System (ADS)
Chan, W. T.; Sim, K. S.; Tso, C. P.
2011-07-01
This paper presents the results of a study on the reliability of the thermal imager compared to other devices that are used in preventive maintenance. Several case studies are used to facilitate the comparisons. When any device is found to perform unsatisfactorily where there is a suspected fault, its shortfall is determined so that the other devices may compensate, if possible. This study discovered that the thermal imager is not suitable or efficient enough for systems that have little contrast in temperature between their parts, or small but important parts whose heat signatures are obscured by those from other parts. The thermal imager is also found to be useful for preliminary examinations of certain systems, after which other more economical devices are suitable substitutes for further examinations. The findings of this research will be useful to the design and planning of preventive maintenance routines for industrial benefits.
An ice-motion tracking system at the Alaska SAR facility
NASA Technical Reports Server (NTRS)
Kwok, Ronald; Curlander, John C.; Pang, Shirley S.; Mcconnell, Ross
1990-01-01
An operational system for extracting ice-motion information from synthetic aperture radar (SAR) imagery is being developed as part of the Alaska SAR Facility. This geophysical processing system (GPS) will derive ice-motion information by automated analysis of image sequences acquired by radars on the European ERS-1, Japanese ERS-1, and Canadian RADARSAT remote sensing satellites. The algorithm consists of a novel combination of feature-based and area-based techniques for the tracking of ice floes that undergo translation and rotation between imaging passes. The system performs automatic selection of the image pairs for input to the matching routines using an ice-motion estimator. It is designed to have a daily throughput of ten image pairs. A description is given of the GPS system, including an overview of the ice-motion-tracking algorithm, the system architecture, and the ice-motion products that will be available for distribution to geophysical data users.
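The area-based half of such a tracking algorithm can be sketched as a brute-force rotation search over normalized cross-correlation, as in the Python below; the feature-based stage and the motion estimator that preselects image pairs are omitted, and the angle grid is illustrative.

    import numpy as np
    from scipy.ndimage import rotate
    from skimage.feature import match_template

    def track_floe(patch, search_area, angles=range(-20, 21, 5)):
        """Rotate the pass-1 floe patch over a grid of angles, correlate it
        against the pass-2 search window, and keep the best (offset, angle)
        pair. The patch must be smaller than the search window."""
        best_score, best_offset, best_angle = -np.inf, None, None
        for ang in angles:
            rot = rotate(patch, ang, reshape=False, mode='nearest')
            ncc = match_template(search_area, rot)
            ij = np.unravel_index(np.argmax(ncc), ncc.shape)
            if ncc[ij] > best_score:
                best_score, best_offset, best_angle = ncc[ij], ij, ang
        return best_offset, best_angle, best_score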
Nair, Madhu K; Pettigrew, James C; Loomis, Jeffrey S; Bates, Robert E; Kostewicz, Stephen; Robinson, Boyd; Sweitzer, Jean; Dolan, Teresa A
2009-06-01
The implementation of digital radiography in dentistry in a large healthcare enterprise setting is discussed. A distinct need for a dedicated dental picture archiving and communication system (PACS) exists for seamless integration of different vendor products across the system. Complex issues were contended with as each clinical department migrated to a digital environment with unique needs and workflow patterns. The University of Florida installed a dental PACS over 2 years ago. This paper describes the process of conversion from film-based imaging from the planning stages through clinical implementation. Dentistry poses many unique challenges as it strives to achieve better integration with systems primarily designed for medical imaging; the technical requirements for high-resolution image capture in dentistry far exceed those in medicine, as most routine dental diagnostic tasks are challenging. The significance of specification, evaluation, vendor selection, installation, trial runs, training, and phased clinical implementation is emphasized.
NASA Astrophysics Data System (ADS)
Li, Senhu; Sarment, David
2015-12-01
Minimally invasive neurosurgery needs intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system used with a compact, mobile intraoperative CT imager was introduced in this work. A tracking frame that can be easily attached onto a commercially available skull clamp was designed. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated through high-precision 3D printing, an accurate, fully automatic registration method was developed in a simple and low-cost approach; the frame also helped in estimating the errors from fiducial localization in image space, through image processing, and in patient space, through calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348+/-0.028 mm, compared with a manual registration error of 1.976+/-0.778 mm. The system in this study provided robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring user interaction.
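Point-based rigid registration of matched fiducials, the core of such automatic registration, has a compact closed-form solution (the Kabsch/SVD method). The Python sketch below computes the transform and the fiducial registration error (FRE); it is a generic textbook formulation, not this system's code.

    import numpy as np

    def rigid_register(fixed, moving):
        """Closed-form least-squares rigid registration of matched 3-D
        fiducial sets; returns R, t and the FRE. fixed, moving: (N, 3)."""
        cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
        H = (moving - cm).T @ (fixed - cf)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cf - R @ cm
        fre = np.sqrt(np.mean(np.sum((fixed - (moving @ R.T + t)) ** 2, axis=1)))
        return R, t, fre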
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including the Localized Region-based Active Contour Model (LRACM). There are many popular LRACM, but each of them presents strengths and weaknesses. In this paper, the automatic selection of LRACM based on image content and its application to brain tumor segmentation is presented. Thereby, a framework to select one of three LRACM, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V), and Localized Active Contour Model with Background Intensity Compensation (LACM-BIC), is proposed. Twelve visual features are extracted to properly select the method that may process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy performance than the three LRACM separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
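The selection framework reduces to a standard supervised-learning recipe, sketched below in Python. The choice of a random forest is mine (the paper's classifier may differ), and the step that extracts the 12 visual features is assumed to exist elsewhere.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_selector(features, best_model):
        """features: (n_images, 12) visual-feature matrix; best_model:
        labels naming which LRACM ('LGDF', 'C-V', 'LACM-BIC') segmented
        each training image best."""
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(features, best_model)
        return clf

    def select_lracm(clf, image_features):
        """Pick the LRACM to run on a new image from its 12 features."""
        return clf.predict(np.asarray(image_features).reshape(1, -1))[0]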
PACS 2000: quality control using the task allocation chart
NASA Astrophysics Data System (ADS)
Norton, Gary S.; Romlein, John R.; Lyche, David K.; Richardson, Ronald R., Jr.
2000-05-01
Medical imaging's technological evolution in the next century will continue to include Picture Archive and Communication Systems (PACS) and teleradiology. It is difficult to predict radiology's future in the new millennium, with both computed radiography and direct digital capture competing as the primary image acquisition methods for routine radiography. Changes in Computed Axial Tomography (CT) and Magnetic Resonance Imaging (MRI) continue to amaze the healthcare community. No matter how the acquisition, display, and archive functions change, Quality Control (QC) of the radiographic imaging chain will remain an important step in the imaging process. The Task Allocation Chart (TAC) is a tool that can be used in a medical facility's QC process to indicate the testing responsibilities of the image stakeholders and the medical informatics department. The TAC shows a grid of equipment to be serviced, tasks to be performed, and the organization assigned to perform each task. Additionally, skills, tasks, time, and references for each task can be provided. QC of the PACS must be stressed as a primary element of a PACS implementation. The TAC can be used to clarify responsibilities during warranty and paid maintenance periods. Establishing a TAC as part of a PACS implementation has a positive effect on patient care and clinical acceptance.
JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.
Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard
2005-03-09
Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and show how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for efficient development of these tools, and that techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT
Bedayat, Arash; Kumamaru, Kanako; Powers, Sara L.; Signorelli, Jason; Steigner, Michael L.; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T.
2011-01-01
The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. Mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at the center plus four peripheral locations in the water phantom. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points within the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24% with no significant difference between the symmetric and asymmetric approach with respect to objective measures of image quality or subjective assessment using a four point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing supports its routine clinical use. PMID:21336552
Radically Reducing Radiation Exposure during Routine Medical Imaging
Exposure to radiation from medical imaging in the United States has increased dramatically. NCI and several partner organizations sponsored a 2011 summit to promote efforts to reduce radiation exposure from medical imaging.
NASA Astrophysics Data System (ADS)
Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.
2001-05-01
The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring the accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected from standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on results of standard perfusion quantification. The new LV is translated and rotated to fit within the existing atria and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. Shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, the size, shape, and motion of the simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating the efficacy of processing algorithms.
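The surface-to-spline conversion step can be sketched for a single slice contour with SciPy: fit a closed parametric B-spline through the detected points and resample it. This is a simplified Python illustration of the idea (the MCAT pipeline fits splines in both space and time); the function name and defaults are mine.

    import numpy as np
    from scipy.interpolate import splprep, splev

    def contour_to_bspline(points, n_out=64, smooth=0.0):
        """Fit a closed B-spline through ordered LV contour points from one
        slice and resample it uniformly. points: (N, 2), N >= 4, ordered
        around the contour."""
        tck, _ = splprep([points[:, 0], points[:, 1]], s=smooth, per=True)
        u = np.linspace(0.0, 1.0, n_out, endpoint=False)
        x, y = splev(u, tck)
        return np.column_stack([x, y])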
Imaging informatics for consumer health: towards a radiology patient portal
Arnold, Corey W; McNamara, Mary; El-Saden, Suzie; Chen, Shawn; Taira, Ricky K; Bui, Alex A T
2013-01-01
Objective With the increased routine use of advanced imaging in clinical diagnosis and treatment, it has become imperative to provide patients with a means to view and understand their imaging studies. We illustrate the feasibility of a patient portal that automatically structures and integrates radiology reports with corresponding imaging studies according to several information orientations tailored for the layperson. Methods The imaging patient portal is composed of an image processing module for the creation of a timeline that illustrates the progression of disease, a natural language processing module to extract salient concepts from radiology reports (73% accuracy, F1 score of 0.67), and an interactive user interface navigable by an imaging findings list. The portal was developed as a Java-based web application and is demonstrated for patients with brain cancer. Results and discussion The system was exhibited at an international radiology conference to solicit feedback from a diverse group of healthcare professionals. There was wide support for educating patients about their imaging studies, and an appreciation for the informatics tools used to simplify images and reports for consumer interpretation. Primary concerns included the possibility of patients misunderstanding their results, as well as worries regarding accidental improper disclosure of medical information. Conclusions Radiologic imaging composes a significant amount of the evidence used to make diagnostic and treatment decisions, yet there are few tools for explaining this information to patients. The proposed radiology patient portal provides a framework for organizing radiologic results into several information orientations to support patient education. PMID:23739614
SU-E-P-10: Imaging in the Cardiac Catheterization Lab - Technologies and Clinical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fetterly, K
2014-06-01
Purpose: Diagnosis and treatment of cardiovascular disease in the cardiac catheterization laboratory is often aided by a multitude of imaging technologies. The purpose of this work is to highlight the contributions to patient care offered by the various imaging systems used during cardiovascular interventional procedures. Methods: Imaging technologies used in the cardiac catheterization lab were characterized by their fundamental technology and by the clinical applications for which they are used. Whether the modality is external to the patient, intravascular, or intracavity was specified. Specific clinical procedures for which multiple modalities are routinely used will be highlighted. Results: X-ray imaging modalities include fluoroscopy/angiography and angiography CT. Ultrasound imaging is performed with external, trans-esophageal echocardiography (TEE), and intravascular (IVUS) transducers. Intravascular infrared optical coherence tomography (IVOCT) is used to assess vessel endothelium. Relatively large (>0.5 mm) anatomical structures are imaged with x-ray and ultrasound. IVUS and IVOCT provide high resolution images of vessel walls. Cardiac CT and MRI images are used to plan complex cardiovascular interventions. Advanced applications are used to spatially and temporally merge images from different technologies. Diagnosis and treatment of coronary artery disease frequently utilizes angiography and intra-vascular imaging, and treatment of complex structural heart conditions routinely includes use of multiple imaging modalities. Conclusion: There are several imaging modalities which are routinely used in the cardiac catheterization laboratory to diagnose and treat both coronary artery and structural heart disease. Multiple modalities are frequently used to enhance the quality and safety of procedures. The cardiac catheterization laboratory includes many opportunities for medical physicists to contribute substantially toward advancing patient care.
Wienert, Stephan; Beil, Michael; Saeger, Kai; Hufnagl, Peter; Schrader, Thomas
2009-01-01
Background Virtual microscopy is widely accepted in pathology for educational purposes and teleconsultation but is far from routine use in surgical pathology due to the technical requirements and some limitations. A technical problem is the limited bandwidth of a usual network and the delayed transmission rate and presentation time on the screen. Methods In this study the process of secondary diagnostics was evaluated using the "T.Konsult Pathologie" service of the Professional Association of German Pathologists within the German breast cancer screening program. The characteristics of access to the WSI (Whole Slide Images) were analyzed to explore the possibilities of prefetching and caching to reduce the presentation and transfer time, with the goal of increasing user acceptance. The log files of the web server were analyzed to reconstruct the movements of the pathologist on the WSI and to create the observation path. Using a specialized tool, the observation paths were extracted automatically from the log files. The attributes linearity, 3-point-linearity, changes per request, and number of consecutive requests were calculated to design, develop, and evaluate different caching and prefetching strategies. Results The analysis of the observation paths showed that complete accordance of two image requests is a very rare event. More frequently, a partial covering of two requested image areas can be found. In total, 257 diagnostic paths from 131 WSI were extracted and analysed. On average a diagnostic path consists of 16 image requests and takes 189 seconds between first and last image request. The mean linearity was 0.41 and the mean 3-point-linearity 0.85. Three different caching algorithms were compared with respect to hit rate and additional image requests on the WSI server. Tests demonstrated that 95% of the diagnostic paths could be loaded without any deletion of entries in the cache (cache size 12.2 megapixels). If the image parts are stored after JPEG compression, this corresponds to less than 2 MB. Discussion WSI telepathology is a technology which offers the possibility to break the limitations of conventional static telepathology. The complete histological slide may be investigated instead of sets of images of lesions sampled by the presenting pathologist. The benefit is demonstrated by the high diagnostic security of 95% accordance between first and second diagnosis. PMID:19134181
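A cache strategy of the kind compared above can be evaluated by replaying the extracted observation paths against a simulated cache. Here is a minimal Python sketch using plain LRU eviction under a pixel budget; the request format and the LRU policy are assumptions for illustration, not the study's specific algorithms.

    from collections import OrderedDict

    def lru_hit_rate(requests, capacity_px):
        """Replay an observation path, given as (tile_id, n_pixels) pairs,
        through an LRU cache limited by total pixels; return the hit rate."""
        cache, used, hits = OrderedDict(), 0, 0
        for tile, px in requests:
            if tile in cache:
                hits += 1
                cache.move_to_end(tile)
                continue
            while used + px > capacity_px and cache:
                _, old_px = cache.popitem(last=False)   # evict least recent
                used -= old_px
            cache[tile] = px
            used += px
        return hits / len(requests)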
PET/CT in Radiation Therapy Planning.
Specht, Lena; Berthelsen, Anne Kiil
2018-01-01
Radiation therapy (RT) is an important component of the management of lymphoma patients. Most lymphomas are metabolically active and accumulate 18F-fluorodeoxyglucose (FDG). Positron emission tomography with computed tomography (PET/CT) imaging using FDG is used routinely in staging and treatment evaluation. FDG-PET/CT imaging is now also used routinely for contouring the target for RT, and has been shown to change the irradiated volume significantly compared with CT imaging alone. Modern advanced imaging techniques with image fusion and motion management, in combination with modern highly conformal RT techniques, have increased the precision of RT and have made it possible to reduce dramatically the risks of long-term side effects of treatment while maintaining the high cure rates for these diseases. Copyright © 2017 Elsevier Inc. All rights reserved.
Medical imaging: examples of clinical applications
NASA Astrophysics Data System (ADS)
Meinzer, H. P.; Thorn, M.; Vetter, M.; Hassenpflug, P.; Hastenteufel, M.; Wolf, I.
Clinical routine is currently producing a multitude of diagnostic digital images but only a few are used in therapy planning and treatment. Medical imaging is involved in both diagnosis and therapy. Using a computer, existing 2D images can be transformed into interactive 3D volumes and results from different modalities can be merged. Furthermore, it is possible to calculate functional areas that were not visible in the primary images. This paper presents examples of clinical applications that are integrated into clinical routine and are based on medical imaging fundamentals. In liver surgery, the importance of virtual planning is increasing because surgery is still the only possible curative procedure. Visualisation and analysis of heart defects are also gaining in significance due to improved surgery techniques. Finally, an outlook is provided on future developments in medical imaging using navigation to support the surgeon's work. The paper intends to give an impression of the wide range of medical imaging that goes beyond the mere calculation of medical images.
1984-12-01
BLOCK DATA: default values for variables input by menus. LIBR: interface with frame I/O routines. SNSR: interface with sensor routines. ATMOS: interface with... Routines included in the frame I/O interface: LIBR: selects options for input or output to a data library. FRREAD: reads frame from file and/or...
Enhancing the science of the WFIRST coronagraph instrument with post-processing.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; WFIRST CGI data analysis and post-processing WG
2018-01-01
We summarize the results of a three-year effort investigating how to apply modern image analysis methods, now routinely used with ground-based coronagraphs, to the WFIRST coronagraph instrument (CGI). Here we quantify the gain associated with post-processing for WFIRST-CGI observing scenarios simulated between 2013 and 2017. We also show, based on simulations, that the spectrum of a planet can be confidently retrieved using these processing tools with an Integral Field Spectrograph. We then discuss our work using CGI experimental data and quantify coronagraph post-processing testbed gains. We finally introduce stability metrics that are simple to define and measure, and place useful lower and upper bounds on the achievable RDI post-processing contrast gain. We show that our bounds hold in the case of the testbed data.
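RDI post-processing of the kind benchmarked here is commonly implemented as projection onto principal components of a reference PSF library (KLIP-style). Below is a compact Python sketch of that subtraction; the mode count and data layout are illustrative assumptions, not the working group's pipeline.

    import numpy as np

    def rdi_subtract(science, refs, n_modes=5):
        """Project a science frame onto the first principal components of a
        reference PSF library and subtract the reconstruction.
        science: (H, W); refs: (n_refs, H, W)."""
        sci = science.ravel() - science.mean()
        R = refs.reshape(len(refs), -1).astype(float)
        R -= R.mean(axis=1, keepdims=True)
        _, _, Vt = np.linalg.svd(R, full_matrices=False)  # rows: orthonormal modes
        Z = Vt[:n_modes]
        model = Z.T @ (Z @ sci)                           # low-rank PSF model
        return (sci - model).reshape(science.shape)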
CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline
NASA Astrophysics Data System (ADS)
de la Cruz Rodríguez, J.; Löfdahl, M. G.; Sütterlin, P.; Hillberg, T.; Rouppe van der Voort, L.
2017-08-01
CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).
Evidence and diagnostic reporting in the IHE context.
Loef, Cor; Truyen, Roel
2005-05-01
Capturing clinical observations and findings during the diagnostic imaging process is increasingly becoming a critical step in diagnostic reporting. Standards developers, notably HL7 and DICOM, are making significant progress toward standards that enable exchanging clinical observations and findings among the various information systems of the healthcare enterprise. DICOM, like the HL7 Clinical Document Architecture (CDA), uses templates and constrained, coded vocabulary (SNOMED, LOINC, etc.). Such a representation facilitates automated software recognition of findings and observations, intrapatient comparison, correlation to norms, and outcomes research. The scope of DICOM Structured Reporting (SR) includes many findings that products routinely create in digital form (measurements, computed estimates, etc.). In the Integrating the Healthcare Enterprise (IHE) framework, two Integration Profiles are defined for clinical data capture and diagnostic reporting: Evidence Document, and Simple Image and Numeric Report. This report describes these two DICOM SR-based integration profiles in the diagnostic reporting process.
Improving IUE High Dispersion Extraction
NASA Technical Reports Server (NTRS)
Lawton, Patricia J.; VanSteenberg, M. E.; Massa, D.
2007-01-01
We present a different method to extract high dispersion International Ultraviolet Explorer (IUE) spectra from the New Spectral Image Processing System (NEWSIPS) geometrically and photometrically corrected (SIHI) images of the echellogram. The new algorithm corrects many of the deficiencies that exist in the NEWSIPS high dispersion (SIHI) spectra. Specifically, it does a much better job of accounting for the overlap of the higher echelle orders, it eliminates a significant time dependency in the extracted spectra (which can be traced to the background model used in the NEWSIPS extractions), and it can extract spectra from echellogram images that are more highly distorted than the NEWSIPS extraction routines can handle. Together, these improvements yield a set of IUE high dispersion spectra whose scientific integrity is significantly better than the NEWSIPS products. This work has been supported by NASA ADP grants.
Diagnosis of non-osseous spinal metastatic disease: the role of PET/CT and PET/MRI.
Batouli, Ali; Braun, John; Singh, Kamal; Gholamrezanezhad, Ali; Casagranda, Bethany U; Alavi, Abass
2018-06-01
The spine is the third most common site for distant metastasis in cancer patients with approximately 70% of patients with metastatic cancer having spinal involvement. Positron emission tomography (PET), combined with computed tomography (CT) or magnetic resonance imaging (MRI), has been deeply integrated in modern clinical oncology as a pivotal component of the diagnostic work-up of patients with cancer. PET is able to diagnose several neoplastic processes before any detectable morphological changes can be identified by anatomic imaging modalities alone. In this review, we discuss the role of PET/CT and PET/MRI in the diagnostic management of non-osseous metastatic disease of the spinal canal. While sometimes subtle, recognizing such disease on FDG PET/CT and PET/MRI imaging done routinely in cancer patients can guide treatment strategies to potentially prevent irreversible neurological damage.
Modal Analysis of an Aircraft Fuselage Panel using Experimental and Finite-Element Techniques
NASA Technical Reports Server (NTRS)
Fleming, Gary A.; Buehrle, Ralph D.; Storaasli, Olaf L.
1998-01-01
The application of Electro-Optic Holography (EOH) for measuring the center bay vibration modes of an aircraft fuselage panel under forced excitation is presented. The requirement of free-free panel boundary conditions made the acquisition of quantitative EOH data challenging, since large scale rigid body motions corrupted measurements of the high frequency vibrations of interest. Image processing routines designed to minimize the effects of large scale motions were applied to successfully recover quantitative EOH vibrational amplitude measurements.
Data-Base Software For Tracking Technological Developments
NASA Technical Reports Server (NTRS)
Aliberti, James A.; Wright, Simon; Monteith, Steve K.
1996-01-01
Technology Tracking System (TechTracS) computer program developed for use in storing and retrieving information on technology and related patent information developed under auspices of NASA Headquarters and NASA's field centers. Contents of data base include multiple scanned still images and quick-time movies as well as text. TechTracS includes word-processing, report-editing, chart-and-graph-editing, and search-editing subprograms. Extensive keyword searching capabilities enable rapid location of technologies, innovators, and companies. System performs routine functions automatically and serves multiple users.
Hoover, Andrew J.; Lazari, Mark; Ren, Hong; ...
2016-02-14
Translation of new 18F-fluorination reactions to produce radiotracers for human positron emission tomography (PET) imaging is rare because the chemistry must have useful scope and the process for 18F-labeled tracer production must be robust and simple to execute. The application of transition metal mediators has enabled impactful 18F-fluorination methods, but to date none of these reactions have been applied to produce a human-injectable PET tracer. In this article we present chemistry and process innovations that culminate in the first production from [18F]fluoride of human doses of [18F]5-fluorouracil, a PET tracer for cancer imaging in humans. Here, the first preparation of nickel σ-aryl complexes by transmetalation from arylboronic acids or esters was developed and enabled the synthesis of the [18F]5-fluorouracil precursor. Routine production of >10 mCi doses of [18F]5-fluorouracil was accomplished with a new instrument for azeotrope-free [18F]fluoride concentration in a process that leverages the tolerance of water in nickel-mediated 18F-fluorination.
Measuring upconversion nanoparticles photoluminescence lifetime with FastFLIM and phasor plots
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Lee, Hsien-Ming; Qiu, Hailin; Liao, Shih-Chu Jeff; Coskun, Ulas; Barbieri, Beniamino
2018-02-01
Photon upconversion is a nonlinear process in which the sequential absorption of two or more photons leads to anti-Stokes emission. Unlike the conventional multiphoton excitation process, upconversion can be performed efficiently at low excitation densities. Recent developments in lanthanide-doped upconversion nanoparticles (UCNPs) have led to a diversity of applications, including detection and sensing of biomolecules, imaging of live cells, tissues and animals, cancer diagnostics and therapy, etc. Measuring the upconversion lifetime adds a new dimension to UCNP imaging and opens a new window for its applications. Due to the long metastable intermediate excited state, UCNPs typically have long excited-state lifetimes ranging from sub-microseconds to milliseconds. Here, we present a novel development using the FastFLIM technique to measure UCNP lifetime by laser scanning confocal microscopy. FastFLIM is capable of measuring lifetimes from 100 ps to 100 ms and features high data-collection efficiency (up to 140 million counts per second). Beyond traditional nonlinear least-squares fitting analysis, the raw data acquired by FastFLIM can be processed directly with the model-free phasor plot approach for instant and unbiased lifetime results, providing an ideal routine for UCNP photoluminescence lifetime microscopy imaging.
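The phasor mapping invoked above is compact enough to sketch. The following Python/NumPy snippet maps a decay histogram to phasor coordinates (g, s); the function name and the single-exponential test decay are illustrative assumptions, not the FastFLIM vendor API.

```python
import numpy as np

def phasor_coordinates(decay, dt, harmonic=1):
    """Map a fluorescence decay histogram to phasor coordinates (g, s)."""
    n = decay.size
    t = (np.arange(n) + 0.5) * dt                  # bin centres (seconds)
    omega = 2.0 * np.pi * harmonic / (n * dt)      # angular repetition frequency
    total = decay.sum()
    g = np.sum(decay * np.cos(omega * t)) / total
    s = np.sum(decay * np.sin(omega * t)) / total
    return g, s

# A single-exponential decay lands on the universal semicircle, where the
# lifetime can be read back as tau = s / (omega * g).
dt = 1e-6                                          # 1 us bins over a 1 ms window
t = (np.arange(1000) + 0.5) * dt
g, s = phasor_coordinates(np.exp(-t / 50e-6), dt)
omega = 2.0 * np.pi / (1000 * dt)
print(g, s, s / (omega * g))                       # recovered lifetime ~ 50 us
```

Because the mapping is model-free, a mixture of lifetimes simply lands inside the semicircle on the chord joining its components, which is what makes the phasor plot useful for unbiased screening.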
[Structuralist reading of radiologic images].
Wackenheim, A
1984-02-01
The author suggests analysing the radiological image according to the classical principles of structuralism, gestaltism, semiology, and semantics. He describes applications in routine radiology: perception of the complete theoretical displacement of parts of the image, the phenomenology of three images (A-B-C) in theory and in examinations, and errors of perception by analogy.
Querleu, Denis; Planchamp, François; Narducci, Fabrice; Morice, Philippe; Joly, Florence; Genestie, Catherine; Haie-Meder, Christine; Thomas, Laurence; Quénel-Tueux, Nathalie; Daraï, Emile; Dorangeon, Pierre-Hervé; Marret, Henri; Taïeb, Sophie; Mazeau-Woynar, Valérie
2011-07-01
Endometrial cancer is the most common gynecological malignancy in France, with more than 6500 new cases in 2010. The French National Cancer Institute has been leading a clinical practice guidelines (CPG) project since 2008. This project involves the development and updating of evidence-based CPG in oncology. To develop CPG for diagnosis, treatment, and follow-up for patients with endometrial cancer. The guideline development process is based on systematic literature review and critical appraisal by experts, with feedback from specialists in cancer care delivery. The recommendations are thus based on the best available evidence and expert agreement. Main recommendations include a routine pelvic magnetic resonance imaging in association with magnetic resonance imaging exploration of the para-aortic lymph nodes for locoregional staging, surgical treatment based on total hysterectomy with bilateral salpingo-oophorectomy with or without lymphadenectomy, and clinical examination for the follow-up. The initial laparoscopic surgical approach is recommended for stage I tumors. Lymphadenectomy and postoperative external radiotherapy are recommended for patients with high risk of recurrence but are restricted for patients with low or intermediate risk. If brachytherapy is indicated, it should be given at a high-dose rate rather than a low-dose rate. Routine imaging, biologic tests, and vaginal smears are not indicated for follow-up.
Ex vivo applications of multiphoton microscopy in urology
NASA Astrophysics Data System (ADS)
Jain, Manu; Mukherjee, Sushmita
2016-03-01
Background: Routine urological surgery frequently requires rapid on-site histopathological tissue evaluation, either during biopsy or intra-operatively. However, resected tissue needs to undergo processing, which is not only time consuming but may also create artifacts hindering real-time tissue assessment. Likewise, pathologists often rely on several ancillary methods, in addition to H&E, to arrive at a definitive diagnosis. Although helpful, these techniques are tedious and time consuming and often show overlapping results. Therefore, there is a need for an imaging tool that can rapidly assess tissue in real time at the cellular level. Multiphoton microscopy (MPM) is one such technique: it can generate histology-quality images from fresh and fixed tissue solely based on intrinsic autofluorescence emission, without the need for tissue processing or staining. Design: Fresh tissue sections (neoplastic and non-neoplastic) from biopsy and surgical specimens of bladder and kidney were obtained. Unstained deparaffinized slides from biopsies of medical kidney disease and oncocytic renal neoplasms were also obtained. MPM images were acquired with an Olympus FluoView FV1000MPE system. After imaging, fresh tissues were submitted for routine histopathology. Results: Based on the architectural and cellular details of the tissue, MPM could characterize normal components of bladder and kidney. Neoplastic tissue could be differentiated from non-neoplastic tissue and could be further classified per histopathological convention. Some of the tumors had unique MPM signatures not otherwise seen on H&E sections. Various subtypes of glomerular lesions were identified, and renal oncocytic neoplasms were differentiated on unstained deparaffinized slides. Conclusions: We envision MPM becoming an integral part of the regular diagnostic workflow for rapid assessment of tissue. MPM can be used to evaluate the adequacy of biopsies and triage tissues for ancillary studies. It can also be used as an adjunct to frozen section analysis for intra-operative margin assessment. Further, it can play an important role in guiding specimen grossing, selecting tissue for tumor banking, and as a rapid ancillary diagnostic tool.
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Pline, Alexander D.
1991-01-01
The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the USML-1 Spacelab mission planned for 1992. One of the components of data collected during the experiment is a video record of the flow field. This qualitative data is then quantified using an all electronic, two-dimensional particle image velocimetry technique called particle displacement tracking (PDT) which uses a simple space domain particle tracking algorithm. The PDT system is successful in producing velocity vector fields from the raw video data. Application of the PDT technique to a sample data set yielded 1606 vectors in 30 seconds of processing time. A bottom viewing optical arrangement is used to image the illuminated plane, which causes keystone distortion in the final recorded image. A coordinate transformation was incorporated into the system software to correct this viewing angle distortion. PDT processing produced 1.8 percent false identifications, due to random particle locations. A highly successful routine for removing the false identifications was also incorporated, reducing the number of false identifications to 0.2 percent.
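As a rough illustration of the space-domain tracking step described above, the sketch below matches particle centroids between two frames by nearest neighbor and rejects matches beyond a maximum displacement, a crude stand-in for the false-identification removal the abstract reports. The centroid arrays, flow field, and threshold are hypothetical, not the PDT system's actual data or code.

```python
import numpy as np

def track_particles(p0, p1, max_disp):
    """Match frame-0 centroids to nearest frame-1 centroids; reject far pairs."""
    positions, vectors = [], []
    for pt in p0:
        d = np.hypot(*(p1 - pt).T)        # distance to every frame-1 particle
        j = int(np.argmin(d))
        if d[j] <= max_disp:              # crude false-identification filter
            positions.append(pt)
            vectors.append(p1[j] - pt)
    return np.array(positions), np.array(vectors)

rng = np.random.default_rng(0)
p0 = rng.uniform(0, 512, size=(200, 2))   # synthetic particle centroids (pixels)
p1 = p0 + np.array([1.5, 0.5])            # uniform flow of (1.5, 0.5) px/frame
pos, vel = track_particles(p0, p1, max_disp=3.0)
print(vel.mean(axis=0))                   # ~[1.5, 0.5]
```

In a real system, a coordinate transformation correcting the keystone distortion mentioned above would be applied to the centroid positions before matching.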
Biological applications of confocal fluorescence polarization microscopy
NASA Astrophysics Data System (ADS)
Bigelow, Chad E.
Fluorescence polarization microscopy is a powerful modality capable of sensing changes in the physical properties and local environment of fluorophores. In this thesis we present new applications for the technique in cancer diagnosis and treatment and explore the limits of the modality in scattering media. We describe modifications to our custom-built confocal fluorescence microscope that enable dual-color imaging, optical fiber-based confocal spectroscopy and fluorescence polarization imaging. Experiments are presented that indicate the performance of the instrument for all three modalities. The limits of confocal fluorescence polarization imaging in scattering media are explored and the microscope parameters necessary for accurate polarization images in this regime are determined. A Monte Carlo routine is developed to model the effect of scattering on images. Included in it are routines to track the polarization state of light using the Mueller-Stokes formalism and a model for fluorescence generation that includes sampling the excitation light polarization ellipse, Brownian motion of excited-state fluorophores in solution, and dipole fluorophore emission. Results from this model are compared to experiments performed on a fluorophore-embedded polymer rod in a turbid medium consisting of polystyrene microspheres in aqueous suspension. We demonstrate the utility of the fluorescence polarization imaging technique for removal of contaminating autofluorescence and for imaging photodynamic therapy drugs in cell monolayers. Images of cells expressing green fluorescent protein are extracted from contaminating fluorescein emission. The distribution of meta-tetrahydroxyphenylchlorin in an EMT6 cell monolayer is also presented. A new technique for imaging enzyme activity is presented that is based on observing changes in the anisotropy of fluorescently-labeled substrates. Proof-of-principle studies are performed in a model system consisting of fluorescently labeled bovine serum albumin attached to sepharose beads. The action of trypsin and proteinase K on the albumin is monitored to demonstrate validity of the technique. Images of the processing of the albumin in J774 murine macrophages are also presented indicating large intercellular differences in enzyme activity. Future directions for the technique are also presented, including the design of enzyme probes specific for prostate specific antigen based on fluorescently-labeled dendrimers. A technique for enzyme imaging based on extracellular autofluorescence is also proposed.
ASTEP user's guide and software documentation
NASA Technical Reports Server (NTRS)
Gliniewicz, A. S.; Lachowski, H. M.; Pace, W. H., Jr.; Salvato, P., Jr.
1974-01-01
The Algorithm Simulation Test and Evaluation Program (ASTEP) is a modular computer program developed for the purpose of testing and evaluating methods of processing remotely sensed multispectral scanner earth resources data. ASTEP is written in FORTRAN V on the UNIVAC 1110 under the EXEC 8 operating system and may be operated in either a batch or interactive mode. The program currently contains over one hundred subroutines consisting of data classification and display algorithms, statistical analysis algorithms, utility support routines, and feature selection capability. The current program can accept data in LARSC1, LARSC2, ERTS, and Universal formats, and can output processed image or data tapes in Universal format.
"Proximal Sensing" capabilities for snow cover monitoring
NASA Astrophysics Data System (ADS)
Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo
2013-04-01
The seasonal snow cover represents one of the most important land cover classes for environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of touristic activities in mountain areas. Recently, webcam images collected at daily or even hourly intervals have been used to observe snow-covered areas; properly processed, these images can be considered a very important environmental data source. Images captured by digital cameras become a useful tool at the local scale, providing images even when cloud cover makes observation by satellite sensors impossible. When suitably processed, these images can be used for scientific purposes, having good resolution (at least 800x600 with 16 million colours) and a very good sampling frequency (hourly images taken throughout the whole year). Once stored in databases, these images therefore represent an important source of information for the study of recent climatic changes, for evaluating available water resources, and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover in webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and in the Apennines at a pilot station properly equipped for this project by CNR-IIA. The results obtained with Snow-noSnow are comparable to those achieved by photo-interpretation and can be considered better than those obtained using the image segmentation routines implemented in commercial image processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. The analysis of this kind of image can be a useful element to support the interpretation of remote sensing images, especially those provided by high spatial resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.
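The abstract does not detail the Snow-noSnow classifier, but the basic idea of mapping snow in an RGB webcam frame can be sketched under a simple assumption: snow pixels are bright and nearly achromatic. The function name and thresholds below are illustrative, not the Snow-noSnow algorithm.

```python
import numpy as np

def snow_fraction(rgb, brightness_min=170, chroma_max=20):
    """Fraction of pixels classified as snow in an 8-bit RGB image (H, W, 3)."""
    rgb = rgb.astype(np.int16)
    brightness = rgb.mean(axis=2)
    chroma = rgb.max(axis=2) - rgb.min(axis=2)    # small for gray/white pixels
    snow = (brightness >= brightness_min) & (chroma <= chroma_max)
    return float(snow.mean())

frame = np.zeros((600, 800, 3), dtype=np.uint8)
frame[:300] = 230                                 # synthetic snowy upper half
frame[300:] = (60, 90, 40)                        # vegetated lower half
print(snow_fraction(frame))                       # ~0.5
```

A production tool would additionally mask sky and fixed structures and adapt the thresholds to illumination, which is where most of the real difficulty lies.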
Effect of routine diagnostic imaging for patients with musculoskeletal disorders: A meta-analysis.
Karel, Yasmaine H J M; Verkerk, Karin; Endenburg, Silvio; Metselaar, Sven; Verhagen, Arianne P
2015-10-01
The increasing use of diagnostic imaging has led to high expenditures, unnecessary invasive procedures and/or false-positive diagnoses, without certainty that patients actually benefit from these imaging procedures. This review explores whether diagnostic imaging leads to better patient-reported outcomes in individuals with musculoskeletal disorders. Databases were searched from inception to September 2013, together with scrutiny of selected bibliographies. Trials were eligible when: 1) a diagnostic imaging procedure was compared with any control group not undergoing imaging or not receiving its results; 2) the population included individuals suffering from musculoskeletal disorders; and 3) patient-reported outcomes were available. Primary outcome measures were pain and function. Secondary outcome measures were satisfaction and quality of life. Subgroup analysis was done for different musculoskeletal complaints and for high-technology medical imaging (MRI/CT). Eleven trials were eligible. The effects of diagnostic imaging were evaluated only in patients with low back pain (n=7) and knee complaints (n=4). Overall, there was a moderate level of evidence for no benefit of diagnostic imaging on all outcomes compared with controls. A significant but clinically irrelevant effect was found in favor of no (routine) imaging in low back pain patients in terms of pain severity at short-term [SMD 0.17 (0.04-0.31)] and long-term follow-up [SMD 0.13 (0.02-0.24)], and for overall improvement [RR 1.15 (1.03-1.28)]. Subgroup analysis did not significantly change these results. These results strengthen the available evidence that routine referral to diagnostic imaging by general practitioners for patients with knee and low back pain yields little to no benefit. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
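For readers unfamiliar with how summary effects such as SMD 0.17 (0.04-0.31) arise, the sketch below shows generic fixed-effect inverse-variance pooling; the study-level numbers are invented for illustration and are not the review's data.

```python
import numpy as np

def pooled_smd(smd, se):
    """Fixed-effect inverse-variance pooling of standardized mean differences."""
    smd, se = np.asarray(smd, float), np.asarray(se, float)
    w = 1.0 / se ** 2                         # weight = inverse variance
    est = np.sum(w * smd) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se_pooled, est + 1.96 * se_pooled)

# Three hypothetical trials (effect, standard error):
print(pooled_smd([0.10, 0.25, 0.15], [0.10, 0.12, 0.09]))
```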
A new image representation for compact and secure communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Lakshman; Skourikhine, A. N.
In many areas of nuclear materials management there is a need for communication, archival, and retrieval of annotated image data between heterogeneous platforms and devices to effectively implement safety, security, and safeguards of nuclear materials. Current image formats such as JPEG are not ideally suited to such scenarios, as they are not scalable to different viewing formats and do not provide a high-level representation of images that facilitates automatic object/change detection or annotation. The Scalable Vector Graphics (SVG) open standard for representing graphical information, recommended by the World Wide Web Consortium (W3C), is designed to address issues of image scalability, portability, and annotation. However, until now there has been no viable technology to efficiently field images of high visual quality under this standard. Recently, LANL has developed a vectorized image representation that is compatible with the SVG standard and preserves visual quality. This is based on a new geometric framework for characterizing complex features in real-world imagery that incorporates perceptual principles of processing visual information known from cognitive psychology and vision science, to obtain a polygonal image representation of high fidelity. This representation can take advantage of all the textual compression and encryption routines unavailable to other image formats. Moreover, this vectorized image representation can be exploited to facilitate automated object recognition that can reduce the time required for data review. The objects/features of interest in these vectorized images can be annotated via animated graphics to facilitate quick and easy display and comprehension of processed image content.
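As a toy illustration of the vector-graphics representation discussed above, the snippet below serializes one polygonal region to SVG text (which, being text, is amenable to ordinary compression and encryption). The LANL vectorization algorithm itself is not reproduced, and the polygon is arbitrary.

```python
def polygon_to_svg(points, width, height, fill="#888888"):
    """Render one polygonal image region as a minimal standalone SVG document."""
    pts = " ".join(f"{x},{y}" for x, y in points)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<polygon points="{pts}" fill="{fill}"/></svg>')

print(polygon_to_svg([(10, 10), (90, 20), (50, 80)], 100, 100))
```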
VizieR Online Data Catalog: Observed light curve of (3200) Phaethon (Ansdell+, 2014)
NASA Astrophysics Data System (ADS)
Ansdell, M.; Meech, K. J.; Hainaut, O.; Buie, M. W.; Kaluna, H.; Bauer, J.; Dundon, L.
2017-04-01
We obtained time series photometry over 15 nights from 1994 to 2013. All but three nights used the Tektronix 2048x2048 pixel CCD camera on the University of Hawaii 2.2 m telescope on Mauna Kea. Two nights used the PRISM 2048x2048 pixel CCD camera on the Perkins 72 inch telescope at the Lowell Observatory in Flagstaff, Arizona, while one night used the Optic 2048x4096 CCD camera also on the University of Hawaii 2.2 m telescope. All observations used the standard Kron-Cousins R filter with the telescope guiding on (3200) Phaethon at non-sidereal rates. Raw images were processed with standard IRAF routines for bias subtraction, flat-fielding, and cosmic ray removal (Tody, 1986SPIE..627..733T). We constructed reference flat fields by median combining dithered images of either twilight or the object field (in both cases, flattening reduced gradients to <1% across the CCD). We performed photometry using the IRAF phot routine with circular apertures typically 5'' in radius, although aperture sizes changed depending on the night and/or exposure as they were chosen to consistently include 99.5% of the object's light. (1 data file).
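The aperture photometry step described above (IRAF phot with ~5'' circular apertures) can be sketched with modern Python tooling. Here photutils stands in for IRAF, and the synthetic image, an assumed 1''/pixel scale, and the crude median background estimate are all illustrative assumptions.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

image = np.random.normal(100.0, 5.0, (256, 256))   # synthetic sky background
image[128, 128] += 5000.0                          # a point source

aper = CircularAperture([(128.0, 128.0)], r=5.0)   # 5 px ~ 5'' at 1''/pixel
sky = np.median(image)                             # crude background level
table = aperture_photometry(image - sky, aper)     # sky-subtracted flux sum
print(table["aperture_sum"][0])
```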
The challenges of studying visual expertise in medical image diagnosis.
Gegenfurtner, Andreas; Kok, Ellen; van Geel, Koos; de Bruin, Anique; Jarodzka, Halszka; Szulewski, Adam; van Merriënboer, Jeroen Jg
2017-01-01
Visual expertise is the superior visual skill shown when executing domain-specific visual tasks. Understanding visual expertise is important in order to understand how the interpretation of medical images may be best learned and taught. In the context of this article, we focus on the visual skill of medical image diagnosis and, more specifically, on the methodological set-ups routinely used in visual expertise research. We offer a critique of commonly used methods and propose three challenges for future research to open up new avenues for studying characteristics of visual expertise in medical image diagnosis. The first challenge addresses theory development. Novel prospects in modelling visual expertise can emerge when we reflect on cognitive and socio-cultural epistemologies in visual expertise research, when we engage in statistical validations of existing theoretical assumptions and when we include social and socio-cultural processes in expertise development. The second challenge addresses the recording and analysis of longitudinal data. If we assume that the development of expertise is a long-term phenomenon, then it follows that future research can engage in advanced statistical modelling of longitudinal expertise data that extends the routine use of cross-sectional material through, for example, animations and dynamic visualisations of developmental data. The third challenge addresses the combination of methods. Alternatives to current practices can integrate qualitative and quantitative approaches in mixed-method designs, embrace relevant yet underused data sources and understand the need for multidisciplinary research teams. Embracing alternative epistemological and methodological approaches for studying visual expertise can lead to a more balanced and robust future for understanding superior visual skills in medical image diagnosis as well as other medical fields. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
NASA Astrophysics Data System (ADS)
Seers, T. D.; Hodgetts, D.
2013-12-01
The detection of topological change at the Earth's surface is of considerable scholarly interest, allowing the quantification of the rates of geomorphic processes whilst providing lucid insights into the underlying mechanisms driving landscape evolution. In this regard, the past decade has witnessed an ever increasing proliferation of studies employing multi-temporal topographic data within the geosciences, bolstered by continuing technical advancements in the acquisition and processing of the prerequisite datasets. Multiview stereo (MVS) dense surface reconstruction, primed by structure-from-motion (SfM) camera pose estimation and developed within the field of computer vision, represents one such advancement. Providing a cost effective, operationally efficient data capture medium, the modest requirement of a consumer grade camera for data collection, coupled with the minimal user intervention required during post-processing, makes SfM-MVS an attractive alternative to terrestrial laser scanners for collecting multi-temporal topographic datasets. However, as with terrestrial scanner derived data, the co-registration of spatially coincident or partially overlapping scans produced by SfM-MVS presents a major technical challenge, particularly in the case of the semi non-rigid scenes encountered in topographic change detection studies. Moreover, the arbitrary scaling resulting from SfM ambiguity requires that a scale factor be estimated during the transformation, introducing further complexity into its formulation. Here, we present a novel, fully unsupervised algorithm which utilises non-linearly weighted image features for solving the similarity transform (scale, translation, rotation) between partially overlapping scans produced by SfM-MVS image processing. With the only initialization condition being partial intersection between input image sets, our method has major advantages over conventional iterative least squares minimization methods (e.g. Iterative Closest Point variants): it acts only on rigid areas of target scenes, is capable of reliably estimating the scaling factor, and requires no initial estimate of the transformation (i.e. manual rough alignment). Moreover, because the solution is closed form, convergence is considerably more expedient than with most iterative methods. It is hoped that the availability of improved co-registration routines, such as the one presented here, will facilitate the routine collection of multi-temporal topographic datasets by a wider range of geoscience practitioners.
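The closed-form similarity transform at the heart of such methods has a standard textbook solution (the Umeyama/Procrustes alignment). The sketch below recovers scale, rotation, and translation from corresponding 3-D points; it illustrates the class of closed-form solution described, not the authors' feature-weighted formulation.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form (scale, rotation, translation) aligning src to dst points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)              # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(1)
src = rng.normal(size=(100, 3))                   # arbitrary 3-D feature points
dst = 2.5 * src + np.array([1.0, -2.0, 0.5])      # known scale and translation
s, R, t = similarity_transform(src, dst)
print(round(s, 3), t.round(3))                    # ~2.5 and ~[1, -2, 0.5]
```

Because scale is solved explicitly, this family of solutions handles the arbitrary SfM scaling that a rigid-only alignment (e.g. plain ICP) cannot.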
Optimization of PROPELLER reconstruction for free-breathing T1-weighted cardiac imaging.
Huang, Teng-Yi; Tseng, Yu-Shen; Tang, Yu-Wei; Lin, Yi-Ru
2012-08-01
Clinical cardiac MR imaging techniques generally require patients to hold their breath during the scanning process to minimize respiratory motion-related artifacts. However, some patients cannot hold their breath because of illness or limited breath-hold capacity. This study aims to optimize the PROPELLER reconstruction for free-breathing myocardial T1-weighted imaging. Eight healthy volunteers (8 men; mean age 26.4 years) participated in this study after providing institutionally approved consent. The PROPELLER encoding method can reconstruct a low-resolution image from every blade because of k-space center oversampling. This study investigated the feasibility of extracting a respiratory trace from the PROPELLER blades by implementing a fully automatic region of interest selection and introducing a best template index to account for the property of the human respiration cycle. Results demonstrated that the proposed algorithm significantly improves the contrast-to-noise ratio and the image sharpness (p < 0.05). The PROPELLER method is expected to provide a robust tool for clinical application in free-breathing myocardial T1-weighted imaging. It could greatly facilitate the acquisition procedures during such a routine examination.
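A minimal sketch of the blade-based gating idea described above: reconstruct a low-resolution image from each blade's oversampled k-space centre, then score blades by normalized correlation against a chosen template so off-phase blades can be identified. The array shapes, the FFT-based reconstruction, and the scoring are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def blade_lowres_images(blade_kspace):
    """blade_kspace: (n_blades, H, W) complex k-space centres -> magnitude images."""
    imgs = np.fft.ifft2(blade_kspace, axes=(-2, -1))
    return np.abs(np.fft.fftshift(imgs, axes=(-2, -1)))

def blade_scores(images, template):
    """Normalized cross-correlation of each low-res blade image with a template."""
    t = (template - template.mean()) / template.std()
    scores = []
    for im in images:
        z = (im - im.mean()) / im.std()
        scores.append(float((z * t).mean()))
    return np.array(scores)       # a respiratory-phase trace across blades
```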
Correlative 3D imaging of Whole Mammalian Cells with Light and Electron Microscopy
Murphy, Gavin E.; Narayan, Kedar; Lowekamp, Bradley C.; Hartnell, Lisa M.; Heymann, Jurgen A. W.; Fu, Jing; Subramaniam, Sriram
2011-01-01
We report methodological advances that extend the current capabilities of ion-abrasion scanning electron microscopy (IA–SEM), also known as focused ion beam scanning electron microscopy, a newly emerging technology for high resolution imaging of large biological specimens in 3D. We establish protocols that enable the routine generation of 3D image stacks of entire plastic-embedded mammalian cells by IA-SEM at resolutions of ~10 to 20 nm at high contrast and with minimal artifacts from the focused ion beam. We build on these advances by describing a detailed approach for carrying out correlative live confocal microscopy and IA–SEM on the same cells. Finally, we demonstrate that by combining correlative imaging with newly developed tools for automated image processing, small 100 nm-sized entities such as HIV-1 or gold beads can be localized in SEM image stacks of whole mammalian cells. We anticipate that these methods will add to the arsenal of tools available for investigating mechanisms underlying host-pathogen interactions, and more generally, the 3D subcellular architecture of mammalian cells and tissues. PMID:21907806
Development of image processing method to detect noise in geostationary imagery
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin V.; Doelling, David R.
2016-10-01
The Clouds and the Earth's Radiant Energy System (CERES) has incorporated imagery from 16 individual geostationary (GEO) satellites across five contiguous domains since March 2000. In order to derive broadband fluxes that are uniform across satellite platforms, it is important to ensure good quality of the input raw count data. GEO data obtained by older imagers (such as MTSAT-1, Meteosat-5, Meteosat-7, GMS-5, and GOES-9) are known to frequently contain various types of noise caused by transmission errors, sync errors, stray light contamination, and others. This work presents an image processing methodology designed to detect most kinds of noise and corrupt data in all bands of raw imagery from modern and historic GEO satellites. The algorithm is based on a set of different approaches to detect abnormal image patterns, including inter-line and inter-pixel differences within a scanline, correlation between scanlines, analysis of spatial variance, and a 2D Fourier analysis of the image spatial frequencies. In spite of its computational complexity, the described method is highly optimized for performance to facilitate volume processing of multi-year data and runs in fully automated mode. The reliability of this noise detection technique has been assessed by human supervision for each GEO dataset obtained during selected time periods in 2005 and 2006. This assessment demonstrated an overall detection accuracy of over 99.5% and a false alarm rate of under 0.3%. The described noise detection routine is currently used in volume processing of historical GEO imagery for subsequent production of global gridded data products and for cross-platform calibration.
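One of the simpler checks named above, correlation between scanlines, can be sketched directly; the threshold below is an illustrative assumption, not the CERES production value.

```python
import numpy as np

def noisy_scanlines(img, min_corr=0.8):
    """Flag scanlines that correlate poorly with the preceding line."""
    bad = []
    for i in range(1, img.shape[0]):
        a = img[i - 1].astype(float)
        b = img[i].astype(float)
        c = np.corrcoef(a, b)[0, 1]
        if not np.isfinite(c) or c < min_corr:
            bad.append(i)            # candidate transmission/sync error
    return bad
```

In practice such a test would be combined with the inter-pixel, variance, and Fourier checks, since a single criterion both misses structured noise and false-alarms on legitimate scene edges.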
How to Analyze Routines in Teachers' Thinking Processes during Lesson Planning.
ERIC Educational Resources Information Center
Bromme, Rainer
A justification for the study of teachers' routines, as they affect the preparation of lesson plans, prefaces this paper on teachers' thought processes during lesson planning. In focusing on the importance of research into teachers' routines, it is pointed out that lesson preparation and classroom routines permit teachers to direct attention to…
Detailed analysis of complex single molecule FRET data with the software MASH
NASA Astrophysics Data System (ADS)
Hadzic, Mélodie C. A. S.; Kowerko, Danny; Börner, Richard; Zelger-Paulus, Susann; Sigel, Roland K. O.
2016-04-01
The processing and analysis of surface-immobilized single molecule FRET (Förster resonance energy transfer) data follows systematic steps (e.g. single molecule localization, clearance of different sources of noise, selection of the conformational and kinetic model, etc.) that require solid knowledge of optics, photophysics, signal processing and statistics. The present proceeding aims at standardizing and facilitating procedures for single molecule detection by guiding the reader through an optimization protocol for a particular experimental data set. Relevant features were determined from single molecule movies (SMM) imaging synthetically recreated Cy3- and Cy5-labeled Sc.ai5γ group II intron molecules, to test the performance of four different detection algorithms. Up to 120 different parameterizations per method were routinely evaluated to finally establish an optimum detection procedure. The present protocol is adaptable to any movie displaying surface-immobilized molecules, and can be easily reproduced with our home-written software MASH (multifunctional analysis software for heterogeneous data) and script routines (both available in the download section of www.chem.uzh.ch/rna).
Mediaprocessors in medical imaging for high performance and flexibility
NASA Astrophysics Data System (ADS)
Managuli, Ravi; Kim, Yongmin
2002-05-01
New high performance programmable processors, called mediaprocessors, have been emerging since the early 1990s for various digital media applications, such as digital TV, set-top boxes, desktop video conferencing, and digital camcorders. Modern mediaprocessors, e.g., TI's TMS320C64x and Hitachi/Equator Technologies' MAP-CA, can offer high performance utilizing both instruction-level and data-level parallelism. During this decade, with continued performance improvement and cost reduction, we believe that mediaprocessors will become a preferred choice in designing imaging and video systems due to their flexibility in incorporating new algorithms and applications via programming and a faster time-to-market. In this paper, we evaluate the suitability of these mediaprocessors for medical imaging. We review the core routines of several medical imaging modalities, such as ultrasound and DR, and present how these routines can be mapped to mediaprocessors and their resultant performance. We analyze the architecture of several leading mediaprocessors. By carefully mapping key imaging routines, such as 2D convolution, unsharp masking, and 2D FFT, to the mediaprocessor, we have been able to achieve comparable (if not better) performance to that of traditional hardwired approaches. Thus, we believe that future medical imaging systems will benefit greatly from these advanced mediaprocessors, offering significantly increased flexibility and adaptability, reducing the time-to-market, and improving the cost/performance ratio compared to existing systems while meeting the high computing requirements.
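As one concrete example of the imaging kernels named above, here is unsharp masking in a few lines of Python; SciPy stands in for the mediaprocessor intrinsics the paper benchmarks, and the parameter defaults are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Sharpen by adding back the image-minus-blur difference (the 'mask')."""
    img = img.astype(float)
    blurred = gaussian_filter(img, sigma)     # low-pass estimate of the image
    return img + amount * (img - blurred)     # boost the high-pass residual
```

On a mediaprocessor the same kernel would be expressed with SIMD loads and multiply-accumulates over image tiles, which is precisely the data-level parallelism the abstract credits for the hardwired-class performance.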
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
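A minimal sketch of the kind of server-side batch driver described above: walk a directory of image tiles, log each step, and process tiles in a thread pool. The paths, worker count, and per-tile function are illustrative assumptions; this is not the FARSIGHT codebase.

```python
import logging
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

logging.basicConfig(level=logging.INFO, filename="pipeline.log")

def process_tile(path: Path) -> str:
    logging.info("processing %s", path)   # log every processing step
    # ... mosaicking / artifact correction / segmentation would go here ...
    return f"{path.name}: ok"

def run_pipeline(root: str, workers: int = 40) -> list:
    """Process every TIFF tile under `root` with a pool of worker threads."""
    tiles = sorted(Path(root).glob("*.tif"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tiles))
```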
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.
2016-01-01
The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
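The CHO statistic itself is compact. In the sketch below, a generic channel matrix (e.g. Gabor or Laguerre-Gauss channels) reduces each image to a few channel outputs, and the Hotelling template is built from the channelized means and covariance; the channel design and the paper's shuffle-based error estimation are not reproduced.

```python
import numpy as np

def cho_template(signal_imgs, absent_imgs, channels):
    """channels: (n_pixels, n_channels). Returns the CHO template w = S^-1 dmu."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels
    va = absent_imgs.reshape(len(absent_imgs), -1) @ channels
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(va, rowvar=False))
    dmu = vs.mean(axis=0) - va.mean(axis=0)      # channelized mean difference
    return np.linalg.solve(S, dmu)

def cho_score(img, channels, w):
    """Scalar test statistic for one image; threshold it to decide 'detected'."""
    return float((img.ravel() @ channels) @ w)
```

Because the covariance S is only n_channels x n_channels, far fewer sample images are needed than for an unchannelized Hotelling observer, which is the data-reduction leverage the abstract builds on.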
Main, Caroline; Stevens, Simon P; Bailey, Simon; Phillips, Robert; Pizer, Barry; Wheatley, Keith; Kearns, Pamela R; English, Martin; Wilne, Sophie; Wilson, Jayne S
2016-08-31
The aim of this study is to assess the impact of routine MRI surveillance to detect tumour recurrence in children with no new neurological signs or symptoms compared with alternative follow-up practices, including periodic clinical and physical examinations and the use of non-routine imaging upon presentation with disease signs or symptoms. Standard systematic review methods aimed at minimising bias will be employed for study identification, selection and data extraction. Ten electronic databases have been searched, and further citation searching and reference checking will be employed. Randomised and non-randomised controlled trials assessing the impact of routine surveillance MRI to detect tumour recurrence in children with no new neurological signs or symptoms compared to alternative follow-up schedules including imaging upon presentation with disease signs or symptoms will be included. The primary outcome is time to change in therapeutic intervention. Secondary outcomes include overall survival, surrogate survival outcomes, response rates, diagnostic yield per set of images, adverse events, quality of survival and validated measures of family psychological functioning and anxiety. Two reviewers will independently screen and select studies for inclusion. Quality assessment will be undertaken using the Cochrane Collaboration's tools for assessing risk of bias. Where possible, data will be summarised using combined estimates of effect for time to treatment change, survival outcomes and response rates using assumption-free methods. Further sub-group analyses and meta-regression models will be specified and undertaken to explore potential sources of heterogeneity between studies within each tumour type if necessary. Assessment of the impact of surveillance imaging in children with CNS tumours is methodologically complex. The evidence base is likely to be heterogeneous in terms of imaging protocols, definitions of radiological response and diagnostic accuracy of tumour recurrence due to changes in imaging technology over time. Furthermore, the delineation of tumour recurrence from either pseudo-progression or radiation necrosis after radiotherapy is potentially problematic and linked to the timing of follow-up assessments. However, given the current routine practice of MRI surveillance in the follow-up of children with CNS tumours in the UK and the resource implications, it is important to evaluate the cost-benefit profile of this practice. PROSPERO CRD42016036802.
Computed tomography of x-ray images using neural networks
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.
2000-03-01
Traditional CT reconstruction is done using the technique of filtered backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw. Gibbs' phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, and that the error will equal 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model posits an internal hard-wired neural network which emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal, which is compared to externally sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values. The process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to some coral images collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production using this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.
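The feedback loop the authors describe can be sketched generically: treat the unknown image x as the internal values, emulate the measurement with a forward projector A, and iteratively correct x from the error signal. The Landweber-style update below is an assumed stand-in for the neural-network formulation, and A and y are hypothetical.

```python
import numpy as np

def feedback_reconstruct(A, y, n_iter=500):
    """Iteratively update unknowns x so the emulated signal A @ x matches y.

    A : (n_measurements, n_pixels) forward projection matrix
    y : measured sinogram, flattened
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size for stable feedback
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        err = y - A @ x                       # compare emulation to sensed data
        x += step * (A.T @ err)               # feed the error back into x
    return x
```

Unlike filtered backprojection, nothing here requires a Fourier transform, and the same loop tolerates missing rows of A (missing data) by simply omitting them.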
Nielsen, Patricia Switten; Lindebjerg, Jan; Rasmussen, Jan; Starklint, Henrik; Waldstrøm, Marianne; Nielsen, Bjarne
2010-12-01
Digitization of histologic slides is associated with many advantages, and its use in routine diagnosis holds great promise. Nevertheless, few articles evaluate virtual microscopy in routine settings. This study is an evaluation of the validity and diagnostic performance of virtual microscopy in routine histologic diagnosis of skin tumors. Our aim is to investigate whether conventional microscopy of skin tumors can be replaced by virtual microscopy. Ninety-six skin tumors and skin-tumor-like changes were consecutively gathered over a 1-week period. Specimens were routinely processed, and digital slides were captured on Mirax Scan (Carl Zeiss MicroImaging, Göttingen, Germany). Four pathologists evaluated the 96 virtual slides and the associated 96 conventional slides twice with intermediate time intervals of at least 3 weeks. Virtual slides that caused difficulties were reevaluated to identify possible reasons for this. The accuracy was 89.2% for virtual microscopy and 92.7% for conventional microscopy. All κ coefficients expressed very good intra- and interobserver agreement. The sensitivities were 85.7% (78.0%-91.0%) and 92.0% (85.5%-95.7%) for virtual and conventional microscopy, respectively. The difference between the sensitivities was 6.3% (0.8%-12.6%). The subsequent reevaluation showed that virtual slides were as useful as conventional slides when rendering a diagnosis. Differences seen are presumed to be due to the pathologists' lack of experience using the virtual microscope. We conclude that it is feasible to make histologic diagnosis on the skin tumor types represented in this study using virtual microscopy after pathologists have completed a period of training. Larger studies should be conducted to verify whether virtual microscopy can replace conventional microscopy in routine practice. Copyright © 2010 Elsevier Inc. All rights reserved.
Rapid Assessment of Contrast Sensitivity with Mobile Touch-screens
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2013-01-01
The availability of low-cost high-quality touch-screen displays in modern mobile devices has created opportunities for new approaches to routine visual measurements. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. We will demonstrate a prototype for Apple Computer's iPad-iPod-iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing).
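The stimulus implied above, a grating swept in frequency along one axis and contrast along the other (a Campbell-Robson-style chart), is easy to generate; the parameter ranges below are illustrative assumptions, not the prototype's values.

```python
import numpy as np

def sweep_grating(w=1024, h=768, f0=0.5, f1=60.0, c0=0.001, c1=1.0):
    """(h, w) image in [0, 1]: frequency rises along x, contrast along y."""
    x = np.linspace(0.0, 1.0, w)
    y = np.linspace(0.0, 1.0, h)[:, None]
    freq = f0 * (f1 / f0) ** x                 # log sweep, cycles/image-width
    phase = 2.0 * np.pi * np.cumsum(freq) / w  # integrate frequency -> phase
    contrast = c0 * (c1 / c0) ** y             # log contrast sweep
    return 0.5 + 0.5 * contrast * np.sin(phase)
```

A swipe then traces the visibility boundary across this image, so each (x, y) touch sample converts directly into a (frequency, threshold-contrast) point on the contrast sensitivity function.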
ARCHANGEL: Galaxy Photometry System
NASA Astrophysics Data System (ADS)
Schombert, James
2011-07-01
ARCHANGEL is a Unix-based package for the surface photometry of galaxies. While oriented for large angular size systems (i.e. many pixels), its tools can be applied to any imaging data of any size. The package core contains routines to perform the following critical galaxy photometry functions: sky determination; frame cleaning; ellipse fitting; profile fitting; and total and isophotal magnitudes. The goal of the package is to provide an automated, assembly-line type of reduction system for galaxy photometry of space-based or ground-based imaging data. The procedures outlined in the documentation are flux independent, thus, these routines can be used for non-optical data as well as typical imaging datasets. ARCHANGEL has been tested on several current OS's (RedHat Linux, Ubuntu Linux, Solaris, Mac OS X). A tarball for installation is available at the download page. The main routines are Python and FORTRAN based, therefore, a current installation of Python and a FORTRAN compiler are required. The ARCHANGEL package also contains Python hooks to the PGPLOT package, an XML processor and network tools which automatically link to data archives (i.e. NED, HST, 2MASS, etc) to download images in a non-interactive manner.
Assessing the impact of PACS on patient care in a medical intensive care unit
NASA Astrophysics Data System (ADS)
Shile, Peter E.; Kundel, Harold L.; Seshadri, Sridhar B.; Carey, Bruce; Brikman, Inna; Kishore, Sheel; Feingold, Eric R.; Lanken, Paul N.
1993-09-01
In this paper we present data from pilot studies to estimate the impact on patient care of an intensive care unit display station. The data were collected during two separate one-month periods in 1992. We compared these two periods in terms of the relative speeds with which images were first viewed by MICU physicians. First, we found that images for routine chest radiographs (CXRs) are viewed by a greater number of physicians, and slightly sooner, with the PACS display station operating in the MICU than when it is not. Thus, for routine exams, PACS provides the potential for shortening the time intervals between exam completion and image-based clinical actions. A second finding is that the use of the display station for viewing non-routine CXRs is strongly influenced by the speed with which films are digitized. Hence, if film digitization is not rapid, the presence of a MICU display station is unlikely to contribute to a shortening of the time intervals between exam completion and image-based clinical actions. This finding supports the use of computed radiography for CXRs in an intensive care unit.
Multiplex Staining by Sequential Immunostaining and Antibody Removal on Routine Tissue Sections.
Bolognesi, Maddalena Maria; Manzoni, Marco; Scalia, Carla Rossana; Zannella, Stefano; Bosisio, Francesca Maria; Faretta, Mario; Cattoretti, Giorgio
2017-08-01
Multiplexing, labeling for multiple immunostains in the very same cell or tissue section in situ, has raised considerable interest. The methods proposed include the use of labeled primary antibodies, spectral separation of fluorochromes, bleaching of the fluorophores or chromogens, and blocking of previous antibody layers, all in various combinations. The major obstacles to the diffusion of this technique are high costs in custom antibodies and instruments, low throughput, and scarcity of specialized skills or facilities. We have validated a method based on common primary and secondary antibodies and widely available fluorescent image scanners. It entails rounds of four-color indirect immunofluorescence, image acquisition, and removal (stripping) of the antibodies before another stain is applied. The images are digitally registered and the autofluorescence is subtracted. Removal of antibodies is accomplished by disulfide cleavage and a detergent, or by a chaotropic salt treatment, the latter followed by antigen refolding. More than 30 different antibody stains can be applied to one single section from routinely fixed and embedded tissue. This method requires a modest investment in hardware and materials and uses freeware image analysis software. Multiplexing on routine tissue sections is a high throughput tool for in situ characterization of neoplastic, reactive, inflammatory, and normal cells.
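The two digital steps named above, registration of successive staining rounds and autofluorescence subtraction, can be sketched as follows. The scikit-image and SciPy calls are real, but the scalar autofluorescence scaling is a simplifying assumption standing in for whatever correction the authors' freeware workflow applies.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_and_subtract(round_img, reference, autofluor, af_scale=1.0):
    """Register one staining round to a reference, then remove autofluorescence."""
    offset, _, _ = phase_cross_correlation(reference, round_img)  # (dy, dx)
    aligned = nd_shift(round_img.astype(float), offset)           # re-register
    return np.clip(aligned - af_scale * autofluor, 0, None)      # subtract AF
```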
Mesquita, D P; Dias, O; Amaral, A L; Ferreira, E C
2009-04-01
In recent years, a great deal of attention has been focused on research into activated sludge processes, where the solid-liquid separation phase is frequently considered of critical importance due to the different problems that severely affect the compaction and settling of the sludge. Bearing that in mind, in this work image analysis routines were developed in a Matlab environment, allowing the identification and characterization of microbial aggregates and protruding filaments in eight different wastewater treatment plants over a combined period of 2 years. The monitoring of the activated sludge contents allowed for the detection of bulking events, proving that the developed image analysis methodology is adequate for continuous examination of the morphological changes in microbial aggregates and subsequent estimation of the sludge volume index. In fact, the obtained results proved that the developed image analysis methodology is a feasible method for the continuous monitoring of activated sludge systems and the identification of disturbances.
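A rough sketch of the kind of measurement described above: segment objects in a microscopy frame, then separate thin protruding filaments from compact flocs by morphological opening. The scikit-image calls and thresholds are assumptions standing in for the authors' Matlab routines.

```python
import numpy as np
from skimage import filters, measure, morphology

def characterize_sludge(gray):
    """gray: 2-D float image, dark objects on a light background.

    Returns (filament pixel load, list of floc areas in pixels).
    """
    mask = gray < filters.threshold_otsu(gray)           # segment dark objects
    flocs = morphology.opening(mask, morphology.disk(5)) # opening erases thin filaments
    filaments = mask & ~flocs                            # what the opening removed
    labels = measure.label(flocs)
    areas = [r.area for r in measure.regionprops(labels)]
    return int(filaments.sum()), areas
```

A filament-to-floc ratio derived from these two outputs is the kind of morphological indicator that can be trended over time to flag bulking events.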
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Sujlana, Parvinder; Skrok, Jan; Fayad, Laura M
2018-04-01
Although postcontrast imaging has been used for many years in musculoskeletal imaging, dynamic contrast enhanced (DCE) MRI is not routinely used in many centers around the world. Unlike conventional contrast-enhanced sequences, DCE-MRI allows the evaluation of the temporal pattern of enhancement in the musculoskeletal system, perhaps best known for its use in oncologic applications (such as differentiating benign from malignant tumors, evaluating for treatment response after neoadjuvant chemotherapy, and differentiating postsurgical changes from residual tumor). However, DCE-MRI can also be used to evaluate inflammatory processes such as Charcot foot and synovitis, and evaluate bone perfusion in entities like Legg Calve Perthes disease and arthritis. Finally, vascular abnormalities and associated complications may be better characterized with DCE-MRI than conventional imaging. The goal of this article is to review the applications and technical aspects of DCE-MRI in the musculoskeletal system. Level of Evidence: 5. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2018;47:875-890. © 2017 International Society for Magnetic Resonance in Medicine.
Madriago, Erin J; Punn, Rajesh; Geeter, Natalie; Silverman, Norman H
2016-02-01
Trans-oesophageal echocardiographic imaging is valuable in the pre- and post-operative evaluation of children and adults with CHD; however, the frequency by which trans-oesophageal echocardiography guides the intra-operative course of patients is unknown. We retrospectively reviewed 1748 intra-operative trans-oesophageal echocardiograms performed between 1 October, 2005 and 31 December, 2010, and found 99 cases (5.7%) that required return to bypass, based in part upon the intra-operative echocardiographic findings. The diagnoses most commonly requiring further repair and subsequent imaging were mitral valve disease (20.9%), tricuspid valve disease (16.0%), atrioventricular canal defects (12.0%), and pulmonary valve disease (14.1%). The vast majority of those requiring immediate return to bypass benefited by avoiding subsequent operations and longer lengths of hospital stay. A total of 14 patients (0.8%) who received routine imaging required further surgical repair within 1 week, usually due to disease that developed over ensuing days. Patients who had second post-operative trans-oesophageal echocardiograms in the operating room rarely required re-operations, confirming the benefit of routine intra-operative imaging. This study represents a large single institutional review of intra-operative trans-oesophageal echocardiography, and confirms its applicability in the surgical repair of patients with CHD. Routine imaging accurately identifies patients requiring further intervention, does not confer additional risk of mortality or prolonged length of hospital stay, and prevents subsequent operations and associated sequelae in a substantial subset of patients. This study demonstrates the utility of echocardiography in intra-operative monitoring of surgical repair and highlights patients who are most likely to require return to bypass, as well as the co-morbidities of such manipulations.
Tureli, Derya; Altas, Hilal; Cengic, Ismet; Ekinci, Gazanfer; Baltacioglu, Feyyaz
2015-10-01
The aim of the study was to ascertain the learning curves of radiology residents when first introduced to an anatomic structure in magnetic resonance images (MRI) to which they have not been previously exposed. The iliolumbar ligament is a good marker for testing the learning curves of radiology residents because the ligament is not part of routine lumbar MRI reporting and has high variability in detection. Four radiologists, three residents without previous training and one mentor, studied standard axial T1- and T2-weighted images of routine lumbar MRI examinations. The radiologists had to identify the iliolumbar ligament while blinded to each other's findings. Interobserver agreement analyses, namely Cohen and Fleiss κ statistics, were performed for groups of 20 cases to evaluate the self-learning curve of the radiology residents. Mean κ values of resident-mentor pairs were 0.431, 0.608, 0.604, 0.826, and 0.963 in the analysis of successive groups (P < .001). The results indicate that concordance between the experienced and inexperienced radiologists started as weak (κ < 0.5) and gradually became very acceptable (κ > 0.8). Therefore, a junior radiology resident can obtain enough experience in identifying a rather ambiguous anatomic structure in routine MRI after a brief instruction of a few minutes by a mentor and self-study of approximately 80 cases. Implementing this methodology will help radiology educators obtain more concrete ideas on the optimal time and effort required for supported self-directed visual learning processes in resident education. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
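The grouped agreement analysis described above is straightforward to reproduce in code. A minimal sketch with scikit-learn's Cohen's kappa, using hypothetical detection labels (1 = ligament identified) standing in for one resident-mentor pair over a single group of 20 cases:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-case detections for one group of 20 cases
resident = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
mentor   = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]

kappa = cohen_kappa_score(resident, mentor)
print(f"Cohen's kappa for this group: {kappa:.3f}")  # tracked group by group
```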
[Three-dimensional reconstruction of functional brain images].
Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I
1999-08-01
We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods for identifying activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain surface coloring, make it difficult to understand and interpret the positional relationships among various brain areas. Therefore, we developed three-dimensionally reconstructed versions of these functional brain images to improve interpretation. The subjects were 12 normal volunteers. After PET images acquired during daily dialog listening were analyzed by SPM, the following three types of images were constructed: 1) routine SPM images, 2) three-dimensional static images, and 3) three-dimensional dynamic images. The creation of both the three-dimensional static and dynamic images employed the volume rendering method of VTK (The Visualization Toolkit). Since the functional brain images did not include original brain anatomy, we combined the SPM and MRI brain images with self-made C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both the three-dimensional static and dynamic images were processed on a personal computer system. Our newly created images showed clearer positional relationships among activated brain areas than the conventional presentations. To date, functional brain images have been employed mainly in fields such as neurology and neurosurgery; however, they may also be useful in otorhinolaryngology to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation and may lead to new developments in brain science. Currently, the surface model is the most common method of three-dimensional display; however, the volume rendering method may be more effective for imaging regions such as the brain.
Favazza, Christopher P.; Duan, Xinhui; Zhang, Yi; Yu, Lifeng; Leng, Shuai; Kofler, James M.; Bruesewitz, Michael R.; McCollough, Cynthia H.
2015-01-01
Through this investigation we developed a methodology to evaluate and standardize CT image quality from routine abdomen protocols across different manufacturers and models. The influence of manufacturer-specific automated exposure control systems on image quality was directly assessed to standardize performance across a range of patient sizes. We evaluated 16 CT scanners across our health system, including Siemens, GE, and Toshiba models. Using each practice’s routine abdomen protocol, we measured spatial resolution, image noise, and scanner radiation output (CTDIvol). Axial and in-plane spatial resolutions were assessed through slice sensitivity profile (SSP) and modulation transfer function (MTF) measurements, respectively. Image noise and CTDIvol values were obtained for three different phantom sizes. SSP measurements demonstrated a bimodal distribution in slice widths: an average of 6.2 ± 0.2 mm using GE’s “Plus” mode reconstruction setting and 5.0 ± 0.1 mm for all other scanners. MTF curves were similar for all scanners. Average spatial frequencies at 50%, 10%, and 2% MTF values were 3.24 ± 0.37, 6.20 ± 0.34, and 7.84 ± 0.70 lp/cm, respectively. For all phantom sizes, image noise and CTDIvol varied considerably: 6.5–13.3 HU (noise) and 4.8–13.3 mGy (CTDIvol) for the smallest phantom; 9.1–18.4 HU and 9.3–28.8 mGy for the medium phantom; and 7.8–23.4 HU and 16.0–48.1 mGy for the largest phantom. Using these measurements and benchmark SSP, MTF, and image noise targets, CT image quality can be standardized across a range of patient sizes. PMID:26459751
Ma, Xibo; Jin, Yushen; Wang, Yi; Zhang, Shuai; Peng, Dong; Yang, Xin; Wei, Shoushui; Chai, Wei; Li, Xuejun; Tian, Jie
2018-01-01
Tumor cell complete extinction is a crucial measure of antitumor efficacy. Difficulty in defining tumor margins and finding satellite metastases is a major reason for tumor recurrence. A synergistic method based on multimodality molecular imaging therefore needs to be developed to achieve complete extinction of the tumor cells. In this study, graphene oxide conjugated with gold nanostars and chelated with Gd through 1,4,7,10-tetraazacyclododecane-N,N',N,N'-tetraacetic acid (DOTA) (GO-AuNS-DOTA-Gd) was prepared to target HCC-LM3-fLuc cells and used for therapy. For subcutaneous tumors, multimodality molecular imaging including photoacoustic imaging (PAI) and magnetic resonance imaging (MRI), together with the related processing techniques, was used to monitor the pharmacokinetics of GO-AuNS-DOTA-Gd in order to determine the optimal time for treatment. For orthotopic tumors, MRI was used to delineate the tumor location and margin in vivo before treatment. A handheld photoacoustic imaging system was then used to determine the tumor location during surgery and to guide the photothermal therapy. The orthotopic tumor experiment demonstrated that this synergistic method could effectively reduce residual tumor and satellite metastases by 85.71% compared with the routine photothermal method without handheld PAI guidance. These results indicate that this multimodality molecular imaging-guided photothermal therapy method is promising for clinical application.
International Ultraviolet Explorer Final Archive
NASA Technical Reports Server (NTRS)
1997-01-01
CSC processed IUE images through the Final Archive Data Processing System. Raw images were obtained from both NDADS and the IUEGTC optical disk platters for processing on the Alpha cluster, and from the IUEGTC optical disk platters for DECstation processing. Input parameters were obtained from the IUE database. Backup tapes of data to send to VILSPA were routinely made on the Alpha cluster. IPC handled more than 263 requests for priority NEWSIPS processing during the contract. Staff members also answered various questions and requests for information and sent copies of IUE documents to requesters. CSC implemented new processing capabilities into the NEWSIPS processing systems as they became available. In addition, steps were taken to improve efficiency and throughput whenever possible. The node TORTE was reconfigured as the I/O server for Alpha processing in May. The number of Alpha nodes used for the NEWSIPS processing queue was increased to a maximum of six in measured fashion in order to understand the dependence of throughput on the number of nodes and to recognize when a point of diminishing returns was reached. With Project approval, generation of the VD FITS files was dropped in July. This action not only saved processing time but also drastically reduced the archive storage media requirements and the time required to perform the archiving. The throughput of images verified through CDIVS and processed through NEWSIPS for the contract period is summarized below. The number of images of a given dispersion type and camera that were processed in any given month reflects several factors, including the availability of the required NEWSIPS software system, the availability of the corresponding required calibrations (e.g., the LWR high-dispersion ripple correction and absolute calibration), and the occurrence of reprocessing efforts such as that conducted to incorporate the updated SWP sensitivity-degradation correction in May.
Ringkob, T P; Swartz, D R; Greaser, M L
2004-05-01
Image analysis procedures for immunofluorescence microscopy were developed to measure muscle thin filament lengths of beef, rabbit, and chicken myofibrils. Strips of beef cutaneous trunci, rectus abdominis, psoas, and masseter; chicken pectoralis; and rabbit psoas muscles were excised 5 to 30 min postmortem. Fluorescein phalloidin and rhodamine myosin subfragment-1 (S1) were used to probe the myofibril structure. Digital images were recorded with a cooled charge-coupled device controlled with IPLab Spectrum software (Signal Analytics Corp.) on a Macintosh operating system. The camera was attached to an inverted microscope, using both the phase-contrast and fluorescence illumination modes. Unfixed myofibrils incubated with fluorescein phalloidin showed fluorescence primarily at the Z-line and the tips of the thin filaments in the overlap region. Images were processed using IPLab and the National Institutes of Health's Image software. A region of interest was selected and scaled by a factor of 18.18, which enlarged the image from 11 pixels/microm to approximately 200 pixels/microm. An X-Y plot was exported to Spectrum 1.1 (Academic Software Development Group), where the signal was processed with a second derivative routine, so a cursor function could be used to measure length. Fixation before phalloidin incubation resulted in greatest intensity at the Z lines but a more-uniform staining over the remainder of the thin filament zone. High-resolution image capture and processing showed that thin filament lengths were significantly different (P < 0.01) among beef, rabbit, and chicken, with lengths of 1.28 to 1.32 microm, 1.16 microm, and 1.05 microm, respectively. Measurements using the S1 signal confirmed the phalloidin results. Fluorescent probes may be useful to study sarcomere structure and help explain species and muscle differences in meat texture.
Neves, A A; Silva, E J; Roter, J M; Belladona, F G; Alves, H D; Lopes, R T; Paciornik, S; De-Deus, G A
2015-11-01
To propose an automated image processing routine based on free software to quantify root canal preparation outcomes in pairs of sound and instrumented roots after micro-CT scanning procedures. Seven mesial roots of human mandibular molars with different canal configuration systems were studied: (i) Vertucci's type 1, (ii) Vertucci's type 2, (iii) two individual canals, (iv) Vertucci's type 6, canals (v) with and (vi) without debris, and (vii) a canal with visible pulp calcification. All teeth were instrumented with the BioRaCe system and scanned in a Skyscan 1173 micro-CT before and after canal preparation. After reconstruction, the instrumented stack of images (IS) was registered against the preoperative sound stack of images (SS). Image processing included contrast equalization and noise filtering. Sound canal volumes were obtained by a minimum threshold. For the IS, a fixed conservative threshold was chosen as the best compromise between instrumented canal and dentine whilst avoiding debris, resulting in instrumented canal plus empty spaces. Arithmetic and logical operations between the sound and instrumented stacks were used to identify debris. Noninstrumented dentine was calculated using a minimum threshold in the IS and subtracting from the SS and total debris. Removed dentine volume was obtained by subtracting SS from IS. Quantitative data on total debris present in the root canal space after instrumentation, noninstrumented areas and removed dentine volume were obtained for each test case, as well as three-dimensional volume renderings. After standardization of acquisition, reconstruction and image processing of the micro-CT images, a quantitative approach to calculating root canal biomechanical outcomes was achieved using free software. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.
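The arithmetic and logical stack operations at the heart of this routine map naturally onto boolean volume algebra. A hedged NumPy sketch, assuming the two scans have already been registered and thresholded into boolean volumes; the variable names and the exact set operations are illustrative, not the paper's code:

```python
import numpy as np

def canal_outcomes(sound_canal, inst_empty, voxel_mm3=1.0):
    """sound_canal: boolean canal space from the preoperative scan (SS).
    inst_empty:  boolean canal-plus-empty space from the instrumented
    scan (IS). Both co-registered and of the same shape."""
    debris = sound_canal & ~inst_empty           # former canal now occupied
    removed_dentine = inst_empty & ~sound_canal  # newly empty = dentine cut
    untouched_canal = sound_canal & inst_empty   # original canal left as-is
    return {name: mask.sum() * voxel_mm3
            for name, mask in [("debris", debris),
                               ("removed_dentine", removed_dentine),
                               ("untouched_canal", untouched_canal)]}
```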
Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo
2010-01-01
Background/Purpose: Digital color image analysis is currently considered a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides conventional, parallel- and cross-polarization, and fluorescent color images, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results: Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion: DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and for efficient database management of patient information. PMID:20923462
Hojjati, Mojgan; Van Hedent, Steven; Rassouli, Negin; Tatsuoka, Curtis; Jordan, David; Dhanantwari, Amar; Rajiah, Prabhakar
2017-11-01
To evaluate the image quality of routine diagnostic images generated by a novel detector-based spectral detector CT (SDCT) and compare it with CT images obtained from a conventional scanner with an energy-integrating detector (Brilliance iCT). Routine diagnostic (conventional/polyenergetic) images are non-material-specific images that resemble single-energy images obtained at the same radiation dose. METHODS: ACR guideline-based phantom evaluations were performed on both the SDCT and the iCT for the adult body CT protocol. Retrospective analysis was performed on 50 abdominal CT scans from each scanner. Identical ROIs were placed at multiple locations in the abdomen, and attenuation, noise, SNR, and CNR were measured. Subjective image quality analysis on a 5-point Likert scale was performed by 2 readers for enhancement, noise, and image quality. In the phantom studies, SDCT images met the ACR requirements for CT number and deviation, CNR, and effective radiation dose. In patients, the qualitative scores were significantly higher for the SDCT than the iCT, including enhancement (4.79 ± 0.38 vs. 4.60 ± 0.51, p = 0.005), noise (4.63 ± 0.42 vs. 4.29 ± 0.50, p < 0.001), and quality (4.85 ± 0.32 vs. 4.57 ± 0.50, p < 0.001). The SNR was higher with SDCT than iCT for the liver (7.4 ± 4.2 vs. 7.2 ± 5.3, p = 0.662), spleen (8.6 ± 4.1 vs. 7.4 ± 3.5, p = 0.152), kidney (11.1 ± 6.3 vs. 8.7 ± 5.0, p = 0.033), pancreas (6.90 ± 3.45 vs. 6.11 ± 2.64, p = 0.303), and aorta (14.2 ± 6.2 vs. 11.0 ± 4.9, p = 0.007), but was slightly lower in the lumbar vertebra (7.7 ± 4.2 vs. 7.8 ± 4.5, p = 0.937). The CNR of the SDCT was also higher than that of the iCT for all abdominal organs. Image quality of routine diagnostic images from the SDCT is comparable to that of a conventional CT scanner with energy-integrating detectors, making it suitable for diagnostic purposes.
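The ROI statistics reported above follow the usual conventions; a small sketch of the computation (the paper does not state its exact formulas, so the SNR = mean/SD and difference-over-reference-noise CNR definitions here are assumptions):

```python
import numpy as np

def roi_stats(image, organ_mask, reference_mask):
    """SNR and CNR from two identically placed ROIs on a CT image."""
    organ = image[organ_mask]
    ref = image[reference_mask]
    snr = organ.mean() / organ.std()               # signal-to-noise ratio
    cnr = (organ.mean() - ref.mean()) / ref.std()  # contrast-to-noise ratio
    return snr, cnr
```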
Bagci, Ulas; Udupa, Jayaram K.; Mendhiratta, Neil; Foster, Brent; Xu, Ziyue; Yao, Jianhua; Chen, Xinjian; Mollura, Daniel J.
2013-01-01
We present a novel method for the joint segmentation of anatomical and functional images. Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we propose a simple yet effective object/background seed localization method to make the proposed segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to ground truth segmentation provided by clinicians. Experimental results indicate that the presented method is superior to the threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust than other PET-CT segmentation methods recently published in the literature, and is general in the sense that it simultaneously segments multiple scans in real time with the high accuracy needed in routine clinical use. PMID:23837967
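scikit-image ships a random-walker segmenter that captures the core of the delineation step. The sketch below is a simplified, single-volume stand-in for the paper's joint product-lattice formulation; seeds are assumed to be supplied as boolean masks, and the beta value is illustrative:

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_lesion(volume, fg_seeds, bg_seeds, beta=130):
    """Random-walk delineation of one (e.g., PET) volume from
    object/background seed masks; 0 marks unlabeled voxels."""
    labels = np.zeros(volume.shape, dtype=np.uint8)
    labels[bg_seeds] = 1   # background seeds
    labels[fg_seeds] = 2   # object (lesion) seeds
    seg = random_walker(volume, labels, beta=beta)
    return seg == 2        # boolean lesion mask
```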
Automated seeding-based nuclei segmentation in nonlinear optical microscopy.
Medyukhina, Anna; Meyer, Tobias; Heuke, Sandro; Vogler, Nadine; Dietzek, Benjamin; Popp, Jürgen
2013-10-01
Nonlinear optical (NLO) microscopy based, e.g., on coherent anti-Stokes Raman scattering (CARS) or two-photon-excited fluorescence (TPEF) is a fast label-free imaging technique with great potential for biomedical applications. However, NLO microscopy as a diagnostic tool is still in its infancy; there is a lack of robust and durable nuclei segmentation methods capable of accurate image processing in cases of variable image contrast, nuclear density, and type of investigated tissue. Such algorithms, specifically adapted to NLO microscopy, are one prerequisite for the technology to be routinely used, e.g., in pathology or intraoperatively for surgical guidance. In this paper, we compare the applicability of different seeding and boundary detection methods to NLO microscopic images in order to develop an optimal seeding-based approach capable of accurate segmentation of both TPEF and CARS images. Among the methods tested, the Laplacian of Gaussian filter showed the best accuracy for seeding the image, while a modified seeded watershed segmentation was the most accurate for boundary detection. The resulting combination of these methods, followed by verification of the detected nuclei, achieves high average sensitivity and specificity when applied to various types of NLO microscopy images.
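The winning combination, LoG seeding followed by a seeded watershed, can be sketched directly with scikit-image. The sigma range, blob threshold, and Otsu foreground mask below are illustrative parameter choices, not the paper's tuned values:

```python
import numpy as np
from skimage.feature import blob_log
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import watershed

def segment_nuclei(img, min_sigma=3, max_sigma=10):
    """Seed nuclei with a Laplacian-of-Gaussian detector, then split
    touching nuclei with a watershed grown from those seeds."""
    smooth = gaussian(img, sigma=1)
    blobs = blob_log(smooth, min_sigma=min_sigma,
                     max_sigma=max_sigma, threshold=0.05)
    markers = np.zeros(img.shape, dtype=int)
    for i, (r, c, _sigma) in enumerate(blobs, start=1):
        markers[int(r), int(c)] = i          # one marker per detected blob
    mask = smooth > threshold_otsu(smooth)   # restrict to foreground
    return watershed(-smooth, markers, mask=mask)
```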
Satellite Imaging in the Study of Pennsylvania's Environmental Issues.
ERIC Educational Resources Information Center
Nous, Albert P.
This document focuses on using satellite images from space in the classroom. There are two types of environmental satellites routinely broadcasting: (1) Polar-Orbiting Operational Environmental Satellites (POES), and (2) Geostationary Operational Environmental Satellites (GOES). Imaging and visualization techniques provide students with a better…
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S
2016-07-30
Monitoring of tablet quality attributes in the direct vicinity of the production process requires analytical techniques that allow fast, non-destructive, and accurate tablet characterization. The overall objective of this study was to investigate the applicability of multispectral UV imaging as a reliable, rapid technique for estimation of the tablet API content and tablet hardness, as well as determination of tablet intactness and the tablet surface density profile. One of the aims was to establish an image analysis approach based on multivariate image analysis and pattern recognition to evaluate the potential of UV imaging for automated quality control of tablets with respect to their intactness and surface density profile. Various tablets of different composition and different quality regarding their API content, radial tensile strength, intactness, and surface density profile were prepared using an eccentric as well as a rotary tablet press at compression pressures from 20 MPa up to 410 MPa. It was found that UV imaging can provide relevant information on both chemical and physical tablet attributes. The tablet API content and radial tensile strength could be estimated by UV imaging combined with partial least squares analysis. Furthermore, an image analysis routine was developed and successfully applied to the UV images that provided qualitative information on physical tablet surface properties such as intactness and surface density profiles, as well as quantitative information on variations in the surface density. In conclusion, this study demonstrates that UV imaging combined with image analysis is an effective and non-destructive method to determine chemical and physical quality attributes of tablets and is a promising approach for (near) real-time monitoring of the tablet compaction process and formulation optimization purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
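The chemometric step, partial least squares regression from image features to API content, is easy to sketch with scikit-learn. Everything below is a placeholder: the feature matrix, reference assay values, and component count are invented for illustration, not taken from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((60, 8))                                  # per-tablet UV-image features
y = X @ rng.random(8) + 0.05 * rng.standard_normal(60)   # synthetic "API content"

# Fit on 40 calibration tablets, evaluate on the 20 held out
pls = PLSRegression(n_components=3).fit(X[:40], y[:40])
y_pred = pls.predict(X[40:]).ravel()
print(f"held-out R^2: {r2_score(y[40:], y_pred):.2f}")
```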
Visualization of Middle Ear Ossicles in Elder Subjects with Ultra-short Echo Time MR Imaging.
Naganawa, Shinji; Nakane, Toshiki; Kawai, Hisashi; Taoka, Toshiaki; Suzuki, Kojiro; Iwano, Shingo; Satake, Hiroko; Grodzki, David
2017-04-10
To evaluate the visualization of the middle ear ossicles by ultra-short echo time magnetic resonance (MR) imaging at 3T in subjects over 50 years old. Sixty ears from 30 elderly patients who underwent surgical or interventional treatment for neurovascular diseases were included (ages: 50-82, median age: 65; 10 men, 20 women). Patients received follow-up MR imaging including routine T1- and T2-weighted images, time-of-flight MR angiography, and ultra-short echo time imaging (PETRA, pointwise encoding time reduction with radial acquisition). All patients underwent computed tomography (CT) angiography before treatment. Thin-section source CT images were correlated with the PETRA images. Scan parameters for PETRA were: TR 3.13, TE 0.07, flip angle 6 degrees, 0.83 × 0.83 × 0.83 mm resolution, 3 min 43 s scan time. Two radiologists retrospectively evaluated the visibility of each ossicular structure as positive or negative using the PETRA images. The structures evaluated included the head of the malleus, manubrium of the malleus, body of the incus, long process of the incus, and the stapes. Signal intensity of the ossicles was classified as: between labyrinthine fluid and air, similar to labyrinthine fluid, between labyrinthine fluid and cerebellar parenchyma, or higher than cerebellar parenchyma. In all ears, the body of the incus was visible. The head of the malleus was visualized in 36/60 ears. The manubrium of the malleus and the long process of the incus were visualized in 1/60 and 4/60 ears, respectively. The stapes was not visualized in any ear. Signal intensity of the visible structures was between labyrinthine fluid and air in all ears. The body of the incus was consistently visualized with intensity between air and labyrinthine fluid on PETRA images in aged subjects. Poor visualization of the manubrium of the malleus, long process of the incus, and stapes limits the clinical significance of middle ear imaging with current PETRA methods.
Thalamic Massa Intermedia Duplication in a Dysmorphic 14 month-old Toddler.
Whitehead, Matthew T
2015-06-01
The massa intermedia is an inconstant parenchymal band connecting the medial thalami. It may be thickened in various disease processes such as Chiari II malformation or absent in other disease states. However, the massa intermedia may also be absent in up to 30% of normal human brains. To the best of my knowledge, detailed imaging findings of massa intermedia duplication have only been described in a single case report. An additional case of thalamic massa intermedia duplication discovered on a routine brain MR performed for dysmorphic facial features is reported herein.
Additive Manufacturing Techniques for the Reconstruction of 3D Fetal Faces.
Speranza, Domenico; Citro, Daniela; Padula, Francesco; Motyl, Barbara; Marcolin, Federica; Calì, Michele; Martorelli, Massimo
2017-01-01
This paper deals with additive manufacturing techniques for the creation of 3D fetal face models starting from routine 3D ultrasound data. In particular, two distinct themes are addressed. First, a method for processing and building 3D models based on medical image processing techniques is proposed. Second, preliminary results are presented from a questionnaire distributed to future parents concerning the use of these reconstructions from both an emotional and an affective point of view. In particular, the study focuses on the enhancement of the perception of maternity or paternity and the improvement of the relationship between parents and physicians in cases of fetal malformation, in particular facial or cleft lip diseases.
Hard X-Ray Flare Source Sizes Measured with the Ramaty High Energy Solar Spectroscopic Imager
NASA Technical Reports Server (NTRS)
Dennis, Brian R.; Pernak, Rick L.
2009-01-01
Ramaty High Energy Solar Spectroscopic Imager (RHESSI) observations of 18 double hard X-ray sources seen at energies above 25 keV are analyzed to determine the spatial extent of the most compact structures evident in each case. The following four image reconstruction algorithms were used: Clean, Pixon, and two routines using visibilities, maximum entropy and forward fit (VFF). All have been adapted for this study to optimize their ability to provide reliable estimates of the sizes of the more compact sources. The source fluxes, sizes, and morphologies obtained with each method are cross-correlated, and the similarities and disagreements are discussed. The full widths at half-maximum (FWHM) of the major axes of the sources, with assumed elliptical Gaussian shapes, are generally well correlated between the four image reconstruction routines and vary from the RHESSI resolution limit of approximately 2" up to approximately 20", with most below 10". The FWHM of the minor axes are generally at or just above the RHESSI limit and hence should be considered unresolved in most cases. The orientation angles of the elliptical sources are also well correlated. These results suggest that the elongated sources are generally aligned along a flare ribbon with the minor axis perpendicular to the ribbon. This is verified for the one flare in our list with coincident Transition Region and Coronal Explorer (TRACE) images. There is evidence for significant extra flux in many of the flares in addition to the two identified compact sources, thus rendering the VFF assumption of just two Gaussians inadequate. A more realistic approximation in many cases would be two line sources with unresolved widths. Recommendations are given for optimizing the RHESSI imaging reconstruction process to ensure that the finest possible details of the source morphology become evident and that reliable estimates can be made of the source dimensions.
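The forward-fit idea, fitting an assumed elliptical Gaussian directly to the data and reading off FWHM and orientation, can be illustrated with an ordinary least-squares fit. This is a generic image-domain sketch, not RHESSI's visibility-domain VFF code:

```python
import numpy as np
from scipy.optimize import curve_fit

def elliptical_gaussian(coords, amp, x0, y0, sx, sy, theta):
    x, y = coords
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amp * np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))

def fit_source(image):
    """Fit one elliptical Gaussian and report its axis FWHM values
    (FWHM = 2*sqrt(2*ln 2)*sigma) plus the orientation angle."""
    y, x = np.mgrid[:image.shape[0], :image.shape[1]]
    p0 = (image.max(), x.mean(), y.mean(), 3.0, 3.0, 0.0)  # crude start
    popt, _ = curve_fit(elliptical_gaussian,
                        (x.ravel(), y.ravel()), image.ravel(), p0=p0)
    fwhm = 2 * np.sqrt(2 * np.log(2)) * np.abs(popt[3:5])
    return fwhm, popt[5]   # (axis FWHM in pixels, theta)
```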
PACS in the Utrecht University Hospital: final conclusions of the clinical evaluation
NASA Astrophysics Data System (ADS)
Wilmink, J. B.; ter Haar Romeny, Bart M.; Barneveld Binkhuysen, Frits H.; Achterberg, A. J.; Zuiderveld, Karel J.; Calkoen, P.; Kouwenberg, Jef M.
1990-08-01
In the past three years, a clinical evaluation of a PACS has been performed in the Utrecht University Hospital as part of the Dutch PACS project. The clinical evaluation focussed on the following aspects: technical evaluation of the prototype PACS equipment coupled to the HIS; diagnostic accuracy studies; studies concerning the impact on the organization of the radiology department and the referring wards; and cost-savings analysis. Some of the results of these subprojects have already been presented at previous SPIE conferences. In this paper the general conclusions are presented about the usefulness of the evaluated PAC system in the daily routine of the radiology department and clinic. By making the images of radiological examinations available on the ward quickly, completely, reliably and continuously, concrete improvements with regard to the current process could be realized. The possibilities of PACS generated increasing enthusiasm among the clinicians. With easier access to all images of their patients 24 hours a day, they saw more images on the day of the examination, and images could be more easily used in consultations with other specialists. The overall conclusion is positive, but a lot of work has to be done to transform PACS from an experimental setup into a routine production system on which a filmless hospital can be based. A complete PACS needs an intelligent Image Management System, which includes prefetching algorithms based on data from the Hospital Information System and automated procedures for removing obsolete images from the local buffers in the workstations. As yet PACS is very expensive, and the direct savings in the hospital cannot compensate for the high costs of investment. Possibly PACS can contribute to a shorter stay of patients in the hospital. This would lead to savings for government and health insurance companies, and they can be expected to contribute to PACS implementation studies.
NASA Astrophysics Data System (ADS)
Reilly, B. T.; Stoner, J. S.; Wiest, J.
2017-08-01
Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
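SedCT itself is MATLAB, but the core down-core HU extraction it automates can be sketched in a few lines of Python with pydicom. The slice ordering by filename and the 20-pixel central strip are simplifying assumptions for the sketch, not SedCT's behavior:

```python
import numpy as np
import pydicom
from pathlib import Path

def downcore_hu_profile(dicom_dir):
    """Mean Hounsfield units per slice, averaged over a central strip,
    for a medical-CT DICOM series of a sediment core."""
    profile = []
    for f in sorted(Path(dicom_dir).glob("*.dcm")):
        ds = pydicom.dcmread(f)
        # Convert stored pixel values to HU via the DICOM rescale tags
        hu = (ds.pixel_array * float(ds.RescaleSlope)
              + float(ds.RescaleIntercept))
        w = hu.shape[1]
        profile.append(hu[:, w // 2 - 10: w // 2 + 10].mean())
    return np.array(profile)
```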
Benefits of utilizing CellProfiler as a characterization tool for U-10Mo nuclear fuel
Collette, R.; Douglas, J.; Patterson, L.; ...
2015-05-01
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium-molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to 'pass' or 'fail' an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries.
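Two of the three quality-metric ingredients, overall focus and illumination gradient, have standard one-line estimators. The sketch below uses variance-of-Laplacian focus and a smoothed-background gradient as plausible stand-ins; the actual CellProfiler modules and thresholds are not given in the abstract:

```python
import numpy as np
from scipy import ndimage

def quality_metrics(img):
    """Focus (variance of the Laplacian; higher = sharper) and
    illumination gradient (range of a heavily smoothed background).
    Pass/fail thresholds would be calibrated on reference images."""
    img = img.astype(float)
    focus = ndimage.laplace(img).var()
    background = ndimage.gaussian_filter(img, sigma=50)
    illum_gradient = background.max() - background.min()
    return focus, illum_gradient
```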
An intersubject variable regional anesthesia simulator with a virtual patient architecture.
Ullrich, Sebastian; Grottke, Oliver; Fried, Eduard; Frommen, Thorsten; Liao, Wei; Rossaint, Rolf; Kuhlen, Torsten; Deserno, Thomas M
2009-11-01
The main purpose is to provide an intuitive VR-based training environment for regional anesthesia (RA). The research question is how to process subject-specific datasets, how to organize them in a meaningful way, and how to perform the simulation for peripheral regions. We propose a flexible virtual patient architecture and methods to process the datasets. Image acquisition, image processing (especially segmentation), interactive nerve modeling and permutations (nerve instantiation) are described in detail. The simulation of electric impulse stimulation and the corresponding responses is essential for the training of peripheral RA and is solved by an approach based on the electric distance. We have created an XML-based virtual patient database with several subjects. Prototypes of the simulation are implemented and run on multimodal VR hardware (e.g., a stereoscopic display and a haptic device). A first user pilot study has confirmed our approach. The virtual patient architecture enables support for arbitrary scenarios on different subjects. This concept can also be used for other simulators. In future work, we plan to extend the simulation and conduct further evaluations in order to provide a tool for routine RA training.
A simple method for panretinal imaging with the slit lamp.
Gellrich, Marcus-Matthias
2016-12-01
Slit lamp biomicroscopy of the retina with a convex lens is a key procedure in clinical practice. The methods presented enable ophthalmologists to adequately image large and peripheral parts of the fundus using a video-slit lamp and freely available stitching software. A routine examination of the fundus with a slit lamp and a +90 D lens is recorded on video. Later, sufficiently sharp still images are identified in the video sequence. These still images are imported into a freely available image-processing program (Hugin, for stitching mosaics together digitally), and corresponding points are marked on adjacent still images with some overlap. Using the digital stitching program Hugin, panoramic overviews of the retina can be built that can extend to the equator. This makes it possible to image diseases involving the whole retina or its periphery by performing a structured fundus examination with a video-slit lamp. Similar images from a video-slit lamp based on a fundus examination through a hand-held non-contact lens have not been demonstrated before. The methods presented enable ophthalmologists without high-end imaging equipment to monitor pathological fundus findings. The suggested procedure might even be interesting for retinological departments if peripheral findings are to be documented, which might be difficult with fundus cameras.
Real-time restoration of white-light confocal microscope optical sections
Balasubramanian, Madhusudhanan; Iyengar, S. Sitharama; Beuerman, Roger W.; Reynaud, Juan; Wolenski, Peter
2009-01-01
Confocal microscopes (CM) are routinely used for building 3-D images of microscopic structures. Nonideal imaging conditions in a white-light CM introduce additive noise and blur. The optical section images need to be restored prior to quantitative analysis. We present an adaptive noise filtering technique using Karhunen–Loève expansion (KLE) by the method of snapshots, and a ringing metric to quantify the ringing artifacts introduced in the images restored at various iterations of the iterative Lucy–Richardson deconvolution algorithm. The KLE provides a set of basis functions that comprise the optimal linear basis for an ensemble of empirical observations. We show that most of the noise in the scene can be removed by reconstructing the images using the KLE basis vector with the largest eigenvalue. The prefiltering scheme presented is faster and does not require prior knowledge about image noise. Optical sections processed using the KLE prefilter can be restored using a simple inverse restoration algorithm; thus, the methodology is suitable for real-time image restoration applications. The KLE image prefilter outperforms the temporal-average prefilter in restoring CM optical sections. The ringing metric developed uses simple binary morphological operations to quantify the ringing artifacts and agrees with visual observation of ringing artifacts in the restored images. PMID:20186290
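The method of snapshots is what makes the KLE prefilter cheap: the eigen-problem is solved on the small snapshot-by-snapshot covariance rather than the huge pixel-by-pixel covariance. A NumPy sketch of the idea, keeping one leading mode as the paper recommends; the array shapes and normalization details are assumptions:

```python
import numpy as np

def kle_prefilter(stack, n_modes=1):
    """KLE prefiltering by the method of snapshots.
    stack: (n_snapshots, H, W) repeated optical-section acquisitions."""
    n, h, w = stack.shape
    A = stack.reshape(n, -1).astype(float)
    mean = A.mean(axis=0)
    A0 = A - mean
    C = A0 @ A0.T / n                        # small n x n covariance
    vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    v = vecs[:, -n_modes:]                   # leading snapshot eigenvectors
    modes = A0.T @ v                         # corresponding spatial modes
    modes /= np.linalg.norm(modes, axis=0)   # orthonormal columns
    coeffs = A0 @ modes                      # project each snapshot
    denoised = mean + coeffs @ modes.T       # low-rank reconstruction
    return denoised.mean(axis=0).reshape(h, w)
```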
Brenner, Stephan; De Allegri, Manuela; Gabrysch, Sabine; Chinkhumba, Jobiba; Sarker, Malabika; Muula, Adamson S
2015-01-01
A variety of clinical process indicators exists to measure the quality of care provided by maternal and neonatal health (MNH) programs. To allow comparison across MNH programs in low- and middle-income countries (LMICs), a core set of essential process indicators is needed. Although such a core set is available for emergency obstetric care (EmOC), the 'EmOC signal functions', a similar approach is currently missing for MNH routine care evaluation. We describe a strategy for identifying core process indicators for routine care and illustrate their usefulness in a field example. We first developed an indicator selection strategy by combining epidemiological and programmatic aspects relevant to MNH in LMICs. We then identified routine care process indicators meeting our selection criteria by reviewing existing quality of care assessment protocols. We grouped these indicators into three categories based on their main function in addressing risk factors of maternal or neonatal complications. We then tested this indicator set in a study assessing MNH quality of clinical care in 33 health facilities in Malawi. Our strategy identified 51 routine care processes: 23 related to initial patient risk assessment, 17 to risk monitoring, 11 to risk prevention. During the clinical performance assessment a total of 82 cases were observed. Birth attendants' adherence to clinical standards was lowest in relation to risk monitoring processes. In relation to major complications, routine care processes addressing fetal and newborn distress were performed relatively consistently, but there were major gaps in the performance of routine care processes addressing bleeding, infection, and pre-eclampsia risks. The identified set of process indicators could identify major gaps in the quality of obstetric and neonatal care provided during the intra- and immediate postpartum period. We hope our suggested indicators for essential routine care processes will contribute to streamlining MNH program evaluations in LMICs.
Image inversion analysis of the HST OTA (Hubble Space Telescope Optical Telescope Assembly), phase A
NASA Technical Reports Server (NTRS)
Litvak, M. M.
1991-01-01
Technical work during September-December 1990 consisted of: (1) analyzing HST point source images obtained from JPL; (2) retrieving phase information from the images by a direct (noniterative) technique; and (3) characterizing, in a preliminary manner, the wavefront aberration due to the errors in the Hubble Space Telescope (HST) mirrors. This work was in support of the JPL design of compensating optics for the next generation wide-field planetary camera on HST. This digital technique for phase retrieval from pairs of defocused images is based on the energy transport equation between the image planes. In addition, an end-to-end wave optics routine, based on the JPL Code 5 prescription of the unaberrated HST and WFPC, was derived to output the reference phase front when mirror error is absent. The Roddier routine unwrapped the retrieved phase by inserting the required jumps of ±2π radians for the sake of smoothness. A least-squares fitting routine, insensitive to phase unwrapping but nonlinear, was used to obtain estimates of the Zernike polynomial coefficients that describe the aberration. The phase results were close to, but higher than, the expected error in the conic constant of the primary mirror suggested by the fossil evidence. Aberration contributed by the camera itself could be responsible for the small discrepancy, but this was not verified by analysis.
Effect of an imaging-based streamlined electronic healthcare process on quality and costs.
Bui, Alex A T; Taira, Ricky K; Goldman, Dana; Dionisio, John David N; Aberle, Denise R; El-Saden, Suzie; Sayre, James; Rice, Thomas; Kangarloo, Hooshang
2004-01-01
A streamlined process of care supported by technology and imaging may be effective in managing the overall healthcare process and costs. This study examined the effect of an imaging-based electronic process of care on costs and rates of hospitalization, emergency room (ER) visits, specialist diagnostic referrals, and patient satisfaction. A healthcare process was implemented for an employer group, highlighting improved patient access to primary care plus routine use of imaging and teleconsultation with diagnostic specialists. An electronic infrastructure supported patient access to physicians and communication among healthcare providers. The employer group, a self-insured company, manages a healthcare plan for its employees and their dependents: 4,072 employees were enrolled in the test group, and 7,639 in the control group. Outcome measures for expenses and frequency of hospitalizations, ER visits, traditional specialist referrals, primary care visits, and imaging utilization rates were measured using claims data over 1 year. Homogeneity tests of proportions were performed with a chi-square statistic; mean differences were tested by two-sample t-tests. Patient satisfaction with access to healthcare was gauged using results from an independent firm. Overall per member/per month costs post-implementation were lower in the enrolled population (126 dollars vs 160 dollars), even though the occurrence of chronic/expensive diseases was higher in the enrolled group (18.8% vs 12.2%). Lower per member/per month costs were seen for inpatient care (33.29 dollars vs 35.59 dollars), specialist referrals (21.36 dollars vs 26.84 dollars), and ER visits (3.68 dollars vs 5.22 dollars). Moreover, the utilization rates for hospital admissions, ER visits, and traditional specialist referrals were significantly lower in the enrolled group, although primary care and imaging utilization were higher. Comparison with similar employer groups showed that the company's costs were lower than national averages (119.24 dollars vs 146.32 dollars), indicating that the observed result was not attributable to normalization effects. Patient satisfaction with access to healthcare ranked in the top 21st percentile. A streamlined healthcare process supported by technology resulted in higher patient satisfaction and cost savings despite improved access to primary care and higher utilization of imaging.
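The two significance tests named in the methods are one-liners in SciPy. A sketch with invented counts and cost samples; the study's actual claims data are not public:

```python
import numpy as np
from scipy import stats

# Chi-square homogeneity test for, e.g., hospitalization proportions
# (hypothetical counts: hospitalized / not hospitalized)
table = np.array([[120, 3952],    # enrolled group
                  [310, 7329]])   # control group
chi2, p_prop, dof, _ = stats.chi2_contingency(table)

# Two-sample t-test for per-member/per-month costs (hypothetical data)
rng = np.random.default_rng(0)
enrolled = rng.normal(126, 40, 500)
control = rng.normal(160, 45, 500)
t, p_cost = stats.ttest_ind(enrolled, control)
print(f"proportions p={p_prop:.4f}, costs p={p_cost:.4f}")
```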
Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.
2015-01-01
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351
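As a concrete instance of class (a), the simplest thresholding-based approach exploits the large HU gap between aerated lung and soft tissue. The sketch below is generic textbook logic, not any specific method from the review; the -320 HU cutoff and the corner-voxel trick for discarding outside air are assumptions, and, as the review stresses, this approach fails when effusions or consolidations fill the lung.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import ball, binary_closing

def threshold_lung_mask(ct_hu):
    """Thresholding-based lung segmentation for a 3D HU volume."""
    mask = ct_hu < -320                        # air/lung vs. soft tissue
    labels, _ = ndi.label(mask)
    mask[labels == labels[0, 0, 0]] = False    # drop air outside the body
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1          # two largest = the lungs
    lung = np.isin(labels, keep)
    return binary_closing(lung, ball(2))       # smooth small defects
```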
Using normalization 3D model for automatic clinical brain quantative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D-brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D-brain model, which shows well-defined brain regions, was used to replace the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained with this automated method agree with the clinical diagnosis evaluation score to within 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D-brain model, sparing the manual slice-by-slice drawing of ROIs on structural medical images required by the traditional procedure. That is, the method not only provides precise analysis results but also improves the processing rate for large volumes of medical images in clinical practice.
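The mutual-information similarity driving the registration step is, at its core, a joint-histogram computation; a registration loop maximizes this value over candidate transforms. A compact NumPy version, where the bin count and the direct histogram estimator are implementation choices of this sketch rather than the paper's:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two images of the same shape,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```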
NASA Astrophysics Data System (ADS)
Wang, Xingwei; Zheng, Bin; Li, Shibo; Mulvihill, John J.; Chen, Xiaodong; Liu, Hong
2010-07-01
Karyotyping is an important process for classifying chromosomes into standard classes, and the results are routinely used by clinicians to diagnose cancers and genetic diseases. However, visual karyotyping using microscopic images is time-consuming and tedious, which reduces diagnostic efficiency and accuracy. Although many efforts have been made to develop computerized schemes for automated karyotyping, no scheme can be performed without substantial human intervention. Instead of developing a method to classify all chromosome classes, we developed an automatic scheme to detect abnormal metaphase cells by identifying a specific class of chromosomes (class 22) and to prescreen for suspected chronic myeloid leukemia (CML). The scheme includes three steps: (1) iteratively segment the randomly distributed individual chromosomes, (2) process the segmented chromosomes and compute image features to identify the candidates, and (3) apply an adaptive matching template to identify chromosomes of class 22. An image data set of 451 metaphase cells extracted from bone marrow specimens of 30 positive and 30 negative cases for CML was selected to test the scheme's performance. The overall case-based classification accuracy was 93.3% (100% sensitivity and 86.7% specificity). The results demonstrate the feasibility of applying an automated scheme to detect or prescreen suspected cancer cases.
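Step (3) is, at heart, template matching. A minimal stand-in using normalized cross-correlation from scikit-image; the paper's matcher is adaptive, which this fixed-template sketch does not reproduce:

```python
import numpy as np
from skimage.feature import match_template

def find_class22(cell_image, template, threshold=0.6):
    """Flag candidate chromosome-22 locations where the normalized
    cross-correlation with a class-22 template exceeds a threshold."""
    response = match_template(cell_image, template, pad_input=True)
    peaks = np.argwhere(response > threshold)  # (row, col) candidates
    return peaks, response
```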
How to make deposition of images a reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guss, J. Mitchell, E-mail: mitchell.guss@sydney.edu.au; McMahon, Brian; School of Molecular Bioscience, The University of Sydney, Sydney, NSW 2006
2014-10-01
An analysis is performed of the technical and financial challenges to be overcome if deposition of primary experimental data is to become routine. The IUCr Diffraction Data Deposition Working Group is investigating the rationale and policies for routine deposition of diffraction images (and other primary experimental data sets). An information-management framework is described that should inform policy directions, and some of the technical and other issues that need to be addressed in an effort to achieve such a goal are analysed. In the near future, routine data deposition could be encouraged at one of the growing number of institutional repositories that accept data sets or at a generic data-publishing web repository service. To realise all of the potential benefits of depositing diffraction data, specialized archives would be preferable. Funding such an initiative will be challenging.
Imaging mass spectrometry statistical analysis.
Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A
2012-08-30
Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics is needed to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated by the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the opaque reporting of many data analysis routines, combined with inadequate training and a shortage of accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.
A device-independent interface for interactive image display
NASA Technical Reports Server (NTRS)
Perkins, D. C.; Szczur, M. R.; Owings, J.; Jamros, R. K.
1984-01-01
The structure of the device-independent Display Management Subsystem (DMS) and the interface routines available to the applications programmer for developing a set of portable image display utility programs are described.
Can we develop pathology-specific MRI contrast for "MR-negative" epilepsy?
Feindel, Kirk W
2013-05-01
Recent improvements in magnetic resonance imaging (MRI) hardware, software, and analysis routines are helping to put cases of "MR-negative" epilepsy on the decline. However, most standard-of-care MRI relies on careful manipulation and presentation of T1, T2, and diffusion-weighted contrast, which characterize the behavior of water in "bulk" tissue rather than providing pathology-specific contrast. Research efforts in MR physics continue to identify and develop novel theory and methods, such as diffusional kurtosis imaging (DKI) and temporal diffusion spectroscopy, which can better characterize tissue substructure, and chemical exchange saturation transfer (CEST), which can target underlying biochemical processes. The potential role of each technique in targeting pathologies implicated in "MR-negative" epilepsy is outlined herein. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
NASA Astrophysics Data System (ADS)
Jain, Manu; Rajadhyaksha, Milind; Nehal, Kishwer
2016-03-01
Confocal mosaicing microscopy (CMM) enables rapid imaging of large areas of fresh tissue ex vivo without the processing that is necessary for conventional histology. When performed in fluorescence mode using acridine orange (a nuclear-specific dye), it enhances nuclei-to-dermis contrast, enabling detection of all types of basal cell carcinoma (BCC), including thin strands of infiltrative BCCs. Thus far, this technique has been validated mostly in the research setting for the analysis of BCC tumor margins. Recently, CMM has been adopted and implemented in real clinical settings by some surgeons as an alternative tool to frozen section (FS) during Mohs surgery. In this review article we summarize the development of CMM-guided imaging of ex vivo tissues from bench to bedside. We also present its current state of application in the routine clinical workflow, not only for the assessment of BCC margins but also for other skin cancers, such as melanoma and SCC, and for some infectious diseases where FS is not routinely performed. Lastly, we discuss the potential limitations of this technology as well as future developments. As this technology advances further, it may serve as an adjunct to standard histology and enable rapid surgical pathology of skin cancers at the bedside.
NASA Astrophysics Data System (ADS)
Giese, A.; Böhringer, H. J.; Leppert, J.; Kantelhardt, S. R.; Lankenau, E.; Koch, P.; Birngruber, R.; Hüttmann, G.
2006-02-01
Optical coherence tomography (OCT) is a non-invasive imaging technique with micrometer resolution. It allows non-contact, non-invasive analysis of central nervous system tissues with a penetration depth of 1-3.5 mm at a spatial resolution of approximately 4-15 μm. We have adapted spectral-domain OCT (SD-OCT) and time-domain OCT (TD-OCT) for intraoperative detection of residual tumor during brain tumor surgery. Human brain tumor tissue and areas of the resection cavity were analyzed during the resection of gliomas using this new technology. The site of analysis was registered using a neuronavigation system, and biopsies were taken and submitted to routine histology. We used post-acquisition image processing to compensate for movements of the brain and to realign A-scan images for calculation of a light attenuation factor. OCT imaging of normal cortex and white matter showed a typical light attenuation profile. Tumor tissue, depending on the cellularity of the specimen, showed a loss of the normal light attenuation profile, resulting in altered light attenuation coefficients compared to normal brain. Based on this parameter and on the microstructure of the tumor tissue, which was entirely absent in normal tissue, OCT analysis allowed the discrimination of normal brain tissue, invaded brain, solid tumor tissue, and necrosis. Following macroscopically complete resections, OCT analysis of the resection cavity displayed the typical microstructure and light attenuation profile of tumor tissue in some specimens, which in routine histology contained microscopic residual tumor. We have demonstrated that this technology may be applied to the intraoperative detection of residual tumor during resection of human gliomas.
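The attenuation factor mentioned above can be estimated from each realigned A-scan by fitting an exponential decay to the depth profile; here is a minimal NumPy sketch under a single-scattering Beer-Lambert assumption (the authors' exact model is not given in the abstract):

```python
import numpy as np

def attenuation_coefficient(a_scan, pixel_size_mm, fit_range=(10, 200)):
    """Estimate a light attenuation coefficient (1/mm) from one OCT A-scan by
    a linear fit of log-intensity versus depth, assuming I(z) ~ I0*exp(-2*mu*z)
    (single scattering); a sketch, not the authors' published method."""
    z0, z1 = fit_range                          # pixel range inside tissue
    intensity = np.asarray(a_scan[z0:z1], dtype=float)
    depth_mm = np.arange(z0, z1) * pixel_size_mm
    # Guard against log(0); real data would also be averaged over neighbors.
    log_i = np.log(np.clip(intensity, 1e-6, None))
    slope, _ = np.polyfit(depth_mm, log_i, 1)
    return -slope / 2.0                         # factor 2: round-trip path
```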
Celi, Simona; Berti, Sergio
2014-10-01
Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT workstation, where measurement is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure for obtaining 3D geometrical models that can also be used for external purposes, such as finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method against identification and segmentation performed manually by expert OCT readers. The method was evaluated on ten datasets from clinical routine, and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components is possible off-line with a precision that is comparable to manual segmentation for the tissue components and to the proprietary OCT console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcomes. Copyright © 2014 Elsevier B.V. All rights reserved.
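As an illustration of the lumen-segmentation phase, a minimal OpenCV sketch (thresholding plus connected-component extraction; the validated toolbox is considerably more elaborate) could look like this:

```python
import cv2
import numpy as np

def segment_lumen(oct_frame):
    """Rough lumen segmentation of a cross-sectional OCT frame: smooth,
    Otsu-threshold the bright vessel wall, and keep the dark region around
    the image center as the lumen. Illustrative only; assumes the center
    pixel lies in the lumen (catheter artifacts assumed already masked)."""
    gray = cv2.cvtColor(oct_frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (7, 7), 0)
    _, wall = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    inv = cv2.bitwise_not(wall)                 # dark regions, incl. lumen
    n_labels, labels = cv2.connectedComponents(inv)
    cy, cx = gray.shape[0] // 2, gray.shape[1] // 2
    lumen_mask = (labels == labels[cy, cx]).astype(np.uint8) * 255
    area_px = int(np.count_nonzero(lumen_mask))
    return lumen_mask, area_px
```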
Comparison of quality control software tools for diffusion tensor imaging.
Liu, Bilan; Zhu, Tong; Zhong, Jianhui
2015-04-01
Image quality of diffusion tensor imaging (DTI) is critical for image interpretation, diagnostic accuracy and efficiency. However, DTI is susceptible to numerous detrimental artifacts that may impair the reliability and validity of the obtained data. Although many quality control (QC) software tools have been developed and are widely used, each with its own tradeoffs, there is still no general agreement on an image quality control routine for DTI, and the practical impact of these tradeoffs is not well studied. An objective comparison that identifies the pros and cons of each of the QC tools will help users make the best choice among tools for specific DTI applications. This study aims to quantitatively compare the effectiveness of three popular QC tools: DTI studio (Johns Hopkins University), DTIprep (University of North Carolina at Chapel Hill, University of Iowa and University of Utah) and TORTOISE (National Institutes of Health). Both synthetic and in vivo human brain data were used to quantify the adverse effects of major DTI artifacts on tensor calculation, as well as the effectiveness of the different QC tools in identifying and correcting these artifacts. The technical basis of each tool is discussed, and the ways in which particular techniques affect the output of each tool are analyzed. The different functions and I/O formats that the three QC tools provide for building a general DTI processing pipeline and for integration with other popular image processing tools are also discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko
2015-06-01
Accuracy and speed are essential for intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of the GPU-accelerated volume subdivision-based nonrigid registration technique with those of the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of the ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time, including contouring of ROIs and computation, were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
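Both reported accuracy metrics are straightforward to reproduce; a NumPy/SciPy sketch (assuming binary masks of the registered structures on a common voxel grid; the distance-transform formulation of the 95% HD is a common choice, not necessarily the paper's exact implementation):

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff95(a, b, spacing):
    """95th-percentile symmetric Hausdorff distance (mm) between the surfaces
    of two boolean masks, computed via Euclidean distance transforms."""
    def surface(m):
        return np.logical_xor(m, ndimage.binary_erosion(m))
    sa, sb = surface(a), surface(b)
    # Distance from every voxel to the other mask's surface, in mm.
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    d = np.concatenate([dist_to_b[sa], dist_to_a[sb]])
    return np.percentile(d, 95)
```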
Kim, Jin-su; Moon, Yong-ju; Choi, Yun Sun; Park, Young Uk; Park, Seung Min; Lee, Kyung Tai
2012-01-01
The purpose of the present study was to clarify the usefulness of an oblique axial scan parallel to the course of the anterior talofibular ligament in magnetic resonance imaging of the anterior talofibular ligament in patients with chronic ankle instability. We evaluated this anterior talofibular ligament view and the routine axial magnetic resonance imaging planes of 115 ankles. We graded the anterior talofibular ligament injury and confirmed full-length views of the anterior talofibular ligament. Associated lesions were also checked, and the subjective ease of diagnosing associated problems was assessed. The full-length view of the anterior talofibular ligament was obtained in 85 (73.9%) patients on the routine axial view and in 112 (97.4%) patients on the anterior talofibular ligament view. The grade of injury increased in the anterior talofibular ligament view in 26 (22.6%) patients compared with the routine axial view. There were 64 associated injuries. The anterior inferior tibiofibular ligament, posterior inferior tibiofibular ligament, and posterior tibialis tendinitis were more easily diagnosed on the routine axial view than on the anterior talofibular ligament view. An additional anterior talofibular ligament view is useful in the evaluation of the anterior talofibular ligament in patients with chronic ankle instability. Copyright © 2012 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Dynamic chest radiography: flat-panel detector (FPD) based functional X-ray imaging.
Tanaka, Rie
2016-07-01
Dynamic chest radiography is flat-panel detector (FPD)-based functional X-ray imaging, performed as an additional examination in chest radiography. The large field of view (FOV) of FPDs permits real-time observation of the entire lungs and simultaneous right-and-left evaluation of diaphragm kinetics. Most importantly, dynamic chest radiography provides pulmonary ventilation and circulation findings as slight changes in pixel value, even without the use of contrast media; the interpretation is challenging, but crucial for a better understanding of pulmonary function. The basic concept was proposed in the 1980s; however, it was not realized until the 2010s because of technical limitations. Dynamic FPDs and advanced digital image processing played a key role in the clinical application of dynamic chest radiography. Pulmonary ventilation and circulation can be quantified and visualized for the diagnosis of pulmonary diseases. Dynamic chest radiography can be deployed as a simple and rapid means of functional imaging in both routine and emergency medicine. Here, we focus on the evaluation of pulmonary ventilation and circulation. This review article describes the basic mechanism of the imaging findings according to pulmonary ventilation/circulation physiology, followed by imaging procedures, analysis methods, and the diagnostic performance of dynamic chest radiography.
Companion diagnostics and molecular imaging-enhanced approaches for oncology clinical trials.
Van Heertum, Ronald L; Scarimbolo, Robert; Ford, Robert; Berdougo, Eli; O'Neal, Michael
2015-01-01
In the era of personalized medicine, diagnostic approaches are helping pharmaceutical and biotechnology sponsors streamline the clinical trial process. Molecular assays and diagnostic imaging are routinely being used to stratify patients for treatment, monitor disease, and provide reliable early clinical phase assessments. The importance of diagnostic approaches in drug development is highlighted by the rapidly expanding global cancer diagnostics market and the emergent attention of regulatory agencies worldwide, which are beginning to offer more structured platforms and guidance for this area. In this paper, we highlight the key benefits of using companion diagnostics and diagnostic imaging with a focus on oncology clinical trials. Nuclear imaging using widely available radiopharmaceuticals, in conjunction with molecular imaging of oncology targets, has opened the door to more accurate disease assessment and the modernization of standard criteria for the evaluation, staging, and treatment response assessment of cancer patients. Furthermore, the introduction and validation of quantitative molecular imaging continues to drive and optimize the field of oncology diagnostics. Given their pivotal role in disease assessment and treatment, the validation and commercialization of diagnostic tools will continue to advance oncology clinical trials, support new oncology drugs, and promote better patient outcomes.
Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation
NASA Astrophysics Data System (ADS)
Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.
2016-12-01
Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite-image-to-irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
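The optimal interpolation update has a standard closed form; a minimal NumPy sketch follows, with an assumed distance-based background-error covariance (the abstract instead spreads information using cloud location and thickness):

```python
import numpy as np

def optimal_interpolation(background, obs, H, B, R):
    """One optimal interpolation (BLUE) update:
        analysis = background + K (obs - H background),
        K = B H^T (H B H^T + R)^{-1}.
    background: satellite-derived irradiance on the grid, shape (n,)
    obs: ground point measurements, shape (m,)
    H: (m, n) observation operator mapping grid values to sensor locations
    B, R: background- and observation-error covariance matrices."""
    S = H @ B @ H.T + R                      # innovation covariance (m, m)
    K = np.linalg.solve(S, H @ B).T          # gain; uses symmetry of B and S
    return background + K @ (obs - H @ background)

# Illustrative covariance assumption: errors decay with distance.
def exp_covariance(coords_km, sigma2=1.0, length_km=10.0):
    d = np.linalg.norm(coords_km[:, None, :] - coords_km[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / length_km)
```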
Dubra, Alfredo; Sulai, Yusufu; Norris, Jennifer L.; Cooper, Robert F.; Dubis, Adam M.; Williams, David R.; Carroll, Joseph
2011-01-01
The rod photoreceptors are implicated in a number of devastating retinal diseases. However, routine imaging of these cells has remained elusive, even with the advent of adaptive optics imaging. Here, we present the first in vivo images of the contiguous rod photoreceptor mosaic in nine healthy human subjects. The images were collected with three different confocal adaptive optics scanning ophthalmoscopes at two different institutions, using 680 and 775 nm superluminescent diodes for illumination. Estimates of photoreceptor density and rod:cone ratios in the 5°–15° retinal eccentricity range are consistent with histological findings, confirming our ability to resolve the rod mosaic by averaging multiple registered images, without the need for additional image processing. In one subject, we were able to identify the emergence of the first rods at approximately 190 μm from the foveal center, in agreement with previous histological studies. The rod and cone photoreceptor mosaics appear in focus at different retinal depths, with the rod mosaic best focus (i.e., brightest and sharpest) being at least 10 μm shallower than the cones at retinal eccentricities larger than 8°. This study represents an important step in bringing high-resolution imaging to bear on the study of rod disorders. PMID:21750765
Tucker, F. Lee
2012-01-01
Modern breast imaging, including magnetic resonance imaging, provides an increasingly clear depiction of breast cancer extent, often with suboptimal pathologic confirmation. Pathologic findings guide management decisions, and small increments in reported tumor characteristics may rationalize significant changes in therapy and staging. Pathologic techniques for grossly examining resected breast tissue have changed little during this era of improved breast imaging and still rely primarily on gross inspection and specimen palpation. Only limited imaging information is conveyed to pathologists, typically in the form of wire-localization images from breast-conserving procedures. Conventional techniques of specimen dissection and section submission destroy the three-dimensional integrity of the breast anatomy and tumor distribution. These traditional methods of breast specimen examination impose unnecessary limitations on correlation with imaging studies and on measurement of cancer extent, multifocality, and margin distance. Improvements in pathologic diagnosis, reporting, and correlation of breast cancer characteristics can be achieved by integrating breast imagers into the specimen examination process and by using large-format sections that preserve local anatomy. This paper describes the successful creation of a large-format pathology program to routinely serve all patients in a busy interdisciplinary breast center associated with a community-based nonprofit health system in the United States. PMID:23316372
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P.; Young, K.; Halling-Brown, M. D.
2014-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. Over the past two decades both diagnostic and therapeutic imaging have undergone rapid growth; the ability to harness this large influx of medical images can provide an essential resource for research and training. Traditionally, the systematic collection of medical images for research from heterogeneous sites has not been commonplace within the NHS and is fraught with challenges, including data acquisition, storage, secure transfer and correct anonymisation. Here, we describe a semi-automated system which comprehensively oversees the collection of both unprocessed and processed medical images from acquisition to a centralised database. The provision of unprocessed images within our repository enables a multitude of potential research possibilities that utilise the images. Furthermore, we have developed systems and software to integrate these data with their associated clinical data and annotations, providing a centralised dataset for research. Currently we regularly collect digital mammography images from two sites and partially collect from a further three, with efforts to expand into other modalities and sites ongoing. At present we have collected 34,014 2D images from 2,623 individuals. In this paper we describe our medical image collection system for research and discuss the wide spectrum of challenges faced during the design and implementation of such systems.
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through the implementation of optimized gridding into our iterative reconstruction scheme, the improved GPU implementation provides speed-ups of more than a factor of 200 compared with the previous accelerated GPU code. PMID:23682203
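Gridding interpolates non-Cartesian k-space samples onto a regular grid with a convolution kernel before an inverse FFT; a toy 2D NumPy sketch follows (triangular kernel, no density compensation or deapodization, both of which a real reconstruction like IMPATIENT's would need):

```python
import numpy as np

def grid_kspace(samples, coords, grid_size, kernel_width=2.0):
    """Convolution gridding of non-Cartesian k-space data onto a Cartesian
    grid. samples: complex values, shape (N,); coords: (N, 2) sample
    positions in grid units, centered at grid_size/2. Toy triangular kernel."""
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    half = kernel_width / 2.0
    for s, (kx, ky) in zip(samples, coords):
        x0, x1 = int(np.ceil(kx - half)), int(np.floor(kx + half))
        y0, y1 = int(np.ceil(ky - half)), int(np.floor(ky + half))
        for gx in range(max(x0, 0), min(x1, grid_size - 1) + 1):
            wx = max(0.0, 1.0 - abs(gx - kx) / half)
            for gy in range(max(y0, 0), min(y1, grid_size - 1) + 1):
                wy = max(0.0, 1.0 - abs(gy - ky) / half)
                grid[gx, gy] += s * wx * wy
    # The image follows from an inverse FFT of the gridded data.
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
```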
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Coskun, Ulas; Liao, Shih-Chu Jeff; Barbieri, Beniamino
2018-02-01
Photoluminescence (PL) refers to light emission initiated by any form of photon excitation. PL spectroscopy and microscopy imaging have been widely applied in the material, chemical and life sciences. Measuring the PL lifetime adds a new dimension to PL imaging and opens new opportunities for many PL applications. In solar cell research, quantification of the PL lifetime has become an important evaluation of the characteristics of perovskite thin films. Depending upon the PL process (fluorescence, phosphorescence, photon upconversion, etc.), the PL lifetimes to be measured can vary over a wide timescale range (e.g. from sub-nanoseconds to microseconds or even milliseconds); it is challenging to cover this wide range of lifetime measurements efficiently with a single technique. Here, we present a novel digital frequency domain (DFD) technique named FastFLIM, capable of measuring PL lifetimes from 100 ps to 100 ms at high data collection efficiency (up to 140 million counts per second). As an alternative to traditional nonlinear least-squares fitting analysis, the raw data acquired by FastFLIM can be directly processed by the model-free phasor plot approach for instant and unbiased lifetime results, providing an ideal routine for PL lifetime microscopy imaging.
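The model-free phasor approach maps each decay to a point (g, s) at the modulation frequency; a NumPy sketch using the standard phasor definitions (not ISS's FastFLIM internals, which are proprietary):

```python
import numpy as np

def phasor(decay, dt, freq_hz):
    """Phasor coordinates of a luminescence decay histogram.
    decay: counts per time bin; dt: bin width (s); freq_hz: modulation
    frequency. Returns (g, s) and the phase lifetime tau_phi = tan(phi)/omega,
    which equals the true lifetime for a single-exponential decay."""
    t = (np.arange(len(decay)) + 0.5) * dt      # bin-center times
    w = 2.0 * np.pi * freq_hz
    total = decay.sum()
    g = np.sum(decay * np.cos(w * t)) / total
    s = np.sum(decay * np.sin(w * t)) / total
    tau_phi = s / (g * w)
    return g, s, tau_phi
```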
Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B
2013-11-01
A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for the collimator-detector response, resulting in an improvement in the reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess the improvement in image quality achieved with the software and to evaluate the potential for reduced-time acquisitions in bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification was the acquisition of an eight-frame gated data set using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data so that the effect of reduced-time acquisitions could be assessed without imposing additional scanning time on the patient. The summed data sets were then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts on reconstructed image quality, as judged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols, filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over the current local processing protocols (P<0.05). The RR algorithm improved image quality compared with the local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent in this approach.
2015-04-01
Current routine MRI examinations rely on the acquisition of qualitative images whose contrast is "weighted" for a mixture of (magnetic) tissue properties. Recently, a novel approach was introduced, namely MR Fingerprinting (MRF), which takes a completely different approach to data acquisition, post-processing and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudo-randomized acquisition that causes the signals from different tissues to have a unique signal evolution, or 'fingerprint', that is simultaneously a function of the multiple material properties under investigation. The processing after acquisition involves a pattern recognition algorithm that matches the fingerprints to a predefined dictionary of predicted signal evolutions. These matches can then be translated into quantitative maps of the magnetic parameters of interest. MRF could theoretically be applied to most traditional qualitative MRI methods, replacing them with the acquisition of truly quantitative tissue measures. MRF is thereby expected to be much more accurate and reproducible than traditional MRI, and should improve multi-center studies and significantly reduce reader bias when diagnostic imaging is performed. Key Points • MR fingerprinting (MRF) is a new approach to data acquisition, post-processing and visualization. • MRF provides highly accurate quantitative maps of T1, T2, proton density, and diffusion. • MRF may offer multiparametric imaging with high reproducibility, and high potential for multicenter/multivendor studies.
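The pattern-recognition step is commonly implemented as a maximum inner product search over the normalized dictionary; a minimal NumPy sketch (vendor and research implementations differ in the details and optimizations):

```python
import numpy as np

def mrf_match(signals, dictionary, params):
    """Match measured MRF signal evolutions to a precomputed dictionary.
    signals: (n_voxels, n_timepoints) complex measurements
    dictionary: (n_atoms, n_timepoints) complex simulated evolutions
    params: (n_atoms, k) parameter table, e.g. columns [T1, T2]
    Returns the best-matching parameters per voxel, shape (n_voxels, k)."""
    # L2-normalize so the inner product acts as a correlation measure.
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    corr = np.abs(s @ d.conj().T)        # (n_voxels, n_atoms)
    best = np.argmax(corr, axis=1)       # index of the matched fingerprint
    return params[best]
```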
Asher, Elad; Reuveni, Haim; Shlomo, Nir; Gerber, Yariv; Beigel, Roy; Narodetski, Michael; Eldar, Michael; Or, Jacob; Hod, Hanoch; Shamiss, Arie; Matetzky, Shlomi
2015-01-01
Aims The aim of this study was to compare, in patients presenting with acute chest pain, the clinical outcomes and cost-effectiveness of an accelerated diagnostic protocol utilizing contemporary technology in a chest pain unit versus routine care in an internal medicine department. Methods and Results Hospital and 90-day course were prospectively studied in 585 consecutive low-moderate risk acute chest pain patients, of whom 304 were investigated in a designated chest pain center using a pre-specified accelerated diagnostic protocol, while 281 underwent routine care in an internal medicine ward. Hospitalization was longer in the routine care group compared with the accelerated diagnostic protocol group (p<0.001). During hospitalization, 298 (98%) accelerated diagnostic protocol patients vs. 57 (20%) routine care patients underwent non-invasive testing (p<0.001). Throughout the 90-day follow-up, diagnostic imaging testing was performed in 125 (44%) and 26 (9%) patients in the routine care and accelerated diagnostic protocol groups, respectively (p<0.001). Ultimately, most patients in both groups had non-invasive imaging testing. The accelerated diagnostic protocol, compared with routine care, was associated with a lower incidence of readmissions for chest pain [8 (3%) vs. 24 (9%), p<0.01] and acute coronary syndromes [1 (0.3%) vs. 9 (3.2%), p<0.01] during the follow-up period. The accelerated diagnostic protocol remained a predictor of fewer acute coronary syndromes and readmissions after propensity score analysis [OR = 0.28 (95% CI 0.14–0.59)]. Cost per patient was similar in both groups ($2510 vs. $2703 for the accelerated diagnostic protocol and routine care groups, respectively; p = 0.9). Conclusion An accelerated diagnostic protocol is clinically superior to, and as cost-effective as, routine care in acute chest pain patients, and may save time and resources. PMID:25622029
Bohndiek, Sarah E.; Bodapati, Sandhya; Van De Sompel, Dominique; Kothapalli, Sri-Rajasekhar; Gambhir, Sanjiv S.
2013-01-01
Photoacoustic imaging combines the high contrast of optical imaging with the spatial resolution and penetration depth of ultrasound. This technique holds tremendous potential for imaging in small animals and, importantly, is clinically translatable. At present, there is no accepted standard physical phantom that can be used to provide routine quality control and performance evaluation of photoacoustic imaging instruments. With the growing popularity of the technique and the advent of several commercial small animal imaging systems, it is important to develop a strategy for the assessment of such instruments. Here, we developed a protocol for the fabrication of physical phantoms for photoacoustic imaging from polyvinyl chloride plastisol (PVCP). Using this material, we designed and constructed a range of phantoms by tuning the optical properties of the background matrix and embedding spherical absorbing targets of the same material at different depths. We created specific designs to enable routine quality control, testing of the robustness of photoacoustic signals as a function of background, and evaluation of the maximum available imaging depth. Furthermore, we demonstrated that we could, for the first time, evaluate two small animal photoacoustic imaging systems with distinctly different light delivery, ultrasound imaging geometries and center frequencies, using stable physical phantoms, and directly compare the results from both systems. PMID:24086557
Lee, Young Han
2018-04-04
The purposes of this study are to evaluate the feasibility of protocol determination with a convolutional neural network (CNN) classifier based on short-text classification and to evaluate the agreement between protocols determined by the CNN and those determined by musculoskeletal radiologists. Following institutional review board approval, the database of a hospital information system (HIS) was queried for lists of MRI examinations, referring department, patient age, and patient gender. These were exported to a local workstation for analysis: 5258 and 1018 consecutive musculoskeletal MRI examinations were used for the training and test datasets, respectively. The classification targets were routine versus tumor protocols, and the input texts were word combinations of referring department, body region, use of contrast media, gender, and age. A CNN classifier with an embedded vector layer was used, initialized with Word2Vec Google News vectors. The test set was evaluated with each classification model, and the results were output as routine or tumor protocols. The CNN determinations were evaluated using receiver operating characteristic (ROC) curves, with radiologist-confirmed protocols as the reference standard. The optimal cut-off value for protocol determination between routine and tumor protocols was 0.5067, with a sensitivity of 92.10%, a specificity of 95.76%, and an area under the curve (AUC) of 0.977. The overall accuracy was 94.2% for the ConvNet model. All MRI protocols were correctly determined for pelvic bone, upper arm, wrist, and lower leg MRIs. Deep-learning-based convolutional neural networks were clinically utilized to determine musculoskeletal MRI protocols. CNN-based text learning and its applications could be extended to other radiologic tasks besides image interpretation, improving the work performance of the radiologist.
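A short-text CNN of this kind is compact; the Keras sketch below is a hypothetical illustration (layer sizes are assumptions, not the paper's exact architecture), with the embedding layer left randomly initialized where the paper would load pretrained Word2Vec vectors:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_protocol_classifier(vocab_size, seq_len=16, embed_dim=300):
    """Binary short-text classifier (routine vs. tumor protocol) in the
    spirit of a word-embedding CNN. The Embedding layer's weights could be
    set from pretrained Word2Vec Google News vectors."""
    inputs = tf.keras.Input(shape=(seq_len,))            # token-id sequence
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # P(tumor protocol)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```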
Routine Computer Tomography Imaging for the Detection of Recurrences in High-Risk Melanoma Patients.
Park, Tristen S; Phan, Giao Q; Yang, James C; Kammula, Udai; Hughes, Marybeth S; Trebska-McGowan, Kasia; Morton, Kathleen E; White, Donald E; Rosenberg, Steven A; Sherry, Richard M
2017-04-01
The use of routine CT imaging for surveillance in asymptomatic patients with cutaneous melanoma is controversial. We report our experience using a surveillance strategy that included CT imaging for a cohort of patients with high-risk melanoma. A total of 466 patients with high-risk cutaneous melanoma enrolled in adjuvant immunotherapy trials were followed for tumor progression by physical examination, labs, and CT imaging as defined by protocol. Evaluations were obtained at least every 6 months for year 1, every 6 months for year 2, and then annually for the remainder of the 5-year study. Time to tumor progression, sites of recurrence, and the method of relapse detection were identified. The patient cohort consisted of 115 stage II patients, 328 stage III patients, and 23 patients with resected stage IV melanoma. The median time to progression for the 225 patients who developed tumor progression was 7 months. Tumor progression was detected by patients, by physician examination or routine labs, or by CT imaging alone in 27, 14, and 59% of cases, respectively. Melanoma recurrences were locoregional in 36% of cases and systemic in 64% of cases. Thirty percent of patients with locoregional relapse and 75% of patients with systemic relapse were detected solely by CT imaging. CT imaging alone detected the majority of sites of disease progression in our patients with high-risk cutaneous melanoma. This disease was not heralded by symptoms, physical examination, or blood work. Although the benefit of early detection of advanced melanoma is unknown, this experience is relevant because of the rapid development and availability of potentially curative immunotherapies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, S; Wang, Y; Weng, H
Purpose To evaluate the image quality and radiation dose of routine abdomen computed tomography exams performed with the automatic tube current modulation technique (ATCM) on two different brands of 64-slice CT scanners at our site. Materials and Methods A retrospective review of routine abdomen CT exams performed with two scanners (scanner A and scanner B) at our site. The standard deviation within a 12.5 mm x 12.5 mm region of interest at the portal vein level represented the image noise. The radiation dose was obtained from the CT DICOM image information, with the volume computed tomography dose index (CTDIvol) representing the CT radiation dose. The patients in this study were of normal weight (about 65–75 kg). Results The standard deviation for scanner A was smaller than for scanner B, suggesting that scanner A produced better image quality. On the other hand, the radiation dose of scanner A was higher than that of scanner B (about 50–60% higher) with ATCM. For both scanners, the radiation dose was under the diagnostic reference level. Conclusion The ATCM systems in modern CT scanners can contribute a significant reduction in radiation dose to the patient, but the reduction achieved by ATCM systems from different CT scanner manufacturers varies slightly. Whatever CT scanner is used, it is necessary to find the acceptable threshold of image quality with the minimum possible radiation exposure to the patient, in agreement with the ALARA principle.
The Potential for an Enhanced Role for MRI in Radiation-therapy Treatment Planning
Metcalfe, P.; Liney, G. P.; Holloway, L.; Walker, A.; Barton, M.; Delaney, G. P.; Vinod, S.; Tomé, W.
2013-01-01
The exquisite soft-tissue contrast of magnetic resonance imaging (MRI) has meant that the technique is having an increasing role in contouring the gross tumor volume (GTV) and organs at risk (OAR) in radiation therapy treatment planning systems (TPS). MRI-planning scans from diagnostic MRI scanners are currently incorporated into the planning process by being registered to CT data. The soft-tissue data from the MRI provides target outline guidance and the CT provides a solid geometric and electron density map for accurate dose calculation on the TPS computer. There is increasing interest in MRI machine placement in radiotherapy clinics as an adjunct to CT simulators. Most vendors now offer 70 cm bores with flat couch inserts and specialised RF coil designs. We would refer to these devices as MR-simulators. There is also research into the future application of MR-simulators independent of CT and as in-room image-guidance devices. It is within the background of this increased interest in the utility of MRI in radiotherapy treatment planning that this paper is couched. The paper outlines publications that deal with standard MRI sequences used in current clinical practice. It then discusses the potential for using processed functional diffusion maps (fDM) derived from diffusion weighted image sequences in tracking tumor activity and tumor recurrence. Next, this paper reviews publications that describe the use of MRI in patient-management applications that may, in turn, be relevant to radiotherapy treatment planning. The review briefly discusses the concepts behind functional techniques such as dynamic contrast enhanced (DCE), diffusion-weighted (DW) MRI sequences and magnetic resonance spectroscopic imaging (MRSI). Significant applications of MR are discussed in terms of the following treatment sites: brain, head and neck, breast, lung, prostate and cervix. While not yet routine, the use of apparent diffusion coefficient (ADC) map analysis indicates an exciting future application for functional MRI. Although DW-MRI has not yet been routinely used in boost adaptive techniques, it is being assessed in cohort studies for sub-volume boosting in prostate tumors. PMID:23617289
Lucyshyn, Joseph M.; Irvin, Larry K.; Blumberg, E. Richard; Laverty, Robelyn; Horner, Robert H.; Sprague, Jeffrey R.
2015-01-01
We conducted an observational study of parent-child interaction in home activity settings (routines) of families raising young children with developmental disabilities and problem behavior. Our aim was to empirically investigate the construct validity of coercion in typical but unsuccessful family routines. The long-term goal was to develop an expanded ecological unit of analysis that may contribute to sustainable behavioral family intervention. Ten children with autism and/or mental retardation and their families participated. Videotaped observations were conducted in typical but unsuccessful home routines. Parent-child interaction in routines was coded in real time and sequential analyses were conducted to test hypotheses about coercive processes. Following observation, families were interviewed about the social validity of the construct. Results confirmed the presence of statistically significant, attention-driven coercive processes in routines in which parents were occupied with non-child centered tasks. Results partially confirmed the presence of escape-driven coercive processes in routines in which parent demands are common. Additional analysis revealed an alternative pattern with greater magnitude. Family perspectives suggested the social validity of the construct. Results are discussed in terms of preliminary, partial evidence for coercive processes in routines of families of children with developmental disabilities. Implications for behavioral assessment and intervention design are discussed. PMID:26321883
Paz, Concepción; Conde, Marcos; Porteiro, Jacobo; Concheiro, Miguel
2017-01-01
This work introduces the use of machine vision in the massive bubble recognition process, which supports the validation of boiling models involving bubble dynamics, as well as nucleation frequency, active site density and bubble size. The two algorithms presented are meant to be run on quite standard images of the bubbling process, recorded in general-purpose boiling facilities. The recognition routines are easily adaptable to other facilities if a minimum number of precautions are taken in the setup and in the treatment of the information. Both the side and front projections of the subcooled flow-boiling phenomenon over a plain plate are covered. Once all of the intended bubbles have been located in space and time, proper post-processing of the recorded data makes it possible to track each of the recognized bubbles, sketch their trajectories and size evolution, locate the nucleation sites, compute their diameters, and so on. After validating the algorithms' output against the human eye and data from other researchers, machine vision systems have been demonstrated to be a very valuable option for successfully performing the recognition process, even though the optical analysis of bubbles was not set as the main goal of the experimental facility. PMID:28632158
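Bubble detection of this sort is often prototyped with a circular Hough transform; an OpenCV sketch follows (all parameters are illustrative and would be tuned to the facility's optics; this is not the paper's algorithm):

```python
import cv2
import numpy as np

def detect_bubbles(frame):
    """Detect roughly circular bubbles in one high-speed video frame using
    the circular Hough transform. Returns an array of (x, y, r) in pixels;
    tracking across frames would link these detections into trajectories."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)          # suppress sensor noise
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
        param1=100,    # Canny high threshold for edge detection
        param2=30,     # accumulator threshold: lower finds more circles
        minRadius=3, maxRadius=60)
    if circles is None:
        return np.empty((0, 3))
    return circles[0]
```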
Landsat Data Continuity Mission Calibration and Validation
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Dabney, Philip W.; Storey, James C.; Morfitt, Ron; Knight, Ed; Kvaran, Geir; Lee, Kenton
2008-01-01
The primary payload for the Landsat Data Continuity Mission (LDCM) is the Operational Land Imager (OLI), being built by Ball Aerospace and Technologies under contract to NASA. The OLI has spectral bands similar to the Landsat-7 ETM+, minus the thermal band and with two new bands: a 443 nm band and a 1375 nm cirrus detection band. On-board calibration systems include two solar diffusers (routine and pristine), a shutter and three sets of internal lamps (routine, backup and pristine). Being a pushbroom design, as opposed to the whiskbroom design of the ETM+, the system poses new challenges for characterization and calibration, chief among them the large focal plane with 75,000+ detectors. A comprehensive characterization and calibration plan is in place for the instrument and the data throughout the mission, involving Ball, NASA and the United States Geological Survey, which will take over operations of LDCM after on-orbit commissioning. Driving radiometric calibration requirements for OLI data include radiance calibration to 5% uncertainty (1σ), reflectance calibration to 3% uncertainty (1σ) and relative (detector-to-detector) calibration to 0.5% (1σ). Driving geometric calibration requirements for OLI include band-to-band registration of 4.5 meters (90% confidence), absolute geodetic accuracy of 65 meters (90% CE) and relative geodetic accuracy of 25 meters (90% CE). Key spectral, spatial and radiometric characterization of the OLI will occur in thermal vacuum at Ball Aerospace. During commissioning the OLI will be characterized and calibrated using celestial (sun, moon, stars) and terrestrial sources. The USGS EROS ground processing system will incorporate an image assessment system similar to Landsat-7's for characterization and calibration. This system will have the added benefit that characterization data will be extracted as part of normal image data processing, so that the available characterization data will be significantly larger than for Landsat-7 ETM+.
A deep learning method for classifying mammographic breast density categories.
Mohamed, Aly A; Berg, Wendie A; Peng, Hong; Luo, Yahong; Jankowitz, Rachel C; Wu, Shandong
2018-01-01
Mammographic breast density is an established risk marker for breast cancer and is visually assessed by radiologists in routine mammogram image reading, using four qualitative Breast Imaging Reporting and Data System (BI-RADS) breast density categories. It is particularly difficult for radiologists to consistently distinguish the two most common and most variably assigned BI-RADS categories, i.e., "scattered density" and "heterogeneously dense". The aim of this work was to investigate a deep learning-based breast density classifier to consistently distinguish these two categories, with the goal of providing a potential computerized tool to assist radiologists in assigning a BI-RADS category in the current clinical workflow. In this study, we constructed a convolutional neural network (CNN)-based model coupled with a large (i.e., 22,000 images) digital mammogram imaging dataset to evaluate the classification performance between the two aforementioned breast density categories. All images were collected from a cohort of 1,427 women who underwent standard digital mammography screening from 2005 to 2016 at our institution. The ground truth density categories were based on standard clinical assessments made by board-certified breast imaging radiologists. The effects of direct training from scratch solely on digital mammogram images and of transfer learning from a model pretrained on a large nonmedical imaging dataset were evaluated for the specific task of breast density classification. To measure the classification performance, the CNN classifier was also tested on a refined version of the mammogram image dataset created by removing some potentially inaccurately labeled images. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to measure the accuracy of the classifier. The AUC was 0.9421 when the CNN model was trained from scratch on our own mammogram images, and the accuracy increased gradually with the size of the training sample. Using the pretrained model followed by a fine-tuning process with as few as 500 mammogram images led to an AUC of 0.9265. After removing the potentially inaccurately labeled images, the AUC increased to 0.9882 and 0.9857 without and with the pretrained model, respectively, both significantly higher (P < 0.001) than when using the full imaging dataset. Our study demonstrated high classification accuracy between two breast density categories that are difficult to distinguish and are routinely assessed by radiologists. We anticipate that our approach will help enhance the current clinical assessment of breast density and better support consistent density notification to patients in breast cancer screening. © 2017 American Association of Physicists in Medicine.
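Transfer learning of the kind evaluated here typically freezes a backbone pretrained on a large nonmedical dataset and retrains a new classification head; a hypothetical Keras sketch (the paper does not specify this architecture, and the ResNet50/ImageNet choice is an assumption):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_density_classifier(input_shape=(224, 224, 3)):
    """Binary classifier ("scattered" vs. "heterogeneously dense") on top of
    an ImageNet-pretrained backbone. Grayscale mammograms would be replicated
    to three channels; fine-tuning can later unfreeze the top backbone layers."""
    base = tf.keras.applications.ResNet50(weights="imagenet",
                                          include_top=False,
                                          input_shape=input_shape)
    base.trainable = False                       # freeze pretrained features
    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # P(heterogeneously dense)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```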
Lee, Seung Hyun; Kim, Myung-Joon; Yoon, Choon-Sik; Lee, Mi-Jung
2012-09-01
To retrospectively compare the radiation dose and image quality of pediatric chest CT using a routine-dose protocol reconstructed with filtered back projection (FBP) (the Routine study) and a low-dose protocol with 50% adaptive statistical iterative reconstruction (ASIR) (the ASIR study). We retrospectively reviewed chest CT performed in pediatric patients who underwent both the Routine study and the ASIR study on different days between January 2010 and August 2011. Volume CT dose indices (CTDIvol), dose length products (DLP), and effective doses were obtained to estimate radiation dose. The image quality was evaluated objectively as noise measured in the descending aorta and paraspinal muscle, and subjectively by three radiologists for noise, sharpness, artifacts, and diagnostic acceptability using a four-point scale. The paired Student's t-test and the Wilcoxon signed-rank test were used for statistical analysis. Twenty-six patients (M:F=13:13, mean age 11.7 years) were enrolled. The ASIR studies showed 60.3%, 56.2%, and 55.2% reductions in CTDIvol (from 18.73 to 7.43 mGy, P<0.001), DLP (from 307.42 to 134.51 mGy×cm, P<0.001), and effective dose (from 4.12 to 1.84 mSv, P<0.001), respectively, compared with the Routine studies. The objective noise was higher in the paraspinal muscle of the ASIR studies (20.81 vs. 16.67, P=0.004), but was not different in the aorta (18.23 vs. 18.72, P=0.726). The subjective image quality demonstrated no difference between the two studies. A low-dose protocol with 50% ASIR allows a radiation dose reduction in pediatric chest CT of more than 55% while maintaining image quality. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Additive Manufacturing Techniques for the Reconstruction of 3D Fetal Faces
Citro, Daniela; Padula, Francesco; Motyl, Barbara; Marcolin, Federica; Calì, Michele
2017-01-01
This paper deals with additive manufacturing techniques for the creation of 3D fetal face models starting from routine 3D ultrasound data. In particular, two distinct themes are addressed. First, a method for processing and building 3D models based on the use of medical image processing techniques is proposed. Second, the preliminary results of a questionnaire distributed to expectant parents are presented, considering the use of these reconstructions from both an emotional and an affective point of view. In particular, the study focuses on the enhancement of the perception of maternity or paternity and the improvement of the relationship between parents and physicians in cases of fetal malformation, in particular facial malformations or cleft lip. PMID:29410600
Favazza, Christopher P; Duan, Xinhui; Zhang, Yi; Yu, Lifeng; Leng, Shuai; Kofler, James M; Bruesewitz, Michael R; McCollough, Cynthia H
2015-11-07
Through this investigation we developed a methodology to evaluate and standardize CT image quality from routine abdomen protocols across different manufacturers and models. The influence of manufacturer-specific automated exposure control systems on image quality was directly assessed to standardize performance across a range of patient sizes. We evaluated 16 CT scanners across our health system, including Siemens, GE, and Toshiba models. Using each practice's routine abdomen protocol, we measured spatial resolution, image noise, and scanner radiation output (CTDIvol). Axial and in-plane spatial resolutions were assessed through slice sensitivity profile (SSP) and modulation transfer function (MTF) measurements, respectively. Image noise and CTDIvol values were obtained for three different phantom sizes. SSP measurements demonstrated a bimodal distribution in slice widths: an average of 6.2 ± 0.2 mm using GE's 'Plus' mode reconstruction setting and 5.0 ± 0.1 mm for all other scanners. MTF curves were similar for all scanners. Average spatial frequencies at 50%, 10%, and 2% MTF values were 3.24 ± 0.37, 6.20 ± 0.34, and 7.84 ± 0.70 lp cm⁻¹, respectively. For all phantom sizes, image noise and CTDIvol varied considerably: 6.5-13.3 HU (noise) and 4.8-13.3 mGy (CTDIvol) for the smallest phantom; 9.1-18.4 HU and 9.3-28.8 mGy for the medium phantom; and 7.8-23.4 HU and 16.0-48.1 mGy for the largest phantom. Using these measurements and benchmark SSP, MTF, and image noise targets, CT image quality can be standardized across a range of patient sizes.
Firmware Development Improves System Efficiency
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Most manufacturing processes require physical pointwise positioning of components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate progression to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this pointwise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as a trigger signal to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased system throughput, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.
Wang, Ling-jia; Kissler, Hermann J; Wang, Xiaojun; Cochet, Olivia; Krzystyniak, Adam; Misawa, Ryosuke; Golab, Karolina; Tibudan, Martin; Grzanka, Jakub; Savari, Omid; Grose, Randall; Kaufman, Dixon B; Millis, Michael; Witkowski, Piotr
2015-01-01
Pancreatic islet mass, represented by the islet equivalent (IEQ), is the most important parameter in decision making for clinical islet transplantation. To obtain the IEQ, a sample of islets is routinely counted manually under a microscope and discarded thereafter. Islet purity, another parameter in islet processing, is routinely acquired by estimation only. In this study, we validated our digital image analysis (DIA) system, developed using the Image Pro Plus software, for islet mass and purity assessment. Application of the DIA allows better compliance with current good manufacturing practice (cGMP) standards. Human islet samples were captured as calibrated digital images for a permanent record. Five trained technicians participated in the determination of IEQ and purity by the manual counting method and by DIA. IEQ counts showed statistically significant correlations between the manual method and DIA in all sample comparisons (r > 0.819 and p < 0.0001). A statistically significant difference in IEQ between the two methods was found only in the High purity 100 μL sample group (p = 0.029). As for purity determination, statistically significant differences between manual assessment and DIA measurement were found in the High and Low purity 100 μL samples (p < 0.005). In addition, the islet particle number (IPN) and the IEQ/IPN ratio did not differ statistically between the manual counting method and DIA. In conclusion, the DIA used in this study is a reliable technique for the determination of IEQ and purity. Islet samples preserved as digital images and results produced by DIA can be permanently stored for verification, technical training and islet information exchange between different islet centers. Therefore, DIA complies better with cGMP requirements than the manual counting method. We propose DIA as a quality control tool to supplement the established standard manual method for islet counting and purity estimation. PMID:24806436
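IEQ conversion normalizes each islet's volume to that of a standard 150-μm-diameter islet; a scikit-image sketch of how a DIA pipeline might compute it from a segmented, labeled image (a per-islet simplification of the binned convention used in manual counting, not the validated Image Pro Plus macro):

```python
import numpy as np
from skimage import measure

def islet_ieq(label_image, um_per_px, min_diameter_um=50.0):
    """Compute islet equivalents (IEQ) from a labeled islet segmentation.
    Each islet's sphere volume (from its equivalent circle diameter) is
    normalized to a standard 150-um islet; islets below min_diameter_um
    are ignored, as in routine counting."""
    ieq_total, ipn = 0.0, 0
    for region in measure.regionprops(label_image):
        d_um = region.equivalent_diameter * um_per_px
        if d_um < min_diameter_um:
            continue
        ipn += 1
        ieq_total += (d_um / 150.0) ** 3   # volume ratio to a 150-um sphere
    return ieq_total, ipn                  # IEQ and islet particle number
```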
Gooroochurn, M; Kerr, D; Bouazza-Marouf, K; Ovinis, M
2011-02-01
This paper describes the development of a registration framework for image-guided solutions to the automation of certain routine neurosurgical procedures. The registration process aligns the pose of the patient in the preoperative space to that of the intraoperative space. Computerized tomography images are used in the preoperative (planning) stage, whilst white light (TV camera) images are used to capture the intraoperative pose. Craniofacial landmarks, rather than artificial markers, are used as the registration basis for the alignment. To create further synergy between the user and the image-guided system, automated methods for extraction of these landmarks have been developed. The results obtained from the application of a polynomial neural network classifier based on Gabor features for the detection and localization of the selected craniofacial landmarks, namely the ear tragus and eye corners in the white light modality are presented. The robustness of the classifier to variations in intensity and noise is analysed. The results show that such a classifier gives good performance for the extraction of craniofacial landmarks.
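As a hedged sketch of the Gabor feature stage named above: the kernel parameters and filter-bank layout below are illustrative assumptions, and the paper's polynomial neural network classifier is not reproduced.

```python
# Sketch: a small Gabor filter bank of the kind commonly used for
# landmark features in grayscale images. Parameters (kernel size,
# wavelength, sigma, orientations) are illustrative, not the paper's.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real (cosine) Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(image, size=21, wavelength=8.0, sigma=4.0, n_orient=4):
    """Stack filter responses over several orientations as a feature map."""
    thetas = [k * np.pi / n_orient for k in range(n_orient)]
    return np.stack([convolve(image, gabor_kernel(size, wavelength, t, sigma))
                     for t in thetas])

img = np.random.rand(64, 64)             # stand-in for a white-light frame
print(gabor_features(img).shape)         # (4, 64, 64)
```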
Sifting Through SDO's AIA Cosmic Ray Hits to Find Treasure
NASA Astrophysics Data System (ADS)
Kirk, M. S.; Thompson, B. J.; Viall, N. M.; Young, P. R.
2017-12-01
The Solar Dynamics Observatory's Atmospheric Imaging Assembly (SDO AIA) has revolutionized solar imaging with its high temporal and spatial resolution, unprecedented spatial and temporal coverage, and seven EUV channels. Automated algorithms routinely clean these images to remove cosmic ray intensity spikes as part of the mission's preprocessing pipeline. We take a novel approach to survey the entire set of AIA "spike" data to identify and group compact brightenings across the entire SDO mission. The AIA team applies a de-spiking algorithm to remove magnetospheric particle impacts on the CCD cameras, but it has been found that compact, intense solar brightenings are often removed as well. We use the spike database to mine the data and form statistics on compact solar brightenings without having to process large volumes of full-disk AIA data. There are approximately 3 trillion "spiked pixels" removed from images over the mission to date. We estimate that 0.001% of those are of solar origin and removed by mistake, giving us a pre-segmented dataset of 30 million events. We explore the implications of these statistics and the physical qualities of the "spikes" of solar origin.
Improved defect analysis of Gallium Arsenide solar cells using image enhancement
NASA Technical Reports Server (NTRS)
Kilmer, Louis C.; Honsberg, Christiana; Barnett, Allen M.; Phillips, James E.
1989-01-01
A new technique has been developed to capture, digitize, and enhance the image of light emission from a forward biased direct bandgap solar cell. Since the forward biased light emission from a direct bandgap solar cell has been shown to display both qualitative and quantitative information about the solar cell's performance and its defects, signal processing techniques can be applied to the light emission images to identify and analyze shunt diodes. Shunt diodes are of particular importance because they have been found to be the type of defect which is likely to cause failure in a GaAs solar cell. The presence of a shunt diode can be detected from the light emission by using a photodetector to measure the quantity of light emitted at various current densities. However, to analyze how the shunt diodes affect the quality of the solar cell the pattern of the light emission must be studied. With the use of image enhancement routines, the light emission can be studied at low light emission levels where shunt diode effects are dominant.
Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy.
Zhao, Yongxin; Bucur, Octavian; Irshad, Humayun; Chen, Fei; Weins, Astrid; Stancu, Andreea L; Oh, Eun-Young; DiStasio, Marcello; Torous, Vanda; Glass, Benjamin; Stillman, Isaac E; Schnitt, Stuart J; Beck, Andrew H; Boyden, Edward S
2017-08-01
Expansion microscopy (ExM), a method for improving the resolution of light microscopy by physically expanding a specimen, has not been applied to clinical tissue samples. Here we report a clinically optimized form of ExM that supports nanoscale imaging of human tissue specimens that have been fixed with formalin, embedded in paraffin, stained with hematoxylin and eosin, and/or fresh frozen. The method, which we call expansion pathology (ExPath), converts clinical samples into an ExM-compatible state, then applies an ExM protocol with protein anchoring and mechanical homogenization steps optimized for clinical samples. ExPath enables ∼70-nm-resolution imaging of diverse biomolecules in intact tissues using conventional diffraction-limited microscopes and standard antibody and fluorescent DNA in situ hybridization reagents. We use ExPath for optical diagnosis of kidney minimal-change disease, a process that previously required electron microscopy, and we demonstrate high-fidelity computational discrimination between early breast neoplastic lesions for which pathologists often disagree in classification. ExPath may enable the routine use of nanoscale imaging in pathology and clinical research.
Brain Tumor Image Segmentation in MRI Image
NASA Astrophysics Data System (ADS)
Peni Agustin Tjahyaningtijas, Hapsari
2018-04-01
Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection, which improves the patient's chances of survival. Diagnosis by experts usually relies on manual segmentation, which is difficult and time-consuming, making automatic segmentation necessary. Automatic segmentation is now very popular and can be a solution to the problem of brain tumor segmentation with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. A number of existing review papers focus on traditional methods for MRI-based brain tumor image segmentation; in this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented, and future developments to standardize MRI-based brain tumor segmentation methods for daily clinical routine are addressed.
Reproducibility of radiomics for deciphering tumor phenotype with imaging
NASA Astrophysics Data System (ADS)
Zhao, Binsheng; Tan, Yongqiang; Tsai, Wei-Yann; Qi, Jing; Xie, Chuanmiao; Lu, Lin; Schwartz, Lawrence H.
2016-03-01
Radiomics (radiogenomics) characterizes tumor phenotypes based on quantitative image features derived from routine radiologic imaging to improve cancer diagnosis, prognosis, prediction and response to therapy. Although radiomic features must be reproducible to qualify as biomarkers for clinical care, little is known about how routine imaging acquisition techniques/parameters affect reproducibility. To begin to fill this knowledge gap, we assessed the reproducibility of a comprehensive, commonly-used set of radiomic features using a unique, same-day repeat computed tomography data set from lung cancer patients. Each scan was reconstructed at 6 imaging settings, varying slice thicknesses (1.25 mm, 2.5 mm and 5 mm) and reconstruction algorithms (sharp, smooth). Reproducibility was assessed using the repeat scans reconstructed at identical imaging setting (6 settings in total). In separate analyses, we explored differences in radiomic features due to different imaging parameters by assessing the agreement of these radiomic features extracted from the repeat scans reconstructed at the same slice thickness but different algorithms (3 settings in total). Our data suggest that radiomic features are reproducible over a wide range of imaging settings. However, smooth and sharp reconstruction algorithms should not be used interchangeably. These findings will raise awareness of the importance of properly setting imaging acquisition parameters in radiomics/radiogenomics research.
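The abstract does not name its exact reproducibility statistic, so as one plausible illustration the sketch below computes Lin's concordance correlation coefficient (CCC), a common test-retest metric for radiomic features; the feature values are invented.

```python
# Sketch: scoring test-retest reproducibility of one radiomic feature
# across repeat scans with Lin's concordance correlation coefficient.
# This metric choice and the numbers are illustrative assumptions.
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

scan1 = [10.2, 8.9, 12.4, 9.7, 11.1]     # feature values, repeat scan 1
scan2 = [10.0, 9.2, 12.1, 9.9, 11.4]     # same patients, repeat scan 2
print(f"CCC = {ccc(scan1, scan2):.3f}")
```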
Advances in Projection Moire Interferometry Development for Large Wind Tunnel Applications
NASA Technical Reports Server (NTRS)
Fleming, Gary A.; Soto, Hector L.; South, Bruce W.; Bartram, Scott M.
1999-01-01
An instrument development program aimed at using Projection Moire Interferometry (PMI) for acquiring model deformation measurements in large wind tunnels was begun at NASA Langley Research Center in 1996. Various improvements to the initial prototype PMI systems have been made throughout this development effort. This paper documents several of the most significant improvements to the optical hardware and image processing software, and addresses system implementation issues for large wind tunnel applications. The improvements have increased both measurement accuracy and instrument efficiency, promoting the routine use of PMI for model deformation measurements in production wind tunnel tests.
Conversion of NIMROD simulation results for graphical analysis using VisIt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Talamas, C A
Software routines developed to prepare NIMROD [C. R. Sovinec et al., J. Comp. Phys. 195, 355 (2004)] results for three-dimensional visualization from simulations of the Sustained Spheromak Physics Experiment (SSPX) [E. B. Hooper et al., Nucl. Fusion 39, 863 (1999)] are presented here. The visualization is done by first converting the NIMROD output to a format known as legacy VTK and then loading it into VisIt, a graphical analysis tool that includes three-dimensional rendering and various mathematical operations for large data sets. Sample images obtained from the processing of NIMROD data with VisIt are included.
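For orientation, here is a minimal legacy-VTK writer of the kind such a conversion routine needs. The STRUCTURED_POINTS layout and the synthetic field are assumptions for illustration; NIMROD's actual output would supply the grid (and may call for a different legacy-VTK dataset type).

```python
# Sketch: write a scalar field on a regular grid to a legacy-VTK ASCII
# file that VisIt can open directly. The field is synthetic stand-in data.
import numpy as np

def write_legacy_vtk(filename, field, spacing=(1.0, 1.0, 1.0)):
    """Write a 3-D numpy array as STRUCTURED_POINTS scalar data."""
    nx, ny, nz = field.shape
    with open(filename, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("sample scalar field\n")
        f.write("ASCII\n")
        f.write("DATASET STRUCTURED_POINTS\n")
        f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
        f.write("ORIGIN 0 0 0\n")
        f.write(f"SPACING {spacing[0]} {spacing[1]} {spacing[2]}\n")
        f.write(f"POINT_DATA {nx * ny * nz}\n")
        f.write("SCALARS pressure float 1\n")
        f.write("LOOKUP_TABLE default\n")
        # legacy VTK expects the x index to vary fastest
        for v in field.transpose(2, 1, 0).ravel():
            f.write(f"{v:.6e}\n")

field = np.random.rand(8, 8, 8)          # stand-in for a NIMROD scalar
write_legacy_vtk("sample.vtk", field)
```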
The role of completion imaging following carotid artery endarterectomy.
Ricco, Jean-Baptiste; Schneider, Fabrice; Illuminati, Giulio; Samson, Russell H
2013-05-01
A variety of completion imaging methods can be used during carotid endarterectomy to recognize technical errors or intrinsic abnormalities such as mural thrombus or platelet aggregation, but none of these methods has achieved wide acceptance, and their ability to improve the outcome of the operation remains a matter of controversy. It is unclear if completion imaging is routinely necessary and which abnormalities require re-exploration. Proponents of routine completion imaging argue that identification of these abnormalities will allow their immediate correction and avoid a perioperative stroke. However, much of the evidence in favor of this argument is incidental, and many experienced vascular surgeons who perform carotid endarterectomy do not use any completion imaging technique and report equally good outcomes using a careful surgical protocol. Furthermore, certain postoperative strokes, including intracerebral hemorrhage and hyperperfusion syndrome, are unrelated to the surgical technique and cannot be prevented by completion imaging. This controversial subject is now open to discussion, and our debaters have been given the task to clarify the evidence to justify their preferred option for completion imaging during carotid endarterectomy. Copyright © 2013 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
Whole body MRI: Improved Lesion Detection and Characterization With Diffusion Weighted Techniques
Attariwala, Rajpaul; Picker, Wayne
2013-01-01
Diffusion-weighted imaging (DWI) is an established functional imaging technique that interrogates the delicate balance of water movement at the cellular level. Technological advances enable this technique to be applied to whole-body MRI. Theory, b-value selection, common artifacts, and target-to-background optimization for viewing are reviewed for applications in the neck, chest, abdomen, and pelvis. Whole-body imaging with DWI allows novel applications of MRI to aid in the evaluation of conditions such as multiple myeloma, lymphoma, and skeletal metastases, while the quantitative nature of the technique permits evaluation of response to therapy. Persisting signal at high b-values from restricted hypercellular tissue and viscous fluid also permits applications of DWI beyond oncologic imaging. When used in conjunction with routine imaging, DWI can assist in detecting hemorrhagic degradation products, infection/abscess, and inflammation in colitis, and can aid in discriminating free fluid from empyema, while limiting the need for intravenous contrast. DWI in conjunction with routine anatomic images provides a platform to improve lesion detection and characterization, with findings rivaling other combined anatomic and functional imaging techniques and the added benefit of no ionizing radiation. PMID:23960006
32 CFR 701.121 - Processing “routine use” disclosures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... DEPARTMENT OF THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.121 Processing “routine use... be in writing and state that it is being made under a “routine use” established by a specific PA... and maintain a disclosure accounting of the information released. (See § 701.111.) (b) Failure to cite...
Levator claviculae muscle discovered during physical examination for cervical lymphadenopathy.
Rosenheimer, J L; Loewy, J; Lozanoff, S
2000-01-01
During a routine physical examination of an adult female with a history of breast cancer and cervical lymphadenopathy, a mass was noted in the right supraclavicular region. The mass was unilateral and easily palpable along the superior border near the median aspect of the clavicle. Plain film radiography, performed to determine whether the mass represented an enlarged jugulo-omohyoid lymph node, revealed an elongated opaque mass in this region. Computed tomographic (CT) and magnetic resonance (MR) images were subsequently obtained. Sequential axial CT scans revealed a cylindrical mass that appeared to be independent of contiguous muscles, including the sternocleidomastoid, anterior, and middle scalene muscles. This mass attached inferiorly to the clavicle and superiorly to the transverse process of the sixth cervical vertebra. Sagittal, coronal, and axial MR scans confirmed the presence of a well-defined superficial mass. It is concluded that the mass represents a levator claviculae (cleidocervical) muscle. This observation underscores the importance of understanding soft tissue variants that may be encountered during a routine physical examination. Copyright 2000 Wiley-Liss, Inc.
The quality mammographic image. A review of its components.
Rickard, M T
1989-11-01
Seven major factors resulting in a quality (high-contrast, high-resolution) mammographic image have been discussed. The following is a summary of their key features:
1) Dedicated mammographic equipment: molybdenum target material; molybdenum filter with beryllium window; low kVp usage, in the range of 24 to 30; routine contact mammography performed at 25 kVp; slightly lower kVp for coned compression; slightly higher kVp for microfocus magnification.
2) Film density: phototimer with adjustable position; calibration of the phototimer to an optimal optical density of approximately 1.4 over the full kVp range.
3) Breast compression: general and focal (coned compression); essential to achieve proper contrast, resolution, and breast immobility; foot controls preferable.
4) Focal spot: size recommendation for contact work 0.3 mm; minimum power output of 100 mA at 25 kVp desirable to avoid movement blurring in contact grid work; size recommendation for magnification work 0.1 mm.
5) Grid: recommended as routine in all but magnification work.
6) Film-screen combination: high-contrast, high-speed film; high-resolution screen; specifically designed cassette for close film-screen contact and low radiation absorption; faster screens for magnification techniques.
7) Dedicated processing: increased developing time (40 to 45 seconds); increased developer temperature (35 to 38 degrees); adjusted replenishment rate and dryer temperature.
All seven factors contributing to image contrast and resolution affect the radiation dose to the breast. The risk of increased dose associated with the use of various techniques needs to be balanced against the risks of incorrect diagnosis associated with their non-use. (ABSTRACT TRUNCATED AT 250 WORDS)
Faster scanning and higher resolution: new setup for multilayer zone plate imaging
NASA Astrophysics Data System (ADS)
Osterhoff, Markus; Soltau, Jakob; Eberl, Christian; Krebs, Hans-Ulrich
2017-09-01
Hard x-ray imaging methods are routinely used in two and three spatial dimensions to tackle challenging scientific questions of the 21st century, e.g. catalytic processes in energy research and biophysical experiments at the single-cell level [1-3]. Among the most important experimental techniques are scanning SAXS, to probe the local orientation of filaments, and fluorescence mapping, to quantify the local composition. The routinely available spot size has been reduced to a few tens of nanometres, but the real-space resolution of these techniques can be degraded by (i) vibration or drift and (ii) the spread of beam damage, especially for soft condensed matter on small length scales. We have recently developed new Multilayer Zone Plate (MZP) optics for focusing hard (14 keV) and very hard (60 keV to above 100 keV) x-rays down to spot sizes presumably on the 5-10 nm scale. Here we report on recent progress on a new MZP-based sample scanner and on how to tackle the spread of beam damage. The Eiger detector, synchronized to a piezo scanner, enables continuous 2D scanning of fields of view larger than 20 μm × 20 μm, or, for high resolution, down to (virtual) pixel sizes below 2 nm, in about three minutes for 255×255 points (90 seconds after further improvements). Nano-SAXS measurements with more than one million real-space pixels, each containing a full diffraction image, can be carried out in less than one hour, as we have shown using a Siemens star test pattern.
Schaumberg, Andrew J.; Sirintrapun, S. Joseph; Al-Ahmadie, Hikmat A.; Schüffler, Peter J.; Fuchs, Thomas J.
2018-01-01
Modern digital pathology departments have grown to produce whole-slide image data at petabyte scale, an unprecedented treasure chest for medical machine learning tasks. Unfortunately, most digital slides are not annotated at the image level, hindering large-scale application of supervised learning. Manual labeling is prohibitive, requiring pathologists with decades of training and outstanding clinical service responsibilities. This problem is further aggravated by the United States Food and Drug Administration’s ruling that primary diagnosis must come from a glass slide rather than a digital image. We present the first end-to-end framework to overcome this problem, gathering annotations in a nonintrusive manner during a pathologist’s routine clinical work: (i) microscope-specific 3D-printed commodity camera mounts are used to video record the glass-slide-based clinical diagnosis process; (ii) after routine scanning of the whole slide, the video frames are registered to the digital slide; (iii) motion and observation time are estimated to generate a spatial and temporal saliency map of the whole slide. Demonstrating the utility of these annotations, we train a convolutional neural network that detects diagnosis-relevant salient regions, then report accuracy of 85.15% in bladder and 91.40% in prostate, with 75.00% accuracy when training on prostate but predicting in bladder, despite different pathologists examining the different tissues. When training on one patient but testing on another, AUROC in bladder is 0.79±0.11 and in prostate is 0.96±0.04. Our tool is available at https://bitbucket.org/aschaumberg/deepscope PMID:29601065
Evaluation of web-based annotation of ophthalmic images for multicentric clinical trials.
Chalam, K V; Jain, P; Shah, V A; Shah, Gaurav Y
2006-06-01
An Internet browser-based annotation system can be used to identify and describe features in digitized retinal images, in multicentric clinical trials, in real time. In this web-based annotation system, the user employs a mouse to draw and create annotations on a transparent layer, that encapsulates the observations and interpretations of a specific image. Multiple annotation layers may be overlaid on a single image. These layers may correspond to annotations by different users on the same image or annotations of a temporal sequence of images of a disease process, over a period of time. In addition, geometrical properties of annotated figures may be computed and measured. The annotations are stored in a central repository database on a server, which can be retrieved by multiple users in real time. This system facilitates objective evaluation of digital images and comparison of double-blind readings of digital photographs, with an identifiable audit trail. Annotation of ophthalmic images allowed clinically feasible and useful interpretation to track properties of an area of fundus pathology. This provided an objective method to monitor properties of pathologies over time, an essential component of multicentric clinical trials. The annotation system also allowed users to view stereoscopic images that are stereo pairs. This web-based annotation system is useful and valuable in monitoring patient care, in multicentric clinical trials, telemedicine, teaching and routine clinical settings.
Adaptive optics imaging of geographic atrophy.
Gocho, Kiyoko; Sarda, Valérie; Falah, Sabrina; Sahel, José-Alain; Sennlaub, Florian; Benchaboune, Mustapha; Ullern, Martine; Paques, Michel
2013-05-01
To report the findings of en face adaptive optics (AO) near infrared (NIR) reflectance fundus flood imaging in eyes with geographic atrophy (GA). Observational clinical study of AO NIR fundus imaging was performed in 12 eyes of nine patients with GA, and in seven controls using a flood illumination camera operating at 840 nm, in addition to routine clinical examination. To document short term and midterm changes, AO imaging sessions were repeated in four patients (mean interval between sessions 21 days; median follow up 6 months). As compared with scanning laser ophthalmoscope imaging, AO NIR imaging improved the resolution of the changes affecting the RPE. Multiple hyporeflective clumps were seen within and around GA areas. Time-lapse imaging revealed micrometric-scale details of the emergence and progression of areas of atrophy as well as the complex kinetics of some hyporeflective clumps. Such dynamic changes were observed within as well as outside atrophic areas. In eyes affected by GA, AO NIR imaging allows high resolution documentation of the extent of RPE damage. It also revealed that a complex, dynamic process of redistribution of hyporeflective clumps throughout the posterior pole precedes and accompanies the emergence and progression of atrophy; therefore, these clumps are probably also a biomarker of RPE damage. AO NIR imaging may, therefore, be of interest to detect the earliest stages, to document the retinal pathology, and to monitor the progression of GA. (ClinicalTrials.gov number, NCT01546181.)
A Review on Real-Time 3D Ultrasound Imaging Technology
Huang, Qinghua; Zeng, Zhaozheng
2017-01-01
Real-time three-dimensional (3D) ultrasound (US) has attracted much more attention in medical researches because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area and hence is necessary in intraoperative ultrasound examinations. Plenty of publications have been declared to complete the real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review on how to design an interactive system with appropriate processing algorithms remains missing, resulting in the lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067
Endoscopic fluorescence imaging for early assessment of anastomotic recurrence of Crohn's disease
NASA Astrophysics Data System (ADS)
Mordon, Serge R.; Maunoury, Vincent; Geboes, K.; Klein, Olivier; Desreumaux, P.; Debaert, A.; Colombel, Jean-Frederic
1999-02-01
Crohn's disease is an inflammatory bowel disease of unknown etiology. The mechanism of the initial mucosal alterations is still unclear: ulcerations overlying lymphoid follicles and/or vasculitis have been proposed as the early lesions. We have developed a new and original method combining endoscopy with fluorescence angiography for identifying the early pathological lesions occurring in the neo-terminal ileum after right ileocolonic resection. The patient population consisted of 10 subjects enrolled in a prospective protocol of endoscopic follow-up at 3 and 12 months after surgery. Fluorescence imaging showed small spots giving a bright fluorescence, distributed singly in mucosa that appeared normal on routine endoscopy. Histopathological examination demonstrated that the fluorescence of the small spots originated from small, usually superficial, erosive lesions. In several cases, these erosive lesions occurred over lymphoid follicles. Endoscopic fluorescence imaging provides a suitable means of investigating the initial aspect of the Crohn's disease process by displaying correlative findings between fluorescent aspects and early pathological mucosal alterations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, M.D.; Beck, R.N.
1988-06-01
This document describes several years of research to improve PET imaging and diagnostic techniques in man. This program addresses the problems involving the basic science and technology underlying the physical and conceptual tools of radioactive tracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The overall objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice in order that individual patients and society as a whole will receive the maximum net benefit from the new knowledge gained. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility. The reports in the study were processed separately for the data bases. (TEM)
Automated Blazar Light Curves Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Spencer James
Every night in a remote clearing called Fenton Hill, high in the Jemez Mountains of central New Mexico, a bank of robotically controlled telescopes tilt their lenses to the sky for another round of observation through digital imaging. Los Alamos National Laboratory's Thinking Telescopes project is watching for celestial transients, including high-power cosmic flashes, and like all science, it can be messy work. To keep the project clicking along, Los Alamos scientists routinely install equipment upgrades, maintain the site, and refine the sophisticated machine-learning computer programs that process those images and extract useful data from them. Each week the system amasses 100,000 digital images of the heavens, some of which are compromised by clouds, wind gusts, focus problems, and so on. For a graduate student at the Lab taking a year's break between master's and Ph.D. studies, working with state-of-the-art autonomous telescopes that can make fundamental discoveries feels light years beyond the classroom.
Estimation of urinary stone composition by automated processing of CT images.
Chevreau, Grégoire; Troccaz, Jocelyne; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre
2009-10-01
The objective of this study was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. A standard acquisition and low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of the voxel density of homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid, 65%; struvite, 19%; cystine, 78%; carbapatite, 33.5%; calcium oxalate dihydrate, 57%; calcium oxalate monohydrate, 66.5%; brushite, 75%. Low-dose acquisition did not lower performance (P < 0.05). This entirely automated approach eliminates manual intervention on the images by the radiologist while providing identical performance, including for low-dose protocols.
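A sketch of the two ideas described above: masking partial-volume voxels via a local-variation (dissimilarity) measure, then classifying the remaining densities. The HU windows, the 3×3×3 neighborhood, and the variation cutoff are illustrative placeholders, not the study's calibrated values.

```python
# Sketch: vote over homogeneous stone voxels classified by density window.
# All thresholds below are hypothetical, for illustration only.
import numpy as np
from scipy.ndimage import generic_filter

HU_RANGES = {"uric acid": (300, 600), "cystine": (600, 900),
             "calcium oxalate": (900, 1600)}   # placeholder HU windows

def classify_stone(volume, mask, max_local_range=150.0):
    """Return the compound with the most votes among homogeneous voxels."""
    # dissimilarity proxy: spread of densities in each 3x3x3 neighborhood
    local_range = generic_filter(volume, np.ptp, size=3)
    homogeneous = mask & (local_range < max_local_range)
    values = volume[homogeneous]
    votes = {name: int(np.sum((values >= lo) & (values < hi)))
             for name, (lo, hi) in HU_RANGES.items()}
    return max(votes, key=votes.get), votes

volume = 1000.0 + np.random.normal(0, 30, (10, 10, 10))  # synthetic stone
mask = np.ones(volume.shape, bool)                        # stand-in segmentation
print(classify_stone(volume, mask))
```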
Myoanatomy of the velvet worm leg revealed by laboratory-based nanofocus X-ray source tomography.
Müller, Mark; de Sena Oliveira, Ivo; Allner, Sebastian; Ferstl, Simone; Bidola, Pidassa; Mechlem, Korbinian; Fehringer, Andreas; Hehn, Lorenz; Dierolf, Martin; Achterhold, Klaus; Gleich, Bernhard; Hammel, Jörg U; Jahn, Henry; Mayer, Georg; Pfeiffer, Franz
2017-11-21
X-ray computed tomography (CT) is a powerful noninvasive technique for investigating the inner structure of objects and organisms. However, the resolution of laboratory CT systems is typically limited to the micrometer range. In this paper, we present a table-top nanoCT system in conjunction with standard processing tools that is able to routinely reach resolutions down to 100 nm without using X-ray optics. We demonstrate its potential for biological investigations by imaging a walking appendage of Euperipatoides rowelli, a representative of Onychophora, an invertebrate group pivotal for understanding animal evolution. Comparative analyses proved that the nanoCT can depict the external morphology of the limb with an image quality similar to scanning electron microscopy, while simultaneously visualizing internal muscular structures at higher resolutions than confocal laser scanning microscopy. The obtained nanoCT data revealed hitherto unknown aspects of the onychophoran limb musculature, enabling the 3D reconstruction of individual muscle fibers, which was previously impossible using any laboratory-based imaging technique.
Computerized Doppler Tomography and Spectrum Analysis of Carotid Artery Flow
Morton, Paul; Goldman, Dave; Nichols, W. Kirt
1981-01-01
Contrast angiography remains the definitive study in the evaluation of atherosclerotic occlusive vascular disease. However, a safer technique for serial screening of symptomatic patients and for routine follow-up is necessary. Computerized pulsed Doppler ultrasonic arteriography is a noninvasive technique developed by Miles [6] for imaging lateral, antero-posterior, and transverse sections of the carotid artery. We upgraded this system with new software and hardware to analyze the three-dimensional blood flow data. The system now provides information about the location of the occlusive process in the artery and a semi-quantitative evaluation of the degree of obstruction. In addition, we interfaced a digital signal analyzer to the system, which permits spectrum analysis of the pulsed Doppler signal. This addition has allowed us to identify lesions that are not yet hemodynamically significant.
Badam, Raj Kumar; Sownetha, Triekan; Babu, D B Gandhi; Waghray, Shefali; Reddy, Lavanya; Garlapati, Komali; Chavva, Sunanda
2017-01-01
The word "autopsy" denotes "to see with own eyes." Autopsy (postmortem) is a process that includes a thorough examination of a corpse noting everything related to anatomization, surface wounds, histological and culture studies. Virtopsy is a term extracted from two words "virtual" and "autopsy." It employs imaging methods that are routinely used in clinical medicine such as computed tomography and magnetic resonance imaging in the field of autopsy, to find the reason for death. Virtopsy is a multi-disciplinary technology that combines forensic medicine and pathology, roentgenology, computer graphics, biomechanics, and physics. It is rapidly gaining importance in the field of forensics. This approach has been recently used by forensic odontologists, but yet to make its own mark in the field. This article mainly deals with "virtopsy" where in various articles were web searched, relevant data was selected, extracted, and summarized here.
Quantitation of Cellular Dynamics in Growing Arabidopsis Roots with Light Sheet Microscopy
Birnbaum, Kenneth D.; Leibler, Stanislas
2011-01-01
To understand dynamic developmental processes, living tissues have to be imaged frequently and for extended periods of time. Root development is extensively studied at cellular resolution to understand basic mechanisms underlying pattern formation and maintenance in plants. Unfortunately, ensuring continuous specimen access, while preserving physiological conditions and preventing photo-damage, poses major barriers to measurements of cellular dynamics in growing organs such as plant roots. We present a system that integrates optical sectioning through light sheet fluorescence microscopy with hydroponic culture that enables us to image, at cellular resolution, a vertically growing Arabidopsis root every few minutes and for several consecutive days. We describe novel automated routines to track the root tip as it grows, to track cellular nuclei and to identify cell divisions. We demonstrate the system's capabilities by collecting data on divisions and nuclear dynamics. PMID:21731697
Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J
2018-03-01
Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Intrahospital teleradiology from the emergency room
NASA Astrophysics Data System (ADS)
Fuhrman, Carl R.; Slasky, B. S.; Gur, David; Lattner, Stefanie; Herron, John M.; Plunkett, Michael B.; Towers, Jeffrey D.; Thaete, F. Leland
1993-09-01
Off-hour operation of the modern emergency room presents a challenge to conventional image management systems. To assess the utility of intrahospital teleradiology from the emergency room (ER), we installed a high-resolution film digitizer which was interfaced to a central archive and to a workstation at the main reading room. The system was designed to allow for digitization of images as soon as the films were processed. Digitized images were autorouted to both destinations and could be laser printed if desired. Near-real-time interpretations of nonselected cases were performed at both locations (conventional film in the ER and a workstation in the main reading room), and an analysis of disagreements was performed. Our results demonstrate that in spite of a 'significant' difference in reporting, 'clinically significant' differences were found in less than 5% of cases. Folder management issues, preprocessing, image orientation, and setting reasonable lookup tables for display were identified as the main limitations to the system's routine use in a busy environment. The main limitation of conventional film was the identification of subtle abnormalities in the bright regions of the film. Once identified on either system (conventional film or soft display), all abnormalities were visible and detectable on both display modalities.
The Propeller Belts in Saturn's A Ring
2017-01-30
This image from NASA's Cassini mission shows a region in Saturn's A ring. The level of detail is twice as high as this part of the rings has ever been seen before. The view contains many small, bright blemishes due to cosmic rays and charged particle radiation near the planet. The view shows a section of the A ring known to researchers for hosting belts of propellers -- bright, narrow, propeller-shaped disturbances in the ring produced by the gravity of unseen embedded moonlets. Several small propellers are visible in this view. These are on the order of 10 times smaller than the large, bright propellers whose orbits scientists have routinely tracked (and which are given nicknames for famous aviators). This image is a lightly processed version, with minimal enhancement, preserving all original details present in the image. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 18, 2016. The view was obtained at a distance of approximately 33,000 miles (54,000 kilometers) from the rings and looks toward the unilluminated side of the rings. Image scale is about a quarter-mile (330 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21059
Triceps tendon tear in a middle-aged weightlifter.
Molloy, Joseph M; Aberle, Curtis J; Escobar, Eduardo
2013-11-01
The patient was a 47-year-old man who was evaluated by a physical therapist for a chief complaint of posterior right elbow pain. The patient routinely participated in weightlifting activities and reported a sudden onset of triceps weakness and posterior elbow pain while performing clap push-ups 3 days prior. A physician assistant ordered radiographs, which were initially interpreted as normal, and routine magnetic resonance imaging for the right elbow. Following examination by a physical therapist, due to concern for a triceps tendon tear, the previously ordered magnetic resonance imaging was expedited, which revealed a partial triceps tendon tear with partial tendon retraction medially.
Minimally invasive surgery: only as good as the picture.
Drury, Nigel E.; Pollard, Rebecca; Dyer, Jonathan P.
2004-01-01
BACKGROUND: In minimally invasive surgery, there is increased reliance on real-time 2-dimensional images. The fibre-optic light lead is one of the most frequently damaged elements of the 'imaging chain', leading to a poor quality picture. METHODS: Light leads with a honeycomb projection were connected to a light source and the resulting beam directed at a sheet of paper. Darkened sectors with diminished or absent light transmission were recorded. RESULTS: All suitable light leads in routine use were examined. A mean of 22.2% (SD 7.8%) of the projection had diminished or absent light transmission. CONCLUSION: Sub-optimal endoscopic equipment was in routine use. PMID:15005945
Digital Images over the Internet: Rome Reborn at the Library of Congress.
ERIC Educational Resources Information Center
Valauskas, Edward J.
1994-01-01
Describes digital images of incunabula from the Library of the Vatican that are available over the Internet based on an actual exhibit that was displayed at the Library of Congress. Viewers, i.e., compression routines created to efficiently send color images, are explained; and other digital exhibits are described. (Contains three references.)
NASA Astrophysics Data System (ADS)
Lifshitz, Ronen; Kimchy, Yoav; Gelbard, Nir; Leibushor, Avi; Golan, Oleg; Elgali, Avner; Hassoon, Salah; Kaplan, Max; Smirnov, Michael; Shpigelman, Boaz; Bar-Ilan, Omer; Rubin, Daniel; Ovadia, Alex
2017-03-01
An ingestible capsule for colorectal cancer screening, based on ionizing-radiation imaging, has been developed and is in advanced stages of system stabilization and clinical evaluation. The imaging principle allows future patients using this technology to avoid bowel cleansing and to continue their normal life routine during the procedure. The Check-Cap capsule, or C-Scan® Cap, imaging principle is essentially based on reconstructing scattered radiation, with both the radiation source and the radiation detectors residing within the capsule. The radiation source is a custom-made radioisotope encased in a small canister and collimated into rotating beams. As the capsule travels along the human colon, it irradiates the colon wall from within. Scattering of radiation occurs both inside and outside the colon segment; some of this radiation is scattered back and detected by sensors onboard the capsule. During the procedure, the patient receives small amounts of contrast agent as an addition to his or her normal diet. The presence of contrast agent inside the colon makes the dominant physical processes Compton scattering and X-ray fluorescence (XRF), which differ mainly in the energy of the scattered photons. The detector readout electronics incorporates low-noise single-photon-counting channels, allowing separation between the products of these different physical processes. Separating the radiation energies essentially allows estimation of the distance from the capsule to the colon wall, and hence structural imaging of the intraluminal surface. This allows imaging of structural protrusions into the colon volume, especially focusing on adenomas that may develop into colorectal cancer.
A method for the automated processing and analysis of images of ULVWF-platelet strings.
Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V
2013-01-01
We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
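A sketch of the measurement stage of such a pipeline: label candidate strings in a thresholded frame, then report per-string length and a crude size proxy. The threshold, pixel scale, and synthetic frame are illustrative assumptions; the paper's own enhancement steps are not reproduced.

```python
# Sketch: label bright connected components as candidate strings and
# measure each one. Parameters and data are stand-ins for illustration.
import numpy as np
from scipy import ndimage

def measure_strings(frame, threshold=0.5, um_per_px=1.0):
    """Return (length_um, pixel_count) per connected bright string."""
    labels, n = ndimage.label(frame > threshold)
    results = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        # approximate string length by its bounding-box diagonal
        length = np.hypot(ys.max() - ys.min(), xs.max() - xs.min()) * um_per_px
        results.append((length, len(ys)))   # pixel count as a crude size proxy
    return results

frame = np.zeros((64, 64))
frame[10, 5:40] = 1.0                      # synthetic horizontal string
print(measure_strings(frame))
```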
Jeurissen, Ben; Leemans, Alexander; Sijbers, Jan
2014-10-01
Ensuring one is using the correct gradient orientations in a diffusion MRI study can be a challenging task. As different scanners, file formats and processing tools use different coordinate frame conventions, in practice, users can end up with improperly oriented gradient orientations. Using such wrongly oriented gradient orientations for subsequent diffusion parameter estimation will invalidate all rotationally variant parameters and fiber tractography results. While large misalignments can be detected by visual inspection, small rotations of the gradient table (e.g. due to angulation of the acquisition plane), are much more difficult to detect. In this work, we propose an automated method to align the coordinate frame of the gradient orientations with that of the corresponding diffusion weighted images, using a metric based on whole brain fiber tractography. By transforming the gradient table and measuring the average fiber trajectory length, we search for the transformation that results in the best global 'connectivity'. To ensure a fast calculation of the metric we included a range of algorithmic optimizations in our tractography routine. To make the optimization routine robust to spurious local maxima, we use a stochastic optimization routine that selects a random set of seed points on each evaluation. Using simulations, we show that our method can recover the correct gradient orientations with high accuracy and precision. In addition, we demonstrate that our technique can successfully recover rotated gradient tables on a wide range of clinically realistic data sets. As such, our method provides a practical and robust solution to an often overlooked pitfall in the processing of diffusion MRI. Copyright © 2014 Elsevier B.V. All rights reserved.
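A sketch of the search loop the abstract describes: rotate the gradient table, score each candidate with a whole-brain tracking metric, and keep the best rotation. The scoring function here is a toy stand-in; a real implementation would plug in mean fiber length from an actual tractography run.

```python
# Sketch: random search over gradient-table rotations, maximizing a
# user-supplied tractography score. The score below is a placeholder.
import numpy as np

def rotation_matrix(ax, ay, az):
    """Rotation about x, then y, then z (angles in radians)."""
    cx, sx, cy, sy, cz, sz = (np.cos(ax), np.sin(ax), np.cos(ay),
                              np.sin(ay), np.cos(az), np.sin(az))
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx

def align_gradient_table(bvecs, score, n_iter=200, max_angle=np.pi):
    """Keep the rotation of the b-vectors that gives the best score."""
    best_r, best_s = np.eye(3), score(bvecs)
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        r = rotation_matrix(*rng.uniform(-max_angle, max_angle, 3))
        s = score(bvecs @ r.T)              # candidate rotated table
        if s > best_s:
            best_r, best_s = r, s
    return best_r, best_s

bvecs = np.random.default_rng(1).normal(size=(30, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
score = lambda b: -np.sum((b[:, 2] - 0.1) ** 2)  # toy stand-in metric
print(align_gradient_table(bvecs, score)[1])
```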
Nin, Carlos Shuler; Marchiori, Edson; Irion, Klaus Loureiro; Paludo, Artur de Oliveira; Alves, Giordano Rafael Tronco; Hochhegger, Daniela Reis; Hochhegger, Bruno
2013-01-01
OBJECTIVE: To assess the routine use of barium swallow study in patients with chronic cough. METHODS: Between October of 2011 and March of 2012, 95 consecutive patients submitted to chest X-ray due to chronic cough (duration > 8 weeks) were included in the study. For study purposes, additional images were obtained immediately after the oral administration of 5 mL of a 5% barium sulfate suspension. Two radiologists systematically evaluated all of the images in order to identify any pathological changes. Fisher's exact test and the chi-square test for categorical data were used in the comparisons. RESULTS: The images taken immediately after barium swallow revealed significant pathological conditions that were potentially related to chronic cough in 12 (12.6%) of the 95 patients. These conditions, which included diaphragmatic hiatal hernia, esophageal neoplasm, achalasia, esophageal diverticulum, and abnormal esophageal dilatation, were not detected on the images taken without contrast. After appropriate treatment, the symptoms disappeared in 11 (91.6%) of the patients, whereas the treatment was ineffective in 1 (8.4%). We observed no complications related to barium swallow, such as contrast aspiration. CONCLUSIONS: Barium swallow improved the detection of significant radiographic findings related to chronic cough in 11.5% of patients. These initial findings suggest that the routine use of barium swallow can significantly increase the sensitivity of chest X-rays in the detection of chronic cough-related etiologies. PMID:24473762
Kim, Bum-Joon; Hong, Ki-Sun; Park, Kyung-Jae; Park, Dong-Hyuk; Chung, Yong-Gu; Kang, Shin-Hyuk
2012-12-01
The prefabrication of customized cranioplastic implants has been introduced to overcome the difficulties of intra-operative implant molding. The authors present a new technique, which consists of the prefabrication of implant molds using three-dimensional (3D) printers and polymethyl-methacrylate (PMMA) casting. A total of 16 patients with large skull defects (>100 cm(2)) underwent cranioplasty between November 2009 and April 2011. For unilateral cranial defects, 3D images of the skull were obtained from preoperative axial 1-mm spiral computed tomography (CT) scans. The image of the implant was generated by a digital subtraction mirror-imaging process using the normal side of the cranium as a model. For bilateral cranial defects, precraniectomy routine spiral CT scan data were merged with postcraniectomy 3D CT images following a smoothing process. Prefabrication of the mold was performed by the 3D printer. Intraoperatively, the PMMA implant was created with the prefabricated mold, and fit into the cranial defect. The median operation time was 184.36±26.07 minutes. Postoperative CT scans showed excellent restoration of the symmetrical contours and curvature of the cranium in all cases. The median follow-up period was 23 months (range, 14-28 months). Postoperative infection was developed in one case (6.2%) who had an open wound defect previously. Customized cranioplasty PMMA implants using 3D printer may be a useful technique for the reconstruction of various cranial defects.
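A sketch of the digital-subtraction mirror-imaging step for a unilateral defect, assuming the skull mask is already aligned so that the midsagittal plane coincides with the center of the flipped axis; the data are synthetic.

```python
# Sketch: mirror the intact side of a binary skull mask across the
# midline and subtract, leaving the implant shape. Data are synthetic.
import numpy as np

def implant_mask(skull_mask, axis=0):
    """Voxels present in the mirrored skull but missing on the defect side."""
    mirrored = np.flip(skull_mask, axis=axis)
    return mirrored & ~skull_mask

skull = np.ones((8, 8, 8), dtype=bool)
skull[1:3, 2:6, 2:6] = False               # synthetic unilateral defect
print(implant_mask(skull).sum())            # implant voxel count (32 here)
```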
Craniofacial Manifestations of Systemic Disorders: CT and MR Imaging Findings and Imaging Approach.
Andreu-Arasa, V Carlota; Chapman, Margaret N; Kuno, Hirofumi; Fujita, Akifumi; Sakai, Osamu
2018-01-01
Many systemic diseases or conditions can affect the maxillofacial bones; however, they are often overlooked or incidentally found at routine brain or head and neck imaging performed for other reasons. Early identification of some conditions may significantly affect patient care and alter outcomes. Early recognition of nonneoplastic hematologic disorders, such as thalassemia and sickle cell disease, may help initiate earlier treatment and prevent serious complications. The management of neoplastic diseases such as lymphoma, leukemia, or Langerhans cell histiocytosis may be different if diagnosed early, and metastases to the maxillofacial bones may be the first manifestation of an otherwise occult neoplasm. Endocrinologic and metabolic disorders also may manifest with maxillofacial conditions. Earlier recognition of osteoporosis may alter treatment and prevent complications such as insufficiency fractures, and identification of acromegaly may lead to surgical treatment if there is an underlying growth hormone-producing adenoma. Bone dysplasias sometimes are associated with skull base foraminal narrowing and subsequent involvement of the cranial nerves. Inflammatory processes such as rheumatoid arthritis and sarcoidosis may affect the maxillofacial bones, skull base, and temporomandibular joints. Radiologists should be familiar with the maxillofacial computed tomographic and magnetic resonance imaging findings of common systemic disorders because these may be the first manifestations of an otherwise unrevealed systemic process with potential for serious complications. Online supplemental material is available for this article. © RSNA, 2018.
NASA Astrophysics Data System (ADS)
Archip, Neculai; Fedorov, Andriy; Lloyd, Bryn; Chrisochoides, Nikos; Golby, Alexandra; Black, Peter M.; Warfield, Simon K.
2006-03-01
A major challenge in neurosurgical oncology is to achieve maximal tumor removal while avoiding postoperative neurological deficits. Therefore, estimation of the brain deformation during the image-guided tumor resection process is necessary. While anatomic MRI is highly sensitive for intracranial pathology, its specificity is limited: different pathologies may have a very similar appearance on anatomic MRI. Moreover, since fMRI and diffusion tensor imaging are not currently available during surgery, non-rigid registration of preoperative MR with intra-operative MR is necessary. This article presents a translational research effort that aims to integrate a number of state-of-the-art technologies for MRI-guided neurosurgery at the Brigham and Women's Hospital (BWH). Our ultimate goal is to routinely provide the neurosurgeons with accurate information about brain deformation during the surgery. The current system is tested during the weekly neurosurgeries in the open magnet at the BWH. The preoperative data is processed prior to the surgery, while both rigid and non-rigid registration algorithms are run in the vicinity of the operating room. The system is tested on 9 image datasets from 3 neurosurgery cases. A method based on edge detection is used to quantitatively validate the results; the 95% Hausdorff distance between points of the edges is used to estimate the accuracy of the registration. Overall, the minimum error is 1.4 mm, the mean error 2.23 mm, and the maximum error 3.1 mm. The mean ratio between brain deformation estimation and rigid alignment is 2.07, demonstrating that our results can be 2.07 times more precise than the current technology. The major contribution of the presented work is the rigid and non-rigid alignment of the pre-operative fMRI with intra-operative 0.5T MRI achieved during neurosurgery.
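The 95% Hausdorff distance used for validation above can be computed directly from two edge point sets, as in the sketch below; the point sets here are synthetic.

```python
# Sketch: 95th-percentile symmetric Hausdorff distance between two
# point sets, using KD-trees for nearest-neighbor queries.
import numpy as np
from scipy.spatial import cKDTree

def hausdorff95(a, b):
    """95% symmetric Hausdorff distance between point sets a and b."""
    d_ab, _ = cKDTree(b).query(a)          # each point of a to nearest of b
    d_ba, _ = cKDTree(a).query(b)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

rng = np.random.default_rng(0)
edges_pre = rng.uniform(0, 100, (500, 3))            # mm coordinates
edges_intra = edges_pre + rng.normal(0, 1.5, (500, 3))
print(f"HD95 = {hausdorff95(edges_pre, edges_intra):.2f} mm")
```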
Evaluation of Spontaneous Spinal Cerebrospinal Fluid Leaks Disease by Computerized Image Processing.
Yıldırım, Mustafa S; Kara, Sadık; Albayram, Mehmet S; Okkesim, Şükrü
2016-05-17
Spontaneous spinal cerebrospinal fluid leak (SSCFL) is a disease based on tears in the dura mater. Due to its widespread symptoms and low frequency, diagnosis is problematic. Diagnostic lumbar puncture is commonly used for diagnosing SSCFL, though it is invasive and may cause pain, inflammation, or new leakages. T2-weighted MR imaging is also used for diagnosis; however, the literature on T2-weighted MRI states that findings could be erroneous when differentiating diseased from control subjects. Another technique for diagnosis is CT myelography, but this has been suggested to be less successful than T2-weighted MRI and requires an initial lumbar puncture. This study aimed to develop an objective, computerized numerical analysis method using noninvasive routine magnetic resonance images that can be used in the evaluation and diagnosis of SSCFL. Brain boundaries were automatically detected using methods of mathematical morphology, and a distance transform was employed. According to normalized distances, average densities of certain sites were proportioned, and a numerical criterion related to cerebrospinal fluid distribution was calculated. The developed method differentiated 14 patients from 14 control subjects significantly (p = 0.0088, d = 0.958). Also, pre- and post-treatment MRI of four patients were obtained and analyzed; the results were statistically differentiated (p = 0.0320, d = 0.853). An original, noninvasive, and objective diagnostic test based on computerized image processing has been developed for the evaluation of SSCFL. To our knowledge, this is the first computerized image processing method for evaluation of the disease. Discrimination between patients and controls shows the validity of the method, and the post-treatment changes observed in four patients support this verdict.
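A sketch of the normalized-distance step of such an analysis: compute each brain voxel's distance to the boundary, normalize, and compare mean intensity between a peripheral band and a deep band. The band limits and the synthetic image are illustrative assumptions, not the study's actual criterion.

```python
# Sketch: distance-transform-based intensity ratio inside a brain mask.
# Band limits and data are stand-ins for illustration.
import numpy as np
from scipy import ndimage

def band_intensity_ratio(image, brain_mask, outer=0.15, inner=0.5):
    """Mean intensity in the peripheral band over the deep band."""
    dist = ndimage.distance_transform_edt(brain_mask)
    ndist = dist / dist.max()              # 0 at boundary, 1 at deepest point
    outer_band = brain_mask & (ndist < outer)
    inner_band = brain_mask & (ndist > inner)
    return image[outer_band].mean() / image[inner_band].mean()

mask = np.zeros((64, 64), bool)
mask[8:56, 8:56] = True                    # stand-in brain mask
img = np.random.rand(64, 64) + ndimage.distance_transform_edt(mask) * 0.01
print(f"ratio = {band_intensity_ratio(img, mask):.3f}")
```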
Three-dimensional spatiotemporal features for fast content-based retrieval of focal liver lesions.
Roy, Sharmili; Chi, Yanling; Liu, Jimin; Venkatesh, Sudhakar K; Brown, Michael S
2014-11-01
Content-based image retrieval systems for 3-D medical datasets still largely rely on 2-D image-based features extracted from a few representative slices of the image stack. Most 2-D features currently used in the literature not only model a 3-D tumor incompletely but are also computationally expensive, especially for high-resolution datasets. Radiologist-specified semantic labels are sometimes used along with image-based 2-D features to improve retrieval performance. Since radiological labels show large interuser variability, are often unstructured, and require user interaction, their use as lesion-characterizing features is highly subjective, tedious, and slow. In this paper, we propose a 3-D image-based spatiotemporal feature extraction framework for fast content-based retrieval of focal liver lesions. All the features are computer generated and are extracted from four-phase abdominal CT images. Retrieval performance and query processing times for the proposed framework are evaluated on a database of 44 hepatic lesions comprising five pathological types. A bull's eye percentage score above 85% is achieved for three of the five lesion pathologies, and for 98% of query lesions at least one lesion of the same type is ranked among the top two retrieved results. Experiments show that the proposed system's query processing is more than 20 times faster than that of previously published systems that use 2-D features. With fast computation time and high retrieval accuracy, the proposed system has the potential to be used as an assistant to radiologists for routine hepatic tumor diagnosis.
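For reference, the bull's eye score reported above is commonly computed as the fraction of same-class items among the top 2C retrieved results, where C is the size of the query's class (the usual convention, assumed here):

    def bulls_eye_score(ranked_labels, query_label, class_size):
        """Percentage of same-class lesions found in the top 2*class_size results.

        ranked_labels: pathology labels of database lesions, best match first.
        """
        window = ranked_labels[: 2 * class_size]
        hits = sum(1 for lab in window if lab == query_label)
        return 100.0 * hits / class_size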
Ryu, Kyeong H; Baek, Hye J; Cho, Soo B; Moon, Jin I; Choi, Bo H; Park, Sung E; An, Hyo J
2017-11-01
Detection of skull metastases is as important as detection of brain metastases because early diagnosis of skull metastases is a crucial determinant of treatment. However, the skull can be a blind spot when assessing metastases on routine brain magnetic resonance imaging (MRI). To the best of our knowledge, the finding of skull metastases on arterial spin labeling (ASL) has not been reported. ASL is an MRI sequence for evaluating cerebral blood flow using magnetized endogenous inflowing blood. This study used ASL as a routine sequence of the brain MRI protocol and describes 3 clinical cases of skull metastases identified by ASL, highlighting the clinical usefulness of ASL in detecting skull metastases. Three patients with known malignancy underwent brain MRI to evaluate for brain metastases. All of the skull metastases were conspicuously depicted on routine ASL images, and the lesions correlated well with other MRI sequences. All three patients received palliative chemotherapy and are being followed up regularly at the outpatient department. The routine use of ASL may help to detect lesions in blind spots, such as skull metastases, and to facilitate the evaluation of intracranial pathologies without the use of contrast materials in exceptional situations.
Toshiba General Hospital PACS for routine in- and outpatient clinics
NASA Astrophysics Data System (ADS)
Toshimitsu, Akihiro; Okazaki, Nobuo; Kura, Hiroyuki; Nishihara, Eitaro; Tsubura, Shinichi
1996-05-01
The Toshiba General Hospital introduced a departmental RIS/PACS (Radiology Information System/Picture Archiving and Communication System) in the radiology department in May 1993, and it has been used routinely since that time. In order to provide efficient means for clinicians to find and read many images, the system has been expanded to the neurosurgery and urology clinics and wards since May 1995, and five image-referring workstations now provide digital images to clinicians. In this paper we discuss an algorithm for image migration, one of the key issues in expanding successfully to outpatient clinics, and propose the WYWIWYG (what you want is what you get) image transfer logic: transfer the images that physicians want to refer to, without increasing the traffic between the image server and the referring workstations. We accomplish the WYWIWYG logic by prioritizing exams the physicians have not yet viewed and by finding historical exams according to modality, anatomy, and marking. Clinicians' comments from their first use of the system suggested that the PACS enables them to review images more efficiently than a film-based system. Our experience suggests that incorporating clinicians' reading patterns into the migration algorithm is key to the effective application of PACS in outpatient clinics.
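A hypothetical sketch of the prioritization described: unviewed exams first, then historical exams matching the current exam's modality and anatomy (field names and tie-breaking are illustrative assumptions, not the paper's actual logic):

    def prefetch_order(exams, current):
        """Order exams for transfer: unviewed first, then relevant priors, newest first."""
        def priority(exam):
            unviewed = 0 if not exam["viewed"] else 1
            relevant = 0 if (exam["modality"] == current["modality"]
                             and exam["anatomy"] == current["anatomy"]) else 1
            return (unviewed, relevant, -exam["date"].toordinal())
        return sorted(exams, key=priority)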
ACR appropriateness criteria blunt chest trauma.
Chung, Jonathan H; Cox, Christian W; Mohammed, Tan-Lucien H; Kirsch, Jacobo; Brown, Kathleen; Dyer, Debra Sue; Ginsburg, Mark E; Heitkamp, Darel E; Kanne, Jeffrey P; Kazerooni, Ella A; Ketai, Loren H; Ravenel, James G; Saleh, Anthony G; Shah, Rakesh D; Steiner, Robert M; Suh, Robert D
2014-04-01
Imaging is paramount in the setting of blunt trauma and is now the standard of care at any trauma center. Although anteroposterior radiography has inherent limitations, the ability to acquire a radiograph in the trauma bay with little interruption in clinical survey, monitoring, and treatment, as well as radiography's accepted role in screening for traumatic aortic injury, supports the routine use of chest radiography. Chest CT or CT angiography is the gold-standard routine imaging modality for detecting thoracic injuries caused by blunt trauma. There is disagreement on whether routine chest CT is necessary in all patients with histories of blunt trauma. Ultimately, the frequency and timing of chest CT imaging should be site specific and should depend on the local resources of the trauma center as well as patient status. Ultrasound may be beneficial in the detection of pneumothorax, hemothorax, and pericardial hemorrhage; transesophageal echocardiography is a first-line imaging tool in the setting of suspected cardiac injury. In blunt trauma, MRI and nuclear medicine likely play no role in the acute setting, although these modalities may be helpful as problem-solving tools after initial assessment. The ACR Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Gupta, Ajay; Gialdini, Gino; Lerario, Michael P; Baradaran, Hediyeh; Giambrone, Ashley; Navi, Babak B; Marshall, Randolph S; Iadecola, Costantino; Kamel, Hooman
2015-06-15
Magnetic resonance imaging of carotid plaque can aid in stroke risk stratification in patients with carotid stenosis. However, the prevalence of complicated carotid plaque in patients with cryptogenic stroke is uncertain, especially as assessed by plaque imaging techniques routinely included in acute stroke magnetic resonance imaging protocols. We assessed whether the magnetic resonance angiography-defined presence of intraplaque high-intensity signal (IHIS), a marker of intraplaque hemorrhage, is associated with ipsilateral cryptogenic stroke. Cryptogenic stroke patients with magnetic resonance imaging evidence of unilateral anterior circulation infarction and without hemodynamically significant (≥50%) stenosis of the cervical carotid artery were identified from a prospective stroke registry at a tertiary-care hospital. High-risk plaque was assessed by evaluating for IHIS on routine magnetic resonance angiography source images using a validated technique. To compare the presence of IHIS on the ipsilateral versus contralateral side within individual patients, we used McNemar's test for correlated proportions. A total of 54 carotid arteries in 27 unique patients were included. Six patients (22.2%) had IHIS-positive nonstenosing carotid plaque ipsilateral to the side of ischemic stroke, compared with none who had IHIS-positive carotid plaque contralateral to the side of stroke (P=0.01). Stroke severity measures, diagnostic evaluations, and prevalence of vascular risk factors did not differ between the IHIS-positive and IHIS-negative groups. Our findings suggest that a proportion of strokes classified as cryptogenic may be mechanistically related to complicated, nonhemodynamically significant cervical carotid artery plaque that can easily be detected by routine magnetic resonance imaging/magnetic resonance angiography acute stroke protocols. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
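The paired within-patient comparison uses McNemar's test; a minimal sketch with statsmodels, using the discordant-pair counts reported in the abstract (the exact-test p-value from this layout need not reproduce the paper's reported P=0.01, whose precise computation is not described):

    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Paired 2x2 table: rows = ipsilateral IHIS (+/-), cols = contralateral IHIS (+/-).
    # Six discordant pairs positive only on the stroke side, none the other way;
    # the concordant-negative pairs do not influence the test.
    table = np.array([[0, 6],
                      [0, 21]])
    print(mcnemar(table, exact=True).pvalue)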
Armand, Mehran; Armiger, Robert S.; Kutzer, Michael D.; Basafa, Ehsan; Kazanzides, Peter; Taylor, Russell H.
2012-01-01
Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines. PMID:22113773
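For context, the mean target registration error (mTRE) reported above is conventionally the mean distance between target points mapped by the estimated and gold-standard transforms; a minimal NumPy sketch under that convention, assuming 4x4 homogeneous matrices:

    import numpy as np

    def mtre(targets, T_est, T_gold):
        """Mean target registration error for (N, 3) target points and
        4x4 homogeneous transforms."""
        pts = np.hstack([targets, np.ones((len(targets), 1))])
        err = (pts @ T_est.T)[:, :3] - (pts @ T_gold.T)[:, :3]
        return np.linalg.norm(err, axis=1).mean()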
Ding, George X; Alaei, Parham; Curran, Bruce; Flynn, Ryan; Gossman, Michael; Mackie, T Rock; Miften, Moyed; Morin, Richard; Xu, X George; Zhu, Timothy C
2018-05-01
With radiotherapy having entered the era of image guidance, or image-guided radiation therapy (IGRT), imaging procedures are routinely performed for patient positioning and target localization. The imaging dose delivered may result in excessive dose to sensitive organs and potentially increase the chance of secondary cancers and, therefore, needs to be managed. This task group was charged with: a) providing an overview of imaging dose, including megavoltage electronic portal imaging (MV EPI), kilovoltage digital radiography (kV DR), Tomotherapy MV-CT, megavoltage cone-beam CT (MV-CBCT) and kilovoltage cone-beam CT (kV-CBCT); and b) providing general guidelines for commissioning dose calculation methods and managing imaging dose to patients. We briefly review the dose to radiotherapy (RT) patients resulting from different image guidance procedures and list typical organ doses resulting from MV and kV image acquisition procedures. We provide recommendations for managing the imaging dose, including different methods for its calculation and techniques for reducing it. The recommended threshold beyond which imaging dose should be considered in the treatment planning process is 5% of the therapeutic target dose. Although the imaging dose resulting from current kV acquisition procedures is generally below this threshold, the ALARA principle should always be applied in practice. Medical physicists should make radiation oncologists aware of the imaging doses delivered to patients under their care. Balancing ALARA with the requirement for effective target localization requires that imaging dose be managed by weighing risks and benefits to the patient. © 2018 American Association of Physicists in Medicine.
PET Imaging: Basics and New Trends
NASA Astrophysics Data System (ADS)
Dahlbom, Magnus
Positron Emission Tomography or PET is a noninvasive molecular imaging method used both in research to study biology and disease, and clinically as a routine diagnostic imaging tool. In PET imaging, the subject is injected with a tracer labeled with a positron-emitting isotope and is then placed in a scanner to localize the radioactive tracer in the body. The localization of the tracer utilizes the unique decay characteristics of isotopes decaying by positron emission. In the PET scanner, a large number of scintillation detectors use coincidence detection of the annihilation radiation that is emitted as a result of the positron decay. By collecting a large number of these coincidence events, together with tomographic image reconstruction methods, the 3-D distribution of the radioactive tracer in the body can be reconstructed. Depending on the type of tracer used, the distribution will reflect a particular biological process, such as glucose metabolism when fluoro-deoxyglucose is used. PET has evolved from a relatively inefficient single-slice imaging system with relatively poor spatial resolution to an efficient, high-resolution imaging modality which can acquire a whole-body scan in a few minutes. This chapter will describe the basic physics and instrumentation used in PET. The various corrections that are necessary to apply to the acquired data in order to produce quantitative images are also described. Finally, some of the latest trends in instrumentation development are also discussed.
On the analysis of time-of-flight spin-echo modulated dark-field imaging data
NASA Astrophysics Data System (ADS)
Sales, Morten; Plomp, Jeroen; Bouwman, Wim G.; Tremsin, Anton S.; Habicht, Klaus; Strobl, Markus
2017-06-01
Spin-Echo Modulated Small Angle Neutron Scattering with spatial resolution, i.e. quantitative Spin-Echo Dark Field Imaging, is an emerging technique coupling neutron imaging with spatially resolved quantitative small angle scattering information. However, the currently achievable modulation periods are relatively large, of the order of millimeters, and are superimposed on the images of the samples. So far this has required an independent reduction and analysis of the image and scattering information encoded in the measured data, involving extensive curve-fitting routines. Apart from requiring a priori decisions that potentially limit the extractable information content, this also hinders a straightforward judgment of data quality and information content. In contrast, we propose a significantly simplified routine applied directly to the measured data, which not only allows an immediate first assessment of data quality and defers decisions on potentially information-limiting reduction steps to a later and better informed stage, but also, as our results suggest, generally yields better analyses. In addition, the method makes it possible to drop the spatial-resolution detector requirement for non-spatially resolved Spin-Echo Modulated Small Angle Neutron Scattering.
Taylor, C R
2014-08-01
The traditional microscope, together with the "routine" hematoxylin and eosin (H & E) stain, remains the "gold standard" for diagnosis of cancer and other diseases; remarkably, it and the majority of associated biological stains are more than 150 years old. Immunohistochemistry has added to the repertoire of "stains" available. Because of the need for specific identification and even measurement of "biomarkers," immunohistochemistry has increased the demand for consistency of performance and interpretation of staining results. Rapid advances in the capabilities of digital imaging hardware and software now offer a realistic route to improved reproducibility, accuracy and quantification by utilizing whole slide digital images for diagnosis, education and research. There also are potential efficiencies in work flow and the promise of powerful new analytical methods; however, there also are challenges with respect to validation of the quality and fidelity of digital images, including the standard H & E stain, so that diagnostic performance by pathologists is not compromised when they rely on whole slide images instead of traditional stained tissues on glass slides.
[Proton imaging applications for proton therapy: state of the art].
Amblard, R; Floquet, V; Angellier, G; Hannoun-Lévi, J M; Hérault, J
2015-04-01
Proton therapy allows highly precise irradiation of the tumour volume with a low dose delivered to healthy tissues. The steep dose gradients and high treatment conformity require precise knowledge of the proton range in matter and of the target volume position relative to the beam. Proton imaging thus allows an improvement in treatment accuracy and, thereby, in treatment quality. Initially suggested in 1963, radiographic imaging with protons is still not used in clinical routine. The principal difficulty is the lack of spatial resolution induced by the multiple Coulomb scattering of protons off nuclei. Moreover, its realization for all clinical locations requires relatively high energies not previously considered for clinical routine. After being abandoned for some time in favor of X-ray technologies, research into new imaging methods using protons is back in the news because of the increasing number of proton radiation therapy centers in the world. This article presents a non-exhaustive state of the art in proton imaging. Copyright © 2015 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
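To see why multiple Coulomb scattering degrades spatial resolution, a sketch of the PDG Highland-type estimate of the RMS scattering angle (a standard formula, not taken from this article):

    import math

    def highland_theta0(beta, p_mev_c, x_over_x0, z=1.0):
        """RMS plane-projected multiple-scattering angle (rad), PDG formula:
        theta0 = 13.6 MeV / (beta*c*p) * z * sqrt(x/X0) * [1 + 0.038 ln(x/X0)]."""
        return (13.6 / (beta * p_mev_c)) * z * math.sqrt(x_over_x0) \
               * (1.0 + 0.038 * math.log(x_over_x0))

    # e.g. a 200 MeV proton (beta ~ 0.57, p ~ 644 MeV/c) crossing ~20 cm of water
    # (~0.55 X0) scatters by roughly 27 mrad, i.e. millimeter-scale blur at depth.
    print(highland_theta0(0.566, 644.0, 0.55))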
Investigation into process-induced de-aggregation of cohesive micronised API particles.
Hoffmann, Magnus; Wray, Patrick S; Gamble, John F; Tobyn, Mike
2015-09-30
The aim of this study was to assess the impact of unit processes on the de-aggregation of a cohesive micronised API within a pharmaceutical formulation using near-infrared chemical imaging. The impact on the primary API particles was also investigated using an image-based particle characterization system with integrated Raman analysis. The blended material was shown to contain large, API-rich domains distributed inhomogeneously across the sample, suggesting that the blending process was not aggressive enough to disperse aggregates of micronised drug particles. Cone milling, routinely used to improve the homogeneity of such cohesive formulations, was observed to substantially reduce the number and size of API-rich domains; however, several smaller API domains survived the milling process. Conveyance of the cone-milled formulation through the Alexanderwerk WP120 powder feed system completely dispersed all remaining aggregates. Importantly, powder feed transmission of the un-milled formulation was observed to produce an equally homogeneous API distribution. The size of the micronised primary drug particles remained unchanged during powder feed transmission. These findings provide further evidence that this powder feed system does induce shear, and is in fact better able to disperse aggregates of a cohesive micronised API within a blend than the blend-mill-blend step. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
VIRTUAL FRAME BUFFER INTERFACE
NASA Technical Reports Server (NTRS)
Wolfe, T. L.
1994-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor supplied user interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. The Virtual Frame Buffer Interface program makes all frame buffers appear as a generic frame buffer with a specified set of characteristics, allowing programmers to write code which will run unmodified on all supported hardware. The Virtual Frame Buffer Interface converts generic commands to actual device commands. The virtual frame buffer consists of a definition of capabilities and FORTRAN subroutines that are called by application programs. The virtual frame buffer routines may be treated as subroutines, logical functions, or integer functions by the application program. Routines are included that allocate and manage hardware resources such as frame buffers, monitors, video switches, trackballs, tablets and joysticks; access image memory planes; and perform alphanumeric font or text generation. The subroutines for the various "real" frame buffers are in separate VAX/VMS shared libraries allowing modification, correction or enhancement of the virtual interface without affecting application programs. The Virtual Frame Buffer Interface program was developed in FORTRAN 77 for a DEC VAX 11/780 or a DEC VAX 11/750 under VMS 4.X. It supports ADAGE IK3000, DEANZA IP8500, Low Resolution RAMTEK 9460, and High Resolution RAMTEK 9460 Frame Buffers. It has a central memory requirement of approximately 150K. This program was developed in 1985.
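The same virtual-interface idea can be sketched in modern terms; a minimal Python analogue (names and opcodes hypothetical, not from the original FORTRAN 77 package) in which applications call only the generic interface and device drivers translate generic commands to actual device commands:

    from abc import ABC, abstractmethod

    class FrameBuffer(ABC):
        """Generic frame buffer: applications code against this interface only."""
        @abstractmethod
        def write_plane(self, plane: int, data: bytes) -> None: ...
        @abstractmethod
        def draw_text(self, x: int, y: int, text: str) -> None: ...

    class Ramtek9460(FrameBuffer):
        """Device driver translating generic calls into device commands."""
        def write_plane(self, plane, data):
            self._send(0x10, plane, data)            # hypothetical device opcode
        def draw_text(self, x, y, text):
            self._send(0x22, x, y, text.encode())    # hypothetical device opcode
        def _send(self, *words):
            pass                                     # device I/O elided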
Automated Inspection of Power Line Corridors to Measure Vegetation Undercut Using Uav-Based Images
NASA Astrophysics Data System (ADS)
Maurer, M.; Hofer, M.; Fraundorfer, F.; Bischof, H.
2017-08-01
Power line corridor inspection is a time-consuming task that is still performed mostly manually. As the development of UAVs has made huge progress in recent years and photogrammetric computer vision systems have become well established, it is time to further automate inspection tasks. In this paper we present an automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced, semantically segmented, and inter-class distance measurements are calculated. The presented pipeline automatically selects the proper 3D reconstruction method for wiry objects (power lines) on the one hand and solid objects (the surroundings) on the other. The automated selection is realized by performing pixel-wise semantic segmentation of the input images using a Fully Convolutional Neural Network. Because the semantic 3D reconstructions are geo-referenced, documentation of areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation on the 3D reconstruction and show that the automated semantic separation into wiry and dense objects improves the quality of the vegetation undercut inspection. We also show that the semantic segmentation generalizes to datasets acquired with different acquisition routines and in different seasons.
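A minimal sketch of the inter-class distance measurement, assuming the semantic 3D reconstruction yields the two classes as (N, 3) geo-referenced point arrays, with a hypothetical clearance threshold:

    import numpy as np
    from scipy.spatial import cKDTree

    def undercut_violations(wire_pts, vegetation_pts, clearance_m=5.0):
        """Return vegetation points closer to any power line than the clearance."""
        dist, _ = cKDTree(wire_pts).query(vegetation_pts)  # nearest-wire distance
        close = dist < clearance_m
        return vegetation_pts[close], dist[close]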
NASA Astrophysics Data System (ADS)
Liu, Zhen; Pu, Fang; Liu, Jianhua; Jiang, Liyan; Yuan, Qinghai; Li, Zhengqiang; Ren, Jinsong; Qu, Xiaogang
2013-05-01
Novel nanoparticulate contrast agents with low systemic toxicity and inexpensive character have exhibited more advantages over routinely used small molecular contrast agents for the diagnosis and prognosis of disease. Herein, we designed and synthesized PEGylated hybrid ytterbia nanoparticles as high-performance nanoprobes for X-ray computed tomography (CT) imaging and magnetic resonance (MR) imaging both in vitro and in vivo. These well-defined nanoparticles were facile to prepare and cost-effective, meeting the criteria as a biomedical material. Compared with routinely used Iobitridol in clinic, our PEG-Yb2O3:Gd nanoparticles could provide much significantly enhanced contrast upon various clinical voltages ranging from 80 kVp to 140 kVp owing to the high atomic number and well-positioned K-edge energy of ytterbium. By the doping of gadolinium, our nanoparticulate contrast agent could perform perfect MR imaging simultaneously, revealing similar organ enrichment and bio-distribution with the CT imaging results. The super improvement in imaging efficiency was mainly attributed to the high content of Yb and Gd in a single nanoparticle, thus making these nanoparticles suitable for dual-modal diagnostic imaging with a low single-injection dose. In addition, detailed toxicological study in vitro and in vivo indicated that uniformly sized PEG-Yb2O3:Gd nanoparticles possessed excellent biocompatibility and revealed overall safety. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr00491k
Genomics, proteomics, MEMS and SAIF: which role for diagnostic imaging?
Grassi, R; Lagalla, R; Rotondo, A
2008-09-01
In these three words--genomics, proteomics and nanotechnologies--lies the future of medicine of the third millennium, which will be characterised by more careful attention to disease prevention, diagnosis and treatment. Molecular imaging appears to satisfy this requirement. It is emerging as a new science that brings together molecular biology and in vivo imaging and represents the key to the application of personalized medicine. Micro-PET (positron emission tomography), micro-SPECT (single photon emission computed tomography), micro-CT (computed tomography), micro-MR (magnetic resonance), micro-US (ultrasound) and optical imaging are all molecular imaging techniques, several of which are applied only in preclinical settings on animal models. Others, however, are applied routinely in both clinical and preclinical settings. Research on small animals allows investigation of the genesis and development of diseases, as well as drug efficacy and the development of personalized therapies, through the study of biological processes that precede the expression of common symptoms of a pathology. Advances in molecular imaging were made possible only by collaboration among scientists in the fields of radiology, chemistry, molecular and cell biology, physics, mathematics, pharmacology, gene therapy and oncology. Although researchers have traditionally limited their interactions, it is only by increasing these connections that the current gaps in terminology, methods and approaches that inhibit scientific progress can be eliminated.
Rapid motion compensation for prostate biopsy using GPU.
Shen, Feimo; Narayanan, Ramkrishnan; Suri, Jasjit S
2008-01-01
Image-guided procedures have become routine in medicine. Due to the three-dimensional (3-D) structure of the target organs, two-dimensional (2-D) image acquisition is gradually being replaced by 3-D imaging. Specifically, in the diagnosis of prostate cancer, biopsy can be performed using 3-D transrectal ultrasound (TRUS) image guidance. Because prostatic cancers are multifocal, it is crucial to accurately guide biopsy needles towards planned targets. Further, the gland tends to move due to external physical disturbances, discomfort introduced by the procedure, or intrinsic peristalsis. As a result, the exact position of the gland must be rapidly updated so as to correspond with the 3-D TRUS volume originally acquired prior to biopsy planning. A graphics processing unit (GPU) is used in this study to compute rapid updates, performing 3-D motion compensation via registration of the live 2-D image and the acquired 3-D TRUS volume. The parallel computational framework of the GPU is exploited, resulting in mean compute times of 0.46 seconds for updating the position of a live 2-D buffer image containing 91,000 pixels. A 2x sub-sampling resulted in a further improvement to 0.19 seconds. With the increase in GPU multiprocessors and sub-sampling, we observe that real-time motion compensation can be achieved.
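A toy sketch of the similarity computation at the heart of such 2-D-to-3-D updates: normalized cross-correlation of the live 2-D image against resampled slices of the TRUS volume (an exhaustive slice search here stands in for the actual GPU-parallel registration):

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized images."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())

    def best_slice(live_2d, volume):
        """Pick the volume slice most similar to the live image (toy search)."""
        scores = [ncc(live_2d, volume[k]) for k in range(volume.shape[0])]
        return int(np.argmax(scores))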
Optimization of Whole-body Zebrafish Sectioning Methods for Mass Spectrometry Imaging
Mass spectrometry imaging (MSI) methods and protocols have become widely adapted to a variety of tissues and species. However, the MSI literature lacks information on whole-body cryosection preparation for the zebrafish (ZF; Danio rerio), a model organism routinely used in devel...
Monitoring landslide dynamics using timeseries of UAV imagery
NASA Astrophysics Data System (ADS)
de Jong, S. M.; Van Beek, L. P.
2017-12-01
Landslides occur worldwide, can have large economic impact, and sometimes result in fatalities. Multiple factors are important in landslide processes and can make an area prone to landslide activity. Human factors such as drainage, removal of vegetation, and land clearing may cause a landslide. Other environmental factors, such as topography and the shear strength of the slope material, are more difficult to control. Landslides are typically triggered by heavy rainfall events, and sometimes by earthquakes or undercutting by a river. The collection of data about existing landslides in a given area is important for predicting future landslides in that region. We have set up a monitoring program for landslides using cameras aboard Unmanned Aerial Vehicles (UAVs). UAVs with cameras can collect ultra-high-resolution images and can be operated in a very flexible way; they just fit in the back of a car. In this study we used UAVs to collect a time series of high-resolution images over landslides in France and Australia. The UAV images were processed into orthomosaics and ortho-DEMs using Structure from Motion (SfM), which generally yields centimeter precision in the horizontal and vertical directions. Such multi-temporal datasets enable the detection of landslide area, the leading-edge slope, temporal patterns, and volumetric changes of particular areas of the landslide. We measured and computed surface movement of the landslide using the COSI-Corr image correlation algorithm with ground validation. Our study shows the possibilities of generating accurate Digital Surface Models (DSMs) of landslides using images collected with a UAV. The technique is robust and repeatable, such that a substantial time series of datasets can be routinely collected. We show that a time series of UAV images can be used to map landslide movements with centimeter accuracy. We also found that the slope of the leading edge can behave cyclically, suggesting that its steepness can be used to predict the next forward surge of the leading edge.
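COSI-Corr is a dedicated package, but the underlying idea, sub-pixel correlation of co-registered image patches between dates, can be sketched with scikit-image's phase correlation (a simplification, not the actual COSI-Corr algorithm):

    import numpy as np
    from skimage.registration import phase_cross_correlation

    def displacement_map(ortho_t0, ortho_t1, win=64, step=32):
        """Per-window (dy, dx) surface displacement between two orthomosaics."""
        rows = range(0, ortho_t0.shape[0] - win, step)
        cols = range(0, ortho_t0.shape[1] - win, step)
        shifts = np.zeros((len(rows), len(cols), 2))
        for i, r in enumerate(rows):
            for j, c in enumerate(cols):
                shift, _, _ = phase_cross_correlation(
                    ortho_t0[r:r + win, c:c + win],
                    ortho_t1[r:r + win, c:c + win],
                    upsample_factor=20)          # sub-pixel estimate
                shifts[i, j] = shift
        return shifts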
Anti-nuclear antibody screening using HEp-2 cells.
Buchner, Carol; Bryant, Cassandra; Eslami, Anna; Lakos, Gabriella
2014-06-23
The American College of Rheumatology position statement on ANA testing stipulates the use of IIF as the gold standard method for ANA screening(1). Although IIF is an excellent screening test in expert hands, the technical difficulties of processing and reading IIF slides--such as labor-intensive slide processing, manual reading, the need for experienced, trained technologists and the use of a darkroom--make the IIF method difficult to fit into the workflow of modern, automated laboratories. The first and crucial step towards high-quality ANA screening is careful slide processing. This procedure is labor intensive and requires full understanding of the process, as well as attention to detail and experience. Slide reading is performed by fluorescence microscopy in darkrooms by trained technologists who are familiar with the various patterns in the context of the cell cycle and the morphology of interphase and dividing cells. Given that IIF is the first-line screening tool for SARD, understanding the steps needed to correctly perform this technique is critical. Recently, digital imaging systems have been developed for the automated reading of IIF slides. These systems, such as the NOVA View Automated Fluorescent Microscope, are designed to streamline the routine IIF workflow. NOVA View acquires and stores high-resolution digital images of the wells, thereby separating image acquisition from interpretation; images are viewed and interpreted on high-resolution computer monitors. It stores images for future reference and supports the operator's interpretation by providing fluorescent light intensity data on the images. It also preliminarily categorizes results as positive or negative and provides pattern recognition for positive samples. In summary, it eliminates the need for a darkroom and automates and streamlines the IIF reading/interpretation workflow. Most importantly, it increases consistency between readers and readings. Moreover, with the use of barcoded slides, transcription errors are eliminated through sample traceability and positive patient identification. This results in increased patient data integrity and safety. The overall goal of this video is to demonstrate the IIF procedure, including slide processing, identification of common IIF patterns, and the introduction of new advancements to simplify and harmonize this technique.
An interesting case report of vertebral artery dissection following polytrauma.
Acharya, Vikas; Chandrasekaran, Suresh; Nair, Sujit
2016-01-01
The authors present an interesting case of a 19-year-old male who presented as a polytrauma patient following a fall from a height. He was initially managed on the intensive care unit with intracranial pressure bolt monitoring after being intubated and sedated and having his other traumatic injuries stabilized. Upon attempting to wean sedation and extubate him, a repeat CT scan of the head was undertaken and showed a new area suggestive of cerebral infarction. Further imaging found that he had a cervical vertebral artery dissection following this polytrauma mode of injury. The incidence of vertebral artery dissection following generalized or local trauma is rising, but routine imaging/screening of these patients is not undertaken. Our report displays select images related to this case and emphasizes consideration of routine imaging in head and neck traumatic injuries so that internal carotid and/or vertebral artery dissections can be diagnosed much earlier. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Tang, Hui; Yu, Nan; Jia, Yongjun; Yu, Yong; Duan, Haifeng; Han, Dong; Ma, Guangming; Ren, Chenglong; He, Taiping
2018-01-01
To evaluate the image quality improvement and noise reduction in routine-dose, non-enhanced chest CT imaging using a new-generation adaptive statistical iterative reconstruction algorithm (ASIR-V) in comparison with the ASIR algorithm, 30 patients who underwent routine-dose, non-enhanced chest CT using a GE Discovery CT750HU scanner (GE Healthcare, Waukesha, WI) were included. The scan parameters included a tube voltage of 120 kVp, automatic tube current modulation to obtain a noise index of 14 HU, a rotation time of 0.6 s, a pitch of 1.375:1 and a slice thickness of 5 mm. After scanning, all scans were reconstructed with the recommended level of 40% ASIR for comparison purposes and with ASIR-V percentages from 10% to 100% in 10% increments. The CT attenuation values and SD of the subcutaneous fat, back muscle and descending aorta were measured at the level of the tracheal carina on all reconstructed images. The signal-to-noise ratio (SNR) was calculated with SD representing image noise. The subjective image quality was independently evaluated by two experienced radiologists. For all ASIR-V images, the objective image noise (SD) of fat, muscle and aorta decreased and SNR increased with increasing ASIR-V percentage. The SD of 30% ASIR-V to 100% ASIR-V was significantly lower than that of 40% ASIR (p < 0.05). In terms of subjective image evaluation, all ASIR-V reconstructions had good diagnostic acceptability. However, the 50% ASIR-V to 70% ASIR-V series showed significantly superior visibility of small structures compared with 40% ASIR and the other ASIR-V percentages (p < 0.05), and 60% ASIR-V was the best of all ASIR-V series, with the highest subjective image quality. Image sharpness was significantly decreased in images reconstructed with 80% ASIR-V and higher. In routine-dose, non-enhanced chest CT, ASIR-V shows greater potential for reducing image noise and artefacts while maintaining image sharpness compared with the recommended level of 40% ASIR. Combining the objective and subjective evaluation of images, non-enhanced chest CT images reconstructed with 60% ASIR-V have the highest image quality. Advances in knowledge: This is the first clinical study to evaluate the clinical value of ASIR-V in the same patients using the same CT scanner for non-enhanced chest CT scans. It suggests that ASIR-V provides better image quality and higher diagnostic confidence than the ASIR algorithm.
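The objective metrics follow the usual ROI convention, with the standard deviation as image noise and SNR = mean CT number / SD; a minimal sketch:

    import numpy as np

    def roi_stats(image, mask):
        """Mean attenuation (HU), noise (SD), and SNR inside an ROI mask."""
        vals = image[mask]
        mean_hu, sd = float(vals.mean()), float(vals.std(ddof=1))
        return mean_hu, sd, mean_hu / sd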
Jeong, Eun-Kee; Sung, Young-Hoon; Kim, Seong-Eun; Zuo, Chun; Shi, Xianfeng; Mellon, Eric A; Renshaw, Perry F
2011-08-01
High-energy phosphate metabolism, which allows the synthesis and regeneration of adenosine triphosphate (ATP), is a vital process for neuronal survival and activity. In particular, creatine kinase (CK) serves as an energy reservoir for the rapid buffering of ATP levels. Altered CK enzyme activity, reflecting compromised high-energy phosphate metabolism or mitochondrial dysfunction in the brain, can be assessed using magnetization transfer (MT) MRS. MT (31)P MRS has been used to measure the forward CK reaction rate in animal and human brain, employing a surface radiofrequency coil. However, long acquisition times and excessive radiofrequency irradiation prevent these methods from being used routinely for clinical evaluations. In this article, a new MT (31)P MRS method is presented, which can be practically used to measure the CK forward reaction rate constant in a clinical MRI system employing a volume head (31)P coil for spatial localization, without contamination from the scalp muscle, and an acquisition time of 30 min. Other advantages associated with the method include radiofrequency homogeneity within the regions of interest of the brain using a volume coil with image-selected in vivo spectroscopy localization, and reduction of the specific absorption rate using nonadiabatic radiofrequency pulses for MT saturation. The mean value of k(f) was measured as 0.320 ± 0.075 s(-1) from 10 healthy volunteers with an age range of 18-40 years. These values are consistent with those obtained using earlier methods, and the technique may be used routinely to evaluate energetic processes in the brain on a clinical MRI system. Copyright © 2010 John Wiley & Sons, Ltd.
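For reference, the forward rate constant above is conventionally obtained from the Forsén-Hoffman saturation-transfer relations (the standard treatment, assumed here; the article's exact formulation may differ): saturating gamma-ATP drives the PCr signal to a steady state with an apparent relaxation time, from which

    \frac{1}{T_1^{\mathrm{app}}} = \frac{1}{T_1} + k_f,
    \qquad
    \frac{M_{\mathrm{ss}}}{M_0} = \frac{T_1^{\mathrm{app}}}{T_1},
    \qquad
    k_f = \frac{1 - M_{\mathrm{ss}}/M_0}{T_1^{\mathrm{app}}}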
NASA Astrophysics Data System (ADS)
Denker, Carsten; Kuckein, Christoph; Verma, Meetu; González Manrique, Sergio J.; Diercke, Andrea; Enke, Harry; Klar, Jochen; Balthasar, Horst; Louis, Rohan E.; Dineva, Ekaterina
2018-05-01
In high-resolution solar physics, the volume and complexity of photometric, spectroscopic, and polarimetric ground-based data significantly increased in the last decade, reaching data acquisition rates of terabytes per hour. This is driven by the desire to capture fast processes on the Sun and the necessity for short exposure times “freezing” the atmospheric seeing, thus enabling ex post facto image restoration. Consequently, large-format and high-cadence detectors are nowadays used in solar observations to facilitate image restoration. Based on our experience during the “early science” phase with the 1.5 m GREGOR solar telescope (2014–2015) and the subsequent transition to routine observations in 2016, we describe data collection and data management tailored toward image restoration and imaging spectroscopy. We outline our approaches regarding data processing, analysis, and archiving for two of GREGOR’s post-focus instruments (see http://gregor.aip.de), i.e., the GREGOR Fabry–Pérot Interferometer (GFPI) and the newly installed High-Resolution Fast Imager (HiFI). The heterogeneous and complex nature of multidimensional data arising from high-resolution solar observations provides an intriguing but also a challenging example for “big data” in astronomy. The big data challenge has two aspects: (1) establishing a workflow for publishing the data for the whole community and beyond and (2) creating a collaborative research environment (CRE), where computationally intense data and postprocessing tools are colocated and collaborative work is enabled for scientists of multiple institutes. This requires either collaboration with a data center or frameworks and databases capable of dealing with huge data sets based on virtual observatory (VO) and other community standards and procedures.
NASA Astrophysics Data System (ADS)
Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.
1999-05-01
PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient and patient friendly, and would allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
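A rough modern analogue of the factor-analysis step, sketched with scikit-learn's non-negative matrix factorization in place of the paper's FA (a substitution, not the original algorithm); the dynamic study is reshaped to a (frames x voxels) matrix and the factor curves serve as candidate blood and tissue TACs:

    import numpy as np
    from sklearn.decomposition import NMF

    def extract_tacs(dynamic, n_factors=2):
        """dynamic: (T, X, Y, Z) image series -> (T, n_factors) factor curves."""
        frames = dynamic.reshape(dynamic.shape[0], -1)     # (T, voxels)
        model = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
        curves = model.fit_transform(np.clip(frames, 0, None))  # enforce >= 0
        return curves          # columns: candidate blood / tissue TACs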
A Freeware Path to Neutron Computed Tomography
NASA Astrophysics Data System (ADS)
Schillinger, Burkhard; Craft, Aaron E.
Neutron computed tomography has become a routine method at many neutron sources due to the availability of digital detection systems, powerful computers and advanced software. The commercial packages Octopus by Inside Matters and VGStudio by Volume Graphics have been established as a quasi-standard for high-end computed tomography. However, these packages require a stiff investment and are available to users only on-site at the imaging facility for their data processing. There is a demand from users for image processing software they can use at home for further data processing; in addition, neutron computed tomography is now being introduced even at smaller and older reactors, whose operators need to show a first working tomography setup before they can obtain a budget to build an advanced tomography system. Several packages are available on the web for free; however, these were developed for X-rays or synchrotron radiation and are not immediately usable for neutron computed tomography. Three reconstruction packages and three 3D viewers have been identified and used successfully, even for gigabyte datasets. This paper is not a scientific publication in the classic sense but is intended as a review providing searchable help to make the described packages usable for the tomography community. It presents the necessary additional preprocessing in ImageJ, some workarounds for bugs in the software, and undocumented or badly documented parameters that need to be adapted for neutron computed tomography. The result is a slightly complicated but surprisingly high-quality path to neutron computed tomography images in 3D, though not a replacement for the even more powerful commercial software mentioned above.
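The typical preprocessing such a pipeline needs before reconstruction is open-beam/dark-field normalization of each projection; a minimal sketch of that standard correction (in ImageJ the same step is usually done with the image calculator):

    import numpy as np

    def normalize_projection(proj, open_beam, dark, eps=1e-6):
        """Flat-field correction: transmission = (I - dark) / (I0 - dark)."""
        trans = (proj - dark) / np.maximum(open_beam - dark, eps)
        return -np.log(np.clip(trans, eps, None))   # attenuation line integrals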
Deng, Hang; Fitts, Jeffrey P.; Peters, Catherine A.
2016-02-01
This paper presents a new method—the Technique of Iterative Local Thresholding (TILT)—for processing 3D X-ray computed tomography (xCT) images for visualization and quantification of rock fractures. The TILT method includes the following advancements. First, custom masks are generated by a fracture-dilation procedure, which significantly amplifies the fracture signal on the intensity histogram used for local thresholding. Second, TILT is particularly well suited for fracture characterization in granular rocks because the multi-scale Hessian fracture (MHF) filter has been incorporated to distinguish fractures from pores in the rock matrix. Third, TILT wraps the thresholding and fracture isolation steps in an optimized iterative routine for binary segmentation, minimizing human intervention and enabling automated processing of large 3D datasets. As an illustrative example, we applied TILT to 3D xCT images of reacted and unreacted fractured limestone cores. Other segmentation methods were also applied to provide insights regarding variability in image processing. The results show that TILT significantly enhanced separability of grayscale intensities, outperformed the other methods in automation, and was successful in isolating fractures from the porous rock matrix. Because the other methods are more likely to misclassify fracture edges as void and/or have limited capacity in distinguishing fractures from pores, those methods estimated larger fracture volumes (up to 80%), surface areas (up to 60%), and roughness (up to a factor of 2). In conclusion, these differences in fracture geometry would lead to significant disparities in hydraulic permeability predictions, as determined by 2D flow simulations.
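A much-simplified sketch of the iterative local-thresholding idea as described in the abstract (my paraphrase, not the authors' code): dilate the current fracture mask so the fracture dominates the local histogram, threshold within the dilated region, and iterate to convergence:

    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    def tilt_like_segmentation(volume, seed_mask, dilate_iter=3, max_rounds=10):
        """Iterative local thresholding restricted to a dilated fracture mask."""
        mask = seed_mask.copy()
        for _ in range(max_rounds):
            local = ndimage.binary_dilation(mask, iterations=dilate_iter)
            t = threshold_otsu(volume[local])   # histogram dominated by fracture
            new_mask = local & (volume < t)     # fractures darker than matrix
            if np.array_equal(new_mask, mask):
                break
            mask = new_mask
        return mask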
LORENZ: a system for planning long-bone fracture reduction
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Burgstaller, Wolfgang; Wirth, Joachim; Baumann, Bernard; Jacob, Augustinus L.; Bieri, Kurt; Traud, Stefan; Strub, Michael; Regazzoni, Pietro; Messmer, Peter
2003-05-01
Long bone fractures are among the most common injuries encountered in routine clinical trauma surgery. Preoperative assessment and decision making are usually based on standard 2D radiographs of the injured limb. Since a 3D imaging modality such as computed tomography (CT) is not used for diagnosis in clinical routine, we have designed LORENZ, a fracture reduction planning tool based on such standard radiographs. Given the considerable success of so-called image-free navigation systems for total knee replacement in orthopaedic surgery, we assume that a similar tool for long bone fracture reposition should have considerable impact on computer-aided trauma surgery in a standard clinical routine setup. The case of long bone fracture reduction is, however, somewhat more complicated, since more than scale-independent angles indicating biomechanical measures such as varus and valgus are involved: reduction path planning requires that the individual anatomy and the classification of the fracture be taken into account. In this paper, we present the basic ideas of this planning tool, its current state, and the methodology chosen. LORENZ takes one or more conventional radiographs of the broken limb as input data. In addition, one or more X-rays of the opposite healthy bone are taken and mirrored if necessary. The most adequate CT model is selected from a database; currently, this is achieved by using a scale-space approach on the digitized X-ray images and comparing standard perspective renderings to these X-rays. After finding a CT volume with a similar bone, a triangulated surface model is generated, and the surgeon can break the bone and arrange the fragments in 3D according to the X-ray images of the broken bone. Common osteosynthesis plates and implants can be loaded from CAD datasets and are visualized as well. In addition, LORENZ renders virtual X-ray views of the fracture reduction process. The hybrid surface/voxel rendering engine of LORENZ also features full collision detection of fragments and implants using the RAPID collision detection library. The reduction path is saved, and a TCP/IP interface to a robot for executing the reduction was added. LORENZ is platform independent and was programmed using Qt, AVW and OpenGL. We present a prototype for computer-aided fracture reduction planning based on standard radiographs. First tests on clinical CT-X-ray image pairs showed good performance; current efforts focus on improving the speed of model retrieval by using orthonormal image moment decomposition and on clinical evaluation for both training and surgical planning purposes. Furthermore, user-interface aspects are currently under evaluation and will be discussed.
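The virtual X-ray views mentioned above are digitally reconstructed radiographs (DRRs); in the simplest parallel-projection form, a toy version of what a voxel renderer computes, they reduce to attenuation sums along one volume axis:

    import numpy as np

    def parallel_drr(ct_volume, axis=0):
        """Toy digitally reconstructed radiograph: exponential attenuation
        of a parallel beam summed along one axis of a CT volume (mu values)."""
        line_integrals = ct_volume.sum(axis=axis)
        return np.exp(-line_integrals)     # simulated film intensity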
Pharmacologic intervention as an alternative to exercise stress
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gould, K.L.
1987-04-01
Although thallium exercise imaging has served an important role in clinical cardiology, it is significantly limited by suboptimal sensitivity and specificity, particularly in asymptomatic men. The increasing recognition of silent myocardial ischemia, the significant prevalence of coronary artery disease in asymptomatic middle-aged men, and the occurrence of myocardial infarction without preceding symptoms in 60% of cases emphasize the need for a more definitive, noninvasive diagnostic test for the presence of coronary artery disease suitable for screening in asymptomatic or symptomatic patients. Intravenous dipyridamole combined with handgrip stress provides a potent stimulus for purposes of diagnostic perfusion imaging. Although planar and single photon emission computed tomography (SPECT) imaging also have played an important role, these techniques are seriously hindered by their inability to quantitate radiotracer uptake or image modest differences in maximum relative flow caused by coronary artery stenosis. Accordingly, the combination of dipyridamole-handgrip stress with positron imaging of myocardial perfusion has become a powerful diagnostic tool suitable for routine clinical use. With the availability of generator-produced rubidium-82 and dedicated, clinically oriented positron cameras, the routine application of positron imaging to clinical cardiology has become feasible. 75 references.
Lithospheric Structure and Dynamics: Insights Facilitated by the IRIS/PASSCAL Facility
NASA Astrophysics Data System (ADS)
Meltzer, A.
2002-12-01
Through the development of community-based facilities in portable array seismology, a wide range of seismic methods are now standard tools for imaging the Earth's interior, extending geologic observations made at the surface to depth. The IRIS/PASSCAL program provides the seismological community with the ability to routinely field experimental programs, from high-resolution seismic reflection profiling of the near surface to lithospheric-scale imaging with both active and passive source arrays, to understand the tectonic evolution of continents, how they are assembled, disassembled, and modified through time. As our ability to record and process large volumes of data has improved, we have moved from simple 1-D velocity models and 2-D structural cross sections of the subsurface to 3-D and 4-D images that correlate complex surface tectonics to processes in the Earth's interior. Data from individual IRIS/PASSCAL experiments have fostered multidisciplinary studies, bringing together geologists, geochemists, and geophysicists to work on common problems. As data are collected from a variety of tectonic environments around the globe, common elements begin to emerge. We now recognize and study the inherent lateral and vertical heterogeneity in the crust and mantle lithosphere and its role in controlling deformation, the importance of low-velocity mobile mantle in supporting topography, and the importance of fluids and fluid migration in magmatic and deformational processes. We can image and map faults, fault zones, and fault networks to study them as systems rather than isolated planes of deformation, to better understand earthquake nucleation, rupture, and propagation. An additional benefit of these community-based facilities is the pooling of resources to develop effective and sustainable education and outreach programs. These programs attract new students to pursue careers in earth science, engage the general public in the scientific enterprise, raise the profile of the earth sciences, and reveal the importance of earth processes in shaping the environment in which we live. Future challenges facing our community include continued evolution of existing facilities to keep pace with scientific inquiry, routinely utilizing fully 3-D and, where appropriate, 4-D data sets to understand earth structure and dynamics, and the manipulation and analysis of large multidisciplinary data sets. Community models should be considered as a mechanism to integrate, analyze, and share data and results within a process-oriented framework. Exciting developments on the horizon include EarthScope. To maximize the potential for significant advances in our understanding of tectonic processes, observations from new EarthScope facilities must be integrated with additional geologic data sets of similar quality and resolution. New real-time data streams combined with new data integration, analysis, and visualization tools will provide us with the ability to integrate data across a continuous range of spatial scales, providing a new and coherent view of lithospheric dynamics from local to plate scale.
Kluge, Annette; Gronau, Norbert
2018-01-01
To cope with the already large, and ever increasing, amount of information stored in organizational memory, “forgetting,” as an important human memory process, might be transferred to the organizational context. Especially in intentionally planned change processes (e.g., change management), forgetting is an important precondition to impede the recall of obsolete routines and adapt to new strategic objectives accompanied by new organizational routines. We first comprehensively review the literature on the need for organizational forgetting and particularly on accidental vs. intentional forgetting. We discuss the current state of the art of theory and empirical evidence on forgetting from cognitive psychology in order to infer mechanisms applicable to the organizational context. In this respect, we emphasize retrieval theories and the relevance of retrieval cues important for forgetting. Subsequently, we transfer the empirical evidence that the elimination of retrieval cues leads to faster forgetting to the forgetting of organizational routines, as routines are part of organizational memory. We then propose a classification of cues (context, sensory, business process-related cues) that are relevant in the forgetting of routines, and discuss a meta-cue called the “situational strength” cue, which is relevant if cues of an old and a new routine are present simultaneously. Based on the classification as business process-related cues (information, team, task, object cues), we propose mechanisms to accelerate forgetting by eliminating specific cues based on the empirical and theoretical state of the art. We conclude that in intentional organizational change processes, the elimination of cues to accelerate forgetting should be used in change management practices. PMID:29449821
Itsukage, Shizu; Sowa, Yoshihiro; Goto, Mariko; Taguchi, Tetsuya; Numajiri, Toshiaki
2017-01-01
Objective: Preoperative prediction of breast volume is important in the planning of breast reconstructive surgery. In this study, we prospectively estimated the accuracy of measurement of breast volume using data from 2 routine modalities, mammography and magnetic resonance imaging, by comparison with volumes of mastectomy specimens. Methods: The subjects were 22 patients (24 breasts) who were scheduled to undergo total mastectomy for breast cancer. Preoperatively, magnetic resonance imaging volume measurement was performed using a medical imaging system and the mammographic volume was calculated using a previously proposed formula. Volumes of mastectomy specimens were measured intraoperatively using a method based on Archimedes' principle and Newton's third law. Results: The average breast volumes measured on magnetic resonance imaging and mammography were 318.47 ± 199.4 mL and 325.26 ± 217.36 mL, respectively. The correlation coefficients with mastectomy specimen volumes were 0.982 for magnetic resonance imaging and 0.911 for mammography. Conclusions: Breast volume measurement using magnetic resonance imaging was highly accurate but requires data analysis software. In contrast, breast volume measurement with mammography requires only a simple formula and is sufficiently accurate, although the accuracy was lower than that obtained with magnetic resonance imaging. These results indicate that mammography could be an alternative modality for breast volume measurement as a substitute for magnetic resonance imaging.
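As a hedged illustration of the agreement analysis the abstract above reports, the sketch below computes a Pearson correlation between modality-based volume estimates and specimen volumes; the numbers shown are placeholders, not the study's data.

```python
import numpy as np

def pearson_r(estimated_ml, specimen_ml):
    """Pearson correlation between modality-based breast volume
    estimates and intraoperative specimen volumes (both in mL)."""
    return np.corrcoef(estimated_ml, specimen_ml)[0, 1]

# Hypothetical example values (NOT the study's measurements):
mri_ml      = np.array([250.0, 410.0, 180.0, 520.0])
specimen_ml = np.array([245.0, 400.0, 190.0, 515.0])
print(f"r = {pearson_r(mri_ml, specimen_ml):.3f}")
```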
2014-10-01
The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate cancer using image-guided prostate... Completed task. The 18F-choline synthesis was implemented and optimized for routine radiotracer production. RDRC committee approval as part of the IRB
IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM
NASA Technical Reports Server (NTRS)
Martin, M. D.
1994-01-01
The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
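IMDISP itself is written in C and Assembler; the following NumPy sketch only illustrates the two display operations the abstract describes, the linear DN contrast stretch and integer-factor subsampling, under the stated 8-bit conventions.

```python
import numpy as np

def stretch(dn, low, high):
    """Linear contrast stretch as described for IMDISP: DN values at or
    below `low` become black (0), at or above `high` become white (255),
    and values in between are shaded linearly. Assumes low < high."""
    scaled = (dn.astype(np.float64) - low) / (high - low)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

def subsample(image, factor):
    """Keep every `factor`-th pixel of every `factor`-th line, starting
    from the upper-left corner of the image."""
    return image[::factor, ::factor]

# Example: stretch between DN 40 and 200, then subsample by 2.
img = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
small = subsample(stretch(img, 40, 200), 2)
```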
Laňková, Martina; Humpolíčková, Jana; Vosolsobě, Stanislav; Cit, Zdeněk; Lacek, Jozef; Čovan, Martin; Čovanová, Milada; Hof, Martin; Petrášek, Jan
2016-04-01
A number of fluorescence microscopy techniques are described to study dynamics of fluorescently labeled proteins, lipids, nucleic acids, and whole organelles. However, for studies of plant plasma membrane (PM) proteins, the number of these techniques is still limited because of the high complexity of processes that determine the dynamics of PM proteins and the existence of the cell wall. Here, we report on the usage of raster image correlation spectroscopy (RICS) for studies of integral PM proteins in suspension-cultured tobacco cells and show its potential in comparison with the more widely used fluorescence recovery after photobleaching method. For RICS, a set of microscopy images is obtained by single-photon confocal laser scanning microscopy (CLSM). Fluorescence fluctuations are subsequently correlated between individual pixels and the information on protein mobility is extracted using a model that considers processes generating the fluctuations, such as diffusion and chemical binding reactions. As we show here using an example of two integral PM transporters of the plant hormone auxin, RICS uncovered their distinct short-distance lateral mobility within the PM that is dependent on the cytoskeleton and sterol composition of the PM. RICS, which is routinely accessible on modern CLSM instruments, thus represents a valuable approach for studies of dynamics of PM proteins in plants.
Using machine learning techniques to automate sky survey catalog generation
NASA Technical Reports Server (NTRS)
Fayyad, Usama M.; Roden, J. C.; Doyle, R. J.; Weir, Nicholas; Djorgovski, S. G.
1993-01-01
We describe the application of machine classification techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Palomar Observatory Sky Survey provides comprehensive photographic coverage of the northern celestial hemisphere. The photographic plates are being digitized into images containing on the order of 10(exp 7) galaxies and 10(exp 8) stars. Since the size of this data set precludes manual analysis and classification of objects, our approach is to develop a software system which integrates independently developed techniques for image processing and data classification. Image processing routines are applied to identify and measure features of sky objects. Selected features are used to determine the classification of each object. GID3* and O-BTree, two inductive learning techniques, are used to automatically learn classification decision trees from examples. We describe the techniques used, the details of our specific application, and the initial encouraging results which indicate that our approach is well-suited to the problem. The benefits of the approach are increased data reduction throughput, consistency of classification, and the automated derivation of classification rules that will form an objective, examinable basis for classifying sky objects. Furthermore, astronomers will be freed from the tedium of an intensely visual task to pursue more challenging analysis and interpretation problems given automatically cataloged data.
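GID3* and O-BTree are not publicly packaged, so as a stand-in the sketch below uses scikit-learn's DecisionTreeClassifier to show the general pattern of learning a star/galaxy decision tree from measured image features; the feature set and data here are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per detected object (e.g. area, ellipticity,
# peak brightness); the survey's real attribute set differs.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = star, 1 = galaxy

tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(tree.predict(X[:5]))
```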
NASA Technical Reports Server (NTRS)
Garbeff, Theodore J., II; Baerny, Jennifer K.
2017-01-01
The following details recent efforts undertaken at the NASA Ames Unitary Plan wind tunnels to design and deploy an advanced, production-level infrared (IR) flow visualization data system. Highly sensitive IR cameras, coupled with in-line image processing, have enabled the visualization of wind tunnel model surface flow features as they develop in real time. Boundary layer transition, shock impingement, junction flow, vortex dynamics, and buffet are routinely observed in both transonic and supersonic flow regimes, all without the need for dedicated ramps in test section total temperature. Successful measurements have been performed on wing-body sting-mounted test articles, semi-span floor-mounted aircraft models, and sting-mounted launch vehicle configurations. The unique requirements of imaging in production wind tunnel testing have led to advancements in the deployment of advanced IR cameras in a harsh test environment, robust data acquisition storage and workflow, real-time image processing algorithms, and evaluation of optimal surface treatments. The addition of a multi-camera IR flow visualization data system to the Ames UPWT has demonstrated itself to be a valuable analysis tool in the study of new and old aircraft/launch vehicle aerodynamics and has provided new insight for the evaluation of computational techniques.
NASA Astrophysics Data System (ADS)
Celli, Jonathan P.; Rizvi, Imran; Evans, Conor L.; Abu-Yousif, Adnan O.; Hasan, Tayyaba
2010-09-01
Three-dimensional tumor models have emerged as valuable in vitro research tools, though the power of such systems as quantitative reporters of tumor growth and treatment response has not been adequately explored. We introduce an approach combining a 3-D model of disseminated ovarian cancer with high-throughput processing of image data for quantification of growth characteristics and cytotoxic response. We developed custom MATLAB routines to analyze longitudinally acquired dark-field microscopy images containing thousands of 3-D nodules. These data reveal a reproducible bimodal log-normal size distribution. Growth behavior is driven by migration and assembly, causing an exponential decay in spatial density concomitant with increasing mean size. At day 10, cultures are treated with either carboplatin or photodynamic therapy (PDT). We quantify size-dependent cytotoxic response for each treatment on a nodule by nodule basis using automated segmentation combined with ratiometric batch-processing of calcein and ethidium bromide fluorescence intensity data (indicating live and dead cells, respectively). Both treatments reduce viability, though carboplatin leaves micronodules largely structurally intact with a size distribution similar to untreated cultures. In contrast, PDT treatment disrupts micronodular structure, causing punctate regions of toxicity, shifting the distribution toward smaller sizes, and potentially increasing vulnerability to subsequent chemotherapeutic treatment.
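The authors used custom MATLAB routines; the following Python sketch (an analogue under stated assumptions, not their code) shows the core ratiometric step: label connected micronodules in a segmentation mask and compute a per-nodule live fraction from calcein and ethidium bromide intensities.

```python
import numpy as np
from scipy import ndimage

def nodule_live_fraction(calcein, ethidium, mask):
    """Label connected nodules in a binary segmentation mask and return
    the ratiometric live fraction calcein / (calcein + ethidium) for
    each nodule. Assumes every nodule has nonzero total signal."""
    labels, n = ndimage.label(mask)
    idx = np.arange(1, n + 1)
    live = np.asarray(ndimage.sum(calcein, labels, idx))
    dead = np.asarray(ndimage.sum(ethidium, labels, idx))
    return live / (live + dead)
```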
Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs).
McGarry, C K; Grattan, M W D; Cosgrove, V P
2007-12-07
This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance, and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min(-1) and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at d(max) in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min(-1) low-dose acquisition mode. Routine EPID QC contrast tolerance (+/-10) and action (+/-20) levels using the PIPSPro phantom, based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D), have been introduced. The dose at d(max) from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.
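The reported noise increase with fewer frame averages follows ordinary averaging statistics: for roughly uncorrelated frame noise, the pixel noise of an N-frame average falls as 1/sqrt(N). A minimal simulation of that relation, using arbitrary placeholder numbers rather than EPID data:

```python
import numpy as np

rng = np.random.default_rng(1)
signal, frame_noise = 100.0, 5.0  # arbitrary units

for n_frames in (1, 3, 5, 10):
    frames = signal + rng.normal(0.0, frame_noise, size=(n_frames, 256, 256))
    avg = frames.mean(axis=0)
    # Expect avg.std() close to frame_noise / sqrt(n_frames)
    print(n_frames, round(avg.std(), 2), round(frame_noise / np.sqrt(n_frames), 2))
```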
Computer-aided diagnosis and artificial intelligence in clinical imaging.
Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio
2011-11-01
Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become a part of the routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (eg, normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT. It is likely that CAD will be integrated into picture archiving and communication systems and will become a standard of care for diagnostic examinations in daily clinical work. Copyright © 2011 Elsevier Inc. All rights reserved.
Benchmark datasets for 3D MALDI- and DESI-imaging mass spectrometry.
Oetjen, Janina; Veselkov, Kirill; Watrous, Jeramie; McKenzie, James S; Becker, Michael; Hauberg-Lotte, Lena; Kobarg, Jan Hendrik; Strittmatter, Nicole; Mróz, Anna K; Hoffmann, Franziska; Trede, Dennis; Palmer, Andrew; Schiffler, Stefan; Steinhorst, Klaus; Aichler, Michaela; Goldin, Robert; Guntinas-Lichius, Orlando; von Eggeling, Ferdinand; Thiele, Herbert; Maedler, Kathrin; Walch, Axel; Maass, Peter; Dorrestein, Pieter C; Takats, Zoltan; Alexandrov, Theodore
2015-01-01
Three-dimensional (3D) imaging mass spectrometry (MS) is an analytical chemistry technique for the 3D molecular analysis of a tissue specimen, entire organ, or microbial colonies on an agar plate. 3D-imaging MS has unique advantages over existing 3D imaging techniques, offers novel perspectives for understanding the spatial organization of biological processes, and has growing potential to be introduced into routine use in both biology and medicine. Owing to the sheer quantity of data generated, the visualization, analysis, and interpretation of 3D imaging MS data remain a significant challenge. Bioinformatics research in this field is hampered by the lack of publicly available benchmark datasets needed to evaluate and compare algorithms. High-quality 3D imaging MS datasets from different biological systems were acquired at several labs, supplied with overview images and scripts demonstrating how to read them, and deposited into MetaboLights, an open repository for metabolomics data. 3D imaging MS data were collected from five samples using two types of 3D imaging MS. 3D matrix-assisted laser desorption/ionization (MALDI) imaging MS data were collected from murine pancreas, murine kidney, human oral squamous cell carcinoma, and interacting microbial colonies cultured in Petri dishes. 3D desorption electrospray ionization (DESI) imaging MS data were collected from a human colorectal adenocarcinoma. With the aim of stimulating computational research in the field of computational 3D imaging MS, selected high-quality 3D imaging MS datasets are provided that could be used by algorithm developers as benchmark datasets.
How to make deposition of images a reality
Guss, J. Mitchell; McMahon, Brian
2014-01-01
The IUCr Diffraction Data Deposition Working Group is investigating the rationale and policies for routine deposition of diffraction images (and other primary experimental data sets). An information-management framework is described that should inform policy directions, and some of the technical and other issues that need to be addressed in an effort to achieve such a goal are analysed. In the near future, routine data deposition could be encouraged at one of the growing number of institutional repositories that accept data sets or at a generic data-publishing web repository service. To realise all of the potential benefits of depositing diffraction data, specialized archives would be preferable. Funding such an initiative will be challenging. PMID:25286838
Patterson, Emily S.; Rayo, Mike; Gill, Carolina; Gurcan, Metin N.
2011-01-01
Background: Adoption of digital images for pathological specimens has been slower than adoption of digital images in radiology, despite a number of anticipated advantages for digital images in pathology. In this paper, we explore the factors that might explain this slower rate of adoption. Materials and Method: Semi-structured interviews on barriers and facilitators to the adoption of digital images were conducted with two radiologists, three pathologists, and one pathologist's assistant. Results: Barriers and facilitators to adoption of digital images were reported in the areas of performance, workflow-efficiency, infrastructure, integration with other software, and exposure to digital images. The primary difference between the settings was that performance with the use of digital images as compared to the traditional method was perceived to be higher in radiology and lower in pathology. Additionally, exposure to digital images was higher in radiology than pathology, with some radiologists exclusively having been trained and/or practicing with digital images. The integration of digital images both improved and reduced efficiency in routine and non-routine workflow patterns in both settings, and was variable across the different organizations. A comparison of these findings with prior research on adoption of other health information technologies suggests that the barriers to adoption of digital images in pathology are relatively tractable. Conclusions: Improving performance using digital images in pathology would likely accelerate adoption of innovative technologies that are facilitated by the use of digital images, such as electronic imaging databases, electronic health records, double reading for challenging cases, and computer-aided diagnostic systems. PMID:21383925
Near Real-Time Photometric Data Processing for the Solar Mass Ejection Imager (SMEI)
NASA Astrophysics Data System (ADS)
Hick, P. P.; Buffington, A.; Jackson, B. V.
2004-12-01
The Solar Mass Ejection Imager (SMEI) records a photometric white-light response of the interplanetary medium from Earth over most of the sky in near real time. In the first two years of operation the instrument has recorded the inner heliospheric response to several hundred CMEs, including the May 28, 2003 and the October 28, 2003 halo CMEs. In this preliminary work we present the techniques required to process the SMEI data from the time the raw CCD images become available to their final assembly in photometrically accurate maps of the sky brightness relative to a long-term time base. Processing of the SMEI data includes integration of new data into the SMEI data base; a conditioning program that removes from the raw CCD images an electronic offset ("pedestal") and a temperature-dependent dark current pattern; and an "indexing" program that places these CCD images onto a high-resolution sidereal grid using known spacecraft pointing information. At this "indexing" stage further conditioning removes the bulk of the effects of high-energy-particle hits ("cosmic rays"), space debris inside the field of view, and pixels with a sudden state change ("flipper pixels"). Once the high-resolution grid is produced, it is reformatted to a lower-resolution set of sidereal maps of sky brightness. From these sidereal maps we remove bright stars, background stars, and a zodiacal cloud model (their brightnesses are retained as additional data products). The final maps can be represented in any convenient sky coordinate system. Common formats are Sun-centered Hammer-Aitoff or "fisheye" maps. Time series at selected locations on these maps are extracted and processed further to remove aurorae, variable stars, and other unwanted signals. These time series (with a long-term base removed) are used in 3D tomographic reconstructions. The data processing is distributed over multiple PCs running Linux and runs as much as possible automatically using recurring batch jobs ('cronjobs'). The batch scripts are controlled by Python scripts. The core data processing routines are written in several computer languages: Fortran, C++ and IDL.
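A minimal sketch of the first conditioning stage described above (pedestal and dark-current removal). The linear temperature scaling of the dark pattern is an assumed placeholder, not SMEI's actual calibration, and the real pipeline is Fortran/C++/IDL rather than Python.

```python
import numpy as np

def condition_frame(raw, pedestal, dark_template, temperature, t_ref):
    """Subtract the electronic offset ("pedestal") and a
    temperature-dependent dark current pattern from a raw CCD frame.
    The linear scaling with temperature is illustrative only."""
    dark = dark_template * (temperature / t_ref)
    return raw.astype(np.float64) - pedestal - dark
```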
STAR (Simple Tool for Automated Reasoning): Tutorial guide and reference manual
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1985-01-01
STAR is an interactive, interpreted programming language for the development and operation of Artificial Intelligence application systems. The language is intended for use primarily in the development of software application systems which rely on a combination of symbolic processing, central to the vast majority of AI algorithms, with routines and data structures defined in compiled languages such as C, FORTRAN and PASCAL. References to routines and data structures defined in compiled languages are intermixed with symbolic structures in STAR, resulting in a hybrid operating environment in which symbolic and non-symbolic processing and organization of data may interact to a high degree within the execution of particular application systems. The STAR language was developed in the course of a project involving AI techniques in the interpretation of imaging spectrometer data and is derived in part from a previous language called CLIP. The interpreter for STAR is implemented as a program defined in the language C and has been made available for distribution in source code form through NASA's Computer Software Management and Information Center (COSMIC). Contained within this report are the STAR Tutorial Guide, which introduces the language in a step-by-step manner, and the STAR Reference Manual, which provides a detailed summary of the features of STAR.
OpenROCS: a software tool to control robotic observatories
NASA Astrophysics Data System (ADS)
Colomé, Josep; Sanz, Josep; Vilardell, Francesc; Ribas, Ignasi; Gil, Pere
2012-09-01
We present the Open Robotic Observatory Control System (OpenROCS), an open source software platform developed for the robotic control of telescopes. It acts as a software infrastructure that executes all the necessary processes to implement responses to the system events that appear in the routine and non-routine operations associated to data-flow and housekeeping control. The OpenROCS software design and implementation provides a high flexibility to be adapted to different observatory configurations and event-action specifications. It is based on an abstract model that is independent of the specific hardware or software and is highly configurable. Interfaces to the system components are defined in a simple manner to achieve this goal. We give a detailed description of the version 2.0 of this software, based on a modular architecture developed in PHP and XML configuration files, and using standard communication protocols to interface with applications for hardware monitoring and control, environment monitoring, scheduling of tasks, image processing and data quality control. We provide two examples of how it is used as the core element of the control system in two robotic observatories: the Joan Oró Telescope at the Montsec Astronomical Observatory (Catalonia, Spain) and the SuperWASP Qatar Telescope at the Roque de los Muchachos Observatory (Canary Islands, Spain).
OXSA: An open-source magnetic resonance spectroscopy analysis toolbox in MATLAB.
Purvis, Lucian A B; Clarke, William T; Biasiolli, Luca; Valkovič, Ladislav; Robson, Matthew D; Rodgers, Christopher T
2017-01-01
In vivo magnetic resonance spectroscopy provides insight into metabolism in the human body. New acquisition protocols are often proposed to improve the quality or efficiency of data collection. Processing pipelines must also be developed to use these data optimally. Current fitting software is either targeted at general spectroscopy fitting, or for specific protocols. We therefore introduce the MATLAB-based OXford Spectroscopy Analysis (OXSA) toolbox to allow researchers to rapidly develop their own customised processing pipelines. The toolbox aims to simplify development by: being easy to install and use; seamlessly importing Siemens Digital Imaging and Communications in Medicine (DICOM) standard data; allowing visualisation of spectroscopy data; offering a robust fitting routine; flexibly specifying prior knowledge when fitting; and allowing batch processing of spectra. This article demonstrates how each of these criteria have been fulfilled, and gives technical details about the implementation in MATLAB. The code is freely available to download from https://github.com/oxsatoolbox/oxsa.
Hoekstra, Carlijn E L; Prijs, Vera F; van Zanten, Gijsbert A
2015-02-01
To assess the diagnostic yield of a routine magnetic resonance imaging (MRI) scan in patients with (unilateral) chronic tinnitus, to define the frequency of incidental findings, and to assess the clinical relevance of potentially found anterior inferior cerebellar artery (AICA) loops. Retrospective cohort study. Tertiary Tinnitus Care Group at the University Medical Center Utrecht. Three hundred twenty-one patients with chronic tinnitus. Routine diagnostic MRI and diagnostic auditory brainstem responses (ABR) when an AICA loop was found. Relationship between abnormalities on MRI and tinnitus. In 138 patients (45%), an abnormality on the MRI scan was described. In only 7 patients (2.2%), the abnormality was probably related to the patient's tinnitus. Results were not significantly better in patients with unilateral tinnitus (abnormalities in 3.2%). Incidental findings, not related to the tinnitus, were found in 41% of the patients. In 70 patients (23%), an AICA loop was found in the internal auditory canal. No significant relationships were found between the presence of an AICA loop and the side of the tinnitus, abnormalities on the ABR, or complaints specific to nerve compression syndrome. A routine MRI is of little or no value in patients with tinnitus with persistent complaints. Anterior inferior cerebellar artery loops are often encountered on an MRI scan but rarely relate to the tinnitus and should thus be considered incidental findings. It is advised to perform an MRI only when, on clinical grounds, a specific etiology with tinnitus as the symptom seems probable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinwiddie, Ralph Barton; Lloyd, Peter D; Dehoff, Ryan R
2016-01-01
The Department of Energy s (DOE) Manufacturing Demonstration Facility (MDF) at Oak Ridge National Laboratory (ORNL) provides world-leading capabilities in advanced manufacturing (AM) facilities which leverage previous, on-going government investments in materials science research and characterization. MDF contains systems for fabricating components with complex geometries using AM techniques (i.e. 3D-Printing). Various metal alloy printers, for example, use electron beam melting (EBM) systems for creating these components which are otherwise extremely difficult- if not impossible- to machine. ORNL has partnered with manufacturers on improving the final part quality of components and developing new materials for further advancing these devices. One methodmore » being used to study (AM) processes in more depth relies on the advanced imaging capabilities at ORNL. High performance mid-wave infrared (IR) cameras are used for in-situ process monitoring and temperature measurements. However, standard factory calibrations are insufficient due to very low transmissions of the leaded glass window required for X-ray absorption. Two techniques for temperature calibrations will be presented and compared. In-situ measurement of emittance will also be discussed. Ample information can be learned from in-situ IR process monitoring of the EBM process. Ultimately, these imaging systems have the potential for routine use for online quality assurance and feedback control.« less
School Success as a Process of Structuration
ERIC Educational Resources Information Center
Tubin, Dorit
2015-01-01
Purpose: The purpose of the present study is to explore the process, routines, and structuration at successful schools leading their students to high achievements. Method: The approach of building a theory from case study research together with process perspective and an organizational routines model were applied to analyzing seven successful…
Intramolecular bonds resolved on a semiconductor surface
NASA Astrophysics Data System (ADS)
Sweetman, Adam; Jarvis, Samuel P.; Rahe, Philipp; Champness, Neil R.; Kantorovich, Lev; Moriarty, Philip
2014-10-01
Noncontact atomic force microscopy (NC-AFM) is now routinely capable of obtaining submolecular resolution, readily resolving the carbon backbone structure of planar organic molecules adsorbed on metal substrates. Here we show that the same resolution may also be obtained for molecules adsorbed on a reactive semiconducting substrate. Surprisingly, this resolution is routinely obtained without the need for deliberate tip functionalization. Intriguingly, we observe two chemically distinct apex types capable of submolecular imaging. We characterize our tip apices by "inverse imaging" of the silicon adatoms of the Si (111)-7×7 surface and support our findings with detailed density functional theory (DFT) calculations. We also show that intramolecular resolution on individual molecules may be readily obtained at 78 K, rather than solely at 5 K as previously demonstrated. Our results suggest a wide range of tips may be capable of producing intramolecular contrast for molecules adsorbed on semiconductor surfaces, leading to a much broader applicability for submolecular imaging protocols.
Intracranial translucency assessment at first trimester nuchal translucency ultrasound.
Lane, Annah; Lee, Ling; Traves, Donna; Lee, Andreas
2017-04-01
The antenatal diagnosis of open spina bifida (OSB), a neural tube defect, is predominantly made at the second trimester morphology scan by ultrasound detection of structural abnormalities resulting from the associated Chiari II malformation. Evidence has emerged suggesting that these structural abnormalities can be detected earlier, by examination of the posterior fossa as part of the first trimester nuchal translucency scan. In particular, absence of the intra-cranial translucency (IT) of the fourth ventricle has shown promise as a diagnostic marker of OSB, although the sensitivity and specificity of this finding varies widely in the literature. The aim of this study is to assess the feasibility of obtaining the image of the IT at our institution as part of the routine first trimester scan. This is a prospective study of 900 obstetric patients who presented to a tertiary women's imaging centre for routine first trimester nuchal translucency screening ultrasound for the year 2014. Their risk status was that of the general population (low risk) prior to presentation. A total of 158 patients were excluded, leaving a study group of 742. Sonographers obtained a mid-sagittal view of the fetal face with particular focus on optimum viewing of the IT. All images were examined by a Radiology Registrar for presence or absence of IT. Duration of each scan was documented. The IT image was successfully acquired in 94.9% of scans. Maternal pre-pregnancy BMI and fetal lie were shown to have a statistically significant effect on success of acquisition of the IT image. No cases of OSB were diagnosed during the study. Scan times were not lengthened by the addition of the image. We consider that acquisition of an image of the IT as part of the routine first trimester nuchal translucency scan is feasible, without lengthening appointment times. © 2016 The Royal Australian and New Zealand College of Radiologists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turkington, T.
This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.
Geada, Isidro Lorenzo; Ramezani-Dakhel, Hadi; Jamil, Tariq; Sulpizi, Marialore; Heinz, Hendrik
2018-02-19
Metallic nanostructures have become popular for applications in therapeutics, catalysts, imaging, and gene delivery. Molecular dynamics simulations are gaining influence to predict nanostructure assembly and performance; however, instantaneous polarization effects due to induced charges in the free electron gas are not routinely included. Here we present a simple, compatible, and accurate polarizable potential for gold that consists of a Lennard-Jones potential and a harmonically coupled core-shell charge pair for every metal atom. The model reproduces the classical image potential of adsorbed ions as well as surface, bulk, and aqueous interfacial properties in excellent agreement with experiment. Induced charges affect the adsorption of ions onto gold surfaces in the gas phase at a strength similar to chemical bonds while ions and charged peptides in solution are influenced at a strength similar to intermolecular bonds. The proposed model can be applied to complex gold interfaces, electrode processes, and extended to other metals.
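A minimal sketch of the two energy terms that make up the model described above: a 12-6 Lennard-Jones pair potential plus a harmonic core-shell coupling per metal atom. The parameter values below are placeholders, not the published gold parameters.

```python
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy at separation r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def core_shell_energy(x, k):
    """Harmonic coupling of one atom's core and shell charges displaced
    by x; the induced dipole of a shell charge q is mu = q * x."""
    return 0.5 * k * x ** 2

# Placeholder parameters (NOT the published gold values):
print(lennard_jones(np.array([2.8, 3.0, 3.5]), epsilon=5.0, sigma=2.6))
print(core_shell_energy(0.05, k=1000.0))
```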
Galilean satellite geomorphology
NASA Technical Reports Server (NTRS)
Malin, M. C.
1983-01-01
Research on this task consisted of the development and initial application of photometric and photoclinometric models using interactive computer image processing and graphics. New programs were developed to compute viewing and illumination angles for every picture element in a Voyager image using C-matrices and final Voyager ephemerides. These values were then used to transform each pixel to an illumination-oriented coordinate system. An iterative integration routine permits slope displacements to be computed from brightness variations and correlated in the cross-sun direction, resulting in two-dimensional topographic data. Figure 1 shows a 'wire-mesh' view of an impact crater on Ganymede, shown with a 10-fold vertical exaggeration. The crater, about 20 km in diameter, has a central mound and raised interior floor suggestive of viscous relaxation and rebound of the crater's topography. In addition to photoclinometry, the computer models that have been developed permit an examination of non-topographically-derived variations in surface brightness.
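A toy one-dimensional analogue of the photoclinometry step described above: brightness variations along the sun direction are converted to slopes and integrated into a relative height profile. The small-slope Lambertian linearisation used here is an illustrative assumption, not the Voyager-era model.

```python
import numpy as np

def photoclinometry_profile(brightness, incidence_deg, dx):
    """Map relative brightness variations along the sun direction to
    surface slopes via a small-slope Lambertian linearisation, then
    integrate them into a relative height profile (toy model)."""
    b0 = brightness.mean()
    slope = (brightness - b0) / b0 / np.tan(np.radians(incidence_deg))
    return np.cumsum(slope) * dx
```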
Badam, Raj Kumar; Sownetha, Triekan; Babu, D. B. Gandhi; Waghray, Shefali; Reddy, Lavanya; Garlapati, Komali; Chavva, Sunanda
2017-01-01
The word “autopsy” denotes “to see with one's own eyes.” Autopsy (postmortem) is a process that includes a thorough examination of a corpse, noting everything related to anatomization, surface wounds, and histological and culture studies. Virtopsy is a term formed from two words, “virtual” and “autopsy.” It employs imaging methods that are routinely used in clinical medicine, such as computed tomography and magnetic resonance imaging, in the field of autopsy to find the reason for death. Virtopsy is a multi-disciplinary technology that combines forensic medicine and pathology, roentgenology, computer graphics, biomechanics, and physics. It is rapidly gaining importance in the field of forensics. This approach has been recently used by forensic odontologists but has yet to make its own mark in the field. This article mainly deals with “virtopsy”: various articles were web-searched, and the relevant data were selected, extracted, and summarized here. PMID:28584475
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is very delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
Predicting consumer behavior: using novel mind-reading approaches.
Calvert, Gemma A; Brammer, Michael J
2012-01-01
Advances in machine learning as applied to functional magnetic resonance imaging (fMRI) data offer the possibility of pretesting and classifying marketing communications using unbiased pattern recognition algorithms. These algorithms can analyze brain responses to brands, products, or existing marketing communications that either failed or succeeded in the marketplace, identifying the patterns of brain activity that characterize success or failure. Future planned campaigns or new products can then be pretested to determine how well the resulting brain responses match the desired (successful) pattern of brain activity, without the need for verbal feedback. This major advance in signal processing is poised to revolutionize the application of these brain-imaging techniques in the marketing sector by offering greater accuracy of prediction in terms of consumer acceptance of new brands, products, and campaigns at a speed that makes them accessible as routine pretesting tools that will clearly demonstrate return on investment.
2.2 Å resolution cryo-EM structure of β-galactosidase in complex with a cell-permeant inhibitor.
Bartesaghi, Alberto; Merk, Alan; Banerjee, Soojay; Matthies, Doreen; Wu, Xiongwu; Milne, Jacqueline L S; Subramaniam, Sriram
2015-06-05
Cryo-electron microscopy (cryo-EM) is rapidly emerging as a powerful tool for protein structure determination at high resolution. Here we report the structure of a complex between Escherichia coli β-galactosidase and the cell-permeant inhibitor phenylethyl β-D-thiogalactopyranoside (PETG), determined by cryo-EM at an average resolution of ~2.2 angstroms (Å). Besides the PETG ligand, we identified densities in the map for ~800 water molecules and for magnesium and sodium ions. Although it is likely that continued advances in detector technology may further enhance resolution, our findings demonstrate that preparation of specimens of adequate quality and intrinsic protein flexibility, rather than imaging or image-processing technologies, now represent the major bottlenecks to routinely achieving resolutions close to 2 Å using single-particle cryo-EM. Copyright © 2015, American Association for the Advancement of Science.
Pfister, Karin; Schierling, Wilma; Jung, Ernst Michael; Apfelbeck, Hanna; Hennersperger, Christoph; Kasprzak, Piotr M
2016-01-01
To compare standardised 2D ultrasound (US) to the novel ultrasonographic imaging techniques 3D/4D US and image fusion (combined real-time display of B mode and CT scan) for routine measurement of aortic diameter in follow-up after endovascular aortic aneurysm repair (EVAR). In total, 300 measurements were performed on 20 patients after EVAR by one experienced sonographer (third degree of the German Society of Ultrasound, DEGUM) with a high-end ultrasound machine and a convex probe (1-5 MHz). An internally standardized scanning protocol of the aortic aneurysm diameter in B mode used a so-called leading-edge method. In summary, five different US methods (2D, 3D free-hand, magnetic-field-tracked 3D (Curefab™), 4D volume sweep, image fusion), each including contrast-enhanced ultrasound (CEUS), were used for measurement of the maximum aortic aneurysm diameter. Standardized 2D sonography was the defined reference standard for statistical analysis. CEUS was used for endoleak detection. Technical success was 100%. In augmented transverse imaging the mean aortic anteroposterior (AP) diameter was 4.0±1.3 cm for 2D US, 4.0±1.2 cm for 3D Curefab™, 3.9±1.3 cm for 4D US, and 4.0±1.2 cm for image fusion. The mean differences were below 1 mm (0.2-0.9 mm). Concerning estimation of aneurysm growth, agreement was found between 2D, 3D, and 4D US in 19 of the 20 patients (95%). A definitive decision could always be made by image fusion. CEUS was combined with all methods and detected an endoleak type II in two of the 20 patients (10%). In one case, the endoleak feeding arteries remained unclear with 2D CEUS but could be clearly localized by 3D CEUS and image fusion. Standardized 2D US allows adequate routine follow-up of maximum aortic aneurysm diameter after EVAR. Image fusion enables a definitive statement about aneurysm growth without the need for new CT imaging by combining the postoperative CT scan with real-time B mode in a dual image display. 3D/4D CEUS and image fusion can improve endoleak characterization in selected cases but are not mandatory for routine practice.
Fluorescence imaging in the upper gastrointestinal tract for the detection of dysplasic changes
NASA Astrophysics Data System (ADS)
Sukowski, Uwe; Ebert, Bernd; Ortner, Marianne; Mueller, Karsten; Voderholzer, W.; Weber-Eibel, J.; Dietel, M.; Lochs, Herbert; Rinneberg, Herbert H.
2001-10-01
During endoscopy of the esophagus, fluorescence images were recorded at a delay of 20 ns after pulsed laser excitation, simultaneously with conventional reflected white-light images. To label malignant cells (dysplasia, tumor), 5-aminolaevulinic acid was applied prior to fluorescence-guided biopsy. In this way, pre-malignant and malignant lesions not seen previously during routine endoscopy were detected.
Andersson, M; Kolodziej, B; Andersson, R E
2017-10-01
The role of imaging in the diagnosis of appendicitis is controversial. This prospective interventional study and nested randomized trial analysed the impact of implementing a risk stratification algorithm based on the Appendicitis Inflammatory Response (AIR) score, and compared routine imaging with selective imaging after clinical reassessment. Patients presenting with suspicion of appendicitis between September 2009 and January 2012 from age 10 years were included at 21 emergency surgical centres and from age 5 years at three university paediatric centres. Registration of clinical characteristics, treatments and outcomes started during the baseline period. The AIR score-based algorithm was implemented during the intervention period. Intermediate-risk patients were randomized to routine imaging or selective imaging after clinical reassessment. The baseline period included 1152 patients, and the intervention period 2639, of whom 1068 intermediate-risk patients were randomized. In low-risk patients, use of the AIR score-based algorithm resulted in less imaging (19·2 versus 34·5 per cent; P < 0·001), fewer admissions (29·5 versus 42·8 per cent; P < 0·001), and fewer negative explorations (1·6 versus 3·2 per cent; P = 0·030) and operations for non-perforated appendicitis (6·8 versus 9·7 per cent; P = 0·034). Intermediate-risk patients randomized to the imaging and observation groups had the same proportion of negative appendicectomies (6·4 versus 6·7 per cent respectively; P = 0·884), number of admissions, number of perforations and length of hospital stay, but routine imaging was associated with an increased proportion of patients treated for appendicitis (53·4 versus 46·3 per cent; P = 0·020). AIR score-based risk classification can safely reduce the use of diagnostic imaging and hospital admissions in patients with suspicion of appendicitis. Registration number: NCT00971438 ( http://www.clinicaltrials.gov). © 2017 BJS Society Ltd Published by John Wiley & Sons Ltd.
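A minimal sketch of the triage logic such an AIR-score-based algorithm implies. The grouping (low 0-4, intermediate 5-8, high 9-12 on the 0-12 score) follows the published AIR score, but the suggested actions are paraphrased from the abstract and should be checked against the original protocol.

```python
def air_risk_group(score: int) -> str:
    """Map a 0-12 AIR score to the study's risk strata."""
    if not 0 <= score <= 12:
        raise ValueError("AIR score must be between 0 and 12")
    if score <= 4:
        return "low risk: outpatient management with safety netting"
    if score <= 8:
        return "intermediate risk: admit for observation and reassessment"
    return "high risk: consider surgical exploration"
```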
An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.
Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin
2015-08-01
This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult, and manual injections usually result in poor repeatability. To improve injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in noise rejection for vein detection, robustness in needle tracking, and integration of visual servoing with the mechatronics system.
Radiotracers Used for the Scintigraphic Detection of Infection and Inflammation
Tsopelas, Chris
2015-01-01
Over the last forty years, a small group of commercial radiopharmaceuticals have found their way into routine medical use, for the diagnostic imaging of patients with infection or inflammation. These molecular radiotracers usually participate in the immune response to an antigen, by tagging leukocytes or other molecules/cells that are endogenous to the process. Currently there is an advancing effort by researchers in the preclinical domain to design and develop new agents for this application. This review discusses radiopharmaceuticals used in the nuclear medicine clinic today, as well as those potential radiotracers that exploit an organism's defence mechanisms to an infectious or inflammatory event. PMID:25741532
NASA Astrophysics Data System (ADS)
Hynek, Bernhard; Binder, Daniel; Boffi, Geo; Schöner, Wolfgang; Verhoeven, Geert
2014-05-01
Terrestrial photogrammetry was the standard method for mapping high mountain terrain in the early days of mountain cartography, until it was replaced by aerial photogrammetry and airborne laser scanning. Modern low-price digital single-lens reflex (DSLR) cameras and highly automated, cheap digital computer vision software with automatic image matching and multiview-stereo routines suggest the rebirth of terrestrial photogrammetry, especially in remote regions, where airborne surveying methods are expensive due to high flight costs. Terrestrial photogrammetry and modern automated image matching are widely used in geodesy; however, their application in glaciology is still rare, especially for surveying ice bodies at the scale of some km², which is typical for valley glaciers. In August 2013 a terrestrial photogrammetric survey was carried out on Freya Glacier, a 6 km² valley glacier next to Zackenberg Research Station in NE Greenland, where detailed glacier mass balance monitoring was initiated during the last IPY. Photos were taken with a consumer-grade digital camera (Nikon D7100) from the ridges surrounding the glacier. To create a digital elevation model, the photos were processed with the software Photoscan. A set of ~100 dGPS-surveyed ground control points on the glacier surface was used to georeference and validate the final DEM. The aim of this study was to produce a high-resolution, high-accuracy DEM of the actual surface topography of the Freya Glacier catchment with a novel approach, to explore the potential of modern low-cost terrestrial photogrammetry combined with state-of-the-art automated image matching and multiview-stereo routines for glacier monitoring, and to communicate this powerful and cheap method within the environmental research and glacier monitoring communities.
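A hedged sketch of the validation step mentioned above: sampling the photogrammetric DEM at dGPS ground control points and reporting bias and RMSE. Nearest-cell sampling is assumed here; a real workflow would typically interpolate.

```python
import numpy as np

def dem_errors(dem, col_idx, row_idx, gcp_z):
    """Sample the DEM at each ground control point's grid cell and
    report the mean error (bias) and RMSE of the elevation differences."""
    dz = dem[row_idx, col_idx] - gcp_z
    return dz.mean(), np.sqrt(np.mean(dz ** 2))
```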
NASA Astrophysics Data System (ADS)
Olive, J. A. L.; Escartin, J.; Leclerc, F.; Garcia, R.; Gracias, N.; Odemar Science Party, T.
2016-12-01
While >70% of Earth's seismicity is submarine, almost all observations of earthquake-related ruptures and surface deformation are restricted to subaerial environments. Such observations are critical for understanding fault behavior and associated hazards (including tsunamis), but are not routinely conducted at the seafloor due to obvious constraints. During the 2013 ODEMAR cruise we used autonomous and remotely operated vehicles to map the Roseau normal fault (Lesser Antilles), source of the 2004 Mw 6.3 earthquake and associated tsunami (<3.5 m run-up). These vehicles acquired acoustic (multibeam bathymetry) and optical data (video and electronic images) spanning from regional (>1 km) to outcrop (<1 m) scales. These high-resolution submarine observations, analogous to those routinely conducted subaerially, rely on advanced image and video processing techniques, such as mosaicking and structure-from-motion (SFM). We identify sub-vertical fault slip planes along the Roseau scarp, displaying coseismic deformation structures undoubtedly due to the 2004 event. First, video mosaicking allows us to identify the freshly exposed fault plane at the base of one of these scarps. A maximum vertical coseismic displacement of 0.9 m can be measured from the video-derived terrain models and the texture-mapped imagery, which have better resolution than any available acoustic systems (<10 cm). Second, seafloor photomosaics allow us to identify and map both additional sub-vertical fault scarps and cracks and fissures at their base, recording hangingwall damage from the same event. These observations provide critical parameters to understand the seismic cycle and long-term seismic behavior of this submarine fault. Our work demonstrates the feasibility of extensive, high-resolution underwater surveys using underwater vehicles and novel imaging techniques, thereby opening new possibilities to study recent seafloor changes associated with tectonic, volcanic, or hydrothermal activity.
A graphic user interface for efficient 3D photo-reconstruction based on free software
NASA Astrophysics Data System (ADS)
Castillo, Carlos; James, Michael; Gómez, Jose A.
2015-04-01
Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft Photoscan, VisualSFM). The workflow typically involves different stages such as image matching, sparse and dense photo-reconstruction, point cloud filtering and georeferencing. For approaches using open and free software, each of these stages usually requires a different application. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI acts as a manager of configurations and algorithms, taking advantage of the command-line modes of the existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time-consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance on imagery collected in several soil erosion applications. References Girardeau-Montaut, D. 2015. CloudCompare documentation, accessed at http://cloudcompare.org/ Wu, C. 2015. VisualSFM documentation, accessed at http://ccwu.me/vsfm/doc.html#.
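As an illustration of how such a manager can drive existing tools through their command-line modes, the following minimal Python sketch batch-processes image datasets with VisualSFM and then subsamples the resulting cloud in CloudCompare's silent mode. The exact flags and output paths vary between tool versions and are assumptions here, not part of the published GUI.

```python
# Hypothetical batch driver in the spirit of the GUI: command names, flags
# and output locations are assumptions and vary between tool versions.
import subprocess
from pathlib import Path

def reconstruct(image_dir: Path, out_dir: Path) -> Path:
    """Sparse + dense reconstruction of one image dataset with VisualSFM."""
    out_dir.mkdir(parents=True, exist_ok=True)
    model = out_dir / "model.nvm"
    # 'sfm+pmvs': structure-from-motion followed by dense PMVS matching
    subprocess.run(["VisualSFM", "sfm+pmvs", str(image_dir), str(model)],
                   check=True)
    return model        # dense cloud is written alongside (version-dependent)

def subsample(cloud: Path, spacing: float = 0.01) -> None:
    """Spatially subsample a point cloud in CloudCompare's silent mode."""
    subprocess.run(["CloudCompare", "-SILENT", "-O", str(cloud),
                    "-SS", "SPATIAL", str(spacing)], check=True)

if __name__ == "__main__":
    for dataset in sorted(Path("surveys").iterdir()):    # batch processing
        model = reconstruct(dataset, Path("output") / dataset.name)
        # subsample(...) once the dense-cloud path for your version is known
```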
Software For Tie-Point Registration Of SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice
1995-01-01
SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. SAR data can also be registered to other data sources -- for example, a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image -- as long as the user can generate a binary image to be used by the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.
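The package's internal algorithm is not detailed here, but the core of any tie-point registration is fitting a geometric transform to manually picked point pairs. Below is a minimal, hypothetical Python sketch of a least-squares affine fit; SAR-REG's actual transform model is not described in the abstract.

```python
# Illustrative least-squares affine fit to manual tie points (hypothetical).
import numpy as np

def fit_affine(src_xy: np.ndarray, dst_xy: np.ndarray) -> np.ndarray:
    """Return the 2x3 affine transform best mapping src tie points to dst."""
    n = len(src_xy)
    A = np.hstack([src_xy, np.ones((n, 1))])        # design matrix [x y 1]
    coeffs, *_ = np.linalg.lstsq(A, dst_xy, rcond=None)
    return coeffs.T                                 # rows: [a b tx], [c d ty]

# Four manually picked tie-point pairs (made-up coordinates)
src = np.array([[10, 12], [200, 30], [45, 180], [220, 210]], dtype=float)
dst = np.array([[12, 15], [205, 28], [44, 186], [223, 212]], dtype=float)
M = fit_affine(src, dst)
pred = np.hstack([src, np.ones((4, 1))]) @ M.T
print("RMS tie-point residual:", np.sqrt(((dst - pred) ** 2).mean()))
```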
Routine upper gastrointestinal Gastrografin swallow after laparoscopic Roux-en-Y gastric bypass.
Sims, Thomas L; Mullican, Mary A; Hamilton, Elizabeth C; Provost, David A; Jones, Daniel B
2003-02-01
Upper gastrointestinal (UGI) swallow radiographs following laparoscopic Roux-en-Y gastric bypass (LRYGBP) may detect an obstruction or an anastomotic leak. The aim of our study was to determine the efficacy of routine imaging following LRYGBP. Radiograph reports were reviewed for 201 consecutive LRYGBP operations between April 1999 and June 2001. UGI swallow used Gastrografin, static films, fluoroscopic video, and a delayed image at 10 minutes. Mean values with one standard deviation were tested for significance (P < 0.05) using the Mann-Whitney U test statistic. Of 198 available reports, UGI detected jejunal efferent (Roux) limb narrowing (n = 17), partial obstruction (n = 12), anastomotic leak (n = 3), complete bowel obstruction (n = 3), diverticulum (n = 1), hiatal hernia (n = 1), and proximal Roux limb narrowing (n = 1). A normal study was reported in 160 cases (81%). Partial obstruction resolved without intervention. Complete obstruction required re-operation. Compared to 6 patients who developed delayed leaks, early identification of a leak by routine UGI swallow resulted in a shorter hospital stay (mean 7.7 +/- 1.5 days vs 40.2 +/- 12.3 days, P < 0.03). Early intervention after UGI swallow may lessen morbidity. Routine UGI swallow following LRYGBP does not obviate the importance of close clinical follow-up.
Diagnosis and treatment of dementia: 2. Diagnosis
Feldman, Howard H.; Jacova, Claudia; Robillard, Alain; Garcia, Angeles; Chow, Tiffany; Borrie, Michael; Schipper, Hyman M.; Blair, Mervin; Kertesz, Andrew; Chertkow, Howard
2008-01-01
Background Dementia can now be accurately diagnosed through clinical evaluation, cognitive screening, basic laboratory evaluation and structural imaging. A large number of ancillary techniques are also available to aid in diagnosis, but their role in the armamentarium of family physicians remains controversial. In this article, we provide physicians with practical guidance on the diagnosis of dementia based on recommendations from the Third Canadian Consensus Conference on the Diagnosis and Treatment of Dementia, held in March 2006. Methods We developed evidence-based guidelines using systematic literature searches, with specific criteria for study selection and quality assessment, and a clear and transparent decision-making process. We selected studies published from January 1996 to December 2005 that pertained to key diagnostic issues in dementia. We graded the strength of evidence using the criteria of the Canadian Task Force on Preventive Health Care. Results Of the 1591 articles we identified on all aspects of dementia diagnosis, 1095 met our inclusion criteria; 620 were deemed to be of good or fair quality. From a synthesis of the evidence in these studies, we made 32 recommendations related to the diagnosis of dementia. There are clinical criteria for diagnosing most forms of dementia. A standard diagnostic evaluation can be performed by family physicians over multiple visits. It involves a clinical history (from patient and caregiver), a physical examination and brief cognitive testing. A list of core laboratory tests is recommended. Structural imaging with computed tomography or magnetic resonance imaging is recommended in selected cases to rule out treatable causes of dementia or to rule in cerebrovascular disease. There is insufficient evidence to recommend routine functional imaging, measurement of biomarkers or neuropsychologic testing. Interpretation The diagnosis of dementia remains clinically integrative based on history, physical examination and brief cognitive testing. A number of core laboratory tests are also recommended. Structural neuroimaging is advised in selected cases. Other diagnostic approaches, including functional neuroimaging, neuropsychological testing and measurement of biomarkers, have shown promise but are not yet recommended for routine use by family physicians. PMID:18362376
Sharma, Shrushrita; Zhang, Yunyan
2017-01-01
Loss of tissue coherency in brain white matter is found in many neurological diseases such as multiple sclerosis (MS). While several approaches have been proposed to evaluate white matter coherency, including fractional anisotropy and fiber tracking in diffusion-weighted imaging, few are available for standard magnetic resonance imaging (MRI). Here we present an image post-processing method for this purpose based on the Fourier transform (FT) power spectrum. T2-weighted images were collected from 19 patients (10 relapsing-remitting and 9 secondary progressive MS) and 19 age- and gender-matched controls. Image processing steps included: computation, normalization, and thresholding of the FT power spectrum; determination of the tissue alignment profile and dominant alignment direction; and calculation of alignment complexity using a new measure named angular entropy. To test the validity of this method, we used a highly organized brain white matter structure, the corpus callosum. Six regions of interest were examined from the left, central and right aspects of both the genu and splenium. We found that the dominant orientation of each ROI derived from our method was significantly correlated with the predicted directions based on anatomy. There was greater angular entropy in patients than controls, with a trend toward greater values in secondary progressive MS patients. These findings suggest that it is possible to detect tissue alignment and anisotropy using traditional MRI sequences, which are routinely acquired in clinical practice. Analysis of the FT power spectrum may become a new approach for advancing the evaluation and management of patients with MS and similar disorders. Further confirmation is warranted.
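The essential computation can be sketched in Python as follows; this is an illustrative reimplementation under assumed parameter choices (bin count, power threshold), not the authors' validated pipeline. Note that spectral energy is oriented perpendicular to the spatial alignment direction.

```python
# Illustrative FT power-spectrum analysis of an ROI; the number of angular
# bins and the power threshold are assumptions, not the paper's values.
import numpy as np

def angular_entropy(roi: np.ndarray, n_bins: int = 36, keep: float = 0.01):
    F = np.fft.fftshift(np.fft.fft2(roi - roi.mean()))
    P = np.abs(F) ** 2                          # 2-D power spectrum
    cy, cx = P.shape[0] // 2, P.shape[1] // 2
    y, x = np.indices(P.shape)
    theta = np.arctan2(y - cy, x - cx) % np.pi  # spectral orientation, [0, pi)
    mask = P > keep * P.max()                   # threshold the spectrum
    hist, _ = np.histogram(theta[mask], bins=n_bins, range=(0, np.pi),
                           weights=P[mask])
    p = hist / hist.sum()
    # NB: spectral energy lies perpendicular to the spatial alignment direction
    dominant = (np.argmax(hist) + 0.5) * np.pi / n_bins
    H = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # angular entropy in bits
    return dominant, H
```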
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha
2018-01-01
It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
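The abstract does not specify the self-developed iterative algorithm; as a stand-in, the sketch below shows a standard maximum-likelihood expectation-maximization (MLEM) update, a common iterative reconstruction scheme in SPECT.

```python
# Standard MLEM iteration as an illustrative stand-in for the unspecified
# algorithm; A is a (rays x voxels) system matrix, proj the measured sinogram.
import numpy as np

def mlem(A: np.ndarray, proj: np.ndarray, n_iter: int = 20) -> np.ndarray:
    x = np.ones(A.shape[1])                    # uniform initial estimate
    sens = A.sum(axis=0)                       # per-voxel sensitivity
    for _ in range(n_iter):
        fp = A @ x                             # forward projection
        ratio = np.where(fp > 0, proj / fp, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```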
Poetzsch, Michael; Steuer, Andrea E; Roemmelt, Andreas T; Baumgartner, Markus R; Kraemer, Thomas
2014-12-02
Single hair analysis normally requires extensive microscale sample preparation protocols, including time-consuming steps like segmentation and extraction. Matrix-assisted laser desorption/ionization mass spectrometric imaging (MALDI-MSI) has been shown to be an alternative tool for single hair analysis, but questions remain. Therefore, MALDI-MSI in single hair analysis was systematically investigated with respect to the extraction process, the use of an internal standard (IS), and influences on the ionization process, to enable its reliable application to hair analysis. Furthermore, single-dose detection and quantitative correlation with single-hair and hair-strand LC-MS/MS results were assessed, and the performance was compared to LC-MS/MS single hair monitoring. The MALDI process was shown to be independent of natural hair color and not influenced by the presence of melanin. Ionization was shown to be reproducible along and between different hair samples. MALDI image intensities in single hairs and hair snippets showed good semiquantitative correlation with zolpidem hair concentrations obtained from validated routine LC-MS/MS methods. MALDI-MSI is superior to LC-MS/MS analysis when fast, easy, and cheap sample preparation is necessary, whereas LC-MS/MS showed higher sensitivity, with the ability of single-dose detection for zolpidem. MALDI-MSI and segmental LC-MS/MS single hair analysis showed good correlation, and both are suitable for consumption monitoring of drugs of abuse with high time resolution.
Kukkonen, C A
1995-06-01
High-speed information processing technologies being developed and applied by the Jet Propulsion Laboratory for NASA and Department of Defense mission needs have potential dual uses in telemedicine and other medical applications. Fiber optic ground networks connected with microwave satellite links allow NASA to communicate with its astronauts in Earth orbit or on the moon, and with its deep space probes billions of miles away. These networks monitor the health of astronauts and of robotic spacecraft. Similar communications technology will also allow patients to communicate with doctors anywhere on Earth. NASA space missions have science as a major objective. Science sensors have become so sophisticated that they can take more data than our scientists can analyze by hand. High-performance computers--workstations, supercomputers and massively parallel computers--are being used to transform this data into knowledge. This is done using image processing, data visualization and other techniques to present the data--ones and zeros--in forms that a human analyst can readily relate to and understand. Medical sensors have seen a similar explosion in data output--witness CT scans, MRI, and ultrasound. This data must be presented in visual form, and computers will allow routine combination of many two-dimensional MRI images into three-dimensional reconstructions of organs that can then be fully examined by physicians. Emerging technologies such as neural networks that are being "trained" to detect craters on planets or incoming missiles amongst decoys can be used to identify microcalcifications in mammograms.
Gupta, Otkrist; Patalano II, Vincent; Mohit, Mrinal; Merchant, Rikin; Subramanian, S V
2018-01-01
Objectives Technology-enabled non-invasive diagnostic screening (TES) using smartphones and other point-of-care medical devices was evaluated in conjunction with conventional routine health screenings for the primary care screening of patients. Design Dental conditions, cardiac ECG arrhythmias, tympanic membrane disorders, blood oxygenation levels, optic nerve disorders and neurological fitness were evaluated using FDA-approved advanced smartphone-powered technologies. Routine health screenings were also conducted. A novel remote web platform was developed to allow expert physicians to examine TES data and compare efficacy with routine health screenings. Setting The study was conducted at a primary care centre during the 2015 Kumbh Mela in Maharashtra, India. Participants 494 consenting adults aged 18–90 years attending the 2015 Kumbh Mela were tested. Results TES and routine health screenings identified unique clinical conditions in distinct patients. Intraoral fluorescent imaging classified 63.3% of the population with dental caries and periodontal diseases. An association between poor oral health and cardiovascular illnesses was also identified. Tympanic membrane imaging detected eardrum abnormalities in 13.0% of the population, several with a medical history of hearing difficulties. Gait and coordination issues were discovered in eight subjects and one subject had arrhythmia. Low oxygen saturation and low body mass index (BMI) were each associated with smoking (p=0.0087 and p=0.0122, respectively), and high BMI was associated with elevated blood pressure in middle-aged subjects. Conclusions TES synergistically identified clinically significant abnormalities in several subjects who otherwise presented as normal in routine health screenings. Physicians validated TES findings and used routine health screening data and medical history responses for comprehensive diagnoses for at-risk patients. TES identified a high prevalence of oral diseases, hypertension, obesity and ophthalmic conditions among the middle-aged and elderly Indian population, calling for public health interventions. PMID:29678964
Koulikov, Victoria; Lerman, Hedva; Kesler, Mikhail; Even-Sapir, Einat
2015-12-01
Cadmium zinc telluride (CZT) solid-state detectors have recently been introduced in the field of nuclear medicine in cardiology and breast imaging. The aim of the current study was to evaluate the performance of the novel CZT detectors compared to that of the routine NaI(Tl) detectors in bone scintigraphy. A dual-headed CZT-based camera originally dedicated to breast imaging was used, and in view of the limited size of the detectors, the hands were chosen as the organ for assessment. This is a clinical study. Fifty-eight consecutive patients (total 116 hands) referred for bone scan for suspected hand pathology gave their informed consent to have two acquisitions, using the routine camera and the CZT-based camera. The CZT acquisition was performed both at full dose and full acquisition time (FD CZT) and at reduced dose and short acquisition time (RD CZT), so three image sets were available for analysis. Data analysis included comparing the detection of hot lesions and identification of the metacarpophalangeal, proximal interphalangeal, and distal interphalangeal joints. A total of 69 hot lesions were detected on the CZT image sets; of these, 61 were identified as focal sites of uptake on NaI(Tl) data. On FD CZT data, 385 joints were identified compared to 168 on NaI(Tl) data (p < 0.001). There was no statistically significant difference in delineation of joints between FD and RD CZT data, as the latter identified 383 joints. Bone scintigraphy using a CZT-based gamma camera is associated with improved lesion detection and anatomic definition. The superior physical characteristics of this technique suggest a potential reduction in administered dose and/or acquisition time without compromising image quality.
Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun
2015-10-01
To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index: 23.55 kg m(-2)) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at 21.45 noise index levels of automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at 30 noise index levels. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability of the 50% ASIR-V scores at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality with lower noise and artefacts as well as good sharpness when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option.
Fetal MRI: A Technical Update with Educational Aspirations
Gholipour, Ali; Estroff, Judith A.; Barnewolt, Carol E.; Robertson, Richard L.; Grant, P. Ellen; Gagoski, Borjan; Warfield, Simon K.; Afacan, Onur; Connolly, Susan A.; Neil, Jeffrey J.; Wolfberg, Adam; Mulkern, Robert V.
2015-01-01
Fetal magnetic resonance imaging (MRI) examinations have become well-established procedures at many institutions and can serve as useful adjuncts to ultrasound (US) exams when diagnostic doubts remain after US. Due to fetal motion, however, fetal MRI exams are challenging and require the MR scanner to be used in a somewhat different mode than that employed for more routine clinical studies. Herein we review the techniques most commonly used, and those that are available, for fetal MRI with an emphasis on the physics of the techniques and how to deploy them to improve success rates for fetal MRI exams. By far the most common technique employed is single-shot T2-weighted imaging due to its excellent tissue contrast and relative immunity to fetal motion. Despite the significant challenges involved, however, many of the other techniques commonly employed in conventional neuro- and body MRI such as T1 and T2*-weighted imaging, diffusion and perfusion weighted imaging, as well as spectroscopic methods remain of interest for fetal MR applications. An effort to understand the strengths and limitations of these basic methods within the context of fetal MRI is made in order to optimize their use and facilitate implementation of technical improvements for the further development of fetal MR imaging, both in acquisition and post-processing strategies. PMID:26225129
Comparison of parameter-adapted segmentation methods for fluorescence micrographs.
Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas
2011-11-01
Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes that can be used with little new parameterization and work robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to the performance of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits a dotted pattern on the cell surface. Copyright © 2011 International Society for Advancement of Cytometry.
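A minimal sketch of the improved-watershed idea evaluated above, using scikit-image; the structuring-element size and marker rule are illustrative assumptions, not the study's tuned parameters.

```python
# Opening by reconstruction to suppress dotted surface staining, followed by
# a marker-based watershed. Parameters here are illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation

def segment_cells(img: np.ndarray) -> np.ndarray:
    # opening by reconstruction: erode, then reconstruct under the original
    seed = morphology.erosion(img, morphology.disk(5))
    smooth = morphology.reconstruction(seed, img)
    mask = smooth > filters.threshold_otsu(smooth)
    dist = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(dist > 0.5 * dist.max())   # crude one-per-cell seeds
    return segmentation.watershed(-dist, markers, mask=mask)
```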
Kim, Bum-Joon; Hong, Ki-Sun; Park, Kyung-Jae; Park, Dong-Hyuk; Chung, Yong-Gu
2012-01-01
Objective The prefabrication of customized cranioplastic implants has been introduced to overcome the difficulties of intra-operative implant molding. The authors present a new technique, which consists of the prefabrication of implant molds using three-dimensional (3D) printers and polymethyl-methacrylate (PMMA) casting. Methods A total of 16 patients with large skull defects (>100 cm2) underwent cranioplasty between November 2009 and April 2011. For unilateral cranial defects, 3D images of the skull were obtained from preoperative axial 1-mm spiral computed tomography (CT) scans. The image of the implant was generated by a digital subtraction mirror-imaging process using the normal side of the cranium as a model. For bilateral cranial defects, precraniectomy routine spiral CT scan data were merged with postcraniectomy 3D CT images following a smoothing process. Prefabrication of the mold was performed by the 3D printer. Intraoperatively, the PMMA implant was created with the prefabricated mold and fitted into the cranial defect. Results The median operation time was 184.36±26.07 minutes. Postoperative CT scans showed excellent restoration of the symmetrical contours and curvature of the cranium in all cases. The median follow-up period was 23 months (range, 14-28 months). Postoperative infection developed in one case (6.2%), a patient who previously had an open wound defect. Conclusion Customized cranioplasty PMMA implants made using a 3D printer may be a useful technique for the reconstruction of various cranial defects. PMID:23346326
Automatic detection of animals in mowing operations using thermal cameras.
Steen, Kim Arild; Villa-Henriksen, Andrés; Therkildsen, Ole Roland; Green, Ole
2012-01-01
During the last decades, high-efficiency farming equipment has been developed in the agricultural sector. This has also included efficiency improvements in mowing techniques, including increased working speeds and widths. Therefore, the risk of wild animals being accidentally injured or killed during routine farming operations has increased dramatically over the years. In particular, the nests of ground-nesting bird species like grey partridge (Perdix perdix) or pheasant (Phasianus colchicus) are vulnerable to farming operations in their breeding habitat, whereas in mammals, the natural instinct of e.g. leverets of brown hare (Lepus europaeus) and fawns of roe deer (Capreolus capreolus) to lie low and still in the vegetation to avoid predators increases their risk of being killed or injured in farming operations. Various methods and approaches have been used to reduce wildlife mortality resulting from farming operations. However, since wildlife-friendly farming often results in lower efficiency, attempts have been made to develop automatic systems capable of detecting wild animals in the crop. Here we assessed the suitability of thermal imaging in combination with digital image processing to automatically detect a chicken (Gallus domesticus) and a rabbit (Oryctolagus cuniculus) in a grassland habitat. Throughout the different test scenarios, our study animals were detected with high precision, although the densest grass cover reduced the detection rate. We conclude that thermal imaging and digital image processing may be an important tool for the improvement of wildlife-friendly farming practices in the future.
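The detection principle reduces to finding warm, appropriately sized blobs in each thermal frame. A minimal, hypothetical Python sketch follows; the temperature threshold and blob size limits are assumptions, not the study's calibrated values.

```python
# Warm bodies appear as bright blobs in thermal frames: threshold the frame
# and keep connected components of plausible animal size (values illustrative).
import numpy as np
from scipy import ndimage as ndi

def detect_warm_blobs(frame_c: np.ndarray, thresh_c: float = 30.0,
                      min_px: int = 40, max_px: int = 4000):
    """frame_c: 2-D array of per-pixel temperatures in deg C."""
    labels, n = ndi.label(frame_c > thresh_c)
    sizes = ndi.sum(np.ones_like(frame_c), labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if min_px <= s <= max_px]
    return ndi.center_of_mass(frame_c, labels, keep)   # candidate centroids
```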
Multi-detector CT imaging in the postoperative orthopedic patient with metal hardware.
Vande Berg, Bruno; Malghem, Jacques; Maldague, Baudouin; Lecouvet, Frederic
2006-12-01
Multi-detector CT imaging (MDCT) has become a routine imaging modality in the assessment of postoperative orthopedic patients with metallic instrumentation, which degrades image quality at MR imaging. This article reviews the physical basis and CT appearance of such metal-related artifacts. It also addresses the clinical value of MDCT in postoperative orthopedic patients with emphasis on fracture healing, spinal fusion or arthrodesis, and joint replacement. MDCT imaging shows limitations in the assessment of the bone marrow cavity and of the soft tissues, for which MR imaging remains the imaging modality of choice despite metal-related anatomic distortions and signal alteration.
Multiprocessor graphics computation and display using transputers
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
A package of two-dimensional graphics routines was developed to run on a transputer-based parallel processing system. These routines were designed to enable applications programmers to easily generate and display results from the transputer network in a graphic format. The graphics procedures were designed for the lowest possible network communication overhead for increased performance. The routines were designed for ease of use and to present an intuitive approach to generating graphics on the transputer parallel processing system.
Semiautomated Workflow for Clinically Streamlined Glioma Parametric Response Mapping
Keith, Lauren; Ross, Brian D.; Galbán, Craig J.; Luker, Gary D.; Galbán, Stefanie; Zhao, Binsheng; Guo, Xiaotao; Chenevert, Thomas L.; Hoff, Benjamin A.
2017-01-01
Management of glioblastoma multiforme remains a challenging problem despite recent advances in targeted therapies. Timely assessment of therapeutic agents is hindered by the lack of standard quantitative imaging protocols for determining targeted response. Clinical response assessment for brain tumors is determined by volumetric changes assessed at 10 weeks post-treatment initiation. Further, current clinical criteria fail to use advanced quantitative imaging approaches, such as diffusion and perfusion magnetic resonance imaging. Development of parametric response mapping (PRM) applied to diffusion-weighted magnetic resonance imaging has provided a sensitive and early biomarker of successful cytotoxic therapy in brain tumors while maintaining a spatial context within the tumor. Although PRM provides an earlier readout than volumetry and sometimes greater sensitivity compared with traditional whole-tumor diffusion statistics, it is not routinely used for patient management; automated and standardized software for performing the analysis and generating a clinical report is required for this. We present a semiautomated and seamless workflow for image coregistration, segmentation, and PRM classification of glioblastoma multiforme diffusion-weighted magnetic resonance imaging scans. The software solution can be integrated using local hardware or performed remotely in the cloud while providing connectivity to existing picture archive and communication systems. This is an important step toward implementing PRM analysis of solid tumors in routine clinical practice. PMID:28286871
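The core PRM classification step can be sketched as follows; the threshold value is an illustrative assumption, not the workflow's calibrated cutoff.

```python
# After coregistration, each tumor voxel is labeled by whether its ADC rose,
# fell, or stayed within a threshold between baseline and follow-up scans.
import numpy as np

def prm_classify(adc_pre: np.ndarray, adc_post: np.ndarray,
                 tumor_mask: np.ndarray, thresh: float = 0.55):
    """thresh is an assumed ADC-change cutoff in the images' units."""
    diff = adc_post - adc_pre
    labels = np.zeros(adc_pre.shape, dtype=np.int8)
    labels[(diff > thresh) & tumor_mask] = 1     # increased ADC
    labels[(diff < -thresh) & tumor_mask] = -1   # decreased ADC
    frac_increased = (labels[tumor_mask] == 1).mean()
    return labels, frac_increased
```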
Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features
Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin
2017-01-01
Purpose To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists, in order to develop a machine learning classifier and distinguish between malignant and benign breast masses detected on digital mammograms. Methods An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features in the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show lesion segmentation, computed feature values and the classification score. Results Areas under the ROC curve (AUC) were 0.786±0.026 and 0.758±0.027 when classifying mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806±0.025. Conclusion This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a “visual aid” interface, CAD results may be much more easily explainable to observers and may increase their confidence in CAD-generated classification results, compared with conventional CAD approaches that involve many complicated and visually insensitive texture features. PMID:27911353
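A minimal sketch of the described classifier design, with random placeholder data standing in for the study's 301-mass feature set: one logistic regression per view, with the two views' scores fused by averaging.

```python
# Illustrative two-view logistic regression with score fusion; the feature
# values below are random placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_cc, X_mlo = rng.normal(size=(301, 5)), rng.normal(size=(301, 5))
y = rng.integers(0, 2, 301)                     # 0 = benign, 1 = malignant

clf_cc = LogisticRegression().fit(X_cc, y)      # one model per mammographic view
clf_mlo = LogisticRegression().fit(X_mlo, y)
fused = 0.5 * (clf_cc.predict_proba(X_cc)[:, 1]
               + clf_mlo.predict_proba(X_mlo)[:, 1])   # averaged view scores
```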
Review of Image Quality Measures for Solar Imaging
NASA Astrophysics Data System (ADS)
Popowicz, Adam; Radlak, Krystian; Bernacki, Krzysztof; Orlov, Valeri
2017-12-01
Observations of the solar photosphere from the ground encounter significant problems caused by Earth's turbulent atmosphere. Before image reconstruction techniques can be applied, the frames obtained in the most favorable atmospheric conditions (the so-called lucky frames) have to be carefully selected. However, estimating the quality of images containing complex photospheric structures is not a trivial task, and the standard routines applied in nighttime lucky imaging observations are not applicable. In this paper we evaluate 36 methods dedicated to the assessment of image quality, which were presented in the literature over the past 40 years. We compare their effectiveness on simulated solar observations of both active regions and granulation patches, using reference data obtained by the Solar Optical Telescope on the Hinode satellite. To create images that are affected by a known degree of atmospheric degradation, we employed the random wave vector method, which faithfully models all the seeing characteristics. The results provide useful information about the method performances, depending on the average seeing conditions expressed by the ratio of the telescope's aperture to the Fried parameter, D/r0. The comparison identifies three methods for consideration by observers: Helmli and Scherer's mean, the median filter gradient similarity, and the discrete cosine transform energy ratio. While the first method requires less computational effort and can be used effectively in virtually any atmospheric conditions, the second method shows its superiority at good seeing (D/r0<4). The third method should mainly be considered for the post-processing of strongly blurred images.
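As an example of one recommended metric, a plausible formulation of the discrete cosine transform energy ratio is sketched below; the split between low- and high-frequency coefficients (block size r) is an assumption.

```python
# Blur suppresses high frequencies, so a larger high-frequency share of the
# DCT energy indicates a sharper frame; the cutoff r is illustrative.
import numpy as np
from scipy.fft import dctn

def dct_energy_ratio(img: np.ndarray, r: int = 8) -> float:
    E = dctn(img.astype(float), norm="ortho") ** 2
    low = E[:r, :r].sum()                # low-frequency corner of the DCT plane
    return (E.sum() - low) / E.sum()     # higher -> sharper frame
```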
Navigation concepts for magnetic resonance imaging-guided musculoskeletal interventions.
Busse, Harald; Kahn, Thomas; Moche, Michael
2011-08-01
Image-guided musculoskeletal (MSK) interventions are a widely used alternative to open surgical procedures for various pathological findings in different body regions. They traditionally involve one of the established x-ray imaging techniques (radiography, fluoroscopy, computed tomography) or ultrasound scanning. Over the last decades, magnetic resonance imaging (MRI) has evolved into one of the most powerful diagnostic tools for nearly the whole body and has therefore been increasingly considered for interventional guidance as well.The strength of MRI for MSK applications is a combination of well-known general advantages, such as multiplanar and functional imaging capabilities, wide choice of tissue contrasts, and absence of ionizing radiation, as well as a number of MSK-specific factors, for example, the excellent depiction of soft-tissue tumors, nonosteolytic bone changes, and bone marrow lesions. On the downside, the magnetic resonance-compatible equipment needed, restricted space in the magnet, longer imaging times, and the more complex workflow have so far limited the number of MSK procedures under MRI guidance.Navigation solutions are generally a natural extension of any interventional imaging system, in particular, because powerful hardware and software for image processing have become routinely available. They help to identify proper access paths, provide accurate feedback on the instrument positions, facilitate the workflow in an MRI environment, and ultimately contribute to procedural safety and success.The purposes of this work were to describe some basic concepts and devices for MRI guidance of MSK procedures and to discuss technical and clinical achievements and challenges for some selected implementations.
Robust adaptive optics systems for vision science
NASA Astrophysics Data System (ADS)
Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.
2018-02-01
Adaptive Optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and eyes with optical changes to the anterior eye can be difficult and requires a control and imaging system that is resilient when there is scattering and occlusion from the cornea and lens, as well as in the presence of irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality of the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror and a MEMS mirror with a single Shack-Hartmann sensor. The SH sensor samples an 8 mm exit pupil and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with small, irregular pupils; however, because the depth of field increases under these conditions, sectioning performance decreases. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image has poorer image quality and includes both photoreceptors and blood vessels.
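The per-lenslet gating can be sketched as follows; the spot-quality metric (peak-to-total intensity) and its threshold are illustrative assumptions, not the published system's metric.

```python
# Gate Shack-Hartmann subapertures on a spot-quality metric so occluded or
# scattered regions of the pupil do not corrupt the wavefront fit.
import numpy as np

def good_lenslets(spots: np.ndarray, q_min: float = 0.2) -> np.ndarray:
    """spots: (n_lenslets, h, w) stack of subaperture spot images."""
    flat = spots.reshape(len(spots), -1)
    q = flat.max(axis=1) / np.maximum(flat.sum(axis=1), 1e-9)  # peak/total
    return q > q_min        # boolean mask of lenslets kept in the fit
```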
ACR Appropriateness Criteria® Routine Chest Radiography.
McComb, Barbara L; Chung, Jonathan H; Crabtree, Traves D; Heitkamp, Darel E; Iannettoni, Mark D; Jokerst, Clinton; Saleh, Anthony G; Shah, Rakesh D; Steiner, Robert M; Mohammed, Tan-Lucien H; Ravenel, James G
2016-03-01
Chest radiographs are sometimes obtained before surgery or interventional procedures, on hospital admission, and in outpatients. This manuscript summarizes the American College of Radiology review of the literature and recommendations on routinely performed chest radiography in these settings. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every 3 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.
Real-Time Detection of Sporadic Meteors in the Intensified TV Imaging Systems.
Vítek, Stanislav; Nasyrova, Maria
2017-12-29
The automatic observation of the night sky through wide-angle video systems with the aim of detecting meteors and fireballs is currently among routine astronomical observations. The observation is usually done in multi-station or network mode, so that the direction and speed of the body's flight can be estimated. The high velocity of a meteor flying through the atmosphere determines the important features of the camera systems, namely the high frame rate. Because of the high frame rates, such imaging systems produce a large amount of data, of which only a small fragment has scientific potential. This paper focuses on methods for the real-time detection of fast-moving objects in video sequences recorded by intensified TV systems with frame rates of about 60 frames per second. The goal of our effort is to remove all unnecessary data during the daytime and free up hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper.
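A minimal sketch of one standard approach to such real-time detection, frame differencing with a robust noise estimate; the thresholds are illustrative and the MAIA pipeline itself may differ.

```python
# Difference consecutive frames and flag bright residuals well above the
# robust (MAD-based) noise level; thresholds are illustrative assumptions.
import numpy as np

def detect_transient(prev: np.ndarray, curr: np.ndarray,
                     k_sigma: float = 6.0, min_px: int = 10) -> bool:
    diff = curr.astype(float) - prev.astype(float)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust noise
    hot = diff > k_sigma * max(sigma, 1e-6)
    return hot.sum() >= min_px      # True -> keep this frame pair on disk
```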
NASA Astrophysics Data System (ADS)
Candeo, Alessia; Sana, Ilenia; Ferrari, Eleonora; Maiuri, Luigi; D'Andrea, Cosimo; Valentini, Gianluca; Bassi, Andrea
2016-05-01
Light sheet fluorescence microscopy has proven to be a powerful tool to image fixed and chemically cleared samples, providing in depth and high resolution reconstructions of intact mouse organs. We applied light sheet microscopy to image the mouse intestine. We found that large portions of the sample can be readily visualized, assessing the organ status and highlighting the presence of regions with impaired morphology. Yet, three-dimensional (3-D) sectioning of the intestine leads to a large dataset that produces unnecessary storage and processing overload. We developed a routine that extracts the relevant information from a large image stack and provides quantitative analysis of the intestine morphology. This result was achieved by a three step procedure consisting of: (1) virtually unfold the 3-D reconstruction of the intestine; (2) observe it layer-by-layer; and (3) identify distinct villi and statistically analyze multiple samples belonging to different intestinal regions. Even if the procedure has been developed for the murine intestine, most of the underlying concepts have a general applicability.
Dzyubachyk, Oleh; Essers, Jeroen; van Cappellen, Wiggert A; Baldeyron, Céline; Inagaki, Akiko; Niessen, Wiro J; Meijering, Erik
2010-10-01
Complete, accurate and reproducible analysis of intracellular foci from fluorescence microscopy image sequences of live cells requires full automation of all processing steps involved: cell segmentation and tracking followed by foci segmentation and pattern analysis. Integrated systems for this purpose are lacking. Extending our previous work in cell segmentation and tracking, we developed a new system for performing fully automated analysis of fluorescent foci in single cells. The system was validated by applying it to two common tasks: intracellular foci counting (in DNA damage repair experiments) and cell-phase identification based on foci pattern analysis (in DNA replication experiments). Experimental results show that the system performs comparably to expert human observers. Thus, it may replace tedious manual analyses for the considered tasks, and enables high-content screening. The described system was implemented in MATLAB (The MathWorks, Inc., USA) and compiled to run within the MATLAB environment. The routines together with four sample datasets are available at http://celmia.bigr.nl/. The software is planned for public release, free of charge for non-commercial use, after publication of this article.
Micro-pigmentation: implications for patients and professionals.
Collingridge, Kim; Calcluth, Julie
In the UK, reconstructive breast surgery is routinely offered to patients undergoing surgery for breast cancer. The results can be excellent, but without a nipple-areola complex the patient can feel incomplete. In response to patient need, an innovative nurse-led micro-pigmentation service has been developed in the authors' NHS trust, which provides women (and men) an opportunity to complete their reconstruction process. With the use of coloured pigments, micro-pigmentation creates a permanent image of a nipple-areola complex, which improves the aesthetic appearance of the surgically-created breast. As with the development of any new nurse-led innovation, the micro-pigmentation service has professional and client implications. Breast cancer can be devastating and may induce many psychological concerns, not least about body image and sexuality. This article addresses these issues, along with professional matters, such as autonomous practice, role expansion and the blurring of clinical boundaries. These factors are considered in relation to the nursing management of the micro-pigmentation service, where patient autonomy is encouraged to promote acceptance of self-image and closure on the breast cancer experience.
System for routine surface anthropometry using reprojection registration
NASA Astrophysics Data System (ADS)
Sadleir, R. J.; Owens, R. A.; Hartmann, P. E.
2003-11-01
Range data measurement can be usefully applied to non-invasive monitoring of anthropometric changes due to disease, healing or during normal physiological processes. We have developed a computer vision system that allows routine capture of biological surface shapes and accurate measurement of anthropometric changes, using a structured light stripe triangulation system. In many applications involving relocation of soft tissue for image-guided surgery or anthropometry it is neither accurate nor practical to apply fiducial markers directly to the body. This system features a novel method of achieving subject re-registration that involves application of fiducials by a standard data projector. Calibration of this reprojector is achieved using a variation of structured lighting techniques. The method allows accurate and comparable repositioning of elastic surfaces. Tests of repositioning using the reprojector found a significant improvement in subject registration compared to an earlier method which used video overlay comparison only. It has a current application to the measurement of breast volume changes in lactating mothers, but may be extended to any application where repeatable positioning and measurement is required.
A review of biomechanically informed breast image registration
NASA Astrophysics Data System (ADS)
Hipwell, John H.; Vavourakis, Vasileios; Han, Lianghao; Mertzanidou, Thomy; Eiben, Björn; Hawkes, David J.
2016-01-01
Breast radiology encompasses the full range of imaging modalities, from routine imaging via x-ray mammography, magnetic resonance imaging and ultrasound (both two- and three-dimensional), to more recent technologies such as digital breast tomosynthesis and dedicated breast imaging systems for positron emission mammography and ultrasound tomography. In addition, new and experimental modalities, such as photoacoustics, near-infrared spectroscopy and electrical impedance tomography, are emerging. The breast is a highly deformable structure, however, and this greatly complicates visual comparison of imaging modalities for the purposes of breast screening, cancer diagnosis (including image-guided biopsy), tumour staging, treatment monitoring, surgical planning and simulation of the effects of surgery and wound healing. Due primarily to the challenges posed by these gross, non-rigid deformations, development of automated methods which enable registration, and hence fusion, of information within and across breast imaging modalities, and between the images and the physical space of the breast during interventions, remains an active research field which has yet to translate suitable methods into clinical practice. This review describes current research in the field of breast biomechanical modelling and identifies relevant publications where the resulting models have been incorporated into breast image registration and simulation algorithms. Despite these developments, there remain a number of issues that limit clinical application of biomechanical modelling. These include the accuracy of constitutive modelling, implementation of representative boundary conditions, failure to meet clinically acceptable levels of computational cost, challenges associated with automating patient-specific model generation (i.e. robust image segmentation and mesh generation) and the complexity of applying biomechanical modelling methods in routine clinical practice.
Advanced endoscopic imaging in gastric neoplasia and preneoplasia
Lee, Jonathan W J; Lim, Lee Guan; Yeoh, Khay Guan
2017-01-01
Conventional white light endoscopy remains the current standard in routine clinical practice for early detection of gastric cancer. However, it may not accurately diagnose preneoplastic gastric lesions. The technological advancements in the field of endoscopic imaging for gastric lesions are fast growing. This article reviews currently available advanced endoscopic imaging modalities, in particular chromoendoscopy, narrow band imaging and confocal laser endomicroscopy, and their corresponding evidence shown to improve diagnosis of preneoplastic gastric lesions. Raman spectrometry and polarimetry are also introduced as promising emerging technologies. PMID:28176895
Real-time terahertz imaging through self-mixing in a quantum-cascade laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wienold, M., E-mail: martin.wienold@dlr.de; Rothbart, N.; Hübers, H.-W.
2016-07-04
We report on a fast self-mixing approach for real-time, coherent terahertz imaging based on a quantum-cascade laser and a scanning mirror. Due to a fast deflection of the terahertz beam, images with frame rates up to several Hz are obtained, eventually limited by the mechanical inertia of the employed scanning mirror. A phase modulation technique allows for the separation of the amplitude and phase information without the necessity of parameter fitting routines. We further demonstrate the potential for transmission imaging.
A problem-solving routine for improving hospital operations.
Ghosh, Manimay; Sobek Ii, Durward K
2015-01-01
The purpose of this paper is to examine empirically why a systematic problem-solving routine can play an important role in the process improvement efforts of hospitals. Data on 18 process improvement cases were collected through semi-structured interviews, reports and other documents, and artifacts associated with the cases. The data were analyzed using a grounded theory approach. Adherence to all the steps of the problem-solving routine correlated with greater degrees of improvement across the sample. Analysis resulted in two models. The first partially explains why hospital workers tended to enact short-term solutions when faced with process-related problems and tended not to seek longer-term solutions that prevent problems from recurring. The second model highlights a set of self-reinforcing behaviors that are more likely to address problem recurrence and result in sustained process improvement. The study was conducted in one hospital setting. Hospital managers can improve patient care and increase operational efficiency by adopting and diffusing problem-solving routines that embody three key characteristics. This paper offers new insights on why caregivers adopt short-term approaches to problem solving. Three characteristics of an effective problem-solving routine in a healthcare setting are proposed.
Infrared Radiography: Modeling X-ray Imaging without Harmful Radiation
ERIC Educational Resources Information Center
Zietz, Otto; Mylott, Elliot; Widenhorn, Ralf
2015-01-01
Planar x-ray imaging is a ubiquitous diagnostic tool and is routinely performed to diagnose conditions as varied as bone fractures and pneumonia. The underlying principle is that the varying attenuation coefficients of air, water, tissue, bone, or metal implants within the body result in non-uniform transmission of x-ray radiation. Through the…
Romano, A; Tavanti, F; Rossi Espagnet, M C; Terenzi, V; Cassoni, A; Suma, G; Boellis, A; Pierallini, A; Valentini, V; Bozzao, A
2015-01-01
In this preliminary report, we describe our experience with time-resolved imaging of contrast kinetics-MR angiography (TRICKS-MRA) in the assessment of head-neck vascular anomalies (HNVAs). We prospectively studied six consecutive patients with clinically suspected or diagnosed HNVAs. All of them underwent TRICKS-MRA of the head and neck as part of the routine for treatment planning. A digital subtraction angiography (DSA) was also performed. TRICKS-MRA could be achieved in all cases. Three subjects were treated based on TRICKS-MRA imaging findings and subsequent DSA examination. In all of them, DSA confirmed the vascular architecture of HNVAs shown by TRICKS-MRA. In the other three patients, a close follow up to assess the evolution of the suspected haemangioma was preferred. TRICKS sequences add important diagnostic information in cases of HNVAs, helpful for therapeutic decisions and post-treatment follow up. We recommend TRICKS-MRA use (if technically possible) as part of routine MRI protocol for HNVAs, representing a possible alternative imaging tool to conventional DSA.
Transoesophageal echocardiography in the dog.
Domenech, Oriol; Oliveira, Pedro
2013-11-01
Transoesophageal echocardiography (TEE) allows imaging of the heart through the oesophagus using a special transducer mounted on a modified endoscope. The proximity to the heart, with minimal intervening structures, enables the acquisition of high-resolution images that are consistently superior to routine transthoracic echocardiography, as well as optimal imaging of the heart base anatomy and related structures. TEE provides high-quality real-time imaging free of ionizing radiation, making it an ideal instrument not only for diagnostic purposes but also for monitoring surgical or minimally invasive cardiac procedures, non-cardiac procedures and critical cases in the intensive care unit. In human medicine, TEE is routinely used in these settings. In veterinary medicine, TEE is increasingly used in referral centres, especially for perioperative assessment and guidance of catheter-based cardiovascular procedures such as patent ductus arteriosus occlusion, balloon valvuloplasty, and atrial and ventricular septal defect occlusion with vascular devices. TEE can also aid in heartworm retrieval procedures. The purpose of this paper is to review the current uses of TEE in veterinary medicine, focusing on technique, indications and complications. Copyright © 2013 Elsevier Ltd. All rights reserved.
Digital pathology: elementary, rapid and reliable automated image analysis.
Bouzin, Caroline; Saini, Monika L; Khaing, Kyi-Kyi; Ambroise, Jérôme; Marbaix, Etienne; Grégoire, Vincent; Bol, Vanesa
2016-05-01
Slide digitization has brought pathology into a new era, with powerful image analysis possibilities. However, despite being a powerful prognostic tool, automated analysis of immunostaining on digital images is still not implemented worldwide in routine clinical practice. Digitized biopsy sections from two independent cohorts of patients, immunostained for membrane or nuclear markers, were quantified with two automated methods. The first was based on stained-cell counting through tissue segmentation, while the second relied upon the proportion of stained area within tissue sections. Different steps of image preparation, such as automated tissue detection, exclusion of folds and scanning magnification, were also assessed and validated. Quantification of stained cells and quantification of stained area correlated highly for all tested markers. Both methods also correlated with visual scoring performed by a pathologist. For equivalent reliability, quantification of the stained area is, however, faster and easier to fine-tune, and is therefore more compatible with the time constraints of prognostic work. This work provides an incentive for implementing automated immunostaining analysis with a stained-area method in routine laboratory practice. © 2015 John Wiley & Sons Ltd.
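The stained-area method the abstract favours reduces, at its core, to the ratio of stained pixels to total tissue pixels. A minimal Python sketch of that idea follows; the thresholds and the crude RGB rule for a brown DAB-like chromogen are illustrative assumptions (production pipelines typically use colour deconvolution), not the authors' settings.

    import numpy as np

    def stained_area_fraction(rgb, tissue_thresh=220):
        """Stained-area quantification: fraction of tissue pixels that are stained.

        rgb: HxWx3 uint8 image of a digitized, immunostained section.
        Thresholds are illustrative assumptions, not the authors' settings.
        """
        r, g, b = (rgb[..., i].astype(int) for i in range(3))

        # Tissue = anything darker than near-white scanner background.
        tissue = (r < tissue_thresh) | (g < tissue_thresh) | (b < tissue_thresh)

        # Crude rule for a brown DAB-like chromogen: red clearly above blue.
        stained = tissue & ((r - b) > 30)

        return stained.sum() / max(tissue.sum(), 1)

    # Example on a synthetic image: top half background, bottom half "stained tissue".
    img = np.full((100, 100, 3), 255, np.uint8)
    img[50:, :] = (150, 90, 60)            # brownish region
    print(stained_area_fraction(img))      # -> 1.0 (all tissue pixels stained)

An area-fraction measure like this needs no cell segmentation step, which is why it is faster and easier to tune than stained-cell counting.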
Proposed Conceptual Requirements for the CTBT Knowledge Base
1995-08-14
knowledge available to automated processing routines and human analysts are significant, and solving these problems is an essential step in ensuring...knowledge storage in a CTBT system. In addition to providing regional knowledge to automated processing routines, the knowledge base will also address
Thermoacoustic imaging of fresh prostates up to 6-cm diameter
NASA Astrophysics Data System (ADS)
Patch, S. K.; Hanson, E.; Thomas, M.; Kelly, H.; Jacobsohn, K.; See, W. A.
2013-03-01
Thermoacoustic (TA) imaging provides a novel contrast mechanism that may enable visualization of cancerous lesions that are not robustly detected by current imaging modalities; prostate cancer (PCa) is the most notorious example. Imaging entire prostate glands requires 6 cm of depth penetration. We therefore excite the TA signal using submicrosecond VHF pulses (100 MHz). We will present reconstructions of fresh prostates imaged in a well-controlled benchtop TA imaging system. Chilled glycine solution is used as the acoustic couplant. The urethra is routinely visualized as signal dropout; surgical staples formed from 100-µm-wide wire bent to 3-mm lengths generate strong positive signal.
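As a rough plausibility check (not a figure from the abstract): the axial resolution of a thermoacoustic system is bounded by the acoustic distance travelled during the excitation pulse, dz ~ c * tau, which is why submicrosecond pulses matter. The values below are illustrative assumptions.

    # Axial resolution bound for thermoacoustic imaging: dz ~ c * tau.
    # Values are illustrative assumptions, not the authors' specifications.
    c_tissue = 1540.0      # speed of sound in soft tissue [m/s]
    tau = 0.7e-6           # sub-microsecond VHF excitation pulse [s]
    print(f"resolution bound: {c_tissue * tau * 1e3:.2f} mm")  # ~1.08 mm

Millimetre-scale features such as the 3-mm staples mentioned above sit comfortably above this bound.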
NASA Astrophysics Data System (ADS)
Le Bas, Tim; Scarth, Anthony; Bunting, Peter
2015-04-01
Traditional computer-based methods for the interpretation of remotely sensed imagery use each pixel individually, or the average of a small window of pixels, to calculate a class or thematic value that provides an interpretation. When a human expert interprets imagery, however, the eye is excellent at finding coherent and homogeneous areas and edge features, so it may be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region-growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are characteristics of the imagery, such as the mean and standard deviation of the pixel values within the polygon. Segmenting imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example a Landsat image or a set of raster datasets derived from topography. The size and number of polygons are set by the user and depend on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal, and object-based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Reference: Bunting, P., Clewley, D., Lucas, R.M., Gillingham, S., 2014. The Remote Sensing and GIS Software Library (RSGISLib). Computers & Geosciences 62, 216-226. http://dx.doi.org/10.1016/j.cageo.2013.08.007.
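The workflow the abstract describes, K-means clustering of per-pixel feature vectors followed by splitting clusters into spatially contiguous regions and attaching per-region statistics, can be sketched briefly in Python. This is a simplified stand-in for the approach described, not the RSGISLib implementation; layer names, shapes and parameters are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy import ndimage

    def segment_image(stack, n_clusters=8):
        """Segment a multi-layer raster into labeled regions with per-region stats.

        stack: (bands, rows, cols) array, e.g. depth, slope, rugosity, backscatter.
        A simplified stand-in for the K-means + region-growing approach described
        in the abstract, not the RSGISLib implementation itself.
        """
        bands, rows, cols = stack.shape
        X = stack.reshape(bands, -1).T                  # one feature vector per pixel

        # Step 1: K-means assigns every pixel to a spectral cluster.
        clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        clusters = clusters.reshape(rows, cols)

        # Step 2: split each cluster into spatially connected regions ("polygons").
        segments = np.zeros((rows, cols), dtype=int)
        next_id = 0
        for c in range(n_clusters):
            labeled, n = ndimage.label(clusters == c)
            segments[labeled > 0] = labeled[labeled > 0] + next_id
            next_id += n

        # Step 3: attach per-region statistics (mean, std) for each input layer.
        stats = {
            seg_id: [(layer[segments == seg_id].mean(), layer[segments == seg_id].std())
                     for layer in stack]
            for seg_id in range(1, next_id + 1)
        }
        return segments, stats

    # Example with random data standing in for multibeam-derived layers:
    segments, stats = segment_image(np.random.rand(4, 64, 64), n_clusters=4)

The per-region means and standard deviations computed in step 3 are exactly the attributes a subsequent OBIA classification would operate on.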
No Value for Routine Chest Radiography in the Work-Up of Early Stage Cervical Cancer Patients
Hoogendam, Jacob P.; Zweemer, Ronald P.; Verkooijen, Helena M.; de Jong, Pim A.; van den Bosch, Maurice A. A. J.; Verheijen, René H. M.; Veldhuis, Wouter B.
2015-01-01
Aim: Evidence supporting the recommendation to include chest radiography in the work-up of all cervical cancer patients is limited. We investigated the diagnostic value of routine chest radiography in cervical cancer staging. Methods: All consecutive cervical cancer patients who presented at our tertiary referral center in the Netherlands (January 2006 – September 2013), and for whom ≥6 months of follow-up was available, were included. As part of the staging procedure, patients underwent a routine two-directional digital chest radiograph. Findings were compared to a composite reference standard consisting of all imaging studies and histology obtained during the 6 months following radiography. Results: Of the 402 women who presented with cervical cancer, 288 (71.6%) underwent chest radiography and had ≥6 months of follow-up. Early clinical stage (I/II) cervical cancer was present in 244/288 (84.7%) women, while 44 (15.3%) presented with advanced disease (stage III/IV). The chest radiograph of 1 woman, who had advanced pre-radiograph stage (IVA) disease, showed findings consistent with pulmonary metastases. Radiographs of 7 other women (4 early, 3 advanced stage disease) were suspicious for pulmonary metastases, which was confirmed by additional imaging in only 1 woman (with pre-radiograph advanced stage (IIIB) disease) and excluded in 6 cases, including all women with early stage disease. In none of the 288 women were thoracic skeletal metastases identified on imaging or during the 6 months of follow-up. Radiography was unremarkable in 76.4% of the study population and showed findings unrelated to the cervical carcinoma in 21.2%. Conclusion: Routine chest radiography was of no value for any of the early stage cervical cancer patients presenting at our tertiary center over a period of 7.7 years. PMID:26135733