Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.
Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos
2017-11-01
The study of phenomes, or phenomics, has been a central part of biology. The field of automatic image-based phenotype acquisition technologies has seen important advances in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images, and an in-depth analysis of image processing, its major issues, and the algorithms that are being used or emerging as useful for extracting data from images automatically. © The Author 2017. Published by Oxford University Press.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... exposure control, image processing and reconstruction programs, patient and equipment supports, component..., acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and... may include was revised by adding automatic exposure control, image processing and reconstruction...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
An automatic detection method for the boiler pipe header based on real-time image acquisition
NASA Astrophysics Data System (ADS)
Long, Yi; Liu, YunLong; Qin, Yongliang; Yang, XiangWei; Li, DengKe; Shen, DingJie
2017-06-01
An endoscope is generally used to inspect the inside of a thermal power plant's boiler pipe header. However, because the endoscope hose is operated manually, the length and angle of the inserted probe cannot be controlled, and the limited length of the endoscope wire leaves large blind spots. To solve these problems, an automatic detection method for the boiler pipe header based on real-time image acquisition and simulation comparison techniques was proposed. A magnetic crawler with permanent-magnet wheels carries the real-time image acquisition device, performing the crawling work and collecting live images of the scene. Using the position obtained from a positioning auxiliary device, the location of each real-time detection image is calibrated within a virtual 3-D model. By comparing the real-time detection images with computer-simulated images, defects or fallen-in foreign matter can be accurately located, making repair and clean-up convenient.
Ozhinsky, Eugene; Vigneron, Daniel B; Nelson, Sarah J
2011-04-01
To develop a technique for optimizing coverage of brain 3D 1H magnetic resonance spectroscopic imaging (MRSI) by automatic placement of outer-volume suppression (OVS) saturation bands (sat bands) and to compare the performance for point-resolved spectroscopic sequence (PRESS) MRSI protocols with manual and automatic placement of sat bands. The automated OVS procedure includes the acquisition of anatomic images from the head, obtaining brain and lipid tissue maps, calculating optimal sat band placement, and then using those optimized parameters during the MRSI acquisition. The data were analyzed to quantify brain coverage volume and data quality. 3D PRESS MRSI data were acquired from three healthy volunteers and 29 patients using protocols that included either manual or automatic sat band placement. On average, the automatic sat band placement allowed the acquisition of PRESS MRSI data from 2.7 times larger brain volumes than the conventional method while maintaining data quality. The technique developed helps solve two of the most significant problems with brain PRESS MRSI acquisitions: limited brain coverage and difficulty in prescription. This new method will facilitate routine clinical brain 3D MRSI exams and will be important for performing serial evaluation of response to therapy in patients with brain tumors and other neurological diseases. Copyright © 2011 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Perner, Petra
2017-03-01
Molecular image-based techniques are widely used in medicine to detect specific diseases. Look diagnosis is an important issue, and analysis of the eye in particular plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization by an automatic system can be a challenging new field for machine vision. Compared with iris recognition, iris diagnosis places much higher demands on image acquisition and interpretation of the iris. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illness, as well as in the preservation of optimum health. An automatic system would pave the way for much wider use of iris diagnosis, both for diagnosing illness and for individual health protection. In this paper, we describe our work toward an automatic iris diagnosis system. We describe image acquisition and its problems, and explain different approaches to image acquisition and preprocessing. We describe the image analysis method for detecting the iris, and present a meta-model for image interpretation. Based on this model, we show the many image analysis tasks involved, ranging from image-object feature analysis and spatial image analysis to color image analysis. We give our first results for recognition of the iris, describe how the pupil and unwanted lamp spots are detected, and explain how orange and blue spots in the iris are recognized and matched against the topological map of the iris. Finally, we give an outlook on further work.
Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer
NASA Astrophysics Data System (ADS)
Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin
2017-12-01
An image processing device for an inspection robot with adaptive polarization adjustment is proposed. The device comprises the inspection robot body, an image collecting mechanism, a polarizer, and an automatic polarizer actuating device. The image acquisition mechanism is mounted at the front of the robot body to collect image data of equipment in the substation. The polarizer is fixed on its automatic actuating device and installed in front of the image acquisition mechanism, so that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible-light camera. Simulation results show that the system resolves image blurring caused by glare, reflections, and shadow, allowing the robot to observe details of the running status of electrical equipment. Full coverage of the inspection robot's observation targets in the substation is achieved, which helps ensure safe operation of the substation equipment.
Bergmeister, Konstantin D; Gröger, Marion; Aman, Martin; Willensdorfer, Anna; Manzano-Szalai, Krisztina; Salminger, Stefan; Aszmann, Oskar C
2016-08-01
Skeletal muscle consists of different fiber types which adapt to exercise, aging, disease, or trauma. Here we present a protocol for fast staining, automatic acquisition, and quantification of fiber populations with ImageJ. Biceps and lumbrical muscles were harvested from Sprague-Dawley rats. Quadruple immunohistochemical staining was performed on single sections using antibodies against myosin heavy chains and secondary fluorescent antibodies. Slides were scanned automatically with a slide scanner. Manual and automatic analyses were performed and compared statistically. The protocol provided rapid and reliable staining for automated image acquisition. Analyses between manual and automatic data indicated Pearson correlation coefficients for biceps of 0.645-0.841 and 0.564-0.673 for lumbrical muscles. Relative fiber populations were accurate to a degree of ± 4%. This protocol provides a reliable tool for quantification of muscle fiber populations. Using freely available software, it decreases the required time to analyze whole muscle sections. Muscle Nerve 54: 292-299, 2016. © 2016 Wiley Periodicals, Inc.
Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone
NASA Astrophysics Data System (ADS)
Xia, G.; Hu, C.
2018-04-01
The digitization of cultural heritage based on terrestrial laser scanning has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from a complete point cloud and high-resolution images requires matching the images to the point cloud, acquiring homonymous feature points, registering the data, and so on. However, establishing the one-to-one correspondence between an image and its corresponding point cloud currently depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of each image to its corresponding point cloud, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. First, we develop an Android app that takes pictures and records the related classification information. Second, all images are grouped automatically using the recorded information. Third, a matching algorithm matches the global and local images. Using the one-to-one correspondence between the global image and the point cloud reflection-intensity image, automatic matching between an image and its corresponding LiDAR point cloud is realized. Finally, the mapping relationships among global images, local images, and intensity images are established from homonymous feature points. This allows us to build a data structure linking each global image, the local images within it, and the point cloud corresponding to each local image, and to carry out visual management and querying of the images.
Automatic motion correction of clinical shoulder MR images
NASA Astrophysics Data System (ADS)
Manduca, Armando; McGee, Kiaran P.; Welch, Edward B.; Felmlee, Joel P.; Ehman, Richard L.
1999-05-01
A technique for the automatic correction of motion artifacts in MR images was developed. The algorithm uses only the raw (complex) data from the MR scanner and requires no knowledge of the patient motion during the acquisition. It operates by searching over the space of possible patient motions and determining the motion which, when used to correct the image, optimizes the image quality. The performance of this algorithm was tested on coronal images of the rotator cuff in a series of 144 patients. A four-observer comparison of the autocorrected images with the uncorrected images demonstrated that motion artifacts were significantly reduced in 48% of the cases. The improvements in image quality were similar to those achieved with a previously reported navigator-echo-based adaptive motion correction. The results demonstrate that autocorrection is a practical technique for retrospectively reducing motion artifacts in a demanding clinical MRI application. It achieves performance comparable to a navigator-based correction technique, which is significant because autocorrection does not require an imaging sequence that has been modified to explicitly track motion during acquisition. The approach is flexible and should be readily extensible to other types of MR acquisitions that are corrupted by global motion.
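The search-and-score idea behind autocorrection can be illustrated with a toy version: corrupt half the k-space lines of a synthetic image with the linear phase ramp that a rigid in-plane shift produces, then search candidate shifts for the one whose corrected image minimizes an entropy focus criterion. This is a minimal sketch assuming a single 1-D translation and an entropy metric; the paper's actual algorithm handles richer motion models and quality measures.

```python
import numpy as np

def image_entropy(img):
    """Focus criterion: sharp, sparse images have low intensity entropy."""
    m = np.abs(img)
    p = m / m.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def autocorrect_shift(kspace, moved_rows, candidates):
    """Try each candidate x-shift for the rows acquired after motion and
    keep the one whose corrected image has the lowest entropy."""
    freq = np.fft.fftfreq(kspace.shape[1])
    best_e, best_s = np.inf, None
    for s in candidates:
        k = kspace.copy()
        # Undoing a shift by s pixels is a linear phase ramp along kx.
        k[moved_rows] *= np.exp(2j * np.pi * freq * s)
        e = image_entropy(np.fft.ifft2(k))
        if e < best_e:
            best_e, best_s = e, s
    return best_s
```

The exact correction cancels the corrupting phase, recovering the sharp image, which scores lowest; wrong candidates leave ghosting and higher entropy.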
Automatic retinal interest evaluation system (ARIES).
Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang
2014-01-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.
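The abstract does not spell out the HIQM features themselves. As an illustration of the kind of no-reference measure such a quality-assessment system can build on, the variance of the image Laplacian is a standard sharpness score — a hypothetical stand-in here, not ARIES's actual feature set:

```python
import numpy as np

def laplacian_variance(img):
    """No-reference sharpness score: blur weakens second derivatives,
    so blurred images get a low Laplacian variance."""
    u = img.astype(float)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return lap.var()

def blur(img, n=5):
    """Crude smoothing by repeated 4-neighbour averaging (for the demo)."""
    u = img.astype(float)
    for _ in range(n):
        u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
             np.roll(u, 1, 1) + np.roll(u, -1, 1) + u) / 5.0
    return u
```

A quality-control gate would then threshold such scores, flagging low-scoring acquisitions for re-capture at acquisition time.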
Adaptive hyperspectral imager: design, modeling, and control
NASA Astrophysics Data System (ADS)
McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine
2015-08-01
An adaptive hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution that is independent of spectral resolution. The system makes it possible to define a variety of acquisition schemes, in particular near-snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach, achieving near-snapshot spectral acquisitions without resorting to computationally heavy post-processing or cumbersome calibration.
NASA Astrophysics Data System (ADS)
Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.
2018-05-01
Road traffic needs to be monitored, and counting the number of vehicles passing is necessary; this is especially important for highway transportation management. It is therefore desirable to develop a system that counts vehicles automatically, and video processing methods make this possible. This research developed a vehicle counting system for toll roads. The system comprises video acquisition, frame extraction, and image processing of each frame. Video was acquired in the morning, at noon, in the afternoon, and in the evening. The system uses background subtraction and morphological operations on grayscale images for vehicle counting. The best results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy, 21.43%, occurred in the evening. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
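A minimal numpy sketch of the grayscale background-subtraction-and-morphology pipeline the abstract describes. The threshold, 3x3 structuring element, and minimum blob size are illustrative assumptions, not the paper's values:

```python
import numpy as np

def _morph(mask, op):
    """3x3 morphology: op=np.logical_and erodes, op=np.logical_or dilates."""
    p = np.pad(mask, 1, constant_values=False)
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            win = p[1 + dy:1 + dy + mask.shape[0],
                    1 + dx:1 + dx + mask.shape[1]]
            out = op(out, win)
    return out

def _count_blobs(mask, min_area):
    """Count 4-connected foreground components of at least min_area pixels."""
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, area = [(sy, sx)], 0
                seen[sy, sx] = True
                while stack:  # flood fill one component
                    y, x = stack.pop()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

def count_vehicles(empty_frames, frame, thresh=40, min_area=20):
    """Background subtraction + morphological opening, then blob counting."""
    background = np.median(np.stack(empty_frames).astype(float), axis=0)
    mask = np.abs(frame.astype(float) - background) > thresh
    mask = _morph(mask, np.logical_and)   # erode: drop isolated noise pixels
    mask = _morph(mask, np.logical_or)    # dilate: restore blob size
    return _count_blobs(mask, min_area)
```

The illumination sensitivity reported in the abstract shows up here directly: a fixed `thresh` against a static background model fails when overall brightness changes between background frames and the analysed frame.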
Automation of the acquisition of sky flat-field images at dusk
NASA Astrophysics Data System (ADS)
Areal, M. B.; Acosta, J. A.; Buccino, A. P.; Perna, P.; Areso, O.; Mauas, P.
2016-08-01
Since 2009, the Instituto de Astronomia y Fisica del Espacio has been developing an optical observatory aimed mainly at the detection of extrasolar planets and the monitoring of stellar activity. In this framework, the telescopes Meade LX200 16 Horacio Ghielmetti, at the Complejo Astronomico El Leoncito, and MATE (Magnetic Activity and Transiting Exoplanets), at the Estación de Altura of the Observatorio Astronomico Felix Aguilar, were assembled. Both telescopes can operate automatically throughout the night, which generates a massive volume of data. It therefore becomes essential to automate the acquisition and analysis of both the regular observations and the calibration images, in particular the flat fields. In this work, a method to simplify and automate the acquisition of these images was developed. The method uses sky luminosity values registered by a weather station located next to the observation site.
[Advances in automatic detection technology for images of thin blood film of malaria parasite].
Juan-Sheng, Zhang; Di-Qiang, Zhang; Wei, Wang; Xiao-Guang, Wei; Zeng-Guo, Wang
2017-05-05
This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria in microscope images of thin blood film smears. After introducing the background and significance of automatic detection technology, the existing detection technologies are summarized and divided into several steps, including image acquisition, pre-processing, morphological analysis, segmentation, counting, and pattern classification. The principles and implementation methods of each step are then given in detail. In addition, extending automatic detection technology to thick blood film smears is put forward as a question worthy of study, and a perspective on future work toward automated microscopy diagnosis of malaria is provided.
On the reproducibility of expert-operated and robotic ultrasound acquisitions.
Kojcev, Risto; Khakzar, Ashkan; Fuerst, Bernhard; Zettinig, Oliver; Fahkry, Carole; DeJong, Robert; Richmon, Jeremy; Taylor, Russell; Sinibaldi, Edoardo; Navab, Nassir
2017-06-01
We present an evaluation of the reproducibility of measurements performed with robotic ultrasound imaging in comparison with expert-operated sonography. Robotic imaging for interventional procedures may be a valuable contribution, but requires reproducibility for acceptance in clinical routine. We study this by comparing repeated measurements based on robotic and expert-operated ultrasound imaging. Robotic ultrasound acquisition is performed in three steps under user guidance. First, the patient is observed using a 3D camera on the robot end effector, and the user selects the region of interest; this allows automatic planning of the robot trajectory. Next, the robot executes a sweeping motion following the planned trajectory, during which the ultrasound images and tracking data are recorded; as the robot is compliant, deviations from the path are possible, for instance due to patient motion. Finally, the ultrasound slices are compounded to create a volume. Repeated acquisitions can be performed automatically by comparing the previous and current patient surface. After repeated image acquisitions, the measurements based on the robotic and expert acquisitions are compared. Within our case series, the expert measured the anterior-posterior, longitudinal, and transversal lengths of both the left and right thyroid lobes three times on each of four healthy volunteers, providing 72 measurements. The same procedure was then performed using the robotic system, for a cumulative total of 144 clinically relevant measurements. Our results clearly indicate that robotic ultrasound enables more repeatable measurements. A robotic ultrasound platform leads to more reproducible data, which is of crucial importance for planning and executing interventions.
Kim, Yoon-Chul; Nielsen, Jon-Fredrik; Nayak, Krishna S
2008-01-01
The aim was to develop a method that automatically corrects ghosting artifacts due to echo misalignment in interleaved gradient-echo echo-planar imaging (EPI) in arbitrary oblique or double-oblique scan planes. An automatic ghosting correction technique was developed based on an alternating EPI acquisition and the phased-array ghost elimination (PAGE) reconstruction method. The direction of k-space traversal is alternated at every temporal frame, enabling lower temporal-resolution, ghost-free coil sensitivity maps to be estimated dynamically. The proposed method was compared with conventional one-dimensional (1D) phase correction in axial, oblique, and double-oblique scan planes in phantom and cardiac in vivo studies, and was also used in conjunction with two-fold acceleration. With nonaccelerated acquisition, the proposed method provided excellent suppression of ghosting artifacts in all scan planes and was substantially more effective than conventional 1D phase correction in oblique and double-oblique scan planes. The feasibility of real-time reconstruction using the proposed technique was demonstrated in a scan protocol with 3.1-mm spatial and 60-msec temporal resolution. The proposed technique with nonaccelerated acquisition provides excellent ghost suppression in arbitrary scan orientations without a calibration scan, and can be useful for real-time interactive imaging, in which scan planes are frequently changed with arbitrary oblique orientations.
Planning applications in image analysis
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1994-01-01
We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.
Secure Video Surveillance System Acquisition Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-12-04
The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window displaying up to 2.5 hours of video for review. It collects images from the cameras at a rate of one image per second and automatically deletes images older than 3 hours. The code operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.
Learning the manifold of quality ultrasound acquisition.
El-Zehiry, Noha; Yan, Michelle; Good, Sara; Fang, Tong; Zhou, S Kevin; Grady, Leo
2013-01-01
Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have a poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and the determination of "good quality" images is more difficult to assess. Surprisingly, we show that the set of acquisition parameters which produce images that are favored by clinicians comprise a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.
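If clinician-preferred settings really comprise a 1-D manifold, autotuning reduces to maximising a scalar quality score along a single coordinate, for which a classic derivative-free method such as golden-section search suffices. A generic sketch, where each evaluation would correspond to acquiring and scoring one image (the quality function in the test is a hypothetical stand-in, not a real image-quality model):

```python
import numpy as np

def golden_section_max(quality, lo, hi, tol=1e-4):
    """Maximise a unimodal quality(t) on [lo, hi] by golden-section search,
    reusing one interior evaluation per iteration."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = quality(c), quality(d)
    while b - a > tol:
        if fc < fd:           # the maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = quality(d)
        else:                 # the maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = quality(c)
    return 0.5 * (a + b)
```

In practice the interesting (and hard) part, per the abstract, is learning the manifold parametrization and the quality model from clinician-favored images; the search itself is the easy final step.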
Uav-Based Automatic Tree Growth Measurement for Biomass Estimation
NASA Astrophysics Data System (ADS)
Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.
2016-06-01
Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, with the in-situ measurements synchronized with the UAV data acquisition; this pairing aimed at investigating the optimal flight conditions and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points: stem positions and tree tops are identified automatically in a cross-section, followed by calculation of tree heights. The automatically derived height values are compared to the manual reference measurements, allowing evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
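As a toy version of the height-extraction step, trees can be separated by binning cloud points into an x-y grid and taking the top-to-ground z-range per occupied cell. This is a simplification of the paper's cross-section stem and tree-top detection, and the one-tree-per-cell assumption and cell size are illustrative:

```python
import numpy as np

def tree_heights(points, cell=1.0):
    """Estimate per-tree heights from an (N, 3) point cloud.

    Points are binned into an x-y grid of side `cell`; within each occupied
    cell the ground is taken as the lowest point and the tree top as the
    highest, so height = max(z) - min(z).
    """
    xy = np.floor(points[:, :2] / cell).astype(int)
    extremes = {}
    for key, z in zip(map(tuple, xy), points[:, 2]):
        zmin, zmax = extremes.get(key, (np.inf, -np.inf))
        extremes[key] = (min(zmin, z), max(zmax, z))
    return {key: zmax - zmin for key, (zmin, zmax) in extremes.items()}
```

Weekly growth curves would then come from differencing these per-cell heights across flights, mirroring the manual reference measurements.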
Passenger baggage object database (PBOD)
NASA Astrophysics Data System (ADS)
Gittinger, Jaxon M.; Suknot, April N.; Jimenez, Edward S.; Spaulding, Terry W.; Wenrich, Steve A.
2018-04-01
Detection of anomalies of interest in x-ray images is an ever-evolving problem that requires the rapid development of automatic detection algorithms. Automatic detection algorithms are developed using machine learning techniques, which would require developers to obtain the x-ray machine that was used to create the images being trained on, and compile all associated metadata for those images by hand. The Passenger Baggage Object Database (PBOD) and data acquisition application were designed and developed for acquiring and persisting 2-D and 3-D x-ray image data and associated metadata. PBOD was specifically created to capture simulated airline passenger "stream of commerce" luggage data, but could be applied to other areas of x-ray imaging to utilize machine-learning methods.
NASA Astrophysics Data System (ADS)
Samaille, T.; Colliot, O.; Cuingnet, R.; Jouvent, E.; Chabriat, H.; Dormont, D.; Chupin, M.
2012-02-01
White matter hyperintensities (WMH), commonly seen on FLAIR images in elderly people, are a risk factor for dementia onset and have been associated with motor and cognitive deficits. We present here a method to fully automatically segment WMH from T1 and FLAIR images. Iterative steps of nonlinear diffusion followed by watershed segmentation were applied to the FLAIR images until convergence. The diffusivity function and its associated contrast parameter were carefully designed for WMH segmentation, resulting in piecewise-constant images with enhanced contrast between lesions and surrounding tissues. Selection of WMH areas was based on two characteristics: 1) an automatically computed intensity threshold, and 2) location of the areas mainly within white matter. False-positive areas were finally removed based on their proximity to the cerebrospinal fluid/grey matter interface. Evaluation was performed on 67 patients: 24 with amnestic mild cognitive impairment (MCI), from five different centres, and 43 with Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoaraiosis (CADASIL), acquired in a single centre. Results showed excellent volume agreement with manual delineation (Pearson coefficient: r=0.97, p<0.001) and substantial spatial correspondence (Similarity Index: 72% ± 16%). Our method appeared robust to acquisition differences across the centres as well as to pathological variability.
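The iterated non-linear diffusion step can be illustrated with the classic Perona-Malik scheme: smooth within regions while preserving strong edges such as lesion boundaries. The exponential diffusivity and contrast parameter `kappa` below are textbook choices, not necessarily the carefully designed ones the authors used, and boundaries are handled periodically for brevity:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Edge-preserving smoothing: diffuse strongly where gradients are
    small (noise), weakly where they are large (tissue boundaries)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic boundaries).
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # diffusivity g(|grad u|)
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Repeated, this drives the image toward the piecewise-constant form the abstract describes, after which watershed segmentation and intensity thresholding become far better conditioned.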
NASA Astrophysics Data System (ADS)
Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao xiang
2018-05-01
Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high-precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete workflow for automatic generation of a large number of GCPs from SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high-resolution spotlight images over the city of Oulu, Finland, and a test site in Berlin, Germany.
Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging
NASA Astrophysics Data System (ADS)
Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.
2010-04-01
The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the Hyperspec™ particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.
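The downwelling spectral channel mentioned above tracks solar illumination during acquisition. A minimal sketch of how such a reference channel could normalize raw scene spectra into apparent reflectance (the function name and array conventions are assumptions, not part of the described system):

```python
import numpy as np

def apparent_reflectance(scene_cube, downwelling):
    """Divide each pixel spectrum (rows x cols x bands) by the
    simultaneously recorded downwelling spectrum (bands,), so scene
    spectra become comparable across changing solar illumination."""
    d = np.clip(np.asarray(downwelling, dtype=float), 1e-12, None)
    return np.asarray(scene_cube, dtype=float) / d
```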
Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices
NASA Astrophysics Data System (ADS)
Sentana, I. W. B.; Jawas, N.; Asri, S. A.
2018-01-01
Lung nodule is an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape brighter than the surrounding lung tissue. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Data acquisition takes the images slice by slice from the original *.dicom format, and each slice is converted into *.tif format. Binarization, using the Otsu algorithm, then separates the background and foreground of each slice. After removing the background, the next step is to segment the lung area only, so that nodules can be localized more easily. The Otsu algorithm is used again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application has succeeded in detecting nearly round nodules above a certain size threshold. The results show drawbacks in the thresholding of nodule size and shape that need to be addressed in the next stage of the research. The algorithm also cannot detect nodules attached to the lung wall or channels, since the search relies only on colour differences.
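The Otsu binarization and blob-detection steps can be illustrated with a small sketch (a hand-rolled Otsu threshold plus connected-component labelling; the size filter stands in for the SVM classification stage, which is not reproduced here):

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximising the between-class
    variance of the grey-level histogram."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # background class weight
    w1 = 1.0 - w0                     # foreground class weight
    m0 = np.cumsum(p * centers)       # cumulative background mean * w0
    mu_total = m0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - m0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

def detect_blobs(img, min_size=4):
    """Binarise with Otsu, label connected bright blobs, and keep
    those above a size threshold (a stand-in for the SVM stage)."""
    mask = img > otsu_threshold(img)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return [i + 1 for i, s in enumerate(sizes) if s >= min_size]
```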
Experimental teaching and training system based on volume holographic storage
NASA Astrophysics Data System (ADS)
Jiang, Zhuqing; Wang, Zhe; Sun, Chan; Cui, Yutong; Wan, Yuhong; Zou, Rufei
2017-08-01
The experiment of volume holographic storage for teaching and for training the practical ability of senior students in Applied Physics is introduced. Through this experiment, students learn to use advanced optoelectronic devices and automatic control techniques, and deepen the theoretical knowledge of optical information processing and photonics that they have studied in their courses. In the experiment, multiplexed holographic recording and readout are based on the Bragg selectivity of a volume holographic grating, in which the Bragg diffraction angle depends on the grating-recording angle. By using different interference angles between the reference and object beams, holograms can be recorded in a photorefractive crystal, and the object images can then be read out from these holograms via angular addressing using the original reference beam. In this system, experimental data acquisition and control of the optoelectronic devices, such as shutter on/off, loading images into the SLM and image acquisition by a CCD sensor, are automatically realized using LabVIEW programming.
Control method for video guidance sensor system
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)
2005-01-01
A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
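The mode transitions described in the patent abstract can be modelled as a small state machine. The sketch below mirrors the stated rules; the `start_acquisition` entry point is an assumption, since the abstract does not say how the acquisition mode is entered:

```python
from enum import Enum, auto

class Mode(Enum):
    RESET = auto()
    DIAGNOSTIC = auto()
    STANDBY = auto()
    ACQUISITION = auto()
    TRACKING = auto()
    SPOT = auto()

class VideoGuidanceSensor:
    """Toy model of the mode logic: automatic RESET -> STANDBY after
    integrity checks, DIAGNOSTIC -> RESET after diagnostics, reset and
    diagnostic commands accepted only in STANDBY, and automatic
    ACQUISITION -> TRACKING once an acceptable target is found."""

    def __init__(self):
        self.mode = Mode.RESET

    def integrity_checks_passed(self):
        if self.mode is Mode.RESET:
            self.mode = Mode.STANDBY  # automatic transition

    def diagnostics_done(self):
        if self.mode is Mode.DIAGNOSTIC:
            self.mode = Mode.RESET  # automatic transition

    def command(self, target):
        # reset/diagnostic commands are accepted only from STANDBY
        if self.mode is not Mode.STANDBY or target not in (Mode.RESET, Mode.DIAGNOSTIC):
            raise ValueError("command rejected in mode " + self.mode.name)
        self.mode = target

    def start_acquisition(self):
        # assumed entry point; not specified in the abstract
        if self.mode is Mode.STANDBY:
            self.mode = Mode.ACQUISITION

    def target_found(self):
        if self.mode is Mode.ACQUISITION:
            self.mode = Mode.TRACKING  # automatic transition
```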
Koprowski, Robert
2014-07-04
Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration, associated with selecting appropriate algorithm parameters and accounting for the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, is shown here through several examples. The analysed images were obtained from a variety of medical devices, such as thermal imaging and tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained differs. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%.
The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (7) measuring the anterior eye chamber - there is an error of 20%; (8) measuring the tooth enamel thickness - error of 15%; (9) evaluating the mechanical properties of the cornea during pressure measurement - error of 47%. The paper presents vital, selected issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of acquisition of images on the problems arising in their analysis has been shown on selected examples. It has also been indicated to which elements of image analysis and processing special attention should be paid in their design.
A study of quantification of aortic compliance in mice using radial acquisition phase contrast MRI
NASA Astrophysics Data System (ADS)
Zhao, Xuandong
Spatiotemporal changes in blood flow velocity measured using phase contrast Magnetic Resonance Imaging (MRI) can be used to quantify Pulse Wave Velocity (PWV) and Wall Shear Stress (WSS), well-known indices of vessel compliance. A study was conducted to measure the PWV in the aortic arch of young healthy children using conventional phase contrast MRI and a post-processing algorithm that automatically tracks the peak velocity in phase contrast images. It is shown that the PWV calculated using peak velocity-time data has less variability than that using mean velocity and flow. Conventional MR data acquisition techniques lack both the spatial and temporal resolution needed to accurately calculate PWV and WSS in in vivo studies using transgenic animal models of arterial diseases. Radial k-space acquisition can improve both spatial and temporal resolution. A major part of this thesis was devoted to developing technology for Radial Phase Contrast Magnetic Resonance (RPCMR) cine imaging on a 7 Tesla animal scanner. A pulse sequence with asymmetric radial k-space acquisition was designed and implemented. Software developed to reconstruct the RPCMR images includes gridding, density compensation and centering of k-space, which corrects the image ghosting introduced by hardware response time. Image processing software was developed to automatically segment the vessel lumen and correct for the phase offset due to eddy currents. Finally, in vivo and ex vivo aortic compliance measurements were conducted in a well-established mouse model of atherosclerosis, the Apolipoprotein E-knockout (ApoE-KO) mouse. Using the RPCMR technique, a significantly higher PWV as well as a higher average WSS was detected in 9-month-old ApoE-KO mice compared to wild-type mice. A follow-up ex vivo test of tissue elasticity confirmed the impaired distensibility of the aortas of ApoE-KO mice.
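The time-to-peak PWV computation described above reduces to dividing the path length between two imaging planes by the transit time of the velocity peak. A minimal sketch (the function name and the time-to-peak variant are assumptions; foot-to-foot timing is another common choice):

```python
import numpy as np

def pulse_wave_velocity(t, v_proximal, v_distal, distance_m):
    """PWV from the transit time of the velocity peak between two
    imaging planes separated by distance_m along the vessel."""
    dt = t[np.argmax(v_distal)] - t[np.argmax(v_proximal)]
    if dt <= 0:
        raise ValueError("distal peak must arrive after the proximal peak")
    return distance_m / dt
```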
[Electronic Device for Retinal and Iris Imaging].
Drahanský, M; Kolář, R; Mňuk, T
This paper describes the design and construction of a new device for automatic capture of the eye retina and iris. The device has two possible uses: biometric (recognition of persons on the basis of their eye characteristics) or medical, as a supporting diagnostic device. Keywords: eye retina, eye iris, device, acquisition, image.
Automatic concrete cracks detection and mapping of terrestrial laser scan data
NASA Astrophysics Data System (ADS)
Rabah, Mostafa; Elhattab, Ahmed; Fayad, Atef
2013-12-01
Terrestrial laser scanning has become one of the standard technologies for object acquisition in surveying engineering. The high spatial resolution of imaging and the excellent capability of measuring 3D space by laser scanning bear a great potential if combined for both data acquisition and data compilation. Automatic crack detection from concrete surface images is very effective for nondestructive testing. The crack information can be used to decide the appropriate rehabilitation method to fix the cracked structures and prevent any catastrophic failure. In practice, cracks on concrete surfaces are traced manually for diagnosis. Automatic crack detection, on the other hand, is highly desirable for efficient and objective crack assessment. The current paper presents a method for automatic detection and mapping of concrete cracks from data obtained during a laser scanning survey. The method proceeds in three steps: shading correction in the original image, crack detection, and crack mapping and processing. The detected crack is defined in a pixel coordinate system. To remap the crack into the referred coordinate system, a reverse-engineering approach is used, based on a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.
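The shading-correction and crack-detection steps can be sketched as background estimation followed by thresholding of the residual. This uses a median-filter background model as a stand-in for the paper's exact shading correction, which is not specified in the abstract:

```python
import numpy as np
from scipy import ndimage

def shading_corrected_crack_mask(img, bg_size=15, k=3.0):
    """Estimate slowly varying shading with a large median filter,
    subtract it, and flag pixels darker than k standard deviations
    below the residual mean as crack candidates."""
    background = ndimage.median_filter(img, size=bg_size)
    residual = img - background
    return residual < residual.mean() - k * residual.std()
```

Thin dark cracks barely affect the median within a large window, so they survive the background subtraction as strong negative residuals.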
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results, but this may not be productive for difficult image analysis tasks, e.g. when high noise and shading levels are present or images vary in their characteristics due to different acquisition conditions; parameters then need to be tuned simultaneously. We propose a framework that improves standard image segmentation methods by feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing increasingly challenging image distortions, which enables us to compare standard image segmentation algorithms in feedback vs. feedforward implementations in terms of segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set, which is useful for both end-users and experts.
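The feedback idea can be illustrated with a toy loop that adapts a single global threshold until the segmentation satisfies an abstract ground truth, here an expected object count (a drastic simplification of the proposed framework):

```python
import numpy as np
from scipy import ndimage

def adapt_threshold(img, target_count, t0=0.5, step=0.05, max_iter=50):
    """Feedback loop: adjust a global threshold until the number of
    connected components matches an expected object count, used here
    as the abstract ground truth / plausibility criterion."""
    t = t0
    for _ in range(max_iter):
        _, n = ndimage.label(img > t)
        if n == target_count:
            return t
        # too many objects: raise the threshold; too few: lower it
        t = t + step if n > target_count else t - step
    return t
```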
48 CFR 252.239-7018 - Supply chain risk.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subsystem(s) of equipment, that is used in the automatic acquisition, storage, analysis, evaluation..., ancillary equipment (including imaging peripherals, input, output, and storage devices necessary for... of a computer, software, firmware and similar procedures, services (including support services), and...
Acoustic window planning for ultrasound acquisition.
Göbl, Rüdiger; Virga, Salvatore; Rackerseder, Julia; Frisch, Benjamin; Navab, Nassir; Hennersperger, Christoph
2017-06-01
Autonomous robotic ultrasound has recently gained considerable interest, especially for collaborative applications. Existing methods for acquisition trajectory planning are based solely on geometrical considerations, such as the pose of the transducer with respect to the patient surface. This work aims at establishing acoustic window planning to enable autonomous ultrasound acquisitions of anatomies with restricted acoustic windows, such as the liver or the heart. We propose a fully automatic approach for the planning of acquisition trajectories, which requires only information about the target region as well as existing tomographic imaging data, such as X-ray computed tomography. The framework integrates both geometrical and physics-based constraints to estimate the best ultrasound acquisition trajectories with respect to the available acoustic windows. We evaluate the developed method using virtual planning scenarios based on real patient data as well as real robotic ultrasound acquisitions on a tissue-mimicking phantom. The proposed method yields superior image quality in comparison with a naive planning approach, while maintaining the necessary coverage of the target. We demonstrate that, by taking image formation properties into account, acquisition planning methods can outperform naive planning. Furthermore, we show the need for such planning techniques, since naive approaches are not sufficient, as they do not take the expected image quality into account.
Image acquisition system for traffic monitoring applications
NASA Astrophysics Data System (ADS)
Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben
1995-03-01
An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/h. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote-site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote-site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. cars from trucks); the vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed.
The results of the field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of automatic classification of vehicle class and recording of vehicle numberplates with a success rate of around 90 percent over a 24-hour period.
Automatic analysis for neuron by confocal laser scanning microscope
NASA Astrophysics Data System (ADS)
Satou, Kouhei; Aoki, Yoshimitsu; Mataga, Nobuko; Hensh, Takao K.; Taki, Katuhiko
2005-12-01
The aim of this study is to develop a system that recognizes both the macro- and microscopic configurations of nerve cells and automatically performs the necessary 3-D measurements and functional classification of spines. The acquisition of 3-D images of cranial nerves has been enabled by the use of a confocal laser scanning microscope, although the highly accurate 3-D measurements of the microscopic structures of cranial nerves and their classification based on their configurations have not yet been accomplished. In this study, in order to obtain highly accurate measurements of the microscopic structures of cranial nerves, existing positions of spines were predicted by the 2-D image processing of tomographic images. Next, based on the positions that were predicted on the 2-D images, the positions and configurations of the spines were determined more accurately by 3-D image processing of the volume data. We report the successful construction of an automatic analysis system that uses a coarse-to-fine technique to analyze the microscopic structures of cranial nerves with high speed and accuracy by combining 2-D and 3-D image analyses.
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Takubo, Shoichiro; Kawasaki, Takeru; Abdullah, Indra Nugraha; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Arai, Kohei
2013-01-01
A web-based data acquisition and management system for GOSAT (Greenhouse gases Observing SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). DAS, written in Perl, acquires AMeDAS (Automated Meteorological Data Acquisition System) ground-level local meteorological data, GPS radiosonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data and GOSAT validation lidar data. DMS, written in PHP, displays satellite-pass dates and all acquired data. In this article, we briefly describe some improvements for higher performance and data usability. DAS now automatically calculates molecule number density profiles from the GPS radiosonde upper-air meteorological data and the U.S. standard atmosphere model. Predicted ozone density profile images above Saga city are also calculated, using the Meteorological Research Institute (MRI) chemistry-climate model version 2, for comparison with actual ozone DIAL data.
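The molecule number density calculation mentioned above follows from the ideal-gas law, n = p/(kB*T), applied to each level of a pressure/temperature profile. A minimal sketch, assuming SI units (the actual DAS implementation is not described in the abstract):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def number_density(p_pa, t_k):
    """Molecular number density n = p / (kB * T) for pressure in Pa
    and temperature in K, applied elementwise to profile arrays."""
    return np.asarray(p_pa, dtype=float) / (K_B * np.asarray(t_k, dtype=float))
```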
Application test of a Detection Method for the Enclosed Turbine Runner Chamber
NASA Astrophysics Data System (ADS)
Liu, Yunlong; Shen, Dingjie; Xie, Yi; Yang, Xiangwei; Long, Yi; Li, Wenbo
2017-06-01
At present, the existing testing methods for the key hidden metal components of the turbine runner chamber suffer from poor reliability, inaccurate locating and large detection blind spots. For inspection during downtime, without opening the cover of the hydropower turbine runner chamber, an automatic detection method based on real-time image acquisition and simulation comparison techniques is proposed. Using permanent-magnet wheels, a magnetic crawler carrying the real-time image acquisition device crawls over the inner surface of the enclosed chamber, and the image acquisition device collects scene images of the enclosed chamber in real time. Using the location obtained from the positioning auxiliary device, the position of the real-time detection image is calibrated in a virtual 3D model. By comparing the real-time detection images with the computer simulation images, defects or fallen foreign matter can be accurately located, facilitating repair and clean-up.
Tracking scanning laser ophthalmoscope (TSLO)
NASA Astrophysics Data System (ADS)
Hammer, Daniel X.; Ferguson, R. Daniel; Magill, John C.; White, Michael A.; Elsner, Ann E.; Webb, Robert H.
2003-07-01
The effectiveness of image stabilization with a retinal tracker in a multi-function, compact scanning laser ophthalmoscope (TSLO) was demonstrated in initial human subject tests. The retinal tracking system uses a confocal reflectometer with a closed loop optical servo system to lock onto features in the fundus. The system is modular to allow configuration for many research and clinical applications, including hyperspectral imaging, multifocal electroretinography (MFERG), perimetry, quantification of macular and photo-pigmentation, imaging of neovascularization and other subretinal structures (drusen, hyper-, and hypo-pigmentation), and endogenous fluorescence imaging. Optical hardware features include dual wavelength imaging and detection, integrated monochromator, higher-order motion control, and a stimulus source. The system software consists of a real-time feedback control algorithm and a user interface. Software enhancements include automatic bias correction, asymmetric feature tracking, image averaging, automatic track re-lock, and acquisition and logging of uncompressed images and video files. Normal adult subjects were tested without mydriasis to optimize the tracking instrumentation and to characterize imaging performance. The retinal tracking system achieves a bandwidth of greater than 1 kHz, which permits tracking at rates that greatly exceed the maximum rate of motion of the human eye. The TSLO stabilized images in all test subjects during ordinary saccades up to 500 deg/sec with an inter-frame accuracy better than 0.05 deg. Feature lock was maintained for minutes despite subject eye blinking. Successful frame averaging allowed image acquisition with decreased noise in low-light applications. The retinal tracking system significantly enhances the imaging capabilities of the scanning laser ophthalmoscope.
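The frame-averaging enhancement relies on tracking-stabilized registration: once frames are aligned, averaging N frames reduces uncorrelated noise by roughly the square root of N. A minimal sketch of the averaging step (the registration itself is assumed to be done by the tracker):

```python
import numpy as np

def average_frames(frames):
    """Pixelwise mean of N registered frames; uncorrelated noise is
    reduced by about sqrt(N) while the stabilized scene is preserved."""
    return np.mean(np.stack(frames), axis=0)
```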
Acquisition of multiple image stacks with a confocal laser scanning microscope
NASA Astrophysics Data System (ADS)
Zuschratter, Werner; Steffen, Thomas; Braun, Katharina; Herzog, Andreas; Michaelis, Bernd; Scheich, Henning
1998-06-01
Image acquisition at high magnification is inevitably correlated with a limited view over the entire tissue section. To overcome this limitation we designed software for multiple image-stack acquisition (3D-MISA) in confocal laser scanning microscopy (CLSM). The system consists of a 4-channel Leica CLSM equipped with a high-resolution z-scanning stage mounted on an xy-motorized stage. The 3D-MISA software is implemented in the microscope scanning software and uses the microscope settings for the movements of the xy-stage. It allows storage and recall of 70 xyz-positions and the automatic 3D scanning of image arrays between selected xyz-coordinates. The number of images within one array is limited only by the amount of disk space or memory available. Although for most applications the accuracy of the xy-scanning stage is sufficient for a precise alignment of tiled views, the software provides the possibility of an adjustable overlap between two image stacks by shifting the moving steps of the xy-scanning stage. After scanning, a tiled image gallery of the extended-focus images of each channel is displayed on a graphics monitor. In addition, a tiled image gallery of individual focal planes can be created. In summary, 3D-MISA allows 3D image acquisition of coherent regions in combination with high resolution of single images.
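The stage-position planning for a tiled mosaic with adjustable overlap can be sketched as follows (a generic grid computation; the 3D-MISA software's actual coordinate conventions are not specified in the abstract):

```python
def tile_positions(x0, y0, fov, nx, ny, overlap=0.1):
    """Stage coordinates for an nx-by-ny mosaic: the step between
    neighbouring stacks is the field of view reduced by the requested
    overlap fraction, so adjacent tiles share a seam for alignment."""
    step = fov * (1.0 - overlap)
    return [(x0 + i * step, y0 + j * step)
            for j in range(ny) for i in range(nx)]
```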
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems occurring due to drifting image acquisition conditions, background noise and high variation in colony features in experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB) that implements a supervised automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm taking segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation method. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
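The watershed-based separation of touching colonies can be approximated with a short sketch: distance-transform maxima serve as seeds, and each foreground pixel is assigned to its nearest seed. This Voronoi-style assignment is a simplification of AutoCellSeg's feedback-based watershed, not its actual implementation:

```python
import numpy as np
from scipy import ndimage

def split_touching_blobs(mask):
    """Seed-based splitting of touching blobs: local maxima of the
    distance transform become seeds, and every foreground pixel is
    assigned to its nearest seed (a Voronoi-style simplification of
    a true watershed on the distance map)."""
    dist = ndimage.distance_transform_edt(mask)
    peaks = (dist == ndimage.maximum_filter(dist, size=7)) & (dist > 2)
    markers, _ = ndimage.label(peaks)
    # index of the nearest seed pixel for every image position
    _, inds = ndimage.distance_transform_edt(markers == 0, return_indices=True)
    labels = markers[tuple(inds)]
    labels[~mask] = 0
    return labels
```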
Development of image and information management system for Korean standard brain
NASA Astrophysics Data System (ADS)
Chung, Soon Cheol; Choi, Do Young; Tack, Gye Rae; Sohn, Jin Hun
2004-04-01
The purpose of this study is to establish a reference for image acquisition towards a standard brain for the diverse Korean population, and to develop a database management system that stores and manages the acquired brain images and subjects' personal information. The 3D MP-RAGE (Magnetization Prepared Rapid Gradient Echo) technique, which offers excellent Signal to Noise Ratio (SNR) and Contrast to Noise Ratio (CNR) while reducing image acquisition time, was selected for anatomical image acquisition, and parameter values were determined for optimal image acquisition. Using these standards, image data of 121 young adults (in their early twenties) were obtained and stored in the system. The system was designed to acquire, store, and manage not only anatomical image data but also subjects' basic demographic factors, medical history, handedness inventory, state-trait anxiety inventory, A-type personality inventory, self-assessment depression inventory, mini-mental state examination, intelligence test, and personality test results via a survey questionnaire. Additionally, the system supports saving, inserting, deleting, searching, and printing of image data and subjects' personal information, with access to them as well as automatic connection setup via ODBC. This newly developed system may contribute substantially to the completion of a standard brain for the diverse Korean population, since it can store and manage their image data and personal information.
New developments in electron microscopy for serial image acquisition of neuronal profiles.
Kubota, Yoshiyuki
2015-02-01
Recent developments in electron microscopy largely automate the continuous acquisition of serial electron micrographs (EMGs), previously achieved by laborious manual serial ultrathin sectioning using an ultramicrotome and ultrastructural image capture process with transmission electron microscopy. The new systems cut thin sections and capture serial EMGs automatically, allowing for acquisition of large data sets in a reasonably short time. The new methods are focused ion beam/scanning electron microscopy, ultramicrotome/serial block-face scanning electron microscopy, automated tape-collection ultramicrotome/scanning electron microscopy and transmission electron microscope camera array. In this review, their positive and negative aspects are discussed. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Enhanced FIB-SEM systems for large-volume 3D imaging.
Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F
2017-05-13
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.
SPIM-fluid: open source light-sheet based platform for high-throughput imaging
Gualda, Emilio J.; Pereira, Hugo; Vale, Tiago; Estrada, Marta Falcão; Brito, Catarina; Moreno, Nuno
2015-01-01
Light sheet fluorescence microscopy has recently emerged as the technique of choice for obtaining high-quality 3D images of whole organisms/embryos with low photodamage and fast acquisition rates. Here we present an open-source unified implementation based on Arduino and Micromanager, capable of operating light sheet microscopes for automated 3D high-throughput imaging of three-dimensional cell cultures and model organisms such as zebrafish, oriented toward large-scale drug screening. PMID:26601007
Real-time inspection by submarine images
NASA Astrophysics Data System (ADS)
Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe
1996-10-01
A real-time application of computer vision concerning tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well above what current ROVs and safety requirements allow.
PACS quality control and automatic problem notifier
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.
1997-05-01
One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert to a film-based system if components fail. System failures range from slow deterioration of function, as seen in the loss of monitor luminance, to sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self-checks and cross-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent on a daily basis to all technologists acquiring PACS images, used as a cross-check that all studies are archived prior to being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition system and that contrast adjustment functions correctly. The results of selected quality control reports will be presented. The intranet documentation server will be described along with the automatic pager system. Monitor quality control reports will be described, and the cost of quality control will be quantified.
As PACS is accepted as a clinical tool, the same standards of quality control must be established as are expected for other equipment used in the diagnostic process.
Automatic image registration performance for two different CBCT systems; variation with imaging dose
NASA Astrophysics Data System (ADS)
Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.
2014-03-01
The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
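The failure criterion above (maximum target registration error on the surface of a 30 mm sphere) can be sketched numerically. This is a minimal illustration, not the authors' code; the random surface sampling and the small rigid residuals are assumptions:

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation matrix from per-axis angles in radians (Rz @ Ry @ Rx order)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def max_tre_on_sphere(residual_translation, residual_angles, radius=30.0, n=2000):
    """Maximum target registration error over sampled points on a sphere surface."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=(n, 3))
    pts = radius * v / np.linalg.norm(v, axis=1, keepdims=True)
    R = rotation_matrix(*residual_angles)
    moved = pts @ R.T + np.asarray(residual_translation)
    return np.max(np.linalg.norm(moved - pts, axis=1))

# A registration would be flagged as a failure when this exceeds 3.6 mm.
```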
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.
Automatic creation of three-dimensional avatars
NASA Astrophysics Data System (ADS)
Villa-Uriol, Maria-Cruz; Sainz, Miguel; Kuester, Falko; Bagherzadeh, Nader
2003-01-01
Highly accurate avatars of humans promise a new level of realism in engineering and entertainment applications, including areas such as computer-animated movies, computer game development, interactive virtual environments, and tele-presence. In order to provide high-quality avatars, new techniques for automatic acquisition and creation are required. A framework for the capture and construction of arbitrary avatars from image data is presented in this paper. Avatars are automatically reconstructed from multiple static images of a human subject by utilizing image information to reshape a synthetic three-dimensional articulated reference model. A pipeline is presented that combines a set of hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction and coloring, leading to avatars that can be animated and included in interactive environments. The presented system removes traditional constraints on the initial pose of the captured subject by using silhouette-based modification techniques in combination with a reference model. Results can be obtained in near real time with very limited user intervention.
Neumann, M; Breton, E; Cuvillon, L; Pan, L; Lorenz, C H; de Mathelin, M
2012-01-01
In this paper, an original workflow is presented for MR image plane alignment based on tracking in real-time MR images. A test device consisting of two resonant micro-coils and a passive marker is proposed for detection using image-based algorithms. Micro-coils allow for automated initialization of the object detection in dedicated low flip angle projection images; then the passive marker is tracked in clinical real-time MR images, with alternation between two oblique orthogonal image planes along the test device axis; in case the passive marker is lost in real-time images, the workflow is reinitialized. The proposed workflow was designed to minimize dedicated acquisition time to a single dedicated acquisition in the ideal case (no reinitialization required). First experiments have shown promising results for test-device tracking precision, with a mean position error of 0.79 mm and a mean orientation error of 0.24°.
Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)
NASA Astrophysics Data System (ADS)
Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram
2014-03-01
Recent studies have demonstrated that applying automated breast ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early-stage breast cancers which are occult in corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from automated breast ultrasound systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were first normalized. Subsequently, features were extracted in a multi-scale approach, and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.
NASA Astrophysics Data System (ADS)
Ham, S.; Oh, Y.; Choi, K.; Lee, I.
2018-05-01
Detecting unregistered buildings from aerial images is an important task for urban management, such as inspecting illegal buildings in greenbelt areas or updating GIS databases. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is very costly and time-consuming to detect unregistered buildings from UAV images, since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system which automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map, and compare the building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks such as monitoring illegal buildings or illegal land-use change.
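The comparison step (the network's building probability map versus rasterised GIS footprints) can be sketched as follows. The threshold and minimum-size values are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy import ndimage

def unregistered_buildings(prob_map, gis_mask, threshold=0.5, min_pixels=20):
    """Return a boolean mask of candidate unregistered buildings.

    prob_map : float array, per-pixel building probability from the network
    gis_mask : bool array, registered footprints rasterised from the GIS layer
    """
    candidates = (prob_map >= threshold) & ~gis_mask
    labels, n = ndimage.label(candidates)          # connected components
    keep = np.zeros_like(candidates)
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() >= min_pixels:          # drop speckle-sized detections
            keep |= component
    return keep
```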
Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin
2017-04-01
To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla, by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m² (30.02 ± 6.63 kg/m²) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59%, 2.21 ± 47.04%, and -43.05 ± 5.01% for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85%, -15.13 ± 11.04%, and -33.79 ± 20.38%. After signal correction, differences of -2.72 ± 6.60%, 34.02 ± 36.99%, and -2.23 ± 7.58% were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.
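The volume-agreement statistics used above (signed relative volume difference and a paired two-tailed t test) can be sketched directly; the sample numbers below are made up for illustration:

```python
import numpy as np
from scipy import stats

def relative_volume_difference(auto_vol, ref_vol):
    """Signed relative difference (%) between automatic and reference volumes."""
    auto_vol = np.asarray(auto_vol, dtype=float)
    ref_vol = np.asarray(ref_vol, dtype=float)
    return 100.0 * (auto_vol - ref_vol) / ref_vol

# Paired two-tailed t test between automatically and manually segmented volumes.
auto = np.array([1.02, 0.98, 1.05, 0.97, 1.01])   # litres, illustrative values
ref = np.array([1.00, 1.00, 1.03, 0.99, 1.00])
t_stat, p_value = stats.ttest_rel(auto, ref)      # a large p suggests no systematic bias
```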
Dehairs, M; Bosmans, H; Desmet, W; Marshall, N W
2017-07-31
Current automatic dose rate controls (ADRCs) of dynamic x-ray imaging systems adjust their acquisition parameters in response to changes in patient thickness in order to achieve a constant signal level in the image receptor. This work compares a 3-parameter (3P) ADRC technique to a more flexible 5-parameter (5P) method to meet this goal. A phantom composed of 15 composite poly(methyl) methacrylate (PMMA)/aluminium (Al) plates was imaged on a Siemens Artis Q dynamic system using standard 3P and 5P ADRC techniques. Phantom thickness covered a water equivalent thickness (WET) range of 2.5 cm to 37.5 cm. Acquisition parameter settings (tube potential, tube current, pulse length, copper filtration and focus size) and phantom entrance air kerma rate (EAKR) were recorded as the thickness changed. Signal difference to noise ratio (SDNR) was measured using a 0.3 mm iron insert centred in the PMMA stack, positioned at the system isocentre. SDNR was then multiplied by modulation transfer function (MTF) based correction factors for focal spot penumbral blurring and motion blurring, to give a spatial frequency dependent parameter, SDNR(u). These MTF correction factors were evaluated for an object motion of 25 mm s⁻¹ and at a spatial frequency of 1.4 mm⁻¹ in the object plane, typical for cardiac imaging. The figure of merit (FOM) was calculated as SDNR(u)²/EAKR for the two ADRC regimes. Using the 5P versus 3P technique showed clear improvements over all thicknesses. Averaged over clinically relevant adult WET values (20 cm-37.5 cm), EAKR was reduced by 13% and 27% for fluoroscopy and acquisition modes, respectively, while the SDNR(u)-based FOM increased by 16% and 34% for fluoroscopy and acquisition. In conclusion, the generalized FOM, taking into account the influence of focus size and object motion, showed benefit in terms of image quality and patient dose for the 5-parameter control over the 3-parameter method for the ADRC programming of dynamic x-ray imaging systems.
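The figure of merit defined above, SDNR(u)²/EAKR with MTF-based blur corrections, can be sketched numerically. The sinc-shaped motion-blur MTF, the 5 ms example pulse length, and treating the focal-spot MTF as a plain multiplier are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def motion_mtf(u, speed_mm_s, pulse_s):
    """MTF of linear motion blur: |sinc(a*u)| with blur extent a = speed * pulse length."""
    a = speed_mm_s * pulse_s
    return np.abs(np.sinc(a * u))   # numpy's sinc is sin(pi*x)/(pi*x)

def figure_of_merit(sdnr, eakr, u=1.4, speed_mm_s=25.0, pulse_s=0.005, focal_mtf=1.0):
    """Dose-normalised FOM: SDNR(u)^2 / EAKR."""
    sdnr_u = sdnr * focal_mtf * motion_mtf(u, speed_mm_s, pulse_s)
    return sdnr_u ** 2 / eakr
```

With these assumed values (25 mm/s motion, 5 ms pulse), the blur extent is 0.125 mm, which reduces SDNR at 1.4 mm⁻¹ by roughly 5%.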
Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan
2018-01-01
A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
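The superpixel-plus-Random-Forest segmentation idea can be sketched with scikit-learn. Here simple grid blocks with mean-colour features stand in for the superpixels and richer features of the actual pipeline, and the synthetic "plant" and "soil" colours are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def block_features(img, block=8):
    """Mean RGB per grid block; in the paper's pipeline superpixels play this role."""
    h, w, _ = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            feats.append(img[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0))
    return np.array(feats)

# Train on blocks labelled plant (greenish) vs background (brownish), synthetic data.
rng = np.random.default_rng(1)
plant = rng.uniform([0.1, 0.5, 0.1], [0.3, 0.9, 0.3], size=(32, 32, 3))
soil = rng.uniform([0.4, 0.3, 0.2], [0.7, 0.5, 0.4], size=(32, 32, 3))
X = np.vstack([block_features(plant), block_features(soil)])
y = np.array([1] * 16 + [0] * 16)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

Summing the blocks predicted as plant over time would then give the area-versus-time growth curves the system reports.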
Research of aerial imaging spectrometer data acquisition technology based on USB 3.0
NASA Astrophysics Data System (ADS)
Huang, Junze; Wang, Yueming; He, Daogang; Yu, Yanan
2016-11-01
With the emergence of UAV (unmanned aerial vehicle) platforms for aerial imaging spectrometers, the design of the spectrometer DAS (data acquisition system) faces new challenges. Owing to the limitations of the platform and other factors, an aerial imaging spectrometer DAS must be small, lightweight, low-cost, and universal. Traditional PCIe-based DAS solutions are expensive, bulky, non-universal, and do not support plug-and-play, so they can no longer meet the needs of promoting and applying aerial imaging spectrometers. To solve these problems, the new data acquisition scheme is based on a USB 3.0 interface. USB 3.0 offers small size, low cost, and universality thanks to its forward-looking technology. The theoretical USB 3.0 transfer rate is up to 5 Gbps, and the GPIF programming interface achieves an effective theoretical bandwidth of 3.2 Gbps, which fully meets the data transmission rate required by the aerial imaging spectrometer. The scheme uses the slave-FIFO asynchronous data transfer mode between the FPGA and the USB3014 interface chip. First, the system collects spectral data from the TLK2711 high-speed serial interface chip. The FPGA then buffers the data in DDR2 after ping-pong processing. Finally, the USB3014 interface chip transfers the data via automatic DMA and uploads it to a PC over a USB 3.0 cable. During manufacture of the aerial imaging spectrometer, the DAS can perform image acquisition, transmission, storage, and display, providing the necessary test and detection functions. Tests show that the system performs stably with no data loss; the average transmission speed and SSD write speed stabilize at 1.28 Gbps. Consequently, this data acquisition system meets the application requirements of aerial imaging spectrometers.
Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish
Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon
2015-01-01
Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086
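The few-projection idea can be illustrated with a toy iterative reconstruction. This sketch uses an unregularised SIRT-style least-squares iteration and a rotation-based projector, not the total-variation algorithm of the paper; the step size, iteration count, and phantom are assumptions:

```python
import numpy as np
from scipy import ndimage

def radon(img, angles_deg):
    """Parallel-beam forward projection by rotating the image and summing columns."""
    return np.array([ndimage.rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def sirt(sino, angles_deg, shape, n_iter=30, step=None):
    """Least-squares iteration recovering an image from sparse angular data."""
    if step is None:
        step = 1.0 / (len(angles_deg) * shape[0])   # rough 1/L stability bound
    recon = np.zeros(shape)
    for _ in range(n_iter):
        resid = sino - radon(recon, angles_deg)
        for a, r in zip(angles_deg, resid):
            # approximate adjoint: spread each residual row and rotate back
            recon += step * ndimage.rotate(np.tile(r, (shape[0], 1)), -a,
                                           reshape=False, order=1)
    return recon
```

Running this with, say, 16 angles instead of several hundred mimics the under-sampled acquisition regime the paper targets, without the TV prior that gives their method its image quality.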
Development of on line automatic separation device for apple and sleeve
NASA Astrophysics Data System (ADS)
Xin, Dengke; Ning, Duo; Wang, Kangle; Han, Yuhang
2018-04-01
Based on an STM32F407 single-chip microcomputer as the control core, an automatic separation device for fruit sleeves is designed. The design consists of hardware and software. The hardware includes a mechanical tooth separator and a three-degree-of-freedom manipulator, as well as an industrial control computer, an image data acquisition card, an end effector, and other components. The software system is based on the Visual C++ development environment and uses image processing and machine vision techniques to locate and recognize the fruit sleeve, driving the manipulator to grasp the foam net sleeve and transfer it to a designated position. Tests show that the automatic separation device responds quickly and achieves a high separation success rate; it can separate apples from their plastic foam sleeves, laying the foundation for further study and for application on enterprise production lines.
Image-based mobile service: automatic text extraction and translation
NASA Astrophysics Data System (ADS)
Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.
2010-01-01
We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.
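The binarization stage of such pipelines commonly uses Otsu's global threshold. A minimal sketch follows; it is a generic illustration, not necessarily the method used by this service:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))         # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
    return int(np.argmax(sigma_b))

# Text pixels can then be isolated with: binary = gray > otsu_threshold(gray)
```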
SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu, E-mail: hatt@univ-brest.fr
Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor-automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM), and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases, with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
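SPEQTACLE generalises the standard fuzzy C-means objective. The baseline FCM update alternation it builds on can be sketched in a plain Euclidean-norm version (not the Hilbertian-norm generalisation of the paper):

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: alternate centroid and membership updates.

    X: (n_samples, n_features). Returns (memberships U, centroids V).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                       # fuzzified weights
        V = (W.T @ X) / W.sum(axis=0)[:, None]           # weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))               # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return U, V
```

In a PET delineation setting, X would hold voxel intensities and U would give each voxel's fuzzy membership in tumor versus background classes.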
Norman, Berk; Pedoia, Valentina; Majumdar, Sharmila
2018-03-27
Purpose: To analyze how automatic segmentation translates in accuracy and precision to morphology and relaxometry compared with manual segmentation, and how it increases the speed and accuracy of the workflow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods: This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as on the automatic segmentations' ability to quantify, in a longitudinally repeatable way, relaxometry and morphology. Results: The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging between 0.770 and 0.878 in the cartilage compartments, and reaching 0.809 and 0.753 for the lateral meniscus and medial meniscus, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification of T1ρ and T2 values were 0.8233 and 0.8603, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion: U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterization and values that can be used in the monitoring and diagnosis of OA. © RSNA, 2018 Online supplemental material is available for this article.
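The Dice coefficient used above to score overlap between automatic and manual segmentations can be written directly:

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask, eps=1e-8):
    """Dice overlap 2|A ∩ B| / (|A| + |B|): 1.0 for perfect agreement, 0.0 for none."""
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(manual_mask, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```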
48 CFR 19.507 - Automatic dissolution of a small business set-aside.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Automatic dissolution of a small business set-aside. 19.507 Section 19.507 Federal Acquisition Regulations System FEDERAL... Automatic dissolution of a small business set-aside. (a) If a small business set-aside acquisition or...
Estimation of urinary stone composition by automated processing of CT images.
Chevreau, Grégoire; Troccaz, Jocelyne; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre
2009-10-01
The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. A standard acquisition and low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of the voxel density of homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid, 65%; struvite, 19%; cystine, 78%; carbapatite, 33.5%; calcium oxalate dihydrate, 57%; calcium oxalate monohydrate, 66.5%; brushite, 75%. Low-dose acquisition did not lower the performance (P < 0.05). This entirely automated approach eliminates manual intervention on the images by the radiologist while providing identical performance, including for low-dose protocols.
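At its simplest, a density-based composition estimate of this kind maps the mean attenuation of homogeneous voxel zones onto per-compound Hounsfield-unit ranges. The sketch below is a hypothetical illustration of that final lookup step; the HU thresholds are invented for the example and are not the calibrated values of the study.

```python
# Hypothetical HU ranges for a few stone types (illustrative only,
# NOT the calibrated thresholds used in the article).
HU_RANGES = [
    ("uric acid", 300, 600),
    ("struvite", 600, 900),
    ("calcium oxalate", 900, 1300),
    ("brushite", 1300, 2000),
]

def classify_stone(mean_hu):
    """Return the stone type whose (hypothetical) HU range contains mean_hu."""
    for name, lo, hi in HU_RANGES:
        if lo <= mean_hu < hi:
            return name
    return "undetermined"

print(classify_stone(450))   # uric acid
print(classify_stone(1100))  # calcium oxalate
```

In the study itself the decision additionally uses a dissimilarity index to restrict the analysis to homogeneous zones, which this toy lookup does not model.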
Motion artifact detection in four-dimensional computed tomography images
NASA Astrophysics Data System (ADS)
Bouilhol, G.; Ayadi, M.; Pinho, R.; Rit, S.; Sarrut, D.
2014-03-01
Motion artifacts appear in four-dimensional computed tomography (4DCT) images because of suboptimal acquisition parameters or patient breathing irregularities. Frequency of motion artifacts is high and they may introduce errors in radiation therapy treatment planning. Motion artifact detection can be useful for image quality assessment and 4D reconstruction improvement but manual detection in many images is a tedious process. We propose a novel method to evaluate the quality of 4DCT images by automatic detection of motion artifacts. The method was used to evaluate the impact of the optimization of acquisition parameters on image quality at our institute. 4DCT images of 114 lung cancer patients were analyzed. Acquisitions were performed with a rotation period of 0.5 seconds and a pitch of 0.1 (74 patients) or 0.081 (40 patients). A sensitivity of 0.70 and a specificity of 0.97 were observed. End-exhale phases were less prone to motion artifacts. In phases where motion speed is high, the number of detected artifacts was systematically reduced with a pitch of 0.081 instead of 0.1 and the mean reduction was 0.79. The increase of the number of patients with no artifact detected was statistically significant for the 10%, 70% and 80% respiratory phases, indicating a substantial image quality improvement.
Enhanced FIB-SEM systems for large-volume 3D imaging
Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F
2017-01-01
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 106 µm3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology. DOI: http://dx.doi.org/10.7554/eLife.25916.001 PMID:28500755
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in the training and testing steps of the algorithm. This directly affects recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easy way is by using a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database using a scene generator is inexpensive, and it allows creating many different scenarios quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is designing the database using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.
A cochlear implant phantom for evaluating CT acquisition parameters
NASA Astrophysics Data System (ADS)
Chakravorti, Srijata; Bussey, Brian J.; Zhao, Yiyuan; Dawant, Benoit M.; Labadie, Robert F.; Noble, Jack H.
2017-03-01
Cochlear Implants (CIs) are surgically implantable neural prosthetic devices used to treat profound hearing loss. Recent literature indicates that there is a correlation between the positioning of the electrode array within the cochlea and the ultimate hearing outcome of the patient, indicating that further studies aimed at better understanding the relationship between electrode position and outcomes could have significant implications for future surgical techniques, array design, and processor programming methods. Post-implantation high resolution CT imaging is the best modality for localizing electrodes and provides the resolution necessary to visually identify electrode position, albeit with an unknown degree of accuracy depending on image acquisition parameters, like the HU range of reconstruction, radiation dose, and resolution of the image. In this paper, we report on the development of a phantom that will both permit studying which CT acquisition parameters are best for accurately identifying electrode position and serve as a ground truth for evaluating how different electrode localization methods perform when using different CT scanners and acquisition parameters. We conclude based on our tests that image resolution and HU range of reconstruction strongly affect how accurately the true position of the electrode array can be found by both experts and automatic analysis techniques. The results presented in this paper demonstrate that our phantom is a versatile tool for assessing how CT acquisition parameters affect the localization of CIs.
Space infrared telescope pointing control system. Automated star pattern recognition
NASA Technical Reports Server (NTRS)
Powell, J. D.; Vanbezooijen, R. W. H.
1985-01-01
The Space Infrared Telescope Facility (SIRTF) is a free-flying spacecraft carrying a 1 meter class cryogenically cooled infrared telescope nearly three orders of magnitude more sensitive than the current generation of infrared telescopes. Three automatic target acquisition methods are presented that are based on the use of an imaging star tracker. The methods are distinguished by the number of guidestars that are required per target, the amount of computational capability necessary, and the time required for the complete acquisition process. Each method is described in detail.
Clancy, Neil T.; Stoyanov, Danail; James, David R. C.; Di Marco, Aimee; Sauvage, Vincent; Clark, James; Yang, Guang-Zhong; Elson, Daniel S.
2012-01-01
Sequential multispectral imaging is an acquisition technique that involves collecting images of a target at different wavelengths to compile a spectrum for each pixel. In surgical applications it suffers from low illumination levels and motion artefacts. A three-channel rigid endoscope system has been developed that allows simultaneous recording of stereoscopic and multispectral images. Salient features on the tissue surface may be tracked during the acquisition in the stereo cameras and, using multiple-camera triangulation techniques, this information is used to align the multispectral images automatically even though the tissue or camera is moving. This paper describes a detailed validation of the set-up in a controlled experiment before presenting the first in vivo use of the device in a porcine minimally invasive surgical procedure. Multispectral images of the large bowel were acquired and used to extract the relative concentration of haemoglobin in the tissue despite motion due to breathing during the acquisition. Using the stereoscopic information it was also possible to overlay the multispectral information on the reconstructed 3D surface. This experiment demonstrates the ability of this system to measure blood perfusion changes in the tissue during surgery and its potential use as a platform for other sequential imaging modalities. PMID:23082296
Automatic rectum limit detection by anatomical markers correlation.
Namías, R; D'Amato, J P; del Fresno, M; Vénere, M
2014-06-01
Several diseases take place at the end of the digestive system. Many of them can be diagnosed by means of different medical imaging modalities together with computer aided detection (CAD) systems. These CAD systems mainly focus on the complete segmentation of the digestive tube. However, the detection of limits between different sections could provide important information to these systems. In this paper we present an automatic method for detecting the rectum and sigmoid colon limit using a novel global curvature analysis over the centerline of the segmented digestive tube in different imaging modalities. The results are compared with the gold standard rectum upper limit through a validation scheme comprising two different anatomical markers: the third sacral vertebra and the average rectum length. Experimental results in both magnetic resonance imaging (MRI) and computed tomography colonography (CTC) acquisitions show the efficacy of the proposed strategy in automatic detection of rectum limits. The method is intended for application to the rectum segmentation in MRI for geometrical modeling and as contextual information source in virtual colonoscopies and CAD systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
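A global curvature analysis over a centerline can be approximated, in its simplest discrete form, by the turning angle at each interior point of the polyline; a pronounced peak then flags a candidate anatomical limit. The toy Python sketch below (a 2D polyline with invented coordinates) illustrates the idea, not the authors' implementation.

```python
import math

def turning_angles(centerline):
    """Discrete curvature proxy: absolute turning angle at each interior
    point of a polyline (e.g. the centerline of a segmented digestive tube)."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(centerline, centerline[1:], centerline[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        angles.append(min(d, 2 * math.pi - d))  # wrap to [0, pi]
    return angles

# A straight run followed by a sharp bend: the curvature peak marks the bend.
line = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
peaks = turning_angles(line)
print(peaks.index(max(peaks)))  # 2  (the bend at centerline point (3, 0))
```

On a real centerline one would smooth the angles and search for a globally significant peak rather than the raw maximum.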
Singh, Sarabjeet; Petrovic, Dean; Jamnik, Ethen; Aran, Shima; Pourjabbar, Sarvenaz; Kave, Maggie L; Bradley, Stephen E; Choy, Garry; Kalra, Mannudeep K
2014-01-01
To evaluate the effect of localizing radiograph on computed tomography (CT) radiation dose associated with automatic exposure control with a human cadaver and patient study. Institutional review board approved the study with a waiver of informed consent. Two chest CT image series with fixed tube current and combined longitudinal-angular automatic exposure control (AEC) were acquired in a human cadaver (64-year-old man) after each of the 8 combinations of localizer radiographs (anteroposterior [AP], AP lateral, AP-posteroanterior [PA], lateral AP, lateral PA, PA, PA-AP, and PA lateral). Applied effective milliampere second, volume CT dose index (CTDIvol) and image noise were recorded for all 24-image series. Volume CT dose indexes were also recorded in 20 patients undergoing chest and abdominal CT after PA and PA-lateral radiographs with the use of AEC. Data were analyzed using analysis of variance and linear correlation tests. With AEC, the CTDIvol fluctuates with the number and projection of localizer radiographs (P < 0.0001). Lowest CTDIvol values are seen when 2 orthogonal localizer radiographs are acquired, whereas highest values are seen when single PA or AP-PA projection localizer radiographs are acquired for planning (P < 0.0001). In 20 patients, CT scanning with AEC after acquisition of 2 orthogonal projection localizer radiographs was associated with significant reduction in radiation dose compared to PA projection radiographs alone (P < 0.0001). When scanning with AEC, acquisition of 2 orthogonal localizer radiographs is associated with lower CTDIvol compared to a single localizer radiograph.
An image registration based ultrasound probe calibration
NASA Astrophysics Data System (ADS)
Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram
2012-02-01
Reconstructed 3D ultrasound of prostate gland finds application in several medical areas such as image guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct 3D TRUS (Transrectal Ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, axis of probe does not align exactly with the designed axis of rotation resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid registration based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at angular separation of 180 degrees and registers corresponding image pairs to compute the deviation from designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions, and different misalignment settings from two ultrasound machines. Screenshots from 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).
Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images
Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga
2015-01-01
Background: In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method: We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results: We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods: Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods, such as [3] and [4], require user input prior to the automatic segmentation and are inherently different from our method. Conclusion: Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. PMID:25769273
Autoshaping of key pecking in pigeons with negative reinforcement1
Rachlin, Howard
1969-01-01
Pigeons exposed to gradually increasing intensities of pulsing electric shock pecked a key and thereby reduced the intensity of shock to zero for 2 min. Acquisition of key pecking was brought about through an autoshaping process in which periodic brief keylight presentations immediately preceded automatic reduction of the shock. On the occasions of such automatic reduction of shock preceding the first measured key peck, little or no orientation to the key was observed. Observations of pigeons with autoshaping of positive reinforcement also revealed little evidence of orientation toward the key. PMID:16811371
Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.
Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre
2016-04-01
The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per lesion and per image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.
Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto
2014-11-01
The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt-series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is taken for the acquisition of five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast acquisition of images [4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than that of the SS-CCD camera (4096×4096 pixels); however, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image can be determined with a low fitting error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the different defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. It took one second to correct an image position, and the total correction time was seven seconds, shorter by one order of magnitude than that using the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image.
We can obtain a tilt series of 61 images within 30 minutes. Accuracy and repeatability were good enough for practical use (Figure 1). We successfully reduced the total acquisition time of a tomography tilt series to half of what it was before. Fig. 1. Objective lens current change with tilt angle during acquisition of a tomography series (sample: a rat hepatocyte; thickness: 2 µm; magnification: 4k; acc. voltage: 2 MV). The tilt angle range is ±60 degrees with a 2-degree step angle. Two series were acquired in the same area. Both data sets were almost the same and the deviation was smaller than the minimum manual step, so the auto-focus worked well. We also developed a computer-aided three-dimensional (3D) visualization and analysis software for electron tomography, "HawkC", which can sectionalize the 3D data semi-automatically [5,6]. If this auto-acquisition system is used with the IMOD reconstruction software [7] and the HawkC software, we will be able to do on-line UHVEM tomography. The system would help pathology examination in the future. This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under a Grant-in-Aid for Scientific Research (Grant No. 23560024, 23560786), and SENTAN, Japan Science and Technology Agency, Japan. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
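A quasi-Gaussian fit of sharpness versus defocus amounts to fitting a parabola to the logarithm of the sharpness values and taking its vertex as the best focus. The sketch below uses a three-point parabolic peak as a simplification of the five-point fit described in the abstract; the sharpness curve is synthetic, not microscope data.

```python
import math

def best_focus(defocus, sharpness):
    """Estimate the best focus from equally spaced defocus samples by a
    parabolic fit to log-sharpness (i.e. a Gaussian fit to sharpness),
    using the three points around the sharpest sample."""
    logs = [math.log(s) for s in sharpness]
    i = max(range(len(logs)), key=logs.__getitem__)
    i = min(max(i, 1), len(logs) - 2)      # keep a 3-point window in range
    h = defocus[i + 1] - defocus[i]        # assumes equal spacing
    y0, y1, y2 = logs[i - 1], logs[i], logs[i + 1]
    denom = y0 + y2 - 2.0 * y1
    if denom == 0.0:                       # flat: no curvature to fit
        return defocus[i]
    return defocus[i] + 0.5 * h * (y0 - y2) / denom

# Synthetic Gaussian sharpness curve peaking at defocus = 0.3.
steps = [-2.0, -1.0, 0.0, 1.0, 2.0]
sharp = [math.exp(-(x - 0.3) ** 2) for x in steps]
print(round(best_focus(steps, sharp), 6))  # 0.3
```

Because log-sharpness of a Gaussian is exactly quadratic, the vertex recovers the true peak even though no sample sits at 0.3.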
Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.
Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos
2016-05-05
Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
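Of the classifiers tested, kNN is the simplest to sketch: each pixel is labelled by a majority vote among its nearest training samples in colour space. The toy example below (invented RGB values and plant/background labels) illustrates the approach in principle, not the authors' implementation or their trained models.

```python
def knn_label(pixel, samples, k=3):
    """Label an RGB pixel by majority vote among the k nearest training
    samples, using squared Euclidean distance in colour space."""
    ranked = sorted(samples, key=lambda s: sum((a - b) ** 2 for a, b in zip(pixel, s[0])))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy training set of (RGB, label) pairs -- illustrative values only.
train = [
    ((20, 120, 30), "plant"), ((30, 140, 25), "plant"), ((25, 130, 40), "plant"),
    ((180, 170, 160), "background"), ((200, 190, 185), "background"),
    ((170, 160, 150), "background"),
]
print(knn_label((28, 125, 35), train))    # plant
print(knn_label((190, 180, 170), train))  # background
```

Segmenting an image then reduces to applying `knn_label` to every pixel; in practice one would vectorise this and normalise the features, as the abstract notes.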
Catheter tracking using continuous radial MRI.
Rasche, V; Holz, D; Köhler, J; Proksa, R; Röschmann, P
1997-06-01
The guidance of minimally invasive procedures may become a very important future application of MRI. The guidance of interventions requires images of the anatomy as well as the information of the position of invasive devices used. This paper introduces continuous radial MRI for the simultaneous acquisition of the anatomic MR image and the position of one or more small RF-coils (mu-coils), which can be mounted on invasive devices such as catheters or biopsy needles. This approach allows the in-plane tracking of an invasive device without any prolongation of the overall acquisition time. The extension to three-dimensional position tracking is described. Phantom studies are presented demonstrating the capability of this technique for real-time automatic adjustment of the slice position to the current catheter position with a temporal resolution of 100 ms. Simultaneously the in-plane catheter position is depicted in the actually acquired MR image during continuous scanning.
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy raise the question of whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
Saha, Sajib Kumar; Fernando, Basura; Cuadros, Jorge; Xiao, Di; Kanagasingam, Yogesan
2018-04-27
Fundus images obtained in a telemedicine program are acquired at different sites and captured by people with varying levels of experience. This results in a relatively high percentage of images that are later marked as unreadable by graders. Unreadable images require a recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. To this end, we describe here an automated method for the assessment of image quality in the context of diabetic retinopathy. The method applies machine learning techniques to assess the image and assign it to an 'accept' or 'reject' category; a 'reject' image requires a recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts were employed to categorise these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method shows an accuracy of 100% in categorising 'accept' and 'reject' images, about 2% higher than a traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with a human grader. The method can easily be incorporated into the fundus image capturing system in the acquisition centre and can guide the photographer on whether a recapture is necessary.
Contour matching for a fish recognition and migration-monitoring system
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Schoenberger, Robert B.; Shiozawa, Dennis; Xu, Xiaoqian; Zhan, Pengcheng
2004-12-01
Fish migration is being monitored year round to provide valuable information for the study of behavioral responses of fish to environmental variations. However, currently all monitoring is done by human observers. An automatic fish recognition and migration monitoring system is more efficient and can provide more accurate data. Such a system includes automatic fish image acquisition, contour extraction, fish categorization, and data storage. Shape is a very important characteristic and shape analysis and shape matching are studied for fish recognition. Previous work focused on finding critical landmark points on fish shape using curvature function analysis. Fish recognition based on landmark points has shown satisfying results. However, the main difficulty of this approach is that landmark points sometimes cannot be located very accurately. Whole shape matching is used for fish recognition in this paper. Several shape descriptors, such as Fourier descriptors, polygon approximation and line segments, are tested. A power cepstrum technique has been developed in order to improve the categorization speed using contours represented in tangent space with normalized length. Design and integration including image acquisition, contour extraction and fish categorization are discussed in this paper. Fish categorization results based on shape analysis and shape matching are also included.
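Fourier descriptors, one of the shape representations tested, summarise a closed contour by the magnitudes of its low-order Fourier coefficients; normalising by the first harmonic makes the signature insensitive to translation, scale and starting point. The sketch below is a minimal illustration on a synthetic square contour, not the paper's fish-contour pipeline.

```python
import cmath
import math

def fourier_descriptors(contour, n=8):
    """Magnitudes of the first n Fourier coefficients of a closed contour
    (points as (x, y) tuples), normalised by the first harmonic so the
    signature is insensitive to translation, scale and rotation."""
    pts = [complex(x, y) for x, y in contour]
    N = len(pts)
    mags = []
    for k in range(1, n + 1):
        c = sum(p * cmath.exp(-2j * math.pi * k * t / N) for t, p in enumerate(pts)) / N
        mags.append(abs(c))
    base = mags[0] if mags[0] else 1.0
    return [m / base for m in mags]

def shape_distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def square(scale):
    """A square contour sampled with 4 points per side (16 points total)."""
    t = [i / 4 for i in range(4)]
    return ([(scale * u, 0.0) for u in t] + [(scale, scale * u) for u in t]
            + [(scale * (1 - u), scale) for u in t] + [(0.0, scale * (1 - u)) for u in t])

# The same shape at different scales yields (near-)identical descriptors.
d = shape_distance(fourier_descriptors(square(1.0)), fourier_descriptors(square(3.0)))
print(d < 1e-9)  # True
```

Matching then reduces to nearest-neighbour search over descriptor vectors, which is why such whole-shape signatures avoid the landmark-localisation problem noted in the abstract.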
SU-E-I-68: Practical Considerations On Implementation of the Image Gently Pediatric CT Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Adams, C; Lumby, C
Purpose: One limitation of the Image Gently pediatric CT protocols is the practical implementation of the recommended manual techniques. Inconsistency among technologists as a result of differing practice is a possibility, and there is an added risk of data-entry errors resulting in over- or underexposure. Automatic Exposure Control (AEC) features automatically reduce radiation for children, but they do not work efficiently for patients of very small or relatively large size. This study aims to implement the Image Gently pediatric CT protocols in a practical setting while maintaining the use of AEC features for pediatric patients of varying size. Methods: Anthropomorphological abdomen phantoms were scanned in a CT scanner using the Image Gently pediatric protocols, the AEC technique with a fixed adult baseline, and automatic protocols with various baselines. The baselines were adjusted according to patient age, weight and posterior-anterior thickness to match the Image Gently pediatric CT manual techniques. CTDIvol was recorded for each examination. Image noise was measured and recorded for image quality comparison. Clinical images were evaluated by pediatric radiologists. Results: By adjusting the vendor default baselines used in the automatic techniques, radiation dose and image quality can match those of the Image Gently manual techniques. In practice, this can be achieved by dividing pediatric patients into three major groups for technologist reference: infant, small child, and large child. Further division is possible but would increase the number of CT protocols. For each group, AEC can efficiently adjust acquisition techniques for children. This implementation significantly overcomes the limitation of the Image Gently manual techniques.
Conclusion: Considering the effectiveness in clinical practice, the Image Gently pediatric CT protocols can be implemented in accordance with AEC techniques, with adjusted baselines, to achieve the goal of providing the most appropriate radiation dose for pediatric patients of varying sizes.
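The three-group implementation described in the Results amounts to a simple lookup from patient size to an adjusted AEC baseline. The sketch below is purely illustrative: the thickness thresholds and baseline mAs values are hypothetical, not values from this study or any vendor protocol:

```python
# Hypothetical baseline table: posterior-anterior thickness range (cm)
# mapped to a reference-mAs baseline, chosen so that AEC output would
# track the Image Gently manual techniques. All numbers are invented
# for illustration only.
GROUPS = [
    ("infant",      0.0, 15.0, 40),
    ("small child", 15.0, 20.0, 70),
    ("large child", 20.0, float("inf"), 110),
]

def select_baseline(pa_thickness_cm):
    """Return (group name, baseline mAs) for a given PA thickness."""
    for name, lo, hi, baseline in GROUPS:
        if lo <= pa_thickness_cm < hi:
            return name, baseline
    raise ValueError("thickness out of range")
```

With a table like this the technologist only chooses the group (or it is derived from the scout measurement), and AEC modulates around the group baseline.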
SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, J; University of Sydney, Sydney, NSW; Sykes, J
Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and to explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators across a range of mAs settings, and MVCT images on a Tomotherapy Hi-ART accelerator with a range of pitch values. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5 mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had a larger range: 0.8°, 1.2° and 1.9° (1σ) for XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4 mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4 mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use within a normal range of acquisition settings.
NASA Technical Reports Server (NTRS)
Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)
2011-01-01
A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina
2016-12-01
Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including four retinal structures localisation, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for the macula region localisation in order to detect the maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and it presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
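The Circular Hough Transform used to localise circular retinal structures can be sketched in its single-radius form. This is an illustrative NumPy version on a synthetic edge map, not the proposed system's code:

```python
import numpy as np

def circular_hough(edge_points, radius, shape):
    """Accumulate centre votes for circles of a fixed radius.

    edge_points: iterable of (row, col) edge pixels.
    Each edge pixel votes for every candidate centre lying `radius`
    away from it; the accumulator's argmax is the most likely centre.
    """
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for r, c in edge_points:
        rows = np.round(r - radius * np.sin(thetas)).astype(int)
        cols = np.round(c - radius * np.cos(thetas)).astype(int)
        ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
        np.add.at(acc, (rows[ok], cols[ok]), 1)   # unbuffered accumulation
    return acc

# Synthetic edge map: a circle of radius 10 centred at (30, 40)
t = np.linspace(0, 2 * np.pi, 80, endpoint=False)
pts = np.c_[30 + 10 * np.sin(t), 40 + 10 * np.cos(t)].round().astype(int)
acc = circular_hough(pts, 10, (64, 64))
centre = np.unravel_index(acc.argmax(), acc.shape)
```

A full detector would sweep a range of radii (one accumulator slice per radius), which is how structures of unknown size, such as the optic disc, are found.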
DLA based compressed sensing for high resolution MR microscopy of neuronal tissue
NASA Astrophysics Data System (ADS)
Nguyen, Khieu-Van; Li, Jing-Rebecca; Radecki, Guillaume; Ciobanu, Luisa
2015-10-01
In this work we present the implementation of compressed sensing (CS) on a high field preclinical scanner (17.2 T) using an undersampling trajectory based on the diffusion limited aggregation (DLA) random growth model. When applied to a library of images this approach performs better than the traditional undersampling based on the polynomial probability density function. In addition, we show that the method is applicable to imaging live neuronal tissues, allowing significantly shorter acquisition times while maintaining the image quality necessary for identifying the majority of neurons via an automatic cell segmentation algorithm.
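A DLA undersampling pattern of the kind described can be grown with a simple random-walk sketch. Grid size and sample count below are arbitrary; this illustrates the growth model itself, not the authors' trajectory design:

```python
import random
random.seed(0)   # deterministic for the example

def dla_mask(n, n_samples):
    """Grow a diffusion-limited-aggregation (DLA) cluster on an n x n grid.

    The cluster is seeded at the grid centre (the k-space origin). Walkers
    are launched from random border cells and random-walk (with periodic
    wrap-around) until they touch the cluster, where they stick. The set
    of stuck cells is the undersampling mask.
    """
    cluster = {(n // 2, n // 2)}
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    while len(cluster) < n_samples:
        # launch a walker from a random point on the grid border
        x, y = random.randrange(n), random.choice([0, n - 1])
        if random.random() < 0.5:
            x, y = y, x
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in nbrs):
                cluster.add((x, y))
                break
            dx, dy = random.choice(nbrs)
            x, y = (x + dx) % n, (y + dy) % n
    return cluster

mask = dla_mask(32, 100)
```

Because growth starts at the centre, the mask is naturally denser near the k-space origin, the same variable-density property that polynomial probability masks are designed for, but with a connected, branching structure.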
Prospective motion correction of high-resolution magnetic resonance imaging data in children.
Brown, Timothy T; Kuperman, Joshua M; Erhart, Matthew; White, Nathan S; Roddey, J Cooper; Shankaranarayanan, Ajit; Han, Eric T; Rettmann, Dan; Dale, Anders M
2010-10-15
Motion artifacts pose significant problems for the acquisition and analysis of high-resolution magnetic resonance imaging data. These artifacts can be particularly severe when studying pediatric populations, where greater patient movement reduces the ability to clearly view and reliably measure anatomy. In this study, we tested the effectiveness of a new prospective motion correction technique, called PROMO, as applied to making neuroanatomical measures in typically developing school-age children. This method attempts to address the problem of motion at its source by keeping the measurement coordinate system fixed with respect to the subject throughout image acquisition. The technique also performs automatic rescanning of images that were acquired during intervals of particularly severe motion. Unlike many previous techniques, this approach adjusts for both in-plane and through-plane movement, greatly reducing image artifacts without the need for additional equipment. Results show that the use of PROMO notably enhances subjective image quality, reduces errors in Freesurfer cortical surface reconstructions, and significantly improves the subcortical volumetric segmentation of brain structures. Further applications of PROMO for clinical and cognitive neuroscience are discussed. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The degree of wear of the wheel-set tread is one of the main factors influencing the safety and stability of a running train. The geometrical parameters of interest mainly include flange thickness and flange height. A line-structured laser is projected onto the wheel tread surface, and the geometrical parameters are deduced from the resulting profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit; image acquisition is carried out in hardware-interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, then extracts the squares belonging to the target by fusing the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms, a considerable acceleration ratio compared with serial CPU computation, which greatly improves real-time image processing capacity. With the wheel set running at limited speed, the system, placed along the railway line, measures the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
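The clustering step can be illustrated with plain 1D k-means on pixel intensities. This is a CPU stand-in for the CUDA kernel (the STING fusion is not shown, and the data are synthetic); the per-pixel assignment step is independent for every pixel, which is exactly what makes the GPU version embarrassingly parallel:

```python
import numpy as np

def kmeans(x, k=2, n_iter=20):
    """Plain k-means on flattened pixel intensities.

    Centers are initialised at intensity quantiles; each iteration assigns
    every pixel to its nearest center (the parallelisable step) and then
    recomputes the centers as cluster means.
    """
    centers = np.quantile(x, np.linspace(0.05, 0.95, k))
    labels = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - centers).argmin(axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

# Synthetic profile row: dark background plus a bright laser stripe
x = np.r_[np.random.default_rng(1).normal(20, 3, 500),
          np.random.default_rng(2).normal(200, 5, 100)]
labels, centers = kmeans(x)
stripe = int(centers.argmax())   # cluster containing the laser stripe
```

The GPU kernel would compute `labels` with one thread per pixel; the grid-of-squares division in the paper then keeps only squares whose pixels fall predominantly in the stripe cluster.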
Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.
Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A
2011-04-01
Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
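The pooled-covariance Mahalanobis step at the heart of the artery/vein separation can be sketched as follows. The two toy features (contrast arrival time, peak enhancement) and all numbers are illustrative, not values from the paper:

```python
import numpy as np

def pooled_covariance(class_samples):
    """Pooled sample covariance across classes (samples in rows)."""
    dof = sum(len(s) - 1 for s in class_samples)
    pooled = sum((len(s) - 1) * np.cov(s, rowvar=False) for s in class_samples)
    return pooled / dof

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
# Toy per-ROI feature vectors: (arrival time [s], peak enhancement [a.u.])
artery = rng.normal([5.0, 100.0], [0.5, 5.0], size=(50, 2))
vein   = rng.normal([12.0, 80.0], [0.5, 5.0], size=(50, 2))
cov_inv = np.linalg.inv(pooled_covariance([artery, vein]))

voxel = np.array([5.4, 97.0])            # candidate voxel features
d_art = mahalanobis(voxel, artery.mean(axis=0), cov_inv)
d_vein = mahalanobis(voxel, vein.mean(axis=0), cov_inv)
label = "artery" if d_art < d_vein else "vein"
```

Each voxel is assigned to the class with the smaller Mahalanobis distance; pooling the covariance lets both classes share one noise model estimated from all ROIs.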
An automatic markerless registration method for neurosurgical robotics based on an optical camera.
Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi
2018-02-01
Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
The impact of OCR accuracy on automated cancer classification of pathology reports.
Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle
2012-01-01
To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classification from a human-amended version of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
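Character and word accuracy figures like those quoted above are conventionally computed from edit distance against a reference transcription. A minimal sketch (the report snippet is invented, and this is not the commercial OCR system's metric code):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (character strings or word lists)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def accuracy(reference, hypothesis):
    """1 - edit_distance / reference_length, floored at zero."""
    if not reference:
        return 1.0
    return max(0.0, 1.0 - levenshtein(reference, hypothesis) / len(reference))

ref = "invasive ductal carcinoma"
ocr = "invasive ductal carc1noma"      # one character misrecognised
char_acc = accuracy(ref, ocr)          # character-level accuracy
word_acc = accuracy(ref.split(), ocr.split())   # word-level accuracy
```

A single character error leaves character accuracy high (24/25 here) but already costs a whole word (2/3), which mirrors why word accuracy in the study sits below character accuracy.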
Automatic real time evaluation of red blood cell elasticity by optical tweezers
NASA Astrophysics Data System (ADS)
Moura, Diógenes S.; Silva, Diego C. N.; Williams, Ajoke J.; Bezerra, Marcos A. C.; Fontes, Adriana; de Araujo, Renato E.
2015-05-01
Optical tweezers have been used to trap, manipulate, and measure individual cell properties. In this work, we show that the association of a computer-controlled optical tweezers system with image processing techniques allows rapid and reproducible evaluation of cell deformability. In particular, the deformability of red blood cells (RBCs) plays a key role in the transport of oxygen through the blood microcirculation. The automatic measurement process consists of three steps: acquisition, segmentation of images, and measurement of the elasticity of the cells. An optical tweezers system was set up on an upright microscope equipped with a CCD camera and a motorized XYZ stage, computer controlled by a LabVIEW platform. On the optical tweezers setup, the deformation of the captured RBC was obtained by moving the motorized stage. The automatic real-time homemade system was evaluated by measuring RBC elasticity from normal donors and patients with sickle cell anemia. Approximately 150 erythrocytes were examined, and the elasticity values obtained with the developed system were compared to the values measured by two experts. With the automatic system, there was a significant time reduction (60×) in erythrocyte elasticity evaluation. The automated system can help to expand the applications of optical tweezers in hematology and hemotherapy.
Automatic segmentation of cerebral white matter hyperintensities using only 3D FLAIR images.
Simões, Rita; Mönninghoff, Christoph; Dlugaj, Martha; Weimar, Christian; Wanke, Isabel; van Cappellen van Walsum, Anne-Marie; Slump, Cornelis
2013-09-01
Magnetic Resonance (MR) white matter hyperintensities have been shown to predict an increased risk of developing cognitive decline. However, their actual role in the conversion to dementia is still not fully understood. Automatic segmentation methods can help in the screening and monitoring of Mild Cognitive Impairment patients who take part in large population-based studies. Most existing segmentation approaches use multimodal MR images. However, multiple acquisitions represent a limitation in terms of both patient comfort and computational complexity of the algorithms. In this work, we propose an automatic lesion segmentation method that uses only three-dimensional fluid-attenuation inversion recovery (FLAIR) images. We use a modified context-sensitive Gaussian mixture model to determine voxel class probabilities, followed by correction of FLAIR artifacts. We evaluate the method against the manual segmentation performed by an experienced neuroradiologist and compare the results with other unimodal segmentation approaches. Finally, we apply our method to the segmentation of multiple sclerosis lesions by using a publicly available benchmark dataset. Results show a similar performance to other state-of-the-art multimodal methods, as well as to the human rater. Copyright © 2013 Elsevier Inc. All rights reserved.
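The voxel-classification core of such a method can be sketched as a plain 1D Gaussian mixture fitted by expectation-maximisation. The paper's context-sensitive modification and FLAIR artifact correction are not shown, and the intensities below are synthetic:

```python
import numpy as np

def gmm_em(x, n_classes=2, n_iter=50):
    """Fit a 1D Gaussian mixture to voxel intensities with EM.

    Returns (weights, means, stds, responsibilities), where
    responsibilities[i, k] is the posterior probability that voxel i
    belongs to class k.
    """
    mu = np.quantile(x, np.linspace(0.05, 0.95, n_classes))
    sigma = np.full(n_classes, x.std())
    w = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: per-voxel class posteriors from Gaussian likelihoods
        lik = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from the responsibilities
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma, resp

rng = np.random.default_rng(1)
# Synthetic FLAIR-like intensities: normal tissue vs hyperintense lesions
x = np.r_[rng.normal(100, 10, 900), rng.normal(180, 10, 100)]
w, mu, sigma, resp = gmm_em(x)
lesion_class = int(mu.argmax())   # hyperintensities form the brighter class
```

Thresholding `resp[:, lesion_class]` yields a candidate lesion mask; the published method additionally conditions each voxel's posterior on its spatial context.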
Automated MicroSPECT/MicroCT Image Analysis of the Mouse Thyroid Gland.
Cheng, Peng; Hollingsworth, Brynn; Scarberry, Daniel; Shen, Daniel H; Powell, Kimerly; Smart, Sean C; Beech, John; Sheng, Xiaochao; Kirschner, Lawrence S; Menq, Chia-Hsiang; Jhiang, Sissy M
2017-11-01
The ability of thyroid follicular cells to take up iodine enables the use of radioactive iodine (RAI) for imaging and targeted killing of RAI-avid thyroid cancer following thyroidectomy. To facilitate identifying novel strategies to improve 131I therapeutic efficacy for patients with RAI-refractory disease, it is desirable to optimize image acquisition and analysis for preclinical mouse models of thyroid cancer. A customized mouse cradle was designed and used for microSPECT/CT image acquisition at 1 hour (t1) and 24 hours (t24) post injection of 123I, which mainly reflect RAI influx/efflux equilibrium and RAI retention in the thyroid, respectively. FVB/N mice with normal thyroid glands and TgBRAF V600E mice with thyroid tumors were imaged. In-house CTViewer software was developed to streamline image analysis with new capabilities, along with display of 3D voxel-based 123I gamma photon intensity in MATLAB. The customized mouse cradle facilitates consistent tissue configuration among image acquisitions, such that rigid-body registration can be applied to align serial images of the same mouse via the in-house CTViewer software. CTViewer is designed specifically to streamline SPECT/CT image analysis, with functions tailored to quantify thyroid radioiodine uptake. Automatic segmentation of thyroid volumes of interest (VOI) from adjacent salivary glands in t1 images is enabled by superimposing the thyroid VOI from the t24 image onto the corresponding aligned t1 image. The extent of heterogeneity in 123I accumulation within thyroid VOIs can be visualized by 3D display of voxel-based 123I gamma photon intensity. MicroSPECT/CT image acquisition and analysis for thyroidal RAI uptake are greatly improved by the cradle and the CTViewer software, respectively.
Furthermore, the approach of superimposing thyroid VOIs from t24 images to select thyroid VOIs on corresponding aligned t1 images can be applied to studies in which the target tissue has differential radiotracer retention from surrounding tissues.
A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.
Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan
2017-12-01
A method was developed to recognize the anatomical site and image acquisition view automatically in 2D X-ray images used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization in image processing tasks including 2D-2D image registration, 2D image contrast enhancement, and independent treatment-site confirmation. X-ray images for 180 patients across six disease sites (brain, head-neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two orthogonal-view images per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on a support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
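Each node's two-step design, PCA feature extraction followed by a binary classifier, can be sketched as below. For brevity a nearest-centroid rule stands in for the paper's support vector machine, and the "images" are synthetic feature vectors:

```python
import numpy as np

def pca_fit(X, n_components):
    """Principal components of row-vector samples via SVD."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_project(X, mean, components):
    return (X - mean) @ components.T

class CentroidNode:
    """One node of the hierarchy: PCA features + a binary classifier.

    A nearest-centroid rule is used here in place of the SVM.
    """
    def fit(self, X, y, n_components=2):
        self.mean, self.comp = pca_fit(X, n_components)
        Z = pca_project(X, self.mean, self.comp)
        self.centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
        return self

    def predict(self, X):
        Z = pca_project(X, self.mean, self.comp)
        d = np.linalg.norm(Z[:, None, :] - self.centroids, axis=2)
        return d.argmin(axis=1)

rng = np.random.default_rng(0)
# Toy "images": two sites with different mean intensity structure
X0 = rng.normal(0.0, 1.0, (40, 50))
X1 = rng.normal(0.0, 1.0, (40, 50)) + np.linspace(0, 3, 50)
X = np.vstack([X0, X1])
y = np.r_[np.zeros(40, int), np.ones(40, int)]
pred = CentroidNode().fit(X, y).predict(X)
```

A hierarchy is then just a tree of such nodes: the root separates broad site groups, and each child node refines its branch with its own PCA basis and classifier.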
Classifying magnetic resonance image modalities with convolutional neural networks
NASA Astrophysics Data System (ADS)
Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis
2018-02-01
Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs post-contrast T1, (3) identify pre- vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
Dual-Energy CT: New Horizon in Medical Imaging
Goo, Jin Mo
2017-01-01
Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151
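Material-specific imaging rests on basis-material decomposition: the attenuation measured at two energies is expressed as a linear combination of two basis materials, and the 2x2 system is solved per voxel. A sketch with illustrative coefficients (the matrix entries below are invented for the example, not tabulated attenuation values):

```python
import numpy as np

# Illustrative mass attenuation coefficients:
# rows = energy (low kVp, high kVp); columns = basis materials (water, iodine).
# These numbers are hypothetical placeholders, not NIST data.
A = np.array([[0.25, 30.0],
              [0.18,  8.0]])

def material_decompose(mu_low, mu_high):
    """Solve the 2x2 basis-material system for (water, iodine) densities."""
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

# Forward-simulate a voxel that is water plus a trace of iodine,
# then recover the basis densities from its two measured attenuations.
true_water, true_iodine = 1.0, 0.01
mu_low, mu_high = A @ np.array([true_water, true_iodine])
water, iodine = material_decompose(mu_low, mu_high)
```

The recovered iodine component is the iodine map; the water component alone gives the virtual non-contrast image, and recombining the basis images with energy-dependent coefficients yields virtual monoenergetic images.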
Bernhardt, Sylvain; Nicolau, Stéphane A; Agnus, Vincent; Soler, Luc; Doignon, Christophe; Marescaux, Jacques
2016-05-01
The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus such as an optical tracking system to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Secondly, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results proving the feasibility and the clinical potential of our approach. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Chen, Kan; Stafford, Frank P.
A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…
Three-Dimensional Computer Graphics Brain-Mapping Project.
1987-03-15
NEUROQUANT. This package was directed towards quantitative microneuroanatomic data acquisition and analysis. Using this interface, image frames captured...populations of brains. This would have been a prohibitive task if done manually with a densitometer and film, due to user error and bias. NEUROQUANT functioned...of cells were of interest. NEUROQUANT is presently being implemented with a more fully automatic method of localizing the cell bodies directly
High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications
NASA Astrophysics Data System (ADS)
Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.
2012-07-01
The recording of high-resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach in which techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industrial cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured-light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its infrared laser projects speckles of different sizes. By applying dense image matching techniques to the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high-resolution point cloud with high accuracy for each shot. For the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by structure-and-motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.
Mirion--a software package for automatic processing of mass spectrometric images.
Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B
2013-08-01
Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
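The core transformation such software performs, turning per-pixel mass spectra into a spatial distribution image for a chosen m/z, can be sketched as follows. This uses a toy in-memory stand-in for imzML data and is not Mirion's code:

```python
import numpy as np

def ion_image(spectra, target_mz, tol, shape):
    """Build an ion intensity image from per-pixel mass spectra.

    spectra: dict mapping (row, col) -> (mz_array, intensity_array),
    mimicking the per-pixel spectra stored in an imzML file. The image
    value at a pixel is the summed intensity within +/- tol of target_mz.
    """
    img = np.zeros(shape)
    for (r, c), (mz, inten) in spectra.items():
        sel = np.abs(mz - target_mz) <= tol
        img[r, c] = inten[sel].sum()
    return img

# Tiny synthetic data set: a 2x2 scan with one analyte at m/z 760.5
spectra = {
    (0, 0): (np.array([400.1, 760.5]),  np.array([5.0, 10.0])),
    (0, 1): (np.array([760.49]),        np.array([3.0])),
    (1, 0): (np.array([500.0]),         np.array([7.0])),
    (1, 1): (np.array([760.52, 900.0]), np.array([1.0, 2.0])),
}
img = ion_image(spectra, target_mz=760.5, tol=0.05, shape=(2, 2))
```

Overlaying several such images, one per analyte m/z, in different colour channels gives the direct visual comparison of analyte distributions the abstract describes.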
Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H M; Poot, Dirk H J; Niessen, Wiro J; Klein, Stefan
2015-08-01
To evaluate the influence of image registration on apparent diffusion coefficient (ADC) images obtained from abdominal free-breathing diffusion-weighted MR images (DW-MRIs). A comprehensive pipeline based on automatic three-dimensional nonrigid image registrations is developed to compensate for misalignments in DW-MRI datasets obtained from five healthy subjects scanned twice. Motion is corrected both within each image and between images in a time series. ADC distributions are compared with and without registration in two abdominal volumes of interest (VOIs). The effects of interpolations and Gaussian blurring as alternative strategies to reduce motion artifacts are also investigated. Among the four considered scenarios (no processing, interpolation, blurring and registration), registration yields the best alignment scores. Median ADCs vary according to the chosen scenario: for the considered datasets, ADCs obtained without processing are 30% higher than with registration. Registration improves voxelwise reproducibility at least by a factor of 2 and decreases uncertainty (Fréchet-Cramér-Rao lower bound). Registration provides similar improvements in reproducibility and uncertainty as acquiring four times more data. Patient motion during image acquisition leads to misaligned DW-MRIs and inaccurate ADCs, which can be addressed using automatic registration. © 2014 Wiley Periodicals, Inc.
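For reference, the voxelwise ADC estimate that the registration protects is a simple two-point mono-exponential fit, S_b = S_0 · exp(−b·ADC). A hedged numpy sketch with synthetic values (not the paper's pipeline):

```python
import numpy as np

def adc_map(s0, sb, b):
    """Voxelwise apparent diffusion coefficient from a b=0 image (s0)
    and one diffusion-weighted image (sb): ADC = ln(S0/Sb) / b."""
    eps = 1e-12  # guard against log of zero in background voxels
    return np.log(np.maximum(s0, eps) / np.maximum(sb, eps)) / b

b = 800.0           # diffusion weighting, s/mm^2
true_adc = 1.2e-3   # mm^2/s, a typical soft-tissue value
s0 = np.full((4, 4), 1000.0)
sb = s0 * np.exp(-b * true_adc)  # perfectly aligned synthetic DW image
print(np.allclose(adc_map(s0, sb, b), true_adc))  # True
```

Misalignment between s0 and sb mixes signal from different tissues into the same voxel of the ratio, which is why the abstract reports inflated ADCs without registration.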
33 CFR 164.38 - Automatic radar plotting aids (ARPA).
Code of Federal Regulations, 2010 CFR
2010-07-01
... ARPA data is clearly visible in general to more than one observer in the conditions of light normally... radar display and, in the case of automatic acquisition, enters within the acquisition area chosen by the observer or, in the case of manual acquisition, has been acquired by the observer, the ARPA should...
Automated phenotype pattern recognition of zebrafish for high-throughput screening.
Schutera, Mark; Dickmeis, Thomas; Mione, Marina; Peravali, Ravindra; Marcato, Daniel; Reischl, Markus; Mikut, Ralf; Pylatiuk, Christian
2016-07-03
Over the last years, the zebrafish (Danio rerio) has become a key model organism in genetic and chemical screening. A growing number of experiments and expanding interest in zebrafish research make it increasingly important to automate the distribution of embryos and larvae into standard microtiter plates or other sample holders for screening, often according to phenotypic features. Until now, such sorting processes have been carried out by manual handling of the larvae and manual feature detection. Here, a prototype platform for image acquisition together with classification software is presented. Zebrafish embryos and larvae and their features, such as pigmentation, are detected automatically from the image. Zebrafish of 4 different phenotypes can be distinguished through pattern recognition at 72 h post fertilization (hpf), allowing the software to assign an embryo to one of 2 phenotypic classes: wild-type versus variant. The zebrafish phenotypes are classified with an accuracy of 79-99% without any user interaction. A description of the prototype platform and of the algorithms for image processing and pattern recognition is presented.
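As an illustration of a wild-type versus variant decision, a toy rule on a single hand-picked feature (fraction of dark, pigmented pixels) might look like the following; the thresholds and images are invented and far simpler than the platform's actual pattern recognition:

```python
import numpy as np

def classify_pigmentation(gray, dark_thresh=0.3, frac_thresh=0.05):
    """Toy two-class rule: call a larva 'variant' when the fraction of
    dark (pigmented) pixels in its grayscale image is below frac_thresh."""
    dark_fraction = float(np.mean(gray < dark_thresh))
    return "wild-type" if dark_fraction >= frac_thresh else "variant"

rng = np.random.default_rng(0)
pigmented = rng.uniform(0.4, 1.0, (64, 64))
pigmented[20:40, 20:40] = 0.1                   # pigmented patch, ~9.8% dark
unpigmented = rng.uniform(0.4, 1.0, (64, 64))   # no dark pixels at all
print(classify_pigmentation(pigmented), classify_pigmentation(unpigmented))
# wild-type variant
```

A real classifier would combine several such features and learned decision boundaries rather than a fixed cut-off.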
The CTIO Acquisition CCD-TV camera design
NASA Astrophysics Data System (ADS)
Schmidt, Ricardo E.
1990-07-01
A CCD-based Acquisition TV Camera has been developed at CTIO to replace the existing ISIT units. In a 60 second exposure, the new Camera shows a sixfold improvement in sensitivity over an ISIT used with a Leaky Memory. Integration times can be varied over a 0.5 to 64 second range. The CCD, contained in an evacuated enclosure, is operated at -45 C. Only the image section, an area of 8.5 mm x 6.4 mm, gets exposed to light. Pixel size is 22 microns and either no binning or 2 x 2 binning can be selected. The typical readout rates used vary between 3.5 and 9 microseconds/pixel. Images are stored in a PC/XT/AT, which generates RS-170 video. The contrast in the RS-170 frames is automatically enhanced by the software.
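The 2 x 2 binning mode mentioned above sums the charge of neighbouring pixel blocks to trade resolution for sensitivity; in software the same operation is a reshape-and-sum (a sketch, not the camera's on-chip implementation):

```python
import numpy as np

def bin2x2(frame):
    """Sum pixel values in 2x2 blocks, mimicking on-chip CCD binning."""
    h, w = frame.shape
    # drop a trailing row/column if the frame has odd dimensions
    return frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16.0).reshape(4, 4)
print(bin2x2(frame))
# [[10. 18.]
#  [42. 50.]]
```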
Method and apparatus for reading meters from a video image
Lewis, Trevor J.; Ferguson, Jeffrey J.
1997-01-01
A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
Divers-Operated Underwater Photogrammetry: Applications in the Study of Antarctic Benthos
NASA Astrophysics Data System (ADS)
Piazza, P.; Cummings, V.; Lohrer, D.; Marini, S.; Marriott, P.; Menna, F.; Nocerino, E.; Peirano, A.; Schiaparelli, S.
2018-05-01
Ecological studies of marine benthic communities have received a major boost from the application of a variety of non-destructive sampling and mapping techniques based on underwater image and video recording. The well-established scientific diving practice consists of acquiring a single-path or 'round-trip' pass over elongated transects, with the imaging device oriented in a nadir-looking direction. As may be expected, the application of automatic image processing procedures to data not specifically acquired for 3D modelling can be risky, especially if proper tools for assessing the quality of the produced results are not employed. This paper, born from an international cooperation, focuses on this topic, which is of great interest for ecological and monitoring benthic studies in Antarctica. Several video footages recorded by different scientific teams in different years are processed with an automatic photogrammetric procedure, and salient statistical features are reported to critically analyse the derived results. As expected, the inclusion of oblique images from additional lateral strips may improve the expected accuracy in object space without altering the current video recording practices too much.
DLA based compressed sensing for high resolution MR microscopy of neuronal tissue.
Nguyen, Khieu-Van; Li, Jing-Rebecca; Radecki, Guillaume; Ciobanu, Luisa
2015-10-01
In this work we present the implementation of compressed sensing (CS) on a high field preclinical scanner (17.2 T) using an undersampling trajectory based on the diffusion limited aggregation (DLA) random growth model. When applied to a library of images this approach performs better than the traditional undersampling based on the polynomial probability density function. In addition, we show that the method is applicable to imaging live neuronal tissues, allowing significantly shorter acquisition times while maintaining the image quality necessary for identifying the majority of neurons via an automatic cell segmentation algorithm. Copyright © 2015 Elsevier Inc. All rights reserved.
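A DLA cluster is grown by releasing random walkers that stick when they touch the existing aggregate; the occupied sites then serve as the k-space sampling pattern. A small illustrative sketch (grid size, point count and sticking rule chosen for brevity, not taken from the paper):

```python
import numpy as np

def dla_mask(n=24, n_points=80, seed=0):
    """Grow a diffusion-limited-aggregation cluster on an n x n torus;
    the aggregated sites form a k-space undersampling mask."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    mask[n // 2, n // 2] = True  # seed at the k-space centre
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    while mask.sum() < n_points:
        r, c = rng.integers(0, n, size=2)  # launch a random walker
        for _ in range(5 * n * n):         # cap the walk length
            if mask[r, c]:
                break  # landed on the cluster; relaunch
            # stick when any 4-neighbour already belongs to the cluster
            if any(mask[(r + dr) % n, (c + dc) % n] for dr, dc in steps):
                mask[r, c] = True
                break
            dr, dc = steps[rng.integers(4)]
            r, c = (r + dr) % n, (c + dc) % n
    return mask

m = dla_mask()
print(m.sum(), m[12, 12])  # 80 samples, centre always included
```

The resulting mask is naturally densest near the seed, which suits MRI since most signal energy sits at the centre of k-space.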
Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy
NASA Astrophysics Data System (ADS)
Camps, S. M.; Verhaegen, F.; Paiva Fonesca, G.; de With, P. H. N.; Fontanarosa, D.
2017-03-01
Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify if the correct anatomical structures are all visualized and with sufficient quality. The need for this operator is one of the major reasons why presently ultrasound is not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As initial test, the algorithm was applied on a CIRS multi-modality pelvic phantom. Probe setups were calculated in order to allow visualization of the prostate and adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling, while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that clinical requirements were fulfilled. Preliminary quantitative evaluation was also performed. The mean absolute distance (MAD) was calculated between actual anatomical structure positions and positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for prostate, (2.5±0.6) mm for bladder and (2.8±0.6) mm for rectum. These results show that no significant systematic errors due to e.g. probe misplacement were introduced.
NASA Astrophysics Data System (ADS)
Diaz, Kristians; Castañeda, Benjamín; Miranda, César; Lavarello, Roberto; Llanos, Alejandro
2010-03-01
We developed a protocol for the acquisition of digital images and an algorithm for color-based automatic segmentation of cutaneous lesions of Leishmaniasis. The image acquisition protocol provides control over the working environment to manage brightness, lighting and undesirable shadows on the lesion using indirect lighting. This protocol was also used to accurately calculate the area of the lesion, expressed in mm², even on curved surfaces by combining the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in order to determine the color layer with the highest contrast between the background and the wound. The proposed algorithm is composed of three stages: (1) location of the wound, determined by applying thresholding and mathematical morphology techniques to the H layer of the HSV color space; (2) determination of the boundaries of the wound by analyzing the color characteristics in the YIQ space, based on masks (for the wound and the background) estimated from the first stage; and (3) refinement of the calculations obtained in the previous stages by using the discrete dynamic contours algorithm. The segmented regions obtained with the algorithm were compared with manual segmentations made by a medical specialist. Broadly speaking, our results support that color provides useful information during segmentation and measurement of wounds of cutaneous Leishmaniasis. Results from ten images showed 99% specificity, 89% sensitivity, and 98% accuracy.
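Stage (1) — thresholding the H layer of the HSV color space — can be sketched in numpy; the colors and cut-off below are invented for the demo, and the morphology and dynamic-contour stages are omitted:

```python
import numpy as np

def hue_channel(rgb):
    """H layer (0..1) of HSV, computed from an RGB float image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    diff = np.where(mx == mn, 1.0, mx - mn)  # avoid /0 on grey pixels
    h = np.zeros(mx.shape)
    h = np.where(mx == r, (g - b) / diff % 6, h)
    h = np.where(mx == g, (b - r) / diff + 2, h)
    h = np.where(mx == b, (r - g) / diff + 4, h)
    h = np.where(mx == mn, 0.0, h)           # grey: hue undefined, use 0
    return h / 6.0

img = np.ones((8, 8, 3)) * np.array([0.9, 0.7, 0.6])  # skin-like background
img[2:6, 2:6] = [0.8, 0.1, 0.1]                       # reddish lesion patch
h = hue_channel(img)
mask = h < 0.03  # lesion hue sits near 0 (red); skin near 0.06 (orange)
print(mask.sum())  # 16 lesion pixels
```

Hue isolates the red lesion well here because it is largely insensitive to brightness, which is the motivation for working in HSV rather than RGB.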
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, N; Knutson, N; Schmidt, M
Purpose: To verify a method used to automatically acquire jaw, MLC, collimator and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to the fixed imager geometry, a method utilizing a 5 mm wide steel ruler placed on the table and centered within a 15×15 cm² open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC and couch) were obtained using our proposed as well as the traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC and couch) measurements were obtained in a single 5-minute delivery using the TBDM XML script method, compared to 60 minutes for equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3 mm of each other. Conclusion: The presented automatic approach of acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach, with equivalent results.
2010-11-05
The Food and Drug Administration (FDA) is announcing the reclassification of the full-field digital mammography (FFDM) system from class III (premarket approval) to class II (special controls). The device type is intended to produce planar digital x-ray images of the entire breast; this generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component parts, and accessories. The special control that will apply to the device is the guidance document entitled "Class II Special Controls Guidance Document: Full-Field Digital Mammography System." FDA is reclassifying the device into class II (special controls) because general controls along with special controls will provide a reasonable assurance of safety and effectiveness of the device. Elsewhere in this issue of the Federal Register, FDA is announcing the availability of the guidance document that will serve as the special control for this device.
Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.
Yuan, Yading; Chao, Ming; Lo, Yeh-Chi
2017-09-01
Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging 19-layer deep convolutional neural networks that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need of sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimum pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
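The Jaccard-distance loss idea can be written down compactly; a numpy sketch of the soft (differentiable) form, without the network or training loop:

```python
import numpy as np

def jaccard_loss(pred, target, eps=1e-7):
    """Soft Jaccard (IoU) distance between a predicted probability map
    and a binary ground-truth mask: 1 - |A∩B| / |A∪B|."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)

target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0
print(round(jaccard_loss(target, target), 6))           # 0.0: perfect overlap
print(round(jaccard_loss(np.zeros((8, 8)), target), 4))  # 1.0: no overlap
```

Because the loss is a ratio of overlap to union, it is unaffected by the large number of background pixels, which is why it removes the need for the sample re-weighting that cross entropy requires.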
Application-Driven No-Reference Quality Assessment for Dermoscopy Images With Multiple Distortions.
Xie, Fengying; Lu, Yanan; Bovik, Alan C; Jiang, Zhiguo; Meng, Rusong
2016-06-01
Dermoscopy images often suffer from blur and uneven illumination distortions that occur during acquisition, which can adversely influence consequent automatic image analysis results on potential lesion objects. The purpose of this paper is to deploy an algorithm that can automatically assess the quality of dermoscopy images. Such an algorithm could be used to direct image recapture or correction. We describe an application-driven no-reference image quality assessment (IQA) model for dermoscopy images affected by possibly multiple distortions. For this purpose, we created a multiple distortion dataset of dermoscopy images impaired by varying degrees of blur and uneven illumination. The basis of this model is two single distortion IQA metrics that are sensitive to blur and uneven illumination, respectively. The outputs of these two metrics are combined to predict the quality of multiply distorted dermoscopy images using a fuzzy neural network. Unlike traditional IQA algorithms, which use human subjective score as ground truth, here ground truth is driven by the application, and generated according to the degree of influence of the distortions on lesion analysis. The experimental results reveal that the proposed model delivers accurate and stable quality prediction results for dermoscopy images impaired by multiple distortions. The proposed model is effective for quality assessment of multiple distorted dermoscopy images. An application-driven concept for IQA is introduced, and at the same time, a solution framework for the IQA of multiple distortions is proposed.
NASA Astrophysics Data System (ADS)
Kim, Sungho
2017-06-01
Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. The direct use of RGB-CNN to the IR ATR problem fails to work because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) to cope with the problems is presented. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem of IR variations is mitigated by the proposed shifted ramp function-based intensity transformation. This can suppress the background and enhance the target contrast simultaneously. The experimental results on the synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
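The shifted ramp transformation described above maps intensities below a shift point to zero (background suppression) and linearly stretches the remainder (target enhancement). One plausible reading in numpy; the paper's exact parametrization may differ:

```python
import numpy as np

def shifted_ramp(img, shift, slope=1.0):
    """Shifted ramp intensity transform: zero out values below `shift`,
    linearly stretch values above it, clip to [0, 1]."""
    return np.clip((img - shift) * slope, 0.0, 1.0)

ir = np.array([0.1, 0.3, 0.5, 0.9])  # toy IR intensities
out = shifted_ramp(ir, shift=0.4, slope=2.0)
print(out)  # background pixels go to 0, target pixels stretch toward 1
```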
A Scalable Framework For Segmenting Magnetic Resonance Images
Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar
2009-01-01
A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
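The underlying fuzzy c-means alternation (memberships from distances, centroids from membership-weighted means) is compact; the incremental variants in the paper apply these same two updates to successive subsets of the data. A plain-batch numpy sketch on synthetic 1D intensities:

```python
import numpy as np

def fuzzy_cmeans(X, centers, m=2.0, iters=50):
    """Plain batch fuzzy c-means: alternate the membership update
    u_ik = 1 / sum_j (d_ik/d_ij)^(2/(m-1)) and the centroid update
    v_k = sum_i u_ik^m x_i / sum_i u_ik^m."""
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        um = u.T ** m
        centers = (um @ X) / um.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.2, 0.02, (50, 1)),   # one "tissue" mode
                    rng.normal(0.8, 0.02, (50, 1))])  # another mode
centers, u = fuzzy_cmeans(X, centers=np.array([[0.0], [1.0]]))
print(np.sort(centers.ravel()).round(2))  # close to the true modes 0.2, 0.8
```

An incremental version would feed chunks of X through the same loop and carry the centroids (and summary statistics) forward, which is what makes the approach scale to very large volumes.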
Evaluation of new data processing algorithms for planar gated ventriculography (MUGA)
Fair, Joanna R.; Telepak, Robert J.
2009-01-01
Before implementing one of two new LVEF radionuclide gated ventriculogram (MUGA) systems, the results from 312 consecutive parallel patient studies were evaluated. Each gamma-camera acquisition was simultaneously processed by the semi-automatic Medasys Pinnacle system and by fully automatic and semi-automatic Philips nuclear medicine computer systems. The Philips systems yielded LVEF results within ±5 LVEF percentage points of the Medasys system in fewer than half of the studies. The remaining values were higher or lower than those from the long-used Medasys system. These differences might have changed chemotherapy clinical decisions for cancer patients. As a result, our institution elected not to implement either new system. PACS: 87.57.U- Nuclear medicine imaging
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.
2015-01-01
We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569
Automated planning of MRI scans of knee joints
NASA Astrophysics Data System (ADS)
Bystrov, Daniel; Pekar, Vladimir; Young, Stewart; Dries, Sebastian P. M.; Heese, Harald S.; van Muiswinkel, Arianne M.
2007-03-01
A novel and robust method for automatic scan planning of MRI examinations of knee joints is presented. Clinical knee examinations require acquisition of a 'scout' image, in which the operator manually specifies the scan volume orientations (off-centres, angulations, field-of-view) for the subsequent diagnostic scans. This planning task is time-consuming and requires skilled operators. The proposed automated planning system determines orientations for the diagnostic scan by using a set of anatomical landmarks derived by adapting active shape models of the femur, patella and tibia to the acquired scout images. The expert knowledge required to position scan geometries is learned from previous manually planned scans, allowing individual preferences to be taken into account. The system is able to automatically discriminate between left and right knees. This allows to use and merge training data from both left and right knees, and to automatically transform all learned scan geometries to the side for which a plan is required, providing a convenient integration of the automated scan planning system in the clinical routine. Assessment of the method on the basis of 88 images from 31 different individuals, exhibiting strong anatomical and positional variability demonstrates success, robustness and efficiency of all parts of the proposed approach, which thus has the potential to significantly improve the clinical workflow.
Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie; Zhang, Pengpeng; Pham, Hai; Xiong, Jianping; Yorke, Ellen D.; Goodman, Karyn A.; Rimner, Andreas; Mostafavi, Hassan; Mageras, Gig S.
2014-01-01
Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined shape of markers when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used a normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. The kV images were processed using a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment that can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in anthropomorphic phantom were carried out, one using a gold cylindrical marker representing regular shape, another using a Visicoil marker representing irregular shape. Automatic matching of templates to cone beam CT (CBCT) projection images was performed to known marker positions in phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. Positional discrepancy between automatic and manual matching of less than 2 mm was assumed as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. 
In addition, automatic tracking was tested in intrafraction kV images of three lung cancer patients with irregularly shaped markers during 11 volumetric modulated arc treatments. Purpose-built software developed at our institution was used to create marker templates and track the markers embedded in kV images. Results: Phantom studies showed mean ± standard deviation measurement uncertainty of automatic registration to be 0.14 ± 0.07 mm and 0.17 ± 0.08 mm for Visicoil and gold cylindrical markers, respectively. The mean success rate of automatic tracking with CBCT projections (11 frames per second, fps) of pancreas, gastroesophageal junction, and lung cancer patients was 100%, 99.1% (range 98%–100%), and 100%, respectively. With intrafraction images (approx. 0.2 fps) of lung cancer patients, the success rate was 98.2% (range 97%–100%), and 94.3% (range 93%–97%) using templates from 1.25 mm and 2.5 mm slice spacing CT scans, respectively. Correction of intermarker relative position was found to improve the success rate in two out of eight patients analyzed. Conclusions: The proposed method can track arbitrary marker shapes in kV images using templates generated from a breath-hold CT acquired at simulation. The studies indicate its feasibility for tracking tumor motion during rotational treatment. Investigation of the causes of misregistration suggests that its rate of incidence can be reduced with higher frequency of image acquisition, templates made from smaller CT slice spacing, and correction of changes in intermarker relative positions when they occur. PMID:24989384
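The normalized cross-correlation criterion at the heart of the template matching (which the paper refines with simplex minimization, omitted here) can be sketched as a brute-force search over synthetic data:

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col)
    where the template best matches, plus the NCC score there."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom == 0:
                continue  # flat window: NCC undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(3)
image = rng.normal(0, 0.05, (40, 40))  # low-amplitude background noise
marker = rng.normal(0, 1, (5, 5))      # stand-in for a marker template
image[12:17, 20:25] += marker          # embed the marker at (12, 20)
pos, score = ncc_match(image, marker)
print(pos, round(score, 2))  # recovers (12, 20) with a near-1 score
```

Because NCC normalizes out local mean and contrast, the same criterion works for arbitrary (including irregular) marker shapes, as long as a template of the actual shape is available from the planning CT.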
DOE Office of Scientific and Technical Information (OSTI.GOV)
Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie
Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined shape of markers when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used a normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. Themore » kV images were processed using a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment that can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in anthropomorphic phantom were carried out, one using a gold cylindrical marker representing regular shape, another using a Visicoil marker representing irregular shape. Automatic matching of templates to cone beam CT (CBCT) projection images was performed to known marker positions in phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. Positional discrepancy between automatic and manual matching of less than 2 mm was assumed as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. 
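The core of the tracking approach above, Sobel edge enhancement followed by normalized cross-correlation (NCC) template matching, can be sketched as a minimal NumPy/SciPy illustration. This is not the authors' purpose-built software: the exhaustive window search stands in for their simplex minimization, and `track_marker`, its `search` radius, and the synthetic marker are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(img):
    """Edge-enhance the image, as the paper does to boost marker visibility."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

def ncc(template, patch):
    """Normalized cross-correlation between a template and an equal-size patch."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return (t * p).sum() / denom if denom > 0 else 0.0

def track_marker(image, template, start, search=5):
    """Exhaustive NCC search in a small window around `start` (row, col).
    A simplex minimizer, as in the paper, would refine this result."""
    img = sobel_magnitude(image)
    tpl = sobel_magnitude(template)
    th, tw = tpl.shape
    best, best_pos = -np.inf, start
    r0, c0 = start
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0:
                continue
            patch = img[r:r + th, c:c + tw]
            if patch.shape != tpl.shape:
                continue
            score = ncc(tpl, patch)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

With a marker template cropped from a reference image, the tracker returns the best-matching top-left position and its NCC score in each new kV frame.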
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis; an application to cochlea implants, where time-frequency analysis is applied for controlling the replacement system; recent trends in the fusion of different modalities; and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are highlighted. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics, with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care.
Comprehensive evaluation of the processing methods and their reliable integration in routine applications are future challenges in the field of sensor, signal and image informatics.
Method and apparatus for reading meters from a video image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, T.J.; Ferguson, J.J.
1995-12-31
A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
Method and apparatus for reading meters from a video image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, T.J.; Ferguson, J.J.
1997-09-30
A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower. 1 fig.
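The patent's pipeline (digitize the video frame, calibrate the indicator region, derive a reading) might be illustrated for an analog dial as below. The centroid-based needle-angle estimate and the linear angle-to-value map are hypothetical simplifications for illustration, not the patented method.

```python
import numpy as np

def needle_angle(dial, thresh=0.5):
    """Estimate the pointer angle (radians) from a thresholded dial image,
    using the mean offset of needle pixels from the dial center."""
    ys, xs = np.nonzero(dial > thresh)
    cy = (dial.shape[0] - 1) / 2.0
    cx = (dial.shape[1] - 1) / 2.0
    # atan2 of the mean offset gives the direction of the needle tip
    return np.arctan2((ys - cy).mean(), (xs - cx).mean())

def reading(angle, angle_min, angle_max, val_min, val_max):
    """Linearly map a calibrated angle span to the meter's value span."""
    frac = (angle - angle_min) / (angle_max - angle_min)
    return val_min + frac * (val_max - val_min)
```

Calibration here amounts to recording the angles of the minimum and maximum scale marks once per meter; every subsequent frame then yields a reading without any electrical connection to the instrument.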
Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz
2014-01-01
The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-diaminobenzidine and haematoxylin is immense and comes from various sources. One of them is an inadequate match between the camera's white balance setting and the microscope's light colour temperature. Although this type of error can easily be handled at the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The dependence of colour variation on the microscope's light temperature and the camera settings was examined as introductory research for a process of automatic colour standardization. Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image by the following methods: mean square error, structural similarity (SSIM) and visual assessment by a viewer. For two types of background and two types of object, the statistical image descriptors (range, median, mean and its standard deviation) of chromaticity on the a and b channels from the CIELab colour space and of luminance L, as well as the local colour variability for the objects' specific area, were calculated. The results were averaged over the 6 images acquired under the same light conditions and camera settings for each sample.
The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space; (2) white balance correction of images collected with camera white balance settings not matched to the light temperature moves the image descriptors into the proper chromatic space, but simultaneously changes the luminance. Thus image unification in the sense of colour fidelity can be handled in a separate introductory stage before automatic image analysis.
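The image descriptors named above (range, median, mean and standard deviation per CIELab channel, plus MSE against a reference) are straightforward to compute; this NumPy sketch assumes the Lab conversion has already been done, and the function names are illustrative.

```python
import numpy as np

def mse(img_a, img_b):
    """Mean squared error between two images of equal shape."""
    diff = img_a.astype(float) - img_b.astype(float)
    return float((diff ** 2).mean())

def chroma_descriptors(lab, mask=None):
    """Range, median, mean and std of the L, a and b channels of a CIELab
    image (shape HxWx3), optionally restricted to an object/background mask."""
    stats = {}
    for name, ch in (("L", lab[..., 0]), ("a", lab[..., 1]), ("b", lab[..., 2])):
        vals = ch[mask] if mask is not None else ch.ravel()
        stats[name] = {
            "range": float(vals.max() - vals.min()),
            "median": float(np.median(vals)),
            "mean": float(vals.mean()),
            "std": float(vals.std()),
        }
    return stats
```

Averaging these dictionaries over the six fields of view acquired under one light/white-balance combination reproduces the per-condition summary the study works with.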
Reflector automatic acquisition and pointing based on auto-collimation theodolite.
Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu
2018-01-01
An auto-collimation theodolite (ACT) for reflector automatic acquisition and pointing is designed based on the principle of autocollimators and theodolites. First, the principle of auto-collimation and theodolites is reviewed, and then the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented, which first quickly acquires the target over a wide range and then points the laser spot to the charge coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, the comparison of the acquisition mode and pointing mode, and the accuracy measurement in horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring the reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.
Four and eight faceted domes effects on drag force and image in missile application
NASA Astrophysics Data System (ADS)
Sakarya, Doǧan Uǧur
2017-10-01
Drag force is an important factor in the range performance of missiles. Depending on the dome geometry, this effect can be reduced. Hemispherical domes provide excellent image uniformity but suffer greater drag force. Four- and eight-faceted domes decrease the drag force; however, reflections from the environment introduce noise into the system. In addition, depending on the faceted dome's shape, the sun and other sources in the environment are distorted by the facets, and these distorted objects produce false targets in the image. In this study, hemispherical, four-faceted and eight-faceted domes are compared with respect to drag force. Furthermore, images are captured using these manufactured domes. To compare the domes' effects on the images, scenarios are generated and an automatic target acquisition algorithm is used.
Flexible real-time magnetic resonance imaging framework.
Santos, Juan M; Wright, Graham A; Pauly, John M
2004-01-01
The extension of MR imaging to new applications has demonstrated the limitations of the architecture of current real-time systems. Traditional real-time implementations provide continuous acquisition of data and modification of basic sequence parameters on the fly. We have extended the concept of real-time MRI by designing a system that drives the examination from a real-time localizer and is then reconfigured for different imaging modes. Upon operator request or automatic feedback, the system can immediately generate a new pulse sequence or change fundamental aspects of the acquisition such as gradient waveforms, excitation pulses and scan planes. This framework has been implemented by connecting a data processing and control workstation to a conventional clinical scanner. Key components of the design of this framework are the data communication and control mechanisms, reconstruction algorithms optimized for real-time operation and adaptability, a flexible user interface and extensible user interaction. In this paper we describe the various components that comprise this system. Some of the applications implemented in this framework include real-time catheter tracking embedded in high-frame-rate real-time imaging and immediate switching between a real-time localizer and high-resolution volume imaging for coronary angiography applications.
Zhou, Yanli; Faber, Tracy L.; Patel, Zenic; Folks, Russell D.; Cheung, Alice A.; Garcia, Ernest V.; Soman, Prem; Li, Dianfu; Cao, Kejiang; Chen, Ji
2013-01-01
Objective Left ventricular (LV) function and dyssynchrony parameters measured from serial gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) using blinded processing had a poorer repeatability than when manual side-by-side processing was used. The objective of this study was to validate whether an automatic alignment tool can reduce the variability of LV function and dyssynchrony parameters in serial gated SPECT MPI. Methods Thirty patients who had undergone serial gated SPECT MPI were prospectively enrolled in this study. Thirty minutes after the first acquisition, each patient was repositioned and a gated SPECT MPI image was reacquired. The two data sets were first processed blinded from each other by the same technologist in different weeks. These processed data were then realigned by the automatic tool, and manual side-by-side processing was carried out. All processing methods used standard iterative reconstruction and Butterworth filtering. The Emory Cardiac Toolbox was used to measure the LV function and dyssynchrony parameters. Results The automatic tool failed in one patient, who had a large, severe scar in the inferobasal wall. In the remaining 29 patients, the repeatability of the LV function and dyssynchrony parameters after automatic alignment was significantly improved from blinded processing and was comparable to manual side-by-side processing. Conclusion The automatic alignment tool can be an alternative method to manual side-by-side processing to improve the repeatability of LV function and dyssynchrony measurements by serial gated SPECT MPI. PMID:23211996
Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S
2012-02-23
We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded version of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
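The pixel-independent operations listed above are exactly the kind that map one-thread-per-pixel onto a GPU. A CPU-side NumPy sketch of two of them follows; the gain-map normalization detail and the recursion constant `alpha` are illustrative assumptions, not the CAPIDS implementation.

```python
import numpy as np

def flat_field_correct(raw, flat, dark, eps=1e-6):
    """Gain/offset correction: each pixel is normalized by the detector
    response measured from a flat (uniform-exposure) and a dark frame."""
    gain = np.clip(flat - dark, eps, None)
    corrected = (raw - dark) / gain
    return corrected * gain.mean()  # rescale to the original intensity range

def temporal_filter(prev_filtered, new_frame, alpha=0.25):
    """Recursive (exponential) temporal filter: pixel independent, so it
    parallelizes trivially, one thread per pixel."""
    return alpha * new_frame + (1 - alpha) * prev_filtered
```

Because each output pixel depends only on the corresponding input pixels, both functions translate directly into a GPU kernel with no inter-thread communication.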
Intelligent navigation to improve obstetrical sonography.
Yeo, Lami; Romero, Roberto
2016-04-01
'Manual navigation' by the operator is the standard method used to obtain information from two-dimensional and volumetric sonography. Two-dimensional sonography is highly operator dependent and requires extensive training and expertise to assess fetal anatomy properly. Most of the sonographic examination time is devoted to acquisition of images, while 'retrieval' and display of diagnostic planes occurs rapidly (essentially instantaneously). In contrast, volumetric sonography has a rapid acquisition phase, but the retrieval and display of relevant diagnostic planes is often time-consuming, tedious and challenging. We propose the term 'intelligent navigation' to refer to a new method of interrogation of a volume dataset whereby identification and selection of key anatomical landmarks allow the system to: 1) generate a geometrical reconstruction of the organ of interest; and 2) automatically navigate, find, extract and display specific diagnostic planes. This is accomplished using operator-independent algorithms that are both predictable and adaptive. Virtual Intelligent Sonographer Assistance (VIS-Assistance®) is a tool that allows operator-independent sonographic navigation and exploration of the surrounding structures in previously identified diagnostic planes. The advantage of intelligent (over manual) navigation in volumetric sonography is the short time required for both acquisition and retrieval and display of diagnostic planes. Intelligent navigation technology automatically realigns the volume, and reorients and standardizes the anatomical position, so that the fetus and the diagnostic planes are consistently displayed in the same manner each time, regardless of the fetal position or the initial orientation. Automatic labeling of anatomical structures, subject orientation and each of the diagnostic planes is also possible. 
Intelligent navigation technology can operate on conventional computers, and is not dependent on specific ultrasound platforms or on the use of software to perform manual navigation of volume datasets. Diagnostic planes and VIS-Assistance videoclips can be transmitted by telemedicine so that expert consultants can evaluate the images to provide an opinion. The end result is a user-friendly, simple, fast and consistent method of obtaining sonographic images with decreased operator dependency. Intelligent navigation is one approach to improve obstetrical sonography. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
The influence of respiratory motion on CT image volume definition.
Rodríguez-Romero, Ruth; Castro-Tejero, Pablo
2014-04-01
Radiotherapy treatments are based on geometric and density information acquired from patient CT scans. It is well established that breathing motion during scan acquisition induces motion artifacts in CT images, which can alter the size, shape, and density of a patient's anatomy. The aim of this work is to examine and evaluate the impact of breathing motion on multislice CT imaging with respiratory synchronization (4DCT) and without it (3DCT). A specific phantom with a movable insert was used. Static and dynamic phantom acquisitions were obtained with a multislice CT. Four sinusoidal breath patterns were simulated to move known geometric structures longitudinally. Respiratory synchronized acquisitions (4DCT) were performed to generate images during inhale, intermediate, and exhale phases using prospective and retrospective techniques. Static phantom data were acquired in helical and sequential mode to define a baseline for each type of respiratory 4DCT technique. Taking into account the fact that respiratory 4DCT is not always available, 3DCT helical image studies were also acquired for several CT rotation periods. To study breath and acquisition coupling when respiratory 4DCT was not performed, the beginning of the CT image acquisition was matched with inhale, intermediate, or exhale respiratory phases, for each breath pattern. Other coupling scenarios were evaluated by simulating different phantom and CT acquisition parameters. Motion induced variations in shape and density were quantified by automatic threshold volume generation and Dice similarity coefficient calculation. The structure mass center positions were also determined to make a comparison with their theoretical expected position. 4DCT acquisitions provided volume and position accuracies within ± 3% and ± 2 mm for structure dimensions >2 cm, breath amplitude ≤ 15 mm, and breath period ≥ 3 s. 
The smallest object (1 cm diameter) exceeded 5% volume variation for the breath patterns with higher-frequency and higher-amplitude motion. Larger volume differences (>10%) and inconsistencies between the relative positions of objects were detected in image studies acquired without respiratory control. Increasing the 3DCT rotation period caused greater distortion of the structures without capturing their envelope. Simulated data showed that the slice acquisition time should be at least twice the breath period to average object movement. Respiratory 4DCT images provide accurate volume and position of organs affected by breathing motion, detecting larger volume discrepancies as the breathing amplitude or frequency increases. For 3DCT acquisitions, a CT can be considered slow enough to include the lesion envelope as long as the slice acquisition time exceeds twice the breathing period. If this requirement cannot be satisfied, a fast CT (along with breath-hold inhale and exhale CTs to roughly estimate the ITV) is recommended in order to minimize structure distortion. Even with an awareness of a patient's respiratory cycle, its coupling with the 3DCT acquisition cannot be predicted, since the patient's anatomy is not accurately known. © 2014 American Association of Physicists in Medicine.
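The Dice similarity coefficient used above to quantify motion-induced shape variation is, for two binary volumes A and B, 2|A∩B|/(|A|+|B|). A minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks/volumes:
    2|A∩B| / (|A| + |B|). Returns 1.0 for two empty masks by convention."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Applied to the thresholded structure volumes from a static reference scan and a motion-affected scan, values near 1 indicate well-preserved shape; deviations grow with breathing amplitude and frequency.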
Automated motion artifact removal for intravital microscopy, without a priori information.
Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph
2014-03-28
Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas and dorsal window chamber.
Automatic registration of ICG images using mutual information and perfusion analysis
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Jong-Mo; Lee, June-goo; Kim, Jong Hyo; Park, Kwangsuk; Yu, Hyeong-Gon; Yu, Young Suk; Chung, Hum
2005-04-01
Introduction: Indocyanine green fundus angiography (ICGA) of the eye is a useful method for detecting and characterizing choroidal neovascularization (CNV), the major cause of blindness in people over 65 years of age. To enable quantitative analysis of blood flow on ICGA, a systematic approach for automatic registration using mutual information, together with a quantitative analysis, was developed. Methods: Intermittent sequential images of indocyanine green angiography were acquired by a Heidelberg retinal angiograph, which uses a laser scanning system for image acquisition. Misalignment of the images, caused by minute eye movements of the patients, was corrected by the mutual information method, because the distribution of the contrast medium in the image changes throughout the time sequence. Several regions of interest (ROIs) were selected by a physician and the intensities of the selected regions were plotted over the time sequence. Results: Registration of ICGA time-sequential images requires not only a translational but also a rotational transform. Signal intensities varied according to a gamma-variate function depending on the ROI, and capillary vessels showed more variation in signal intensity than major vessels. CNV showed intermediate variation in signal intensity and a prolonged transit time. Conclusion: The resulting registered images can be used not only for quantitative analysis but also for perfusion analysis. Various investigative approaches to CNV using this method will be helpful in the characterization of the lesion and in follow-up.
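The mutual-information criterion used for this kind of registration can be sketched as follows: a minimal numpy implementation of histogram-based mutual information between two images. This is not the authors' code; the bin count and image sizes are arbitrary choices, and an actual registration would maximize this score over candidate translations and rotations.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0  # only non-zero joint entries contribute to the sum
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# A misaligned frame scores lower MI against the reference than the reference itself.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, 3, axis=1)
assert mutual_information(ref, ref) > mutual_information(ref, shifted)
```

Registration then amounts to searching the translation/rotation parameters that maximize this score between the reference and each subsequent frame.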
NASA Astrophysics Data System (ADS)
Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning
2017-03-01
The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated with the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer-aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the large amounts of pathology data generated daily. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions of interest rather than from manually segmented ones. Relevant parameters for training the deep learning model, such as the size and number of patches and whether or not data augmentation is used, are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology reports from the publicly available TCGA data set were used. An accuracy of 78% was obtained on a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets, with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks for big data sets and to guide the visual inspection of these images.
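Extracting training patches from automatically generated regions of interest, as described above, can be sketched with a hypothetical numpy helper. The patch size, stride and coverage threshold below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def extract_patches(image, roi_mask, size=64, stride=64, min_coverage=0.5):
    """Collect square patches whose overlap with the ROI mask exceeds min_coverage."""
    patches = []
    h, w = roi_mask.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            if roi_mask[y:y + size, x:x + size].mean() >= min_coverage:
                patches.append(image[y:y + size, x:x + size])
    return patches

# Toy slide: only the left half lies inside the automatically generated ROI.
img = np.zeros((128, 128, 3))
mask = np.zeros((128, 128))
mask[:, :64] = 1
assert len(extract_patches(img, mask)) == 2
```

The resulting patch list would then feed the training loop of whichever network architecture is being compared.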
State of the art survey on MRI brain tumor segmentation.
Gordillo, Nelly; Montseny, Eduard; Sobrevilla, Pilar
2013-10-01
Brain tumor segmentation consists of separating the different tumor tissues (solid or active tumor, edema, and necrosis) from normal brain tissues: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). In brain tumor studies, the existence of abnormal tissues may be easily detectable most of the time. However, accurate and reproducible segmentation and characterization of abnormalities are not straightforward. In the past, many researchers in the fields of medical imaging and soft computing have conducted significant research on brain tumor segmentation. Both semiautomatic and fully automatic methods have been proposed. Clinical acceptance of segmentation techniques has depended on the simplicity of the segmentation and the degree of user supervision. Interactive or semiautomatic methods are likely to remain dominant in practice for some time, especially in applications where erroneous interpretations are unacceptable. This article presents an overview of the most relevant brain tumor segmentation methods, conducted after the acquisition of the image. Given the advantages of magnetic resonance imaging over other diagnostic imaging modalities, this survey focuses on MRI brain tumor segmentation. Semiautomatic and fully automatic techniques are emphasized. Copyright © 2013 Elsevier Inc. All rights reserved.
Content-based analysis of Ki-67 stained meningioma specimens for automatic hot-spot selection.
Swiderska-Chadaj, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Lorent, Malgorzata
2016-10-07
Hot-spot based examination of immunohistochemically stained histological specimens is one of the most important procedures in pathomorphological practice. The development of image acquisition equipment and computational units allows for the automation of this process. However, many technical problems occur in everyday histological material, which increases the complexity of the task. Thus, a full context-based analysis of histological specimens is also needed for the quantification of immunohistochemically stained specimens. One of the most important reactions is the Ki-67 proliferation marker in meningiomas, the most frequent intracranial tumour. The aim of our study is to propose a context-based analysis of Ki-67 stained specimens of meningiomas for automatic selection of hot-spots. The proposed solution is based on textural analysis, mathematical morphology, feature ranking and classification, as well as on the proposed hot-spot gradual extinction algorithm, to allow for the proper detection of a set of hot-spot fields. The designed whole slide image processing scheme eliminates artifacts such as hemorrhages, folds or stained vessels from the region of interest. To validate the automatic results, a set of 104 meningioma specimens was selected and twenty hot-spots inside them were identified independently by two experts. The Spearman rho correlation coefficient was used to compare the results, which were also analyzed with the help of a Bland-Altman plot. The results show that most of the cases (84) were automatically examined properly, with at most two fields of view presenting a technical problem; 13 had three such fields, and only seven specimens did not meet the requirements for automatic examination. Generally, the automatic system identifies hot-spot areas, and especially their maximum points, better. Analysis of the results confirms the very high concordance between the automatic Ki-67 examination and the experts' results, with a Spearman rho higher than 0.95. The proposed hot-spot selection algorithm, with an extended context-based analysis of whole slide images and the hot-spot gradual extinction algorithm, provides an efficient tool for simulating a manual examination. The presented results confirm that automatic examination of Ki-67 in meningiomas could be introduced in the near future.
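The idea of hot-spot selection with "gradual extinction" can be approximated by greedy peak picking with neighbourhood suppression. This is a hedged sketch only: the paper's exact extinction schedule is not reproduced here, and the field size and count below are placeholders.

```python
import numpy as np

def select_hotspots(density, n_fields=20, radius=16):
    """Greedily pick the n highest-density field centres, suppressing
    ('extinguishing') a neighbourhood around each pick so that
    selected fields do not pile up on the same region."""
    d = density.astype(float).copy()
    picks = []
    for _ in range(n_fields):
        y, x = np.unravel_index(np.argmax(d), d.shape)
        picks.append((int(y), int(x)))
        y0, y1 = max(0, y - radius), y + radius + 1
        x0, x1 = max(0, x - radius), x + radius + 1
        d[y0:y1, x0:x1] = -np.inf  # suppress the already-selected field
    return picks

# Two separated peaks on a flat map are picked in order of strength.
heat = np.zeros((100, 100))
heat[10, 10] = 5.0
heat[80, 80] = 4.0
picks = select_hotspots(heat, n_fields=2, radius=16)
assert picks == [(10, 10), (80, 80)]
```

In practice the density map would come from the Ki-67-positive nuclei counts after the artifact-removal step described above.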
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that includes the influence of both lesion size and the background present during acquisition. Optimal threshold values representing a correct segmentation of volumes were determined in a PET phantom study containing spheres of different sizes in different known radiation backgrounds. These optimal values were normalized to the background and fitted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This fitted function was used to build an iterative segmentation method and, based on it, a procedure for automatic delineation was proposed. The procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting fitted function had a linear dependence on the SBR and a negative, inversely proportional dependence on the volume. During validation of the proposed method, the volume deviations with respect to the real value and to the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
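The iterative threshold procedure can be sketched as below. The coefficients of the threshold function are hypothetical placeholders (the paper's regression fit is not reproduced here); only its stated shape is kept: linear in the SBR, with a negative inverse-volume term.

```python
import numpy as np

def threshold_model(volume_ml, sbr, a=0.4, b=0.02, c=0.05):
    """Hypothetical fitted threshold (fraction of max uptake): linear in SBR,
    with a negative, inversely proportional volume term, as described."""
    return a + b * sbr - c / volume_ml

def iterative_segment(img, sbr, voxel_ml, n_iter=20):
    """Iterate: threshold -> segmented volume -> new threshold, until the volume settles."""
    vol = 1.0  # initial volume guess in ml
    mask = img > threshold_model(vol, sbr) * img.max()
    for _ in range(n_iter):
        new_vol = mask.sum() * voxel_ml
        mask = img > threshold_model(max(new_vol, 0.1), sbr) * img.max()
        if abs(new_vol - vol) < 1e-6:
            break
        vol = new_vol
    return mask

# Synthetic "lesion": a bright 8x8x8 plateau on a dim background.
img = np.full((32, 32, 32), 0.1)
img[12:20, 12:20, 12:20] = 1.0
mask = iterative_segment(img, sbr=10.0, voxel_ml=0.001)
assert mask.sum() == 8 * 8 * 8
```

The fixed point of the iteration is the delineated volume; with a real fitted function the same loop would converge to the phantom-calibrated contour.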
Automatic Texture Reconstruction of 3d City Model from Oblique Images
NASA Astrophysics Data System (ADS)
Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang
2016-06-01
In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D texture onto the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created textures without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
NASA Astrophysics Data System (ADS)
Abdi, Amir H.; Luong, Christina; Tsang, Teresa; Allan, Gregory; Nouranian, Saman; Jue, John; Hawley, Dale; Fleming, Sarah; Gin, Ken; Swift, Jody; Rohling, Robert; Abolmaesumi, Purang
2017-02-01
Echocardiography (echo) is the most common test for diagnosis and management of patients with cardiac conditions. While most medical imaging modalities benefit from a relatively automated procedure, this is not the case for echo, and the quality of the final echo view depends on the competency and experience of the sonographer. It is not uncommon that the sonographer does not have adequate experience to adjust the transducer and acquire a high-quality echo, which may further affect the clinical diagnosis. In this work, we aim to aid the operator during image acquisition by automatically assessing the quality of the echo and generating the Automatic Echo Score (AES). This quality assessment method is based on a deep convolutional neural network, trained in an end-to-end fashion on a large dataset of apical four-chamber (A4C) echo images. For this project, an expert cardiologist went through 2,904 A4C images obtained from independent studies and assessed their condition based on a 6-scale grading system. The scores assigned by the expert ranged from 0 to 5, and the distribution of scores among the 6 levels was almost uniform. The network was then trained on 80% of the data (2,345 samples). The average absolute error of the trained model in calculating the AES was 0.8 ± 0.72. The computation time of the GPU implementation of the neural network was estimated at 5 ms per frame, which is sufficient for real-time deployment.
Liu, Hu; Su, Rong-jia; Wu, Min-jie; Zhang, Yi; Qiu, Xiang-jun; Feng, Jian-gang; Xie, Ting; Lu, Shu-liang
2012-06-01
To form a wound information management scheme with objectivity, standardization, and convenience by means of a wound information management system. A wound information management system was set up comprising an acquisition terminal, a defined wound description, a data bank, and related software. The efficacy of this system was evaluated in clinical practice. The acquisition terminal was composed of a third-generation mobile phone and the software. It was feasible to access wound information, including description, image, and therapeutic plan, from the data bank by mobile phone. During 4 months, a total of 232 wound treatment records were entered, from which standardized data for 38 patients were formed automatically. This system can provide standardized wound information management through standardized techniques for the acquisition, transmission, and storage of wound information. It can be used widely in hospitals, especially primary medical institutions. The system's data resource makes large-sample epidemiological studies possible in the future.
Scherer, Sebastian; Kowal, Julia; Chami, Mohamed; Dandey, Venkata; Arheit, Marcel; Ringler, Philippe; Stahlberg, Henning
2014-05-01
The introduction of direct electron detectors (DED) to cryo-electron microscopy has tremendously increased the signal-to-noise ratio (SNR) and quality of the recorded images. We discuss the optimal use of DEDs for cryo-electron crystallography, introduce a new automatic image processing pipeline, and demonstrate the vast improvement in the resolution achieved by the use of both together, especially for highly tilted samples. The new processing pipeline (now included in the software package 2dx) exploits the high SNR and frame readout frequency of DEDs to automatically correct for beam-induced sample movement, and reliably processes individual crystal images without human interaction as data are being acquired. A new graphical user interface (GUI) condenses all information required for quality assessment in one window, allowing the imaging conditions to be verified and adjusted during the data collection session. With this new pipeline an automatically generated unit cell projection map of each recorded 2D crystal is available less than 5 min after the image was recorded. The entire processing procedure yielded a three-dimensional reconstruction of the 2D-crystallized ion-channel membrane protein MloK1 with a much-improved resolution of 5Å in-plane and 7Å in the z-direction, within 2 days of data acquisition and simultaneous processing. The results obtained are superior to those delivered by conventional photographic film-based methodology of the same sample, and demonstrate the importance of drift-correction. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
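Frame-wise drift correction of detector movies, as exploited by the pipeline above, amounts to estimating a shift for each frame and summing the aligned frames. Below is a minimal integer-pixel version using FFT cross-correlation; it is a generic sketch, not the 2dx implementation, which also handles sub-pixel motion and dose weighting.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Integer-pixel shift of `frame` relative to `ref` via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h  # wrap negative shifts
    if dx > w // 2:
        dx -= w
    return dy, dx

def align_and_average(frames):
    """Shift every frame onto the first one, then average (drift-corrected sum)."""
    ref = frames[0]
    out = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = estimate_shift(ref, f)
        out += np.roll(f, (dy, dx), axis=(0, 1))
    return out / len(frames)

# Frames that are shifted copies of one image realign exactly.
rng = np.random.default_rng(1)
base = rng.random((64, 64))
stack = [np.roll(base, (k, 2 * k), axis=(0, 1)) for k in range(4)]
assert np.allclose(align_and_average(stack), base)
```

The averaged, aligned frame stack recovers the SNR that per-frame readout would otherwise sacrifice, which is the point of beam-induced motion correction.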
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that allows surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
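The warm-start ICP used here for fusing intraoperative and preoperative data builds on plain point-to-point ICP, which can be sketched as follows. This is a generic numpy version (brute-force nearest neighbours, Kabsch fit) without the paper's point cloud selection or warm start.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=20):
    """Plain point-to-point ICP: match each source point to its nearest
    neighbour in dst, fit a rigid transform, apply it, repeat."""
    cur = src.copy()
    for _ in range(n_iter):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# A small known translation is recovered exactly.
rng = np.random.default_rng(2)
cloud = rng.random((50, 3))
moved = cloud + np.array([0.05, -0.03, 0.02])
assert np.allclose(icp(moved, cloud), cloud, atol=1e-6)
```

A "warm start" in this sense would simply initialize `cur` with a prior estimate of the transform so that the nearest-neighbour matching begins close to the true correspondence.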
Liu, Dan; Liu, Xuejun; Wu, Yiguang
2018-04-24
This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.
NASA Astrophysics Data System (ADS)
Santospirito, S. P.; Słyk, Kamil; Luo, Bin; Łopatka, Rafał; Gilmour, Oliver; Rudlin, John
2013-05-01
Detection of defects in Laser Powder Deposition (LPD) produced components has been achieved by laser thermography. An automatic in-process NDT defect detection software system has been developed for the analysis of laser thermography to automatically detect, reliably measure and then sentence defects in individual beads of LPD components. A deposition path profile definition has been introduced so that all laser powder deposition beads can be modeled, and the inspection system has been developed to automatically generate an optimized inspection plan in which sampling images follow the deposition track, and to automatically control and communicate with robot arms, the source laser and cameras to implement image acquisition. Algorithms were developed so that defect sizes can be correctly evaluated, and these have been confirmed using test samples. Individual inspection images can also be stitched together for a single bead, a layer of beads or multiple layers of beads, so that defects can be mapped through the additive process. A mathematical model was built to analyze and evaluate the movement of heat throughout the inspected bead. Inspection processes were developed, and positional and temporal gradient algorithms have been used to measure the flaw sizes. Defect analysis is then performed to determine whether the defect(s) can be further classified (crack, lack of fusion, porosity), and the sentencing engine then compares the most significant defect or group of defects against the acceptance criteria, independent of human decisions. Testing on manufactured defects from the EC-funded INTRAPID project successfully detected and correctly sentenced all samples.
Wide-Field Imaging Telescope-0 (WIT0) with automatic observing system
NASA Astrophysics Data System (ADS)
Ji, Tae-Geun; Byeon, Seoyeon; Lee, Hye-In; Park, Woojin; Lee, Sang-Yun; Hwang, Sungyong; Choi, Changsu; Gibson, Coyne Andrew; Kuehne, John W.; Prochaska, Travis; Marshall, Jennifer L.; Im, Myungshin; Pak, Soojong
2018-01-01
We introduce Wide-Field Imaging Telescope-0 (WIT0) with an automatic observing system. It is developed for monitoring the variability of many sources at a time, e.g. young stellar objects and active galactic nuclei. It can also find the locations of transient sources such as supernovae or gamma-ray bursts. In February 2017, we installed the wide-field 10-inch telescope (Takahashi CCA-250) as a piggyback system on the 30-inch telescope at the McDonald Observatory in Texas, US. The 10-inch telescope has a 2.35 × 2.35 deg field of view with a 4k × 4k CCD camera (FLI ML16803). To improve the observational efficiency of the system, we developed new automatic observing software, KAOS30 (KHU Automatic Observing Software for the McDonald 30-inch telescope), written in Visual C++ for the Windows operating system. The software consists of four control packages: the Telescope Control Package (TCP), the Data Acquisition Package (DAP), the Auto Focus Package (AFP), and the Script Mode Package (SMP). Since it also supports instruments that use the ASCOM driver, additional hardware installation is greatly simplified. We commissioned KAOS30 in August 2017 and are in the process of testing it. Based on the WIT0 experience, we will extend KAOS30 to control multiple telescopes in future projects.
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two data types from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After obtaining the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image; the registration is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the possible usage of the proposed algorithm framework for 3D urban map updating and enhancement purposes.
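Turning the selected point cloud into a range image, as described, can be sketched with a simple orthographic projection: each point picks a pixel, and its distance from the acquisition position becomes the grey level. The viewing direction and resolution here are assumptions for illustration; the actual method projects from the pose estimated from the photo metafile.

```python
import numpy as np

def range_image(points, cam_pos, size=64):
    """Orthographic range image: the camera is assumed to look along +Y, so a
    point's (x, z) picks the pixel and its distance to cam_pos the grey level.
    Nearer points win when several fall into the same pixel."""
    rel = points - cam_pos
    dist = np.linalg.norm(rel, axis=1)
    x, z = rel[:, 0], rel[:, 2]
    u = ((x - x.min()) / (np.ptp(x) + 1e-9) * (size - 1)).astype(int)
    v = ((z - z.min()) / (np.ptp(z) + 1e-9) * (size - 1)).astype(int)
    img = np.full((size, size), np.inf)
    for ui, vi, di in zip(u, v, dist):
        img[vi, ui] = min(img[vi, ui], di)
    img[np.isinf(img)] = 0.0  # empty pixels
    return img

# Two points projecting onto the same pixel: the nearer distance is kept.
pts = np.array([[0.0, 1.0, 0.0], [0.0, 3.0, 0.0]])
img = range_image(pts, cam_pos=np.array([0.0, 0.0, 0.0]), size=4)
assert img[0, 0] == 1.0
```

Feature matching between the camera photo and such a range image is then possible because both are 2D intensity images of the same facade.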
Real-time myocardium segmentation for the assessment of cardiac function variation
NASA Astrophysics Data System (ADS)
Zoehrer, Fabian; Huellebrand, Markus; Chitiboi, Teodora; Oechtering, Thekla; Sieren, Malte; Frahm, Jens; Hahn, Horst K.; Hennemuth, Anja
2017-03-01
Recent developments in MRI enable the acquisition of image sequences with high spatio-temporal resolution. Cardiac motion can be captured without gating and triggering. Image size and contrast relations differ from conventional cardiac MRI cine sequences requiring new adapted analysis methods. We suggest a novel segmentation approach utilizing contrast invariant polar scanning techniques. It has been tested with 20 datasets of arrhythmia patients. The results do not differ significantly more between automatic and manual segmentations than between observers. This indicates that the presented solution could enable clinical applications of real-time MRI for the examination of arrhythmic cardiac motion in the future.
Acousto-optic RF signal acquisition system
NASA Astrophysics Data System (ADS)
Bloxham, Laurence H.
1990-09-01
This paper describes the architecture and performance of a prototype acousto-optic RF signal acquisition system designed to intercept, automatically identify, and track communication signals in the VHF band. The system covers 28.0 to 92.0 MHz with five manually selectable, dual-conversion, 12.8 MHz bandwidth front ends. An acousto-optic spectrum analyzer (AOSA) implemented using a tellurium dioxide (TeO2) Bragg cell is used to channelize the 12.8 MHz pass band into 512 channels of 25 kHz. Polarization switching is used to suppress optical noise. Excellent isolation and dynamic range are achieved by using a linear array of 512 custom 40/50 micron fiber optic cables to collect the light at the focal plane of the AOSA and route it to individual photodetectors. The photodetectors are operated in photovoltaic mode to compress the greater than 60 dB input optical dynamic range into an easily processed electrical signal. The 512 signals are multiplexed and processed as a line in a video image by a customized digital image processing system. The image processor simultaneously analyzes the channelized signal data and produces a classical waterfall display.
Yoneda, Mei; Ueda, Ryuhei; Ashida, Hiroshi; Abe, Nobuhito
2017-09-27
Recent neuroimaging investigations into human honesty suggest that honest moral decisions in individuals who consistently behave honestly occur automatically, without the need for active self-control. However, it remains unclear whether this observation can be applied to two different types of honesty: honesty forgoing dishonest reward acquisition and honesty forgoing dishonest punishment avoidance. To address this issue, a functional MRI study was conducted using an incentivized prediction task in which participants were confronted with real and repeated opportunities for dishonest gain leading to reward acquisition and punishment avoidance. Behavioral data revealed that the frequency of dishonesty was equivalent between the opportunities for dishonest reward acquisition and for punishment avoidance. Reaction time data demonstrated that honest decisions in the opportunities for dishonest reward acquisition and punishment avoidance required no additional cognitive control. Neuroimaging data revealed that honest decisions in the opportunities for dishonest reward acquisition and for punishment avoidance required no additional control-related activity compared with a control condition in which no opportunity for dishonest behavior was given. These results suggest that honesty flows automatically, irrespective of the concomitant motivation for dishonesty leading to reward acquisition and punishment avoidance. PMID:28746066
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
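Combining the two orthogonal camera views into a 3D coordinate can be illustrated with an idealized orthographic model: a camera on the X axis sees the spot's (y, z), and the Y-axis camera sees its (x, z). This is a simplification for illustration, not the system's actual calibrated geometry; real cameras would need perspective calibration.

```python
def spot_position(x_cam_view, y_cam_view):
    """Idealized orthographic combination of two orthogonal camera views.
    x_cam_view: (y, z) of the spot as seen by a camera on the X axis.
    y_cam_view: (x, z) of the spot as seen by the Y-axis camera.
    The shared z coordinate is averaged as a simple consistency measure."""
    y, z1 = x_cam_view
    x, z2 = y_cam_view
    return (x, y, (z1 + z2) / 2.0)

# Both cameras agree on z = 5.0, so the reconstruction is exact.
assert spot_position((2.0, 5.0), (3.0, 5.0)) == (3.0, 2.0, 5.0)
```

The redundancy of the third camera in the real system (two opposing X cameras) guards against occlusion of the bright spot from one side.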
SoilJ - An ImageJ plugin for semi-automatized image-processing of 3-D X-ray images of soil columns
NASA Astrophysics Data System (ADS)
Koestel, John
2016-04-01
3-D X-ray imaging is a formidable tool for quantifying soil structural properties, which are known to be extremely diverse. This diversity necessitates the collection of large sample sizes to adequately represent the spatial variability of soil structure at a specific sampling site. One important bottleneck of using X-ray imaging, however, is the large amount of time a trained specialist needs to process the image data, which makes it difficult to process larger numbers of samples. The software SoilJ aims at removing this bottleneck by automating most of the image-processing steps needed to analyze image data of cylindrical soil columns. SoilJ is a plugin for the free Java-based image-processing software ImageJ. The plugin is designed to automatically process all images located within a designated folder. In a first step, SoilJ recognizes the outlines of the soil column, whereupon the column is rotated to an upright position and placed in the center of the canvas; excess canvas is removed from the images. Then, SoilJ samples the grey values of the column material as well as the surrounding air in the Z-direction. Assuming that the column material (mostly PVC or aluminium) exhibits a spatially constant density, these grey values serve as a proxy for the image illumination at a specific Z-coordinate. Together with the grey values of the air, they are used to correct image illumination fluctuations, which often occur along the axis of rotation during image acquisition. SoilJ also includes an algorithm for beam-hardening artefact removal and extended image segmentation options. Finally, SoilJ integrates the morphology analysis plugins of BoneJ (Doube et al., 2010, BoneJ: Free and extensible bone image analysis in ImageJ. Bone 47: 1076-1079) and provides an ASCII file summarizing these measures for each investigated soil column.
In the future it is planned to integrate SoilJ into FIJI, the maintained and updated edition of ImageJ with selected plugins.
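The illumination correction described above — using the column wall and surrounding air as per-slice grey-value references — can be sketched as a slice-wise linear rescaling. This is an illustration of the idea only, not SoilJ's actual code; the array layout, names, and the linear form are assumptions:

```python
import numpy as np

def correct_illumination(stack, wall_ref, air_ref, wall_target, air_target):
    """Per-slice linear grey-value rescaling in the spirit of SoilJ's
    correction: map each slice's wall/air reference greys onto common
    target values. stack: (Z, Y, X) array; wall_ref/air_ref: per-slice
    mean grey values, shape (Z,)."""
    corrected = np.empty_like(stack, dtype=float)
    for z in range(stack.shape[0]):
        # Straight line through the two reference points of this slice.
        gain = (wall_target - air_target) / (wall_ref[z] - air_ref[z])
        offset = air_target - gain * air_ref[z]
        corrected[z] = gain * stack[z] + offset
    return corrected
```

After the correction, a given material should map to roughly the same grey value at every Z-coordinate, which is the property the subsequent segmentation steps rely on.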
An automated field phenotyping pipeline for application in grapevine research.
Kicherer, Anna; Herzog, Katja; Pflanz, Michael; Wieland, Markus; Rüger, Philipp; Kecke, Steffen; Kuhlmann, Heiner; Töpfer, Reinhard
2015-02-26
Due to the perennial nature and size of grapevine, the acquisition of phenotypic data in grapevine research is almost exclusively restricted to the field and done by visual estimation. This kind of evaluation procedure is limited by time, cost and the subjectivity of records. As a consequence, objectivity, automation and more precision in phenotypic data evaluation are needed to increase the number of samples, manage grapevine repositories, enable genetic research of new phenotypic traits and, therefore, increase the efficiency of plant research. In the present study, an automated field phenotyping pipeline was set up and applied in a plot of genetic resources. The application of the PHENObot allows image acquisition from at least 250 individual grapevines per hour directly in the field without user interaction. Data management is handled by a database (IMAGEdata). The automatic image analysis tool BIVcolor (Berries in Vineyards-color) permitted the collection of precise phenotypic data for two important fruit traits, berry size and color, within a large set of plants. The application of the PHENObot represents an automated tool for high-throughput sampling of image data in the field. The automated analysis of these images facilitates the generation of objective and precise phenotypic data on a larger scale.
NASA Technical Reports Server (NTRS)
Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.
1973-01-01
The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
Correlation applied to the recognition of regular geometric figures
NASA Astrophysics Data System (ADS)
Lasso, William; Morales, Yaileth; Vega, Fabio; Díaz, Leonardo; Flórez, Daniel; Torres, Cesar
2013-11-01
A system capable of recognizing regular geometric figures was developed. Images are captured automatically by the software, through a process that validates the presence of a figure in front of the camera lens; each digitized image is compared with a database of previously captured images, recognized, and finally identified with a spoken word naming the figure. The contribution of the proposed system is that data acquisition is done in real time using smart spy glasses with a USB interface, yielding a system that is equally effective but much more economical. This tool may be useful as an application allowing visually impaired people to obtain information about their surrounding environment.
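Correlation-based recognition of this kind typically relies on normalized cross-correlation between a captured image and stored templates; a small self-contained Python sketch of that core operation (an illustration, not the authors' implementation):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equally sized
    grey-level arrays; 1.0 indicates a perfect match."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def best_match(image, template):
    """Slide the template over the image and return the top-left corner
    and score of the window with the highest correlation."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = np.array([[ncc(image[r:r + th, c:c + tw], template)
                        for c in range(iw - tw + 1)]
                       for r in range(ih - th + 1)])
    r, c = np.unravel_index(scores.argmax(), scores.shape)
    return (int(r), int(c), float(scores[r, c]))
```

A recognized figure would then be the stored template whose best score exceeds some acceptance threshold.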
Statistical analysis of low-voltage EDS spectrum images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, I.M.
1998-03-01
The benefits of using low ({le}5 kV) operating voltages for energy-dispersive X-ray spectrometry (EDS) of bulk specimens have been explored only during the last few years. This paper couples low-voltage EDS with two other emerging areas of characterization: spectrum imaging and multivariate statistical analysis (MSA), applied here to a computer chip manufactured by a major semiconductor company. Data acquisition was performed with a Philips XL30-FEG SEM operated at 4 kV and equipped with an Oxford super-ATW detector and XP3 pulse processor. The specimen was normal to the electron beam and the take-off angle for acquisition was 35{degree}. The microscope was operated with a 150 {micro}m diameter final aperture at spot size 3, which yielded an X-ray count rate of {approximately}2,000 s{sup {minus}1}. EDS spectrum images were acquired as Adobe Photoshop files with the 4pi plug-in module. (The spectrum images could also be stored as NIH Image files, but the raw data are automatically rescaled as maximum-contrast (0--255) 8-bit TIFF images -- even at 16-bit resolution -- which poses an inconvenience for quantitative analysis.) The 4pi plug-in module is designed for EDS X-ray mapping and allows simultaneous acquisition of maps from 48 elements plus an SEM image. The spectrum image was acquired by re-defining the energy intervals of the 48 elements to form a series of contiguous 20 eV windows from 1.25 kV to 2.19 kV. A spectrum image of 450 x 344 pixels was acquired from the specimen with a sampling density of 50 nm/pixel and a dwell time of 0.25 live seconds per pixel, for a total acquisition time of {approximately}14 h. The binary data files were imported into Mathematica for analysis with software developed by the author at Oak Ridge National Laboratory. A 400 x 300 pixel section of the original image was analyzed. MSA required {approximately}185 Mbytes of memory and {approximately}18 h of CPU time on a 300 MHz Power Macintosh 9600.
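Re-defining the 48 elemental-map channels as contiguous 20 eV windows is, in effect, histogramming the spectrum into fixed-width energy bins; a hedged Python sketch of that bookkeeping (function and parameter names are illustrative, not the 4pi module's interface):

```python
import numpy as np

def bin_spectrum(energies_ev, counts, start_ev, stop_ev, width_ev):
    """Integrate a raw spectrum into contiguous energy windows, as when
    re-purposing elemental-map channels as spectrum-image bins.
    Returns the bin edges and the integrated counts per window."""
    edges = np.arange(start_ev, stop_ev + width_ev, width_ev)
    binned, _ = np.histogram(energies_ev, bins=edges, weights=counts)
    return edges, binned
```

For the acquisition described above, the same call with `start_ev=1250`, `stop_ev=2190`, `width_ev=20` would reproduce the series of contiguous 20 eV windows per pixel.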
A median-Gaussian filtering framework for Moiré pattern noise removal from X-ray microscopy image.
Wei, Zhouping; Wang, Jian; Nichol, Helen; Wiebe, Sheldon; Chapman, Dean
2012-02-01
Moiré pattern noise in Scanning Transmission X-ray Microscopy (STXM) imaging introduces significant errors in qualitative and quantitative image analysis. Due to the complex origin of the noise, it is difficult to avoid Moiré pattern noise during the image data acquisition stage. In this paper, we introduce a post-processing method for filtering Moiré pattern noise from STXM images. This method includes a semi-automatic detection of the spectral peaks in the Fourier amplitude spectrum by using a local median filter, and elimination of the spectral noise peaks using a Gaussian notch filter. The proposed median-Gaussian filtering framework shows good results for STXM images whose dimensions are a power of two, provided that parameters such as the threshold, the sizes of the median and Gaussian filters, and the size of the low-frequency window are properly selected. Copyright © 2011 Elsevier Ltd. All rights reserved.
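The two stages described — median-based detection of spectral peaks and Gaussian notch suppression — can be sketched as follows, assuming SciPy for the local median filter. The threshold, filter sizes and low-frequency protection radius below are illustrative defaults, not the paper's tuned parameter values:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_moire(img, threshold=3.0, med_size=11, sigma=2.0):
    """Sketch of a median-Gaussian Moiré filter: flag amplitude-spectrum
    peaks that exceed a local median estimate, then attenuate them with
    Gaussian notches."""
    F = np.fft.fftshift(np.fft.fft2(img))
    amp = np.abs(F)
    # Local median approximates the smooth spectral background.
    background = median_filter(amp, size=med_size)
    peaks = amp > threshold * background
    # Protect the DC / low-frequency region around the spectrum centre.
    cy, cx = np.array(img.shape) // 2
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    low = (yy - cy) ** 2 + (xx - cx) ** 2 < (min(img.shape) // 8) ** 2
    peaks &= ~low
    # Multiplicative notch mask: 1 everywhere, dipping to ~0 at each peak.
    mask = np.ones_like(amp)
    for py, px in zip(*np.nonzero(peaks)):
        g = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
        mask *= 1.0 - g
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Applied to an image with a strong periodic pattern, the notches suppress the Moiré peaks while the protected window preserves the underlying low-frequency content.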
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonior, Jason D; Hu, Zhen; Guo, Terry N.
This letter presents an experimental demonstration of software-defined-radio-based wireless tomography using computer-hosted radio devices called Universal Software Radio Peripheral (USRP). This experimental brief follows our vision and previous theoretical study of wireless tomography that combines wireless communication and RF tomography to provide a novel approach to remote sensing. Automatic data acquisition is performed inside an RF anechoic chamber. Semidefinite relaxation is used for phase retrieval, and the Born iterative method is utilized for imaging the target. Experimental results are presented, validating our vision of wireless tomography.
NASA Technical Reports Server (NTRS)
Gialdini, M.; Titus, S. J.; Nichols, J. D.; Thomas, R.
1975-01-01
An approach to information acquisition is discussed in the context of meeting user-specified needs in a cost-effective, timely manner through the use of remote sensing data, ground data, and multistage sampling techniques. The roles of both LANDSAT imagery and Skylab photography are discussed as first stages of three separate multistage timber inventory systems and results are given for each system. Emphasis is placed on accuracy and meeting user needs.
NASA Astrophysics Data System (ADS)
Izadyyazdanabadi, Mohammadhassan; Belykh, Evgenii; Martirosyan, Nikolay; Eschbacher, Jennifer; Nakaji, Peter; Yang, Yezhou; Preul, Mark C.
2017-03-01
Confocal laser endomicroscopy (CLE), although capable of obtaining images at cellular resolution during surgery of brain tumors in real time, creates as many non-diagnostic as diagnostic images. Non-useful images are often distorted due to relative motion between probe and brain, or due to blood artifacts. Many images, however, simply lack diagnostic features immediately informative to the physician. Examining all the hundreds or thousands of images from a single case to discriminate diagnostic images from nondiagnostic ones can be tedious. Providing a real-time diagnostic value assessment of images (fast enough to be used during the surgical acquisition process and accurate enough for the pathologist to rely on) to automatically detect diagnostic frames would streamline the analysis of images and filter useful images for the pathologist/surgeon. We sought to automatically classify images as diagnostic or non-diagnostic. AlexNet, a deep-learning architecture, was used in a 4-fold cross-validation scheme. Our dataset includes 16,795 images (8572 nondiagnostic and 8223 diagnostic) from 74 CLE-aided brain tumor surgery patients. The ground truth for all the images was provided by the pathologist. Average model accuracy on test data was 91% overall (90.79% accuracy, 90.94% sensitivity and 90.87% specificity). To evaluate the model reliability we also performed receiver operating characteristic (ROC) analysis, yielding an average area under the ROC curve (AUC) of 0.958. These results demonstrate that a deeply trained AlexNet network can achieve a model that reliably and quickly recognizes diagnostic CLE images.
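The reported AUC can be computed from per-image classifier scores and pathologist labels with the rank-based (Mann-Whitney) estimator, independent of any deep-learning framework; a minimal sketch:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive outscores a randomly
    chosen negative (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking, while 1.0 means every diagnostic frame scored above every non-diagnostic one.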
NASA Astrophysics Data System (ADS)
Dangi, Shusil; Ben-Zikri, Yehuda K.; Cahill, Nathan; Schwarz, Karl Q.; Linte, Cristian A.
2015-03-01
Two-dimensional (2D) ultrasound (US) has been the clinical standard for over two decades for monitoring and assessing cardiac function and providing support via intra-operative visualization and guidance for minimally invasive cardiac interventions. Developments in three-dimensional (3D) image acquisition and transducer design and technology have revolutionized echocardiography imaging, enabling both real-time 3D trans-esophageal and intra-cardiac image acquisition. However, in most cases clinicians do not access the entire 3D image volume when analyzing the data; rather, they focus on several key views that render the cardiac anatomy of interest during the US imaging exam. This approach enables image acquisition at a much higher spatial and temporal resolution. Two such common approaches are the bi-plane and tri-plane data acquisition protocols; as their names state, the former comprises two orthogonal image views, while the latter depicts the cardiac anatomy based on three co-axially intersecting views spaced at 60° to one another. Since cardiac anatomy is continuously changing, the intra-operative anatomy depicted using real-time US imaging also needs to be updated by tracking the key features of interest and endocardial left ventricle (LV) boundaries. Therefore, rapid automatic feature tracking in US images is critical for three reasons: 1) to perform cardiac function assessment; 2) to identify the location of surgical targets for accurate tool-to-target navigation and on-target instrument positioning; and 3) to enable pre- to intra-op image registration as a means to fuse pre-op CT or MR images used during planning with intra-operative images for enhanced guidance. In this paper we utilize monogenic filtering, graph-cut based segmentation and robust spline smoothing in a combined work flow to process the acquired tri-plane TEE time series US images and demonstrate robust and accurate tracking of the LV endocardial features.
We reconstruct the endocardial LV geometry using the tri-plane contours and spline interpolation, and assess the accuracy of the proposed work flow against gold-standard results from the GE Echopac PC clinical software according to quantitative clinical LV characterization parameters, such as the length, circumference, area and volume. Our proposed combined work flow leads to consistent, rapid and automated identification of the LV endocardium, suitable for intra-operative applications and "on-the-fly" computer-assisted assessment of ejection fraction for cardiac function monitoring.
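Clinical contour parameters such as circumference and enclosed area follow directly from a traced endocardial boundary; a generic Python sketch using the shoelace formula (a standard geometric method, not the Echopac implementation):

```python
import math

def contour_metrics(points):
    """Perimeter and enclosed area of a closed 2-D contour given as a
    list of (x, y) vertices, e.g. an endocardial boundary sampled from
    one image plane. Area uses the shoelace formula."""
    n = len(points)
    perim = sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))
    area = 0.5 * abs(sum(points[i][0] * points[(i + 1) % n][1]
                         - points[(i + 1) % n][0] * points[i][1]
                         for i in range(n)))
    return perim, area
```

Volume estimates then follow by combining such per-plane areas across the tri-plane views, e.g. with a disc-summation rule.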
Lingua, Andrea; Marenchino, Davide; Nex, Francesco
2009-01-01
In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
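SIFT-based tie-point extraction ends with descriptor matching, conventionally filtered with Lowe's ratio test; a minimal NumPy sketch of that filtering step (descriptor computation itself is omitted, and the ratio threshold is an illustrative default):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Lowe's ratio test, the standard filter for SIFT correspondences:
    accept a match only when the nearest neighbour in desc2 is clearly
    closer than the second-nearest. Descriptors are rows of each array."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The surviving correspondences are the tie points that feed relative orientation and approximate DSM generation.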
Zawadzki, Robert J.; Jones, Steven M.; Pilli, Suman; Balderas-Mata, Sandra; Kim, Dae Yu; Olivier, Scot S.; Werner, John S.
2011-01-01
We describe an ultrahigh-resolution (UHR) retinal imaging system that combines adaptive optics Fourier-domain optical coherence tomography (AO-OCT) with an adaptive optics scanning laser ophthalmoscope (AO-SLO) to allow simultaneous data acquisition by the two modalities. The AO-SLO subsystem was integrated into the previously described AO-UHR OCT instrument with minimal changes to the latter. This was done in order to ensure optimal performance and image quality of the AO-UHR OCT. In this design both imaging modalities share most of the optical components including a common AO-subsystem and vertical scanner. One of the benefits of combining FD-OCT with SLO includes automatic co-registration between two acquisition channels for direct comparison between retinal structures imaged by both modalities (e.g., photoreceptor mosaics or microvasculature maps). Because of differences in the detection scheme of the two systems, this dual imaging modality instrument can provide insight into retinal morphology and potentially function, that could not be accessed easily by a single system. In this paper we describe details of the components and parameters of the combined instrument, including incorporation of a novel membrane magnetic deformable mirror with increased stroke and actuator count used as a single wavefront corrector. We also discuss laser safety calculations for this multimodal system. Finally, retinal images acquired in vivo with this system are presented. PMID:21698028
Resiliency of the Multiscale Retinex Image Enhancement Algorithm
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.
1998-01-01
The multiscale retinex with color restoration (MSRCR) continues to prove itself in extensive testing to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. However, issues remain with regard to the resiliency of the MSRCR to different image sources and to arbitrary image manipulations which may have been applied prior to retinex processing. In this paper we define these areas of concern, provide experimental results, and examine the effects of commonly occurring image manipulations on retinex performance. In virtually all cases the MSRCR is highly resilient to the effects of both image source variations and commonly encountered prior image processing. Significant artifacts are primarily observed in the case of selective color-channel clipping in large dark zones of an image. These issues are of concern for the processing of digital image archives and other applications where there is neither control over the image acquisition process nor knowledge of any processing done on the data beforehand.
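At its core, the multiscale retinex averages log(I) − log(G_σ ∗ I) over several surround scales σ; a Python sketch of that step, assuming SciPy's Gaussian filter. The color-restoration term of full MSRCR is omitted, and the scale values below are illustrative rather than the published ones:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(channel, sigmas=(15, 80, 250), eps=1e-6):
    """Multiscale retinex on one colour channel: the average over several
    surround scales of log(image) minus log(Gaussian-blurred image)."""
    channel = channel.astype(float) + eps  # avoid log(0)
    out = np.zeros_like(channel)
    for s in sigmas:
        out += np.log(channel) - np.log(gaussian_filter(channel, s) + eps)
    return out / len(sigmas)
```

On a uniform image the surround equals the image, so the output is (near) zero everywhere; contrast relative to the local surround is what survives, which is the source of the dynamic range compression described above.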
Automatic evaluation of the Valsalva sinuses from cine-MRI
NASA Astrophysics Data System (ADS)
Blanchard, Cédric; Sliwa, Tadeusz; Lalande, Alain; Mohan, Pauliah; Bouchot, Olivier; Voisin, Yvon
2011-03-01
MRI appears to be particularly attractive for the study of the Sinuses of Valsalva (SV); however, there is no global consensus on their suitable measurement. In this paper, we propose a new method, based on mathematical morphology and combining a numerical geodesic reconstruction with an area estimation, to automatically evaluate the SV from cine-MRI in a cross-sectional orientation. It consists of the extraction of the shape of the SV, the detection of relevant points (commissures, cusps and the centre of the SV), the measurement of relevant distances, and the classification of the valve as bicuspid or tricuspid by a metric evaluation of the SV. Our method was tested on 23 patient examinations and radii calculations were compared with manual measurements. The classification of the valve as tricuspid or bicuspid was correct in all cases. Moreover, there is excellent correlation and concordance between manual and automatic measurements for images at the diastolic phase (r = 0.97; y = x - 0.02; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.3 mm) and at the systolic phase (r = 0.96; y = 0.97x + 0.80; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.4 mm). The cross-sectional orientation of the image acquisition plane, conjugated with our automatic method, provides a reliable morphometric evaluation of the SV, based on the automatic location of the centre of the SV and the commissure and cusp positions. Measurements of distances between relevant points allow a precise evaluation of the SV.
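The agreement statistics quoted above (correlation, mean and standard deviation of differences) are standard Bland-Altman-style quantities; a small self-contained sketch of how they are computed from paired manual and automatic measurements:

```python
import math

def agreement_stats(manual, automatic):
    """Pearson correlation plus Bland-Altman bias (mean of differences)
    and the standard deviation of differences, for paired measurements."""
    n = len(manual)
    mx = sum(manual) / n
    my = sum(automatic) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(manual, automatic))
    sx = math.sqrt(sum((a - mx) ** 2 for a in manual))
    sy = math.sqrt(sum((b - my) ** 2 for b in automatic))
    r = cov / (sx * sy)
    diffs = [b - a for a, b in zip(manual, automatic)]
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return r, bias, sd
```

A bias near zero with a small standard deviation of differences, together with high r, is what the reported diastolic and systolic figures express.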
[Wireless digital radiography detectors in the emergency area: an efficacious solution].
Garrido Blázquez, M; Agulla Otero, M; Rodríguez Recio, F J; Torres Cabrera, R; Hernando González, I
2013-01-01
To evaluate the implementation of a flat-panel digital radiography (DR) system with WiFi technology in an emergency radiology area in which a computed radiography (CR) system was previously used. We analyzed aspects related to image quality, radiation dose, workflow, and ergonomics. We compared the image quality achieved with the CR and WiFi DR systems, both in images obtained using a phantom and in radiologists' evaluations of radiological images obtained in real patients. We also analyzed the time required for image acquisition and the workflow with the two technological systems. Finally, we analyzed the data related to the radiation dose in patients before and after the implementation of the new equipment. Image quality improved both in the tests carried out with a phantom and in radiological images obtained in patients, increasing from 3 to 4.5 on a 5-point scale. The average time required for image acquisition decreased by 25 seconds per image. The flat panel required less radiation to be delivered in practically all the techniques carried out using automatic dosimetry, although statistically significant differences were found in only some of the techniques (chest, thoracic spine, and lumbar spine). Implementing the WiFi DR system has brought benefits. Image quality has improved and the radiation dose to patients has decreased. The new system also has advantages in terms of functionality, ergonomics, and performance. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
Automation of Vapor-Diffusion Growth of Protein Crystals
NASA Technical Reports Server (NTRS)
Hamrick, David T.; Bray, Terry L.
2005-01-01
Some improvements have been made in a system of laboratory equipment developed previously for studying the crystallization of proteins from solution by use of dynamically controlled flows of dry gas. The improvements involve mainly (1) automation of dispensing of liquids for starting experiments, (2) automatic control of drying of protein solutions during the experiments, and (3) provision for automated acquisition of video images for monitoring experiments in progress and for post-experiment analysis. The automation of dispensing of liquids was effected by adding an automated liquid-handling robot that can aspirate source solutions and dispense them in either a hanging-drop or a sitting-drop configuration, whichever is specified, in each of 48 experiment chambers. A video camera of approximately the size and shape of a lipstick dispenser was added to a mobile stage that is part of the robot, in order to enable automated acquisition of images in each experiment chamber. The experiment chambers were redesigned to enable the use of sitting drops, enable backlighting of each specimen, and facilitate automation.
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
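Geometric assessment of competing DSMs commonly reduces to elevation residuals against a reference surface, e.g. the root-mean-square error; a minimal sketch (the paper's actual assessment protocol may differ):

```python
import math

def dsm_rmse(dsm_test, dsm_ref):
    """Root-mean-square error between two co-registered elevation grids
    of equal shape, a common geometric-accuracy measure when comparing
    software outputs against a reference surface."""
    diffs = [t - r for row_t, row_r in zip(dsm_test, dsm_ref)
             for t, r in zip(row_t, row_r)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Computing this per software package over the same reference grid gives a single comparable accuracy figure for each of the four tools.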
Distribution Tolerant Network Technology Flight Validation Report: DINET
NASA Technical Reports Server (NTRS)
Jones, Ross M.
2009-01-01
In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then, they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions.
Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM
NASA Astrophysics Data System (ADS)
Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel
2017-04-01
3D terrain relief acquisition is important for a large part of geosciences. Several methods have been developed to digitize terrains, such as total station, LiDAR, GNSS or photogrammetry. To digitize road (or rail tracks) sides on long sections, mobile spatial imaging system or UAV are commonly used. In this project, we compare a still fairly new method -the SfM on-motion technics- with some traditional technics of terrain digitizing (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The SfM on-motion technics generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed at 3.2 meters above the ground. Three of them have a GNNS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K resolution fisheye videos were also used to extract 8.3M not geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the SfM on-motion technics are compared with results from classical SfM photogrammetry on a 500 meters long alpine track. They were also compared with mobile laser scanning data on the same road section. First results seem to indicate that slope structures are well observable up to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of few meters is much better than the altimetric (Z) accuracy. There is indeed a Z coordinate shift of few tens of meters between GoPro cameras and Garmin camera. This makes necessary to give a greater freedom to altimetric coordinates in the processing software. Benefits of this low-cost SfM on-motion method are: 1) a simple setup to use in the field (easy to switch between vehicle types as car, train, bike, etc.), 2) a low cost and 3) an automatic georeferencing of 3D points clouds. 
The main disadvantages are: 1) results are less accurate than those from a LiDAR system, 2) heavy image processing and 3) a short acquisition range.
Validation of automatic joint space width measurements in hand radiographs in rheumatoid arthritis.
Schenk, Olga; Huo, Yinghe; Vincken, Koen L; van de Laar, Mart A; Kuper, Ina H H; Slump, Kees C H; Lafeber, Floris P J G; Bernelot Moens, Hein J
2016-10-01
Computerized methods promise quick, objective, and sensitive tools to quantify progression of radiological damage in rheumatoid arthritis (RA). Measurement of joint space width (JSW) in finger and wrist joints with these systems has performed comparably to the Sharp-van der Heijde score (SHS). A next step toward clinical use, validation of precision and accuracy in hand joints with minimal damage, is described here with close scrutiny of sources of error. A recently developed system to measure metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints was validated in consecutive hand images of RA patients. To assess the impact of image acquisition, measurements on radiographs from a multicenter trial and from a recent prospective cohort in a single hospital were compared. Precision of the system was tested by comparing the joint space in mm in pairs of subsequent images taken a short interval apart without progression of SHS. In cases of incorrect measurements, the source of error was analyzed in a review by human experts. Accuracy was assessed by comparison with reported measurements from other systems. In the two series of radiographs, the system could automatically locate and measure 1003/1088 (92.2%) and 1143/1200 (95.3%) individual joints, respectively. In joints with a normal SHS, the average (SD) size of MCP joints was [Formula: see text] and [Formula: see text] in the two series of radiographs, and of PIP joints [Formula: see text] and [Formula: see text]. The difference in JSW between two serial radiographs with an interval of 6 to 12 months and unchanged SHS was [Formula: see text], indicating very good precision. Errors occurred more often in radiographs from the multicenter cohort than in the more recent series from a single hospital.
Detailed analysis of the 55/1125 (4.9%) measurements that had a discrepant paired measurement revealed that variation in the process of image acquisition (exposure in 15% and repositioning in 57%) was a more frequent source of error than incorrect delineation by the software (25%). Various steps in the validation of an automated measurement system for JSW of MCP and PIP joints are described. The use of serial radiographs from different sources, with a short interval and limited damage, is helpful to detect sources of error. Image acquisition, in particular repositioning, is a dominant source of error.
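The precision analysis above rests on paired differences between serial radiographs with unchanged SHS. A minimal sketch of that computation, with hypothetical JSW values and function names (the paper's own software is not public):

```python
import statistics

def paired_precision(jsw_t0, jsw_t1):
    """Mean, SD and smallest detectable difference (mm) of paired JSW
    measurements from two serial radiographs without SHS progression."""
    diffs = [b - a for a, b in zip(jsw_t0, jsw_t1)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    sdd = 1.96 * sd  # Bland-Altman style 95% limit on a single difference
    return mean, sd, sdd

# Hypothetical paired measurements (mm) for five MCP joints
t0 = [1.52, 1.48, 1.61, 1.40, 1.55]
t1 = [1.50, 1.49, 1.58, 1.42, 1.54]
mean_d, sd_d, sdd = paired_precision(t0, t1)
```

A mean near zero with a small SD corresponds to the "very good precision" reported for pairs with unchanged SHS.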
Evaluation of oesophageal transit velocity using the improved Demons technique.
De Souza, Michele N; Xavier, Fernando E B; Secaf, Marie; Troncon, Luiz E A; de Oliveira, Ricardo B; Moraes, Eder R
2016-01-01
This paper presents a novel method to compute oesophageal transit velocity in a direct and automated manner by the registration of scintigraphy images. A total of 36 images from nine healthy volunteers were processed. Four dynamic image series per volunteer were acquired after a minimum 8 h fast. Each acquisition was made following the ingestion of 5 ml saline labelled with about 26 MBq (700 µCi) technetium-99m phytate in a single swallow. Between the acquisitions, another two swallows of 5 ml saline were performed to clear the oesophagus. The composite acquired files were made of 240 frames of anterior and posterior views. Each frame is the accumulated count over 250 ms. At the end of the acquisitions, the images were corrected for radioactive decay, the geometric mean was computed between the anterior and posterior views and the registration of a set of subsequent images was performed. Using the improved Demons technique, we obtained from the deformation field the regional resultant velocity, which is directly related to the oesophageal transit velocity. The mean regional resultant velocities decrease progressively from the proximal to the distal oesophageal portions and, at the proximal portion, are virtually identical to the typical velocity of the primary peristaltic pump. Comparison between this parameter and 'time-activity' curves reveals consistency in the velocities obtained using both methods for the proximal portion. Application of the improved Demons technique, as an easy and automated method to evaluate velocities of oesophageal bolus transit, is feasible and seems to yield consistent data, particularly for the proximal oesophagus.
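The step from deformation field to transit velocity is simple dimensional bookkeeping: displacement per 250 ms frame, scaled to mm/s. A sketch in numpy (the pixel size is a hypothetical value, not taken from the paper):

```python
import numpy as np

FRAME_DT = 0.25   # s per frame (240 frames at 250 ms each, as stated)
PIXEL_MM = 2.4    # mm per pixel: hypothetical gamma-camera pixel size

def regional_velocity(dx, dy, mask):
    """Mean velocity magnitude (mm/s) over a region of interest, from a
    Demons-style deformation field (dx, dy, in pixels) between frames."""
    mag = np.hypot(dx, dy)           # per-pixel displacement magnitude
    return mag[mask].mean() * PIXEL_MM / FRAME_DT

# Toy field: uniform one-pixel downward displacement inside the region
dx = np.zeros((8, 8))
dy = np.ones((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 3:5] = True
v = regional_velocity(dx, dy, mask)
```

With a one-pixel displacement per frame this gives 2.4 mm / 0.25 s = 9.6 mm/s for the region.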
3D change detection at street level using mobile laser scanning point clouds and terrestrial images
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Gruen, Armin
2014-04-01
Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers. Mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted window-based z-buffering method for view-dependent 2D triangulation.
In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in the image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. This method can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.
Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.
Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu
2015-01-01
Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.
Progressive compressive imager
NASA Astrophysics Data System (ADS)
Evladov, Sergei; Levi, Ofer; Stern, Adrian
2012-06-01
We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of needed measurements, is unknown. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstruction from compressed data at ratios of 1:20.
Shi, Handuo; Colavin, Alexandre; Lee, Timothy K; Huang, Kerwyn Casey
2017-02-01
Single-cell microscopy is a powerful tool for studying gene functions using strain libraries, but it suffers from throughput limitations. Here we describe the Strain Library Imaging Protocol (SLIP), which is a high-throughput, automated microscopy workflow for large strain collections that requires minimal user involvement. SLIP involves transferring arrayed bacterial cultures from multiwell plates onto large agar pads using inexpensive replicator pins and automatically imaging the resulting single cells. The acquired images are subsequently reviewed and analyzed by custom MATLAB scripts that segment single-cell contours and extract quantitative metrics. SLIP yields rich data sets on cell morphology and gene expression that illustrate the function of certain genes and the connections among strains in a library. For a library arrayed on 96-well plates, image acquisition can be completed within 4 min per plate.
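The SLIP analysis scripts described above are custom MATLAB; an equivalent segment-and-measure step can be sketched in Python with scipy (threshold, label connected components, extract per-cell areas). The real SLIP contour segmentation is more elaborate; this is only the shape of the workflow:

```python
import numpy as np
from scipy import ndimage

def segment_cells(img, thresh):
    """Threshold an image, label connected components (cells) and
    return the component count plus per-cell pixel areas."""
    mask = img > thresh
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return n, list(areas)

# Toy frame: two bright "cells" on a dark background
img = np.zeros((10, 10))
img[1:3, 1:4] = 1.0   # 2 x 3 cell, area 6
img[6:9, 6:8] = 1.0   # 3 x 2 cell, area 6
n, areas = segment_cells(img, 0.5)
```

Per-cell areas are the simplest of the "quantitative metrics" such a pipeline can extract; intensity statistics per label follow the same pattern.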
Mosaic construction, processing, and review of very large electron micrograph composites
NASA Astrophysics Data System (ADS)
Vogt, Robert C., III; Trenkle, John M.; Harmon, Laurel A.
1996-11-01
A system of programs is described for the acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5-megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy: one that broadens the scope of questions this imaging modality can be used to answer.
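Frame-to-frame registration of the kind that underlies such mosaicking is commonly done by phase correlation. A sketch of that building block (not the program's actual matcher):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (row, col) shift of frame b relative to
    frame a by phase correlation, a common mosaicking building block."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12            # keep phase, discard magnitude
    corr = np.abs(np.fft.ifft2(F))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts
    return tuple(int(p) if p < s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Toy frames: the same pattern, the second shifted by (2, 3) pixels
a = np.zeros((16, 16))
a[4:7, 5:9] = 1.0
b = np.roll(a, (2, 3), axis=(0, 1))
shift = phase_correlate(a, b)
```

Chaining pairwise shifts like this one places each frame in the composite's global coordinate frame.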
Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2018-04-01
ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the visible or thermal infrared (IR). The project involves close co-operation of the defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate TA ranges: the analytical TRM4 model and the image-based Triangle Orientation Discrimination (TOD) model. In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces the virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise and digital signal processing. The resulting images may be presented to observers or further processed for automatic image quality calculations. For convenience, OSIS incorporates camera descriptions and intermediate results provided by TRM4. As input, OSIS uses pristine imagery tied to meta-information about scene content, its physical dimensions, and gray-level interpretation. These images represent planar targets placed at specified distances from the imager. Furthermore, OSIS is extended by a plugin functionality that enables the integration of advanced digital signal processing techniques in ECOMOS, such as compression, local contrast enhancement and digital turbulence mitigation, to name but a few. By means of this image-based approach, image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
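The degradation chain such a simulator applies (optics blur, detector sampling, temporal noise) can be illustrated with a toy pipeline; all parameters below are hypothetical, not TRM4 values:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(pristine, blur_sigma, downsample, noise_sigma):
    """Toy imager chain in the spirit of OSIS: Gaussian optics blur,
    detector sampling and additive temporal noise (parameters hypothetical)."""
    r = int(3 * blur_sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * blur_sigma**2))
    k /= k.sum()                                   # separable blur kernel
    out = np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 0, pristine)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 1, out)
    out = out[::downsample, ::downsample]          # detector footprint/sampling
    return out + rng.normal(0.0, noise_sigma, out.shape)

scene = np.zeros((32, 32))
scene[12:20, 12:20] = 1.0                          # planar target
img = degrade(scene, blur_sigma=1.0, downsample=2, noise_sigma=0.01)
```

A real model would add atmospheric attenuation, fixed pattern noise and the camera's signal processing; the point here is only the ordering of the stages.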
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Shih, Ching-Tien; Peng, Chin-Ling
2011-01-01
This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance through an Automatic Target Acquisition Program (ATAP) and a newly developed mouse driver (i.e. a new mouse driver that replaces the standard mouse driver and is able to monitor mouse movement and intercept click actions). Initially, both…
A Handbook for Automatic Data Processing Equipment Acquisition.
1981-12-01
Navy ADPE Procurement Policies: Automatic Data Processing Equipment (ADPE) procurement by federal agencies is governed by an interlocking network of policies and directives issued by federal agencies, the Department... (SECNAVINST) and local procedures governing the acquisition of ADPE. Obtaining and understanding this interlocking network of policies is often difficult
Automatic gang graffiti recognition and interpretation
NASA Astrophysics Data System (ADS)
Parra, Albert; Boutin, Mireille; Delp, Edward J.
2017-09-01
One of the roles of emergency first responders (e.g., police and fire departments) is to prevent and protect against events that can jeopardize the safety and well-being of a community. In the case of criminal gang activity, tools are needed for finding, documenting, and taking the necessary actions to mitigate the problem or issue. We describe an integrated mobile-based system capable of using location-based services, combined with image analysis, to track and analyze gang activity through the acquisition, indexing, and recognition of gang graffiti images. This approach uses image analysis methods for color recognition, image segmentation, and image retrieval and classification. A database of gang graffiti images is described that includes not only the images but also metadata related to the images, such as date and time, geoposition, gang, gang member, colors, and symbols. The user can then query the data in a useful manner. We have implemented these features both as applications for Android and iOS hand-held devices and as a web-based interface.
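The color-recognition step of such a system can be illustrated by classifying pixels by HSV hue; the hue ranges below are hypothetical placeholders, not the system's calibrated values:

```python
import colorsys

# Hypothetical hue ranges (degrees), stand-ins for calibrated gang colors
COLOR_RANGES = {"red": (345, 15), "blue": (200, 260), "yellow": (45, 75)}

def classify_pixel(r, g, b):
    """Name the color of an RGB pixel (0-255) from its HSV hue; dark or
    desaturated pixels are treated as background."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    deg = h * 360
    if s < 0.2 or v < 0.2:
        return "background"
    for name, (lo, hi) in COLOR_RANGES.items():
        # Ranges like (345, 15) wrap around the hue circle
        in_range = lo <= deg <= hi if lo < hi else deg >= lo or deg <= hi
        if in_range:
            return name
    return "other"
```

For example, classify_pixel(200, 30, 30) labels a strongly red pixel; per-pixel labels like these feed region segmentation and the color metadata stored with each graffiti image.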
Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans
NASA Astrophysics Data System (ADS)
Boussaha, M.; Vallet, B.; Rives, P.
2018-05-01
The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
Analysis of neoplastic lesions in magnetic resonance imaging using self-organizing maps.
Mei, Paulo Afonso; de Carvalho Carneiro, Cleyton; Fraser, Stephen J; Min, Li Li; Reis, Fabiano
2015-12-15
To provide an improved method for the identification and analysis of brain tumors in MRI scans using a semi-automated computational approach, which has the potential to provide a more objective, precise and quantitatively rigorous analysis compared to human visual analysis. Self-Organizing Maps (SOM) is an unsupervised, exploratory data analysis tool, which can automatically partition an image into self-similar regions or clusters based on measures of similarity. It can be used to partition brain tissue on MR images without prior knowledge. We used SOM to analyze T1, T2 and FLAIR acquisitions from two MRI machines in our service from 14 patients with brain tumors confirmed by biopsies: three lymphomas, six glioblastomas, one meningioma, one ganglioglioma, two oligoastrocytomas and one astrocytoma. The SOM software was used to analyze the data from the three image acquisitions for each patient and generated a self-organized map for each, containing 25 clusters. Damaged tissue was separated from normal tissue using the SOM technique. Furthermore, in some cases it allowed separation of different areas within the tumor, such as edema/peritumoral infiltration and necrosis. In lesions with less precise boundaries in FLAIR, the estimated damaged-tissue area in the resulting map appears bigger. Our results showed that SOM has the potential to be a powerful MR imaging analysis technique for the assessment of brain tumors. Copyright © 2015. Published by Elsevier B.V.
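A minimal SOM fits in a few lines of numpy. The sketch below clusters toy three-channel "voxel" vectors (one T1/T2/FLAIR intensity triple each) onto a 5 x 5 map, standing in for the 25-cluster maps described above; the data and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(data, grid=5, iters=500, lr0=0.5, sigma0=2.0):
    """Minimal Self-Organizing Map: fit grid*grid units to feature vectors."""
    w = rng.random((grid * grid, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                         # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5             # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood function
        w += lr * h[:, None] * (x - w)
    return w

def assign(data, w):
    """Cluster id (best-matching unit) for every sample."""
    return np.argmin(((data[:, None, :] - w[None]) ** 2).sum(axis=2), axis=1)

# Toy voxels: two well-separated intensity populations ("normal" vs "lesion")
voxels = np.vstack([rng.normal(0.2, 0.02, (50, 3)),
                    rng.normal(0.8, 0.02, (50, 3))])
w = train_som(voxels)
labels = assign(voxels, w)
```

Voxels from distinct tissue populations end up on distinct map units, which is the mechanism the paper exploits to separate damaged from normal tissue.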
Automatic image acquisition processor and method
Stone, William J.
1986-01-01
A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
Automatic image acquisition processor and method
Stone, W.J.
1984-01-16
A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
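The iteration described in both patent records (circle locus about a trial center, per-quadrant counts of points overlying the object, error signals from the comparison) can be sketched directly; the gain and stopping criterion below are illustrative choices, not the patent's:

```python
import numpy as np

def circle_points(cx, cy, r, n=360):
    """Locus of n points on a circle of radius r about the trial center."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return cx + r * np.cos(t), cy + r * np.sin(t)

def refine_center(mask, cx, cy, r, gain=0.02, iters=50):
    """Move a trial center until the locus points overlying the object
    are balanced across the four quadrants."""
    h, w = mask.shape
    for _ in range(iters):
        xs, ys = circle_points(cx, cy, r)
        ix = xs.round().astype(int)
        iy = ys.round().astype(int)
        ok = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
        on = mask[iy[ok], ix[ok]]                 # locus points on the object
        dx = xs[ok][on] - cx
        dy = ys[ok][on] - cy
        q1 = ((dx > 0) & (dy > 0)).sum()          # quadrant counts I..IV
        q2 = ((dx < 0) & (dy > 0)).sum()
        q3 = ((dx < 0) & (dy < 0)).sum()
        q4 = ((dx > 0) & (dy < 0)).sum()
        ex = gain * ((q1 + q4) - (q2 + q3))       # error signals from
        ey = gain * ((q1 + q2) - (q3 + q4))       # opposing quadrant pairs
        if abs(ex) < 0.5 and abs(ey) < 0.5:
            break
        cx += ex
        cy += ey
    return cx, cy

# Toy object: filled disc centered at (20, 24); start off-target
yy, xx = np.mgrid[0:48, 0:48]
mask = (xx - 20) ** 2 + (yy - 24) ** 2 <= 10 ** 2
cx, cy = refine_center(mask, 14.0, 30.0, r=8)
```

The imbalance between opposing quadrant pairs points toward the object's center, so each step moves the trial center closer until the counts balance.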
Karami, Elham; Wang, Yong; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas
2016-01-01
Abstract. In-depth understanding of the diaphragm’s anatomy and physiology has been of great interest to the medical community, as it is the most important muscle of the respiratory system. While noncontrast four-dimensional (4-D) computed tomography (CT) imaging provides an interesting opportunity for effective acquisition of anatomical and/or functional information from a single modality, segmenting the diaphragm in such images is very challenging not only because of the diaphragm’s lack of image contrast with its surrounding organs but also because of respiration-induced motion artifacts in 4-D CT images. To account for such limitations, we present an automatic segmentation algorithm, which is based on a priori knowledge of diaphragm anatomy. The novelty of the algorithm lies in using the diaphragm’s easy-to-segment contacting organs—including the lungs, heart, aorta, and ribcage—to guide the diaphragm’s segmentation. Obtained results indicate that average mean distance to the closest point between diaphragms segmented using the proposed technique and corresponding manual segmentation is 2.55±0.39 mm, which is favorable. An important feature of the proposed technique is that it is the first algorithm to delineate the entire diaphragm. Such delineation facilitates applications, where the diaphragm boundary conditions are required such as biomechanical modeling for in-depth understanding of the diaphragm physiology. PMID:27921072
NASA Astrophysics Data System (ADS)
Matgen, Patrick; Giustarini, Laura; Hostache, Renaud
2012-10-01
This paper introduces an automatic flood mapping application that is hosted on the Grid Processing on Demand (GPOD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to operationally deliver flooded areas using both recent and historical acquisitions of SAR data. With the flooding-related exploitation of data generated by the upcoming ESA SENTINEL-1 SAR mission as a short-term target, the flood mapping application consists of two building blocks: i) a set of query tools for selecting the "crisis image" and the optimal corresponding "reference image" from the G-POD archive and ii) an algorithm for extracting flooded areas via change detection using the previously selected "crisis image" and "reference image". Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate reference image. Potential users will also be able to apply the implemented flood delineation algorithm. The latter combines histogram thresholding, region growing and change detection in an approach enabling automatic, objective and reliable flood extent extraction from SAR images. Both algorithms are computationally efficient and operate with minimum data requirements. The case study of the high-magnitude flooding event that occurred in July 2007 on the Severn River, UK, which was observed with a moderate-resolution SAR sensor as well as airborne photography, highlights the performance of the proposed online application. The flood mapping application on G-POD can be used sporadically, i.e. whenever a major flood event occurs and there is a demand for SAR-based flood extent maps. In the long term, a potential extension of the application could consist of systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis.
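The core of the delineation algorithm, histogram thresholding of a reference/crisis difference image, can be sketched as follows (Otsu's criterion stands in for the paper's thresholding step; the region-growing refinement is omitted):

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Histogram threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def flood_mask(reference, crisis):
    """Change detection: flooding lowers SAR backscatter, so threshold
    the (reference - crisis) difference image."""
    diff = reference - crisis
    return diff > otsu_threshold(diff.ravel())

rng = np.random.default_rng(1)
ref = rng.normal(0.6, 0.05, (32, 32))      # toy "reference image"
crisis = ref.copy()
crisis[8:20, 8:24] -= 0.4                  # flooded patch: backscatter drop
mask = flood_mask(ref, crisis)
```

Differencing against a dry-conditions reference image is what makes the extraction robust to permanent water bodies and other low-backscatter surfaces.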
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in the accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations. First, a copy of a flint artefact is reproduced. The copy is then sliced. A picture is taken of each slice. Finally, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step to obtain an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted by a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g. shapes, curvature, sharp edges), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enable accurate analysis.
Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions
Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner
2016-01-01
In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter. PMID:27983669
Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions.
Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner
2016-12-15
In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter.
An important step forward in continuous spectroscopic imaging of ionising radiations using ASICs
NASA Astrophysics Data System (ADS)
Fessler, P.; Coffin, J.; Eberlé, H.; de Raad Iseli, C.; Hilt, B.; Huss, D.; Krummenacher, F.; Lutz, J. R.; Prévot, G.; Renouprez, A.; Sigward, M. H.; Schwaller, B.; Voltolini, C.
1999-01-01
Characterization results are given for an original ASIC allowing continuous acquisition of ionising radiation images in spectroscopic mode. Ionising radiation imaging in general, and spectroscopic imaging in particular, must primarily be guided by the attempt to decrease statistical noise, which requires detection systems designed to allow very high counting rates. Any source of dead time must therefore be avoided; thus, the use of on-line corrections of the inevitable dispersion of characteristics between the large number of electronic channels of the detection system must be precluded. Without claiming to achieve ultimate noise levels, the work described focuses on how to prevent good individual acquisition-channel noise performance from being destroyed by the dispersion between channels, without introducing dead times. With this goal, we developed an automatic charge-amplifier output voltage offset compensation system which operates regardless of the cause of the offset (detector or electronic). The main performances of the system are the following: the input equivalent noise charge is 190 e rms (input not connected, peaking time 500 ns), the highest gain is 255 mV/fC, the peaking time is adjustable between 200 ns and 2 μs and the power consumption is 10 mW per channel. The agreement between experimental data and theoretical simulation results is excellent.
Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.
Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi
2018-03-24
In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
2007-06-01
the masses were identified by an experienced Mammography Quality Standards Act (MQSA) radiologist. The no-mass data set contained 98 cases. The time...force, and the difference in time between the two acquisitions would cause differences in the subtlety of the masses on the FFDMs and SFMs. However...images," Medical Physics 18, 955-963 (1991). 20A. J. Mendez, P. G. Tahoces, M. J. Lado, M. Souto, and J. J. Vidal, "Computer-aided diagnosis: Automatic
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
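The idea of exploiting a bimodal intensity distribution can be illustrated with an Otsu-style threshold that splits the two modes, after which the centroid of the bright partition marks a candidate acquisition point. This is a generic sketch of the principle, not the Photo-G implementation:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Threshold maximising between-class variance of a bimodal histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

def target_centroid(img):
    """Centroid of the bright mode: a candidate target acquisition point."""
    mask = img > otsu_threshold(img)
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()
```

Because the threshold adapts to the histogram rather than to absolute intensities, this kind of partitioning is naturally insensitive to global uniform intensity shifts, consistent with the behaviour reported above.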
Automated body weight prediction of dairy cows using 3-dimensional vision.
Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S
2018-05-01
The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
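The validation scheme described above, multiple linear regression scored by leave-one-out cross-validation with RMSE and mean absolute percentage error, can be sketched as below. The data and feature choice are illustrative, not the study's dataset:

```python
import numpy as np

def loocv_errors(X, y):
    """Leave-one-out CV for ordinary least squares with an intercept.
    Returns (RMSE, MAPE in percent) over the held-out predictions."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        A = np.column_stack([np.ones(keep.sum()), X[keep]])
        coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        preds[i] = np.concatenate([[1.0], X[i]]) @ coef
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    mape = 100.0 * np.mean(np.abs(preds - y) / y)
    return rmse, mape
```

An exhaustive feature selection, as in the study, would simply call this for every subset of candidate predictors and keep the subset with the lowest errors.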
Preliminary Tests of a New Low-Cost Photogrammetric System
NASA Astrophysics Data System (ADS)
Santise, M.; Thoeni, K.; Roncella, R.; Sloan, S. W.; Giacomini, A.
2017-11-01
This paper presents preliminary tests of a new low-cost photogrammetric system for 4D modelling of large-scale areas for civil engineering applications. The system consists of five stand-alone units. Each of the units is composed of a Raspberry Pi 2 Model B (RPi2B) single board computer connected to a PiCamera Module V2 (8 MP) and is powered by a 10 W solar panel. The acquisition of the images is performed automatically using Python scripts and the OpenCV library. Images are recorded at different times during the day and automatically uploaded onto an FTP server from where they can be accessed for processing. Preliminary tests and outcomes of the system are discussed in detail. The focus is on the performance assessment of the low-cost sensor and the quality evaluation of the digital surface models generated by the low-cost photogrammetric systems in the field under real test conditions. Two different test cases were set up in order to calibrate the low-cost photogrammetric system and to assess its performance. First comparisons with a TLS model show good agreement.
NASA Astrophysics Data System (ADS)
Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.
2015-02-01
Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.
A technique for automatically extracting useful field of view and central field of view images.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
2016-01-01
It is essential to ensure the uniform response of the single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified counts, then the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique that reads the flood source image, removes unwanted information, and automatically extracts and saves the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
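The extraction step can be sketched as below: threshold the flood image to find the exposed area, crop its bounding box as the useful field of view (UFOV), then take the central 75% linearly (the usual NEMA convention) as the central field of view (CFOV). The threshold fraction is an assumption; the authors' MATLAB implementation may differ:

```python
import numpy as np

def extract_ufov_cfov(flood, frac=0.10, cfov_frac=0.75):
    """Crop UFOV as the bounding box of the exposed area, and CFOV as the
    central cfov_frac of the UFOV in each dimension."""
    mask = flood > frac * flood.max()          # drop the unexposed border
    ys, xs = np.nonzero(mask)
    ufov = flood[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = ufov.shape
    dh = int(h * (1 - cfov_frac) / 2)
    dw = int(w * (1 - cfov_frac) / 2)
    cfov = ufov[dh:h - dh, dw:w - dw]
    return ufov, cfov
```

The cropped images can then be fed directly to integral/differential uniformity calculations without waiting for the vendor's count threshold.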
Design and testing of a 750MHz CW-EPR digital console for small animal imaging.
Sato-Akaba, Hideo; Emoto, Miho C; Hirata, Hiroshi; Fujii, Hirotada G
2017-11-01
This paper describes the development of a digital console for three-dimensional (3D) continuous wave electron paramagnetic resonance (CW-EPR) imaging of a small animal to improve the signal-to-noise ratio and lower the cost of the EPR imaging system. An RF generation board, an RF acquisition board and a digital signal processing (DSP) & control board were built for the digital EPR detection. Direct sampling of the reflected RF signal from a resonator (approximately 750MHz), which contains the EPR signal, was carried out using a band-pass subsampling method. A direct automatic control system to reduce the reflection from the resonator was proposed and implemented in the digital EPR detection scheme. All DSP tasks were carried out in field programmable gate array ICs. In vivo 3D imaging of nitroxyl radicals in a mouse's head was successfully performed. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)
2008-01-01
A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.
Passive Infrared Thermographic Imaging for Mobile Robot Object Identification
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Fehlman, W. L.
2010-02-01
The usefulness of thermal infrared imaging as a mobile robot sensing modality is explored, and a set of thermal-physical features used to characterize passive thermal objects in outdoor environments is described. Objects that extend laterally beyond the thermal camera's field of view, such as brick walls, hedges, picket fences, and wood walls, as well as compact objects that are laterally within the thermal camera's field of view, such as metal poles and tree trunks, are considered. Classification of passive thermal objects is a subtle process since they are not sources of their own thermal emission. A detailed analysis is included of the acquisition and preprocessing of thermal images, as well as the generation and selection of thermal-physical features from these objects within thermal images. Classification performance using these features is discussed, as a precursor to the design of a physics-based model to automatically classify these objects.
Design and testing of a 750 MHz CW-EPR digital console for small animal imaging
NASA Astrophysics Data System (ADS)
Sato-Akaba, Hideo; Emoto, Miho C.; Hirata, Hiroshi; Fujii, Hirotada G.
2017-11-01
This paper describes the development of a digital console for three-dimensional (3D) continuous wave electron paramagnetic resonance (CW-EPR) imaging of a small animal to improve the signal-to-noise ratio and lower the cost of the EPR imaging system. An RF generation board, an RF acquisition board and a digital signal processing (DSP) & control board were built for the digital EPR detection. Direct sampling of the reflected RF signal from a resonator (approximately 750 MHz), which contains the EPR signal, was carried out using a band-pass subsampling method. A direct automatic control system to reduce the reflection from the resonator was proposed and implemented in the digital EPR detection scheme. All DSP tasks were carried out in field programmable gate array ICs. In vivo 3D imaging of nitroxyl radicals in a mouse's head was successfully performed.
NASA Technical Reports Server (NTRS)
2003-01-01
In order to rapidly and efficiently grow crystals, tools were needed to automatically identify and analyze the growing process of protein crystals. To meet this need, Diversified Scientific, Inc. (DSI), with the support of a Small Business Innovation Research (SBIR) contract from NASA's Marshall Space Flight Center, developed CrystalScore(trademark), the first automated image acquisition, analysis, and archiving system designed specifically for the macromolecular crystal growing community. It offers automated hardware control, image and data archiving, image processing, a searchable database, and surface plotting of experimental data. CrystalScore is currently being used by numerous pharmaceutical companies and academic and nonprofit research centers. DSI, located in Birmingham, Alabama, was awarded the patent "Method for acquiring, storing, and analyzing crystal images" on March 4, 2003. Another DSI product made possible by Marshall SBIR funding is VaporPro(trademark), a unique, comprehensive system that allows for the automated control of vapor diffusion for crystallization experiments.
Multi-view line-scan inspection system using planar mirrors
NASA Astrophysics Data System (ADS)
Holländer, Branislav; Štolc, Svorad; Huber-Mörk, Reinhold
2013-04-01
We demonstrate the design, setup, and results for a line-scan stereo image acquisition system using a single area-scan sensor, a single lens and two planar mirrors attached to the acquisition device. The acquired object moves relative to the acquisition device and is observed under three different angles at the same time. Depending on the specific configuration, it is possible to observe the object under a straight view (i.e., looking along the optical axis) and two skewed views. The relative motion between the object and the acquisition device automatically fulfills the epipolar constraint in stereo vision. The choice of lines to be extracted from the CMOS sensor depends on various factors such as the number, position and size of the mirrors, the optical and sensor configuration, or other application-specific parameters like the desired depth resolution. The acquisition setup presented in this paper is suitable for the inspection of printed matter, small parts or security features such as optically variable devices and holograms. The image processing pipeline applied to the extracted sensor lines is explained in detail. The effective depth resolution achieved by the presented system, assembled from only off-the-shelf components, is approximately equal to the spatial resolution and can be smoothly controlled by changing the positions and angles of the mirrors. Actual performance of the device is demonstrated on a 3D-printed ground-truth object as well as two real-world examples: (i) the EUR-100 banknote, a high-quality printed matter, and (ii) the hologram on the EUR-50 banknote, an optically variable device.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
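The core of such detail enhancement can be sketched with a base/detail decomposition: split the image into a smooth base layer and a detail layer, then boost the detail before recombining. The paper uses an edge-aware decomposition; a plain Gaussian base is substituted here purely for brevity, so this is only a rough stand-in:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, sigma=3.0, detail_gain=1.8):
    """Boost the detail layer of a single-scale base/detail decomposition.
    (Edge-aware filters would replace gaussian_filter in practice.)"""
    base = gaussian_filter(img.astype(float), sigma)
    detail = img - base            # high-frequency content
    return base + detail_gain * detail
```

A multi-scale variant would repeat this at several sigmas and apply different gains per scale, with the base layer also used to detect and brighten underexposed regions.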
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA); 12.87 Hz is the maximum frame rate imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively.
Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
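The frame-averaging trade-off reported above can be illustrated with a synthetic sketch: averaging n noisy frames lowers image noise roughly as 1/sqrt(n), while any moving target would be smeared across the averaged frame. The numbers below are synthetic, not EPID data:

```python
import numpy as np

def average_frames(frames, n):
    """Average consecutive groups of n frames (mimics reducing the
    effective acquisition frame rate by a factor of n)."""
    frames = np.asarray(frames)
    usable = (len(frames) // n) * n
    return frames[:usable].reshape(-1, n, *frames.shape[1:]).mean(axis=1)

rng = np.random.default_rng(1)
frames = rng.normal(0.0, 10.0, size=(32, 64, 64))   # pure-noise frames
noise_1 = frames.std()                               # ~10
noise_4 = average_frames(frames, 4).std()            # ~noise_1 / 2
```

The study's finding is that below an effective 4.29 Hz the accompanying motion blur outweighs this noise benefit for autotracking.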
Automatic three-dimensional registration of intravascular optical coherence tomography images
NASA Astrophysics Data System (ADS)
Ughi, Giovanni J.; Adriaenssens, Tom; Larsson, Matilda; Dubois, Christophe; Sinnaeve, Peter R.; Coosemans, Mark; Desmet, Walter; D'hooge, Jan
2012-02-01
Intravascular optical coherence tomography (IV-OCT) is a catheter-based high-resolution imaging technique able to visualize the inner wall of the coronary arteries and implanted devices in vivo with an axial resolution below 20 μm. IV-OCT is being used in several clinical trials aiming to quantify the vessel response to stent implantation over time. However, stent analysis is currently performed manually and corresponding images taken at different time points are matched through a very labor-intensive and subjective procedure. We present an automated method for the spatial registration of IV-OCT datasets. Stent struts are segmented through consecutive images and three-dimensional models of the stents are created for both datasets to be registered. The two models are initially roughly registered through an automatic initialization procedure and an iterative closest point algorithm is subsequently applied for a more precise registration. To correct for nonuniform rotational distortions (NURDs) and other potential acquisition artifacts, the registration is consecutively refined on a local level. The algorithm was first validated by using an in vitro experimental setup based on a polyvinyl-alcohol gel tubular phantom. Subsequently, an in vivo validation was obtained by exploiting stable vessel landmarks. The mean registration error in vitro was quantified to be 0.14 mm in the longitudinal axis and 7.3-deg mean rotation error. In vivo validation resulted in 0.23 mm in the longitudinal axis and 10.1-deg rotation error. These results indicate that the proposed methodology can be used for automatic registration of in vivo IV-OCT datasets. Such a tool will be indispensable for larger studies on vessel healing pathophysiology and reaction to stent implantation. As such, it will be valuable in testing the performance of new generations of intracoronary devices and new therapeutic drugs.
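A toy version of the registration core, iterative closest point (ICP) restricted to translation, is sketched below: each strut point is matched to its nearest neighbour in the reference model and the offset is updated until convergence. The real method also estimates rotation, uses an initialization step, and refines locally for NURD; this only illustrates the central loop:

```python
import numpy as np

def icp_translation(src, ref, iters=20):
    """Translation-only ICP: returns the accumulated shift that moves the
    src point set onto the ref point set."""
    src = np.asarray(src, float).copy()
    ref = np.asarray(ref, float)
    shift = np.zeros(src.shape[1])
    for _ in range(iters):
        # nearest reference point for every source point (brute force)
        d2 = ((src[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        nearest = ref[d2.argmin(axis=1)]
        step = (nearest - src).mean(axis=0)
        src += step
        shift += step
    return shift
```

As in the paper, a rough initial alignment matters: nearest-neighbour matching only converges to the right correspondence when the starting offset is small relative to the point spacing.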
A user-friendly LabVIEW software platform for grating based X-ray phase-contrast imaging.
Wang, Shenghao; Han, Huajie; Gao, Kun; Wang, Zhili; Zhang, Can; Yang, Meng; Wu, Zhao; Wu, Ziyu
2015-01-01
X-ray phase-contrast imaging can provide greatly improved contrast over conventional absorption-based imaging for weakly absorbing samples, such as biological soft tissues and fibre composites. In this study, we introduced an easy and fast way to develop a user-friendly software platform dedicated to the new grating-based X-ray phase-contrast imaging setup at the National Synchrotron Radiation Laboratory of the University of Science and Technology of China. The control of 21 motorized stages, of a piezoelectric stage and of an X-ray tube is achieved with this software, which also covers image acquisition with a flat panel detector for automatic phase-stepping scans. Moreover, a data post-processing module for signal retrieval and other custom features are in principle available. With a seamless integration of all the necessary functions in one software package, this platform greatly facilitates users' activities during experimental runs with this grating-based X-ray phase-contrast imaging setup.
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
An Imaging And Graphics Workstation For Image Sequence Analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-01-01
This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
Diagnostic report acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Brooks, Everett G.; Rothman, Melvyn L.
1991-07-01
The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU, where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle the report on/off on the display screen. This paper describes the process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schadewaldt, N; Schulz, H; Helle, M
2014-06-01
Purpose: To analyze the effect of computing radiation dose on automatically generated MR-based simulated CT images compared to true patient CTs. Methods: Six prostate cancer patients received a regular planning CT for RT planning as well as a conventional 3D fast-field dual-echo scan on a Philips 3.0T Achieva, adding approximately 2 min of scan time to the clinical protocol. Simulated CTs (simCT) were synthesized by assigning known average CT values to the tissue classes air, water, fat, cortical and cancellous bone. For this, Dixon reconstruction of the nearly out-of-phase (echo 1) and in-phase images (echo 2) allowed for water and fat classification. Model-based bone segmentation was performed on a combination of the Dixon images. A subsequent automatic threshold divides into cortical and cancellous bone. For validation, the simCT was registered to the true CT and clinical treatment plans were re-computed on the simCT in Pinnacle³. To differentiate effects related to the 5 tissue classes from changes in the patient anatomy not compensated by rigid registration, we also calculated the dose on a stratified CT, where HU values are sorted into the same 5 tissue classes as the simCT. Results: Dose and volume parameters on the PTV and risk organs as used for the clinical approval were compared. All deviations are below 1.1%, except the anal sphincter mean dose, which is at most 2.2% but well below the clinical acceptance threshold. Average deviations are below 0.4% for the PTV and risk organs and 1.3% for the anal sphincter. The deviations of the stratified CT are in the same range as for the simCT. All plans would have passed clinical acceptance thresholds on the simulated CT images. Conclusion: This study demonstrated the clinical usability of MR-based dose calculation with the presented Dixon acquisition and subsequent fully automatic image processing. N. Schadewaldt, H. Schulz, M. Helle and S. Renisch are employed by Philips Technologie Innovative Technologies, a subsidiary of Royal Philips NV.
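The simulated-CT synthesis step amounts to a lookup: each voxel's tissue class (from the Dixon water/fat and bone segmentation) is replaced by a bulk CT number. The HU values below are typical literature averages chosen for illustration, not the values used in the study:

```python
import numpy as np

# Assumed bulk CT numbers per tissue class (illustrative only).
HU = {"air": -1000, "water": 0, "fat": -100,
      "cancellous_bone": 300, "cortical_bone": 1200}

def simulate_ct(class_map, class_names):
    """class_map: integer label volume; class_names: label index -> tissue
    name. Returns a volume of CT numbers via a lookup table."""
    lut = np.array([HU[class_names[i]] for i in range(len(class_names))])
    return lut[class_map]
```

The resulting volume can then be fed to a treatment planning system in place of the true CT, which is exactly what the dose re-computation above evaluates.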
LabVIEW application for motion tracking using USB camera
NASA Astrophysics Data System (ADS)
Rob, R.; Tirian, G. O.; Panoiu, M.
2017-05-01
The technical state of the contact line, and of its additional equipment, is very important in electric rail transport for carrying out the repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. This paper proposes a LabVIEW application which is able to track the motion of a laboratory pantograph in real time and to acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracking parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires appropriate toolkits for vision development. Therefore the paper describes the subroutines that are especially programmed for real-time image acquisition and for data processing.
The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences
NASA Astrophysics Data System (ADS)
Schwalbe, Ellen; Maas, Hans-Gerd
2017-12-01
This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
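The subpixel matching at the heart of such processing can be sketched in 1-D: find the integer lag maximising the cross-correlation of two mean-subtracted signals, then refine with a parabola fit through the correlation peak. This is a generic illustration (real glacier matching operates on 2-D grey-value patches), with assumed signal shapes:

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the (possibly fractional) shift of signal b relative to a
    via cross-correlation with parabolic peak refinement."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(b, a, mode="full")
    k = corr.argmax()
    shift = float(k - (len(a) - 1))           # integer lag estimate
    if 0 < k < len(corr) - 1:                 # parabolic refinement
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            shift += 0.5 * (y0 - y2) / denom
    return shift
```

Applied frame-to-frame, such shift estimates accumulate into the point trajectories described above; the centimetre-level accuracy claim corresponds to the subpixel resolution of this refinement.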
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking.
Motion blurring in images with frame rates below 4.29Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
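The trade-off this study quantifies, frame averaging lowering image noise while blurring a moving target, can be reproduced in a toy simulation. The sketch below is purely illustrative (a 1-D Gaussian "target" with invented speed and noise figures), not the study's EPID data or tracking algorithm.

```python
import numpy as np

def simulate_frames(n_frames, speed=2.0, noise_sigma=0.2, width=200, seed=0):
    """Toy cine sequence: a 1-D Gaussian target moving at `speed`
    pixels/frame with additive Gaussian image noise."""
    rng = np.random.default_rng(seed)
    x = np.arange(width, dtype=float)
    frames = np.stack([np.exp(-((x - 50.0 - speed * t) ** 2) / 50.0)
                       for t in range(n_frames)])
    return frames + rng.normal(0.0, noise_sigma, frames.shape)

def average_frames(frames, n_avg):
    """Continuous frame averaging: each output frame is the mean of
    n_avg consecutive input frames, lowering the effective frame rate."""
    usable = (len(frames) // n_avg) * n_avg
    return frames[:usable].reshape(-1, n_avg, frames.shape[1]).mean(axis=1)
```

Averaging 6 frames cuts the background noise roughly by sqrt(6), but because the target moves during the averaging window, its peak is smeared out, which is exactly the blur-versus-noise behaviour the study correlates with tracking error.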
Blessing, Manuel; Arns, Anna; Wertz, Hansjoerg; Stsepankou, Dzmitry; Boda-Heggemann, Judit; Hesser, Juergen; Wenz, Frederik; Lohr, Frank
2018-04-01
To establish a fully automated kV-MV CBCT imaging method on a clinical linear accelerator that allows image acquisition of thoracic targets for patient positioning within one breath-hold (∼15s) under realistic clinical conditions. Our previously developed FPGA-based hardware unit which allows synchronized kV-MV CBCT projection acquisition is connected to a clinical linear accelerator system via a multi-pin switch; i.e. either kV-MV imaging or conventional clinical mode can be selected. An application program was developed to control the relevant linac parameters automatically and to manage the MV detector readout as well as the gantry angle capture for each MV projection. The kV projections are acquired with the conventional CBCT system. GPU-accelerated filtered backprojection is performed separately for both data sets. After appropriate grayscale normalization both modalities are combined and the final kV-MV volume is re-imported in the CBCT system to enable image matching. To demonstrate adequate geometrical accuracy of the novel imaging system the Penta-Guide phantom QA procedure is performed. Furthermore, a human plastinate and different tumor shapes in a thorax phantom are scanned. Diameters of the known tumor shapes are measured in the kV-MV reconstruction. An automated kV-MV CBCT workflow was successfully established in a clinical environment. The overall procedure, from starting the data acquisition until the reconstructed volume is available for registration, requires ∼90s including 17s acquisition time for 100° rotation. It is very simple and allows target positioning in the same way as for conventional CBCT. Registration accuracy of the QA phantom is within ±1mm. The average deviation from the known tumor dimensions measured in the thorax phantom was 0.7mm which corresponds to an improvement of 36% compared to our previous kV-MV imaging system. Due to automation, the kV-MV CBCT workflow is sped up by a factor of >10 compared to the manual approach.
Thus, the system allows a simple, fast and reliable imaging procedure and fulfills all requirements to be successfully introduced into the clinical workflow now, enabling single-breath-hold volume imaging. Copyright © 2018. Published by Elsevier GmbH.
Ambrozy, C; Kolar, N A; Rattay, F
2010-01-01
For logging of board angle values during balance training, it was necessary to develop a measurement system. This study provides data for a balance study using a smartcard. Data acquisition is performed automatically. An individual training plan for each proband is necessary. To store the proband identification, a smartcard with an I2C data bus protocol and an E2PROM memory is used. For reading the smartcard data, a smartcard reader is connected via universal serial bus (USB) to a notebook. The data acquisition and smartcard reading programme is designed with Microsoft® Visual C#. A training plan file contains the individual training plan for each proband. The data of the test persons are saved in a proband directory. Each event is automatically saved as a log-file for exact documentation. This system makes study development easy and time-saving.
Automatic detection of the inner ears in head CT images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Noble, Jack H.; Dawant, Benoit M.
2018-03-01
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings to replace the natural electro-mechanical transduction mechanism and restore hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed what we call IGCIP for image-guided cochlear implant programming. In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adults and pediatric populations. We are now aiming at deploying these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly and visual inspection and labelling is currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect if a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set that contains over 2,000 CT volumes from 153 patients and we achieve an overall 95.97% classification accuracy.
Design of a Single-Cell Positioning Controller Using Electroosmotic Flow and Image Processing
Ay, Chyung; Young, Chao-Wang; Chen, Jhong-Yin
2013-01-01
The objective of the current research was not only to provide a fast and automatic positioning platform for single cells, but also to improve biomolecular manipulation techniques. In this study, an automatic platform for cell positioning using electroosmotic flow and image processing technology was designed. The platform was developed using a PCI image acquisition interface card for capturing images from a microscope and then transferring them to a computer using human-machine interface software. This software was designed with the Laboratory Virtual Instrument Engineering Workbench, a graphical language, to find cell positions and view the driving trace, and uses the fuzzy logic method to control the voltage or time of an electric field. In experiments on real human leukemic cells (U-937), the success rate of cell positioning achieved by controlling the voltage factor reaches 100% within 5 s. Greater precision is obtained when controlling the time factor, whereby the success rate reaches 100% within 28 s. Advantages in both high speed and high precision are attained if the voltage and time control methods are combined. The control speed with the combined method is about 5.18 times greater than that achieved by the time method, and the control precision with the combined method is more than five times greater than that achieved by the voltage method. PMID:23698272
Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.
Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto
2016-04-01
MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
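Of the label-fusion rules compared (Majority Voting, STAPLE, SBA, SIMPLE), majority voting is the simplest and can be sketched directly, together with a Dice-style overlap of the kind the evaluation reports. This is an illustrative toy on binary arrays, not the paper's registration-plus-fusion pipeline.

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse binary segmentations propagated from multiple atlases by
    per-voxel strict majority voting, the simplest label-fusion rule."""
    stack = np.stack([np.asarray(s) for s in segmentations]).astype(int)
    votes = stack.sum(axis=0)
    return (2 * votes > len(segmentations)).astype(int)

def dice_overlap(a, b):
    """Dice coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

STAPLE and SIMPLE extend this idea by weighting each atlas according to an estimated performance level instead of counting every vote equally.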
Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry
NASA Astrophysics Data System (ADS)
Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2016-03-01
Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.
Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry.
Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2016-03-22
Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.
Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry
Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2016-01-01
Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83–0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments. PMID:27001047
van Dijk, Joris D; van Dalen, Jorn A; Mouden, Mohamed; Ottervanger, Jan Paul; Knollema, Siert; Slump, Cornelis H; Jager, Pieter L
2018-04-01
Correction of motion has become feasible on cadmium-zinc-telluride (CZT)-based SPECT cameras during myocardial perfusion imaging (MPI). Our aim was to quantify the motion and to determine the value of automatic correction using commercially available software. We retrospectively included 83 consecutive patients who underwent stress-rest MPI CZT-SPECT and invasive fractional flow reserve (FFR) measurement. Eight-minute stress acquisitions were reformatted into 1.0- and 20-second bins to detect respiratory motion (RM) and patient motion (PM), respectively. RM and PM were quantified and scans were automatically corrected. Total perfusion deficit (TPD) and SPECT interpretation-normal, equivocal, or abnormal-were compared between the noncorrected and corrected scans. Scans with a changed SPECT interpretation were compared with FFR, the reference standard. Average RM was 2.5 ± 0.4 mm and maximal PM was 4.5 ± 1.3 mm. RM correction influenced the diagnostic outcomes in two patients based on TPD changes ≥7% and in nine patients based on changed visual interpretation. In only four of these patients, the changed SPECT interpretation corresponded with FFR measurements. Correction for PM did not influence the diagnostic outcomes. Respiratory motion and patient motion were small. Motion correction did not appear to improve the diagnostic outcome and, hence, the added value seems limited in MPI using CZT-based SPECT cameras.
Validation of automatic joint space width measurements in hand radiographs in rheumatoid arthritis
Schenk, Olga; Huo, Yinghe; Vincken, Koen L.; van de Laar, Mart A.; Kuper, Ina H. H.; Slump, Kees C. H.; Lafeber, Floris P. J. G.; Bernelot Moens, Hein J.
2016-01-01
Abstract. Computerized methods promise quick, objective, and sensitive tools to quantify progression of radiological damage in rheumatoid arthritis (RA). Measurement of joint space width (JSW) in finger and wrist joints with these systems has performed comparably to the Sharp–van der Heijde score (SHS). A next step toward clinical use, validation of precision and accuracy in hand joints with minimal damage, is described with close scrutiny of sources of error. A recently developed system to measure metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints was validated in consecutive hand images of RA patients. To assess the impact of image acquisition, measurements on radiographs from a multicenter trial and from a recent prospective cohort in a single hospital were compared. Precision of the system was tested by comparing the joint space in mm in pairs of subsequent images with a short interval without progression of SHS. In cases of incorrect measurements, the source of error was analyzed with a review by human experts. Accuracy was assessed by comparison with reported measurements from other systems. In the two series of radiographs, the system could automatically locate and measure 1003/1088 (92.2%) and 1143/1200 (95.3%) individual joints, respectively. In joints with a normal SHS, the average (SD) size of MCP joints was 1.7±0.2 and 1.6±0.3 mm in the two series of radiographs, and of PIP joints 1.0±0.2 and 0.9±0.2 mm. The difference in JSW between two serial radiographs with an interval of 6 to 12 months and unchanged SHS was 0.0±0.1 mm, indicating very good precision. Errors occurred more often in radiographs from the multicenter cohort than in a more recent series from a single hospital.
Detailed analysis of the 55/1125 (4.9%) measurements that had a discrepant paired measurement revealed that variation in the process of image acquisition (exposure in 15% and repositioning in 57%) was a more frequent source of error than incorrect delineation by the software (25%). Various steps in the validation of an automated measurement system for JSW of MCP and PIP joints are described. The use of serial radiographs from different sources, with a short interval and limited damage, is helpful to detect sources of error. Image acquisition, in particular repositioning, is a dominant source of error. PMID:27921071
Niu, Renjie; Fu, Chenyu; Xu, Zhiyong; Huang, Jianyuan
2016-04-29
Doctors who practice Traditional Chinese Medicine (TCM) diagnose using four methods - inspection, auscultation and olfaction, interrogation, and pulse feeling/palpation. The shape of the moon marks on the nails, and changes in that shape, are an important indication when judging the patient's health. There is a series of classical and experiential theories about moon marks in TCM that lacks support from statistical data. The aim of this work is to verify some experiential theories on moon marks in TCM with automatic data-processing equipment. This paper proposes equipment that utilizes image processing technology to collect moon mark data from different target groups conveniently and quickly, building a database that combines this information with that gathered from a health and mental status questionnaire in each test. The equipment has a simple design, a low cost, and an optimized algorithm. In practice, it has been proven to quickly complete automatic acquisition and preservation of key data about moon marks. In the future, some conclusions will likely be obtained from these data, and changes of moon marks related to specific pathological changes will be established with statistical methods.
NASA Astrophysics Data System (ADS)
Ghanta, Sindhu; Shahini Shamsabadi, Salar; Dy, Jennifer; Wang, Ming; Birken, Ralf
2015-04-01
Around 3 trillion vehicle miles are traveled annually on the US transportation systems alone. In addition to road traffic safety, maintaining the road infrastructure in a sound condition promotes a more productive and competitive economy. Due to the significant amounts of financial and human resources required to detect surface cracks by visual inspection, detection of these surface defects is often delayed, resulting in deferred maintenance operations. This paper introduces an automatic system for acquisition, detection, classification, and evaluation of pavement surface cracks by unsupervised analysis of images collected from a camera mounted on the rear of a moving vehicle. A Hessian-based multi-scale filter has been utilized to detect ridges in these images at various scales. Post-processing on the extracted features has been implemented to produce statistics of length, width, and area covered by cracks, which are crucial for roadway agencies to assess pavement quality. This process has been applied to three sets of roads with different pavement conditions in the city of Brockton, MA. A ground truth dataset labeled manually is made available to evaluate this algorithm, and results yielded more than 90% segmentation accuracy, demonstrating the feasibility of employing this approach at a larger scale.
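A single-scale toy version of the Hessian ridge measure underlying such crack detectors can be sketched as follows. The smoothing, scale, and test image are assumptions, not the paper's multi-scale filter: for a dark crack on a bright background, the larger Hessian eigenvalue is strongly positive across the crack and near zero on flat pavement.

```python
import numpy as np

def ridge_strength(image, smooth_passes=2):
    """Single-scale Hessian ridge measure: smooth the image, build the
    Hessian from finite differences, and return the larger eigenvalue
    (large and positive along a dark, line-like crack)."""
    img = np.asarray(image, dtype=float)
    # crude smoothing by repeated 3x3 box filtering (Gaussian stand-in)
    for _ in range(smooth_passes):
        p = np.pad(img, 1, mode='edge')
        img = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # eigenvalues of the 2x2 Hessian [[gxx, gxy], [gxy, gyy]]
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + disc
```

The multi-scale filter in the paper evaluates this kind of measure at several smoothing scales and keeps the strongest response, so cracks of different widths are all detected.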
Boundary acquisition for setup of numerical simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diegert, C.
1997-12-31
The author presents a work flow diagram that includes a path that begins with taking experimental measurements, and ends with obtaining insight from results produced by numerical simulation. Two examples illustrate this path: (1) Three-dimensional imaging measurement at micron scale, using X-ray tomography, provides information on the boundaries of irregularly-shaped alumina oxide particles held in an epoxy matrix. A subsequent numerical simulation predicts the electrical field concentrations that would occur in the observed particle configurations. (2) Three-dimensional imaging measurement at meter scale, again using X-ray tomography, provides information on the boundaries fossilized bone fragments in a Parasaurolophus crest recently discoveredmore » in New Mexico. A subsequent numerical simulation predicts acoustic response of the elaborate internal structure of nasal passageways defined by the fossil record. The author must both add value, and must change the format of the three-dimensional imaging measurements before the define the geometric boundary initial conditions for the automatic mesh generation, and subsequent numerical simulation. The author applies a variety of filters and statistical classification algorithms to estimate the extents of the structures relevant to the subsequent numerical simulation, and capture these extents as faceted geometries. The author will describe the particular combination of manual and automatic methods used in the above two examples.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. The detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions and the coefficients are then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil.
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)
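The objective named above, a detectability index for a non-prewhitening (NPW) observer, has a standard discrete form that can be sketched on a 1-D frequency grid. The MTF, NPS, and task-function arrays below are invented placeholders, not the paper's cascaded-systems model.

```python
import numpy as np

def npw_detectability(mtf, nps, task, df):
    """Non-prewhitening observer detectability index on a 1-D frequency
    grid: d'^2 = [sum (MTF*W_task)^2 df]^2 / [sum (MTF*W_task)^2 NPS df].
    mtf, nps, task are sampled on the same grid with spacing df."""
    signal = (np.asarray(mtf) * np.asarray(task)) ** 2
    num = (signal.sum() * df) ** 2
    den = (signal * np.asarray(nps)).sum() * df
    return np.sqrt(num / den)
```

In a task-driven optimization of the kind described, an index like this would be evaluated for each candidate tube-current/kernel/orbit setting and maximized; doubling the noise power spectrum, for instance, reduces d' by a factor of sqrt(2).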
[Wearable Automatic External Defibrillators].
Luo, Huajie; Luo, Zhangyuan; Jin, Xun; Zhang, Leilei; Wang, Changjin; Zhang, Wenzan; Tu, Quan
2015-11-01
Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, which includes ECG measurement, bioelectrical impedance measurement, and a discharge defibrillation module; the device can automatically identify the VF signal and deliver a biphasic exponential waveform defibrillation discharge. As verified by animal tests, the device can realize ECG acquisition and automatic identification. After identifying the ventricular fibrillation signal, it can automatically defibrillate to abort ventricular fibrillation and realize cardiac electrical cardioversion.
Exudate detection in color retinal images for mass screening of diabetic retinopathy.
Zhang, Xiwei; Thibault, Guillaume; Decencière, Etienne; Marcotegui, Beatriz; Laÿ, Bruno; Danno, Ronan; Cazuguel, Guy; Quellec, Gwénolé; Lamard, Mathieu; Massin, Pascale; Chabouis, Agnès; Victor, Zeynep; Erginay, Ali
2014-10-01
The automatic detection of exudates in color eye fundus images is an important task in applications such as diabetic retinopathy screening. The presented work has been undertaken in the framework of the TeleOphta project, whose main objective is to automatically detect normal exams in a tele-ophthalmology network, thus reducing the burden on the readers. A new clinical database, e-ophtha EX, containing precisely manually contoured exudates, is introduced. As opposed to previously available databases, e-ophtha EX is very heterogeneous. It contains images gathered within the OPHDIAT telemedicine network for diabetic retinopathy screening. Image definition and quality, as well as the patients' condition or the retinograph used for the acquisition, for example, are subject to important changes between different examinations. The proposed exudate detection method has been designed for this complex situation. We propose new preprocessing methods, which perform not only normalization and denoising tasks, but also detect reflections and artifacts in the image. A new candidate segmentation method, based on mathematical morphology, is proposed. These candidates are characterized using classical features, but also novel contextual features. Finally, a random forest algorithm is used to detect the exudates among the candidates. The method has been validated on the e-ophtha EX database, obtaining an AUC of 0.95. It has also been validated on other databases, obtaining an AUC between 0.93 and 0.95, outperforming state-of-the-art methods. Copyright © 2014 Elsevier B.V. All rights reserved.
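The morphology-based candidate stage can be illustrated with a grey-level white top-hat (image minus its opening), a classic operator for proposing small bright lesions such as exudates. This is a toy sketch, not the TeleOphta method; the structuring-element size and test image are assumptions.

```python
import numpy as np

def _morph(img, size, op):
    """Grey-level erosion (op=np.min) or dilation (op=np.max) with a
    flat size x size square structuring element."""
    p = np.pad(img, size // 2, mode='edge')
    win = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(size) for j in range(size)])
    return op(win, axis=0)

def bright_tophat(img, size=5):
    """White top-hat: image minus its grey-level opening. Small bright
    structures that do not fit the structuring element stand out; the
    smooth background is suppressed."""
    opened = _morph(_morph(img, size, np.min), size, np.max)
    return img - opened
```

Regions where the top-hat response is high would then be treated as candidates and passed to the feature-extraction and random-forest classification stages.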
Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert
2010-10-01
Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations; therefore, they cannot be considered good candidates for tumor segmentation.
GALAVIS, PAULINA E.; HOLLENSEN, CHRISTIAN; JALLOW, NGONEH; PALIWAL, BHUDATT; JERAJ, ROBERT
2014-01-01
Background Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45–60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. Results Fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation.
Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations; therefore, they cannot be considered good candidates for tumor segmentation. PMID:20831489
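The small-variability features named above (energy and first-order entropy) are straightforward to compute from a gray-level co-occurrence matrix. A minimal NumPy sketch, assuming an 8-level intensity quantization and a single horizontal pixel offset (both illustrative choices, not taken from the abstract):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1):
    """Energy and entropy of the gray-level co-occurrence matrix (GLCM)
    for a single horizontal pixel offset."""
    # Quantize intensities into `levels` gray levels.
    q = np.minimum((img.astype(float) / img.max() * levels).astype(int), levels - 1)
    # Accumulate co-occurrence counts for pixel pairs (x, x+dx).
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-dx].ravel(), q[:, dx:].ravel()), 1)
    P /= P.sum()                 # normalize to a joint probability
    energy = np.sum(P ** 2)      # "energy" (angular second moment)
    nz = P[P > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return energy, entropy
```

For a perfectly uniform region the GLCM collapses to a single cell, giving energy 1 and entropy 0; heterogeneous tumors spread probability mass over many cells, lowering energy and raising entropy.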
Tamouridou, Afroditi A.; Lagopodi, Anastasia L.; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios
2017-01-01
Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands, red, green, and near infrared (NIR), together with the texture layer resulting from local variance, were used as inputs. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that season, although the accuracy shows the promising potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery. PMID:29019957
Tamouridou, Afroditi A; Alexandridis, Thomas K; Pantazi, Xanthoula E; Lagopodi, Anastasia L; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios
2017-10-11
Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands, red, green, and near infrared (NIR), together with the texture layer resulting from local variance, were used as inputs. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that season, although the accuracy shows the promising potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery.
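The local-variance texture layer used above can be computed per pixel as E[x²] − (E[x])², a sketch in NumPy assuming a 5×5 window (the abstract does not state the window size):

```python
import numpy as np

def local_variance(band, size=5):
    """Per-pixel variance over a size x size window: E[x^2] - (E[x])^2.
    Used as a texture layer stacked with the spectral bands."""
    band = band.astype(float)
    pad = size // 2
    p = np.pad(band, pad, mode="reflect")
    s = np.zeros_like(band)
    s2 = np.zeros_like(band)
    for i in range(size):            # sum the shifted windows
        for j in range(size):
            w = p[i:i + band.shape[0], j:j + band.shape[1]]
            s += w
            s2 += w * w
    n = size * size
    return np.maximum(s2 / n - (s / n) ** 2, 0.0)  # clip rounding noise
```

Stacking `[green, red, nir, local_variance(nir)]` with `np.dstack` would then form a four-layer input of the kind described; the choice of the NIR band for the texture layer is an assumption here.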
Graphical user interface (GUIDE) and semi-automatic system for the acquisition of anaglyphs
NASA Astrophysics Data System (ADS)
Canchola, Marco A.; Arízaga, Juan A.; Cortés, Obed; Tecpanecatl, Eduardo; Cantero, Jose M.
2013-09-01
Diverse educational experiences have shown greater acceptance of ideas related to science among children than among adults. That fact, together with children's great curiosity, makes scientific outreach efforts aimed at children likely to succeed. Moreover, 3D digital images have become a topic of growing importance in various areas, mainly entertainment, film, and video games, but also in fields such as medical practice, where they are important in disease detection. This article presents a 3D imaging system model for educational purposes that allows students of various grade levels, from school to college, to gain a first approach to image processing, explaining the use of filters for stereoscopic images that give the brain the impression of depth. The system is based on two elements: hardware centered on an Arduino board, and software based on Matlab. The paper presents the design and construction of each of these elements, reports on the images obtained, and finally describes how users can interact with the device.
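The depth illusion described above comes from channel filtering: a red/cyan anaglyph keeps the red channel of the left view and the green/blue channels of the right view. The original system used Matlab; this NumPy translation is an illustrative sketch:

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a red/cyan anaglyph: the left eye (looking
    through the red filter) sees the left view, the right eye (cyan) the right."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red channel taken from the left image
    return out                        # green and blue stay from the right image
```

Viewed through red/cyan glasses, each eye receives only its own view, and the brain fuses the pair into a single image with apparent depth.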
SU-F-T-476: Performance of the AS1200 EPID for Periodic Photon Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeMarco, J; Fraass, B; Yang, W
2016-06-15
Purpose: To assess the dosimetric performance of a new amorphous silicon flat-panel electronic portal imaging device (EPID) suitable for high-intensity, flattening-filter-free delivery mode. Methods: An EPID-based QA suite was created with automation to periodically monitor photon central-axis output and two-dimensional beam profile constancy as a function of gantry angle and dose rate. A Varian TrueBeam™ linear accelerator with Developer Mode installed was used to customize and deliver XML script routines for the QA suite, using the dosimetry-mode image acquisition of an aS1200 EPID. Automatic post-processing software was developed to analyze the resulting DICOM images. Results: The EPID was used to monitor photon beam output constancy (central-axis), flatness, and symmetry over a period of 10 months for four photon beam energies (6x, 15x, 6xFFF, and 10xFFF). EPID results were consistent with those measured with a standard daily QA check device. At the four cardinal gantry angles, the standard deviation of the EPID central-axis output was <0.5%. Likewise, EPID measurements were independent of dose rate over the wide range studied (including up to 2400 MU/min for 10xFFF), with a standard deviation of <0.8% relative to the nominal dose rate for each energy. Also, profile constancy and field size measurements showed good agreement with the reference acquisition at 0° gantry angle and nominal dose rate. XML script files were also tested for MU linearity and picket-fence delivery. Using Developer Mode, the test suite was delivered in <60 minutes for all 4 photon energies with 4 dose rates per energy and 5 picket-fence acquisitions. Conclusion: Dosimetry image acquisition using a new EPID was found to be accurate for standard and high-intensity photon beams over a broad range of dose rates over 10 months. Developer Mode provided an efficient platform to customize the EPID acquisitions using custom script files, which significantly reduced the time required.
This work was funded in part by Varian Medical Systems.
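Profile-constancy checks like the ones above reduce, per axis, to flatness and symmetry numbers over the central portion of the field. A sketch using common conventions (these exact formulas are assumed here; the abstract does not state which definitions were used):

```python
import numpy as np

def flatness_symmetry(profile, field_frac=0.8):
    """Flatness and symmetry of a 1-D beam profile over the central 80% of
    the field. Flatness = 100*(max-min)/(max+min); symmetry = maximum
    left/right point difference as a percentage of the central-axis value."""
    n = len(profile)
    lo = int(n * (1 - field_frac) / 2)
    core = np.asarray(profile, dtype=float)[lo:n - lo]
    flat = 100.0 * (core.max() - core.min()) / (core.max() + core.min())
    cax = core[len(core) // 2]            # central-axis value
    sym = 100.0 * np.max(np.abs(core - core[::-1])) / cax
    return flat, sym
```

Comparing these numbers against a reference acquisition (0° gantry, nominal dose rate) gives the constancy check described in the abstract.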
Segmentation of stereo terrain images
NASA Astrophysics Data System (ADS)
George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.
2000-06-01
We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
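Agreement between two segmentations, in the spirit of the S_AB metric above (whose exact definition the abstract does not give), is often scored with intersection-over-union of the binary obstacle masks; this stand-in sketch illustrates the idea:

```python
import numpy as np

def iou(seg_a, seg_b):
    """Intersection-over-union between two binary obstacle maps.
    Returns 1.0 for identical masks, 0.0 for disjoint ones."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

Comparing an automatic obstacle map against several human segmentations with such a score also quantifies how much the humans agree with each other, as done in the study.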
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée; McKay, Erin
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of this approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times post-injection and one SPECT acquisition were generated within reasonable computation times.
Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocol and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities, such as positron emission tomography, could be supported in the future.
Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel
2015-12-01
The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of this approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times post-injection and one SPECT acquisition were generated within reasonable computation times.
Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. The proposed platform offers a generic framework to implement any scintigraphic imaging protocol and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities, such as positron emission tomography, could be supported in the future.
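The compartment trick described above (simulate each compartment once, then weight by the pharmacokinetics) amounts to a weighted sum of unit-activity projections. A minimal sketch; the compartment names and activity values are illustrative, not taken from TestDose:

```python
import numpy as np

def aggregate_image(compartment_projs, activities):
    """Weight each compartment's unit-activity projection by its activity at
    the acquisition time-point and sum to form the final planar image."""
    out = np.zeros_like(next(iter(compartment_projs.values())), dtype=float)
    for name, proj in compartment_projs.items():
        out += activities[name] * proj
    return out
```

Because the Monte Carlo output scales linearly with activity, each compartment needs only one (expensive) simulation, and every time-point of the kinetics is obtained by re-weighting.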
de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio
2013-12-01
The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on the practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable video recording modes: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in Labview. An offline analysis suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
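A common first pass at estimating pupil diameter from such images is to threshold the dark pupil region and convert its area to an equivalent-circle diameter; this sketch is a hedged stand-in, since the abstract does not detail its algorithms, and the 0.03 mm/pixel scale merely echoes the resolution reported above:

```python
import numpy as np

def pupil_diameter_mm(gray, threshold, mm_per_px):
    """Estimate pupil diameter as the equivalent-circle diameter of the dark
    thresholded region: d = 2*sqrt(area/pi), converted to millimetres."""
    area_px = np.count_nonzero(gray < threshold)
    return 2.0 * np.sqrt(area_px / np.pi) * mm_per_px
```

The equivalent-diameter formulation is robust to small boundary noise because it depends on the region's area rather than on a single edge fit.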
Evaluation of a video-based head motion tracking system for dedicated brain PET
NASA Astrophysics Data System (ADS)
Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.
2015-03-01
Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used to capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to track with close-to-millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
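Recovering a six-degree-of-freedom pose from tracked 3-D facial points is classically done with a least-squares rigid fit (the Kabsch/Procrustes solution); whether this particular system uses it is not stated, so the sketch below is illustrative:

```python
import numpy as np

def rigid_pose(P, Q):
    """Least-squares rigid transform (R, t) mapping 3-D point set P onto Q:
    minimizes sum ||R p + t - q||^2 (Kabsch algorithm). R is the rotation,
    t the translation, together the 6-DOF head pose."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Running this fit on each video frame against a reference frame yields the pose-versus-time trajectory needed for motion correction in reconstruction.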
MR efficiency using automated MRI-desktop eProtocol
NASA Astrophysics Data System (ADS)
Gao, Fei; Xu, Yanzhe; Panda, Anshuman; Zhang, Min; Hanson, James; Su, Congzhe; Wu, Teresa; Pavlicek, William; James, Judy R.
2017-03-01
MRI protocols are instruction sheets that radiology technologists use in routine clinical practice for guidance (e.g., slice position, acquisition parameters, etc.). At Mayo Clinic Arizona (MCA), there are over 900 MR protocols (ranging across neuro, body, cardiac, breast, etc.), which makes maintaining and updating the protocol instructions a labor-intensive effort. The task is even more challenging given the different vendors (Siemens, GE, etc.). This is a universal problem faced by all hospitals and medical research institutions. To increase the efficiency of the MR practice, we designed and implemented a web-based platform (eProtocol) to automate the management of MRI protocols. It is built upon a database that automatically extracts protocol information from DICOM-compliant images and provides a user-friendly interface for technologists to create, edit, and update the protocols. Advanced operations such as protocol migration from scanner to scanner and the capability to upload multimedia content were also implemented. To the best of our knowledge, eProtocol is the first automated MR protocol management tool used clinically. It is expected that this platform will significantly improve radiology operations efficiency, including better image quality and exam consistency, fewer repeat examinations, and fewer acquisition errors. These protocol instructions will be readily available to the technologists during scans. In addition, this web-based platform can be extended to other imaging modalities, such as CT, mammography, and interventional radiology, and to different vendors for imaging protocol management.
Gyssels, Elodie; Bohy, Pascale; Cornil, Arnaud; van Muylem, Alain; Howarth, Nigel; Gevenois, Pierre A; Tack, Denis
2016-01-01
The aim of the study was to compare radiation dose and image quality between the "average" and the "very strong" automatic exposure control (AEC) strength curves. Images reconstructed with filtered back-projection techniques and radiation dose data of unenhanced helical chest computed tomography (CT) examinations obtained at 2 hospitals (hospital A, hospital B) using the same scanner devices and acquisition protocols but different AEC strength curves were evaluated over a 3-month period. The selected AEC strength curve applied to "slim" patients (diameter <32 cm, estimated from the attenuation automatically measured on the topogram) was "average" in hospital A and "very strong" in hospital B. Two radiologists with 13 and 24 years of experience scored the image quality of the lung parenchyma and the mediastinum on a 5-point scale. The patients' effective diameter, the delivered CT dose index volume, and dose-length products were recorded. A total of 410 patients were included. The average body mass index was 24.0 kg/m² in hospital A and 24.8 kg/m² in hospital B. There was no significant difference between hospitals with respect to age, sex ratio, weight, height, body mass index, effective diameter, or image quality scores for each radiologist (P ranging from 0.050 to 1.000). The mean CT dose index volume for the entire population was 2.0 mGy and was significantly lower in hospital B with the "very strong" AEC curve than in hospital A (-11%, P=0.001). The mean dose-length product delivered in this 70-kg population was 68 mGy·cm, corresponding to an effective dose of 0.95 mSv. Changing the AEC strength curve from "average" to "very strong" for slim patients maintains image quality and reduces the radiation dose to <1 mSv in routine chest CT examinations reconstructed with filtered back-projection techniques.
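The reported effective dose follows from the dose-length product via the standard region-specific conversion coefficient, k ≈ 0.014 mSv/(mGy·cm) for the chest (the abstract does not state which coefficient was used, but this widely used value reproduces the figure):

```python
# Effective dose from dose-length product: E = k * DLP
DLP = 68.0        # mGy·cm, mean value reported in the abstract
k_chest = 0.014   # mSv per mGy·cm, standard adult chest coefficient
E = k_chest * DLP
print(round(E, 2))  # 0.95 mSv, matching the reported effective dose
```

This linear conversion is why a roughly 11% reduction in CTDIvol (and hence DLP) translates directly into a proportional reduction in effective dose.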
Puccio, Benjamin; Pooley, James P; Pellman, John S; Taverna, Elise C; Craddock, R Cameron
2016-10-25
Skull-stripping is the procedure of removing non-brain tissue from anatomical MRI data. This procedure can be useful for calculating brain volume and for improving the quality of other image processing steps. Developing new skull-stripping algorithms and evaluating their performance requires gold standard data from a variety of different scanners and acquisition methods. We complement existing repositories with manually corrected brain masks for 125 T1-weighted anatomical scans from the Nathan Kline Institute Enhanced Rockland Sample Neurofeedback Study. Skull-stripped images were obtained using a semi-automated procedure that involved skull-stripping the data using the brain extraction based on nonlocal segmentation technique (BEaST) software, and manually correcting the worst results. Corrected brain masks were added into the BEaST library and the procedure was repeated until acceptable brain masks were available for all images. In total, 85 of the skull-stripped images were hand-edited and 40 were deemed to not need editing. The results are brain masks for the 125 images along with a BEaST library for automatically skull-stripping other data. Skull-stripped anatomical images from the Neurofeedback sample are available for download from the Preprocessed Connectomes Project. The resulting brain masks can be used by researchers to improve preprocessing of the Neurofeedback data, as training and testing data for developing new skull-stripping algorithms, and for evaluating the impact on other aspects of MRI preprocessing. We have illustrated the utility of these data as a reference for comparing various automatic methods and evaluated the performance of the newly created library on independent data.
Design and Implementation of A DICOM PACS With Secure Access Via Internet
2001-10-25
which we have called SINFIM (Sistema de INFormación de Imágenes Médicas, i.e., Medical Image Information System). This system permits the automatic acquisition of DICOM images [2] [3] ... [1] MA. Martínez, AJR. Jiménez, BV. Medina, LJ. Azpiroz. "Los Sistemas PACS" ["PACS Systems"]. [Last access 08/12/2000]. Available URL http://itzamna.uam.mx... "... June, approving the Regulation on security measures for automated files containing personal data" (Boletín Oficial del Estado, number 151, 25 June 1999)
Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System
NASA Astrophysics Data System (ADS)
Stebner, K.; Wieden, A.
2014-03-01
Dynamic camera systems with moving parts are difficult to handle in the photogrammetric workflow because it is not ensured that the dynamics are constant over the recording period. Minimal changes in the camera's orientation greatly influence the projection of oblique images. In this publication these effects, originating from the kinematic chain of a dynamic camera system, are analysed and validated. A member of the Modular Airborne Camera System family, MACS-TumbleCam, consisting of a vertical-viewing and a tumbling oblique camera, was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique data set ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.
Deformable templates guided discriminative models for robust 3D brain MRI segmentation.
Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen
2013-10-01
Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
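The image topology idea above amounts to pruning the quadratic all-pairs match graph: only images whose flight-control positions lie within some distance of each other are considered for feature matching. A minimal sketch (the distance threshold and brute-force pairing are illustrative simplifications):

```python
import numpy as np

def candidate_pairs(positions, max_dist):
    """Prune the O(n^2) match graph using flight-control positions: only
    image pairs taken within max_dist of each other are kept for matching."""
    pos = np.asarray(positions, dtype=float)
    pairs = []
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if np.linalg.norm(pos[i] - pos[j]) <= max_dist:
                pairs.append((i, j))
    return pairs
```

Since feature matching dominates SfM runtime, restricting matching to these spatially plausible pairs is what yields the speed-up reported for large scenes.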
Integrated optical 3D digital imaging based on DSP scheme
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.
2008-03-01
We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is based on a parallel hardware structure that uses a DSP and a field-programmable gate array (FPGA) to realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipelined processing of fringe projection, image acquisition, and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
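The core of phase measurement profilometry is recovering the wrapped phase from phase-shifted fringe images. For the common four-step variant (assumed here; the abstract does not say how many steps were used), with I_k = A + B·cos(φ + kπ/2), the phase is φ = atan2(I₃ − I₁, I₀ − I₂):

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images shifted by pi/2 each:
    I3 - I1 = 2B*sin(phi), I0 - I2 = 2B*cos(phi), so
    phi = atan2(I3 - I1, I0 - I2), independent of background A and contrast B."""
    return np.arctan2(I3.astype(float) - I1, I0.astype(float) - I2)
```

Because A and B cancel, the result is insensitive to ambient light and fringe contrast; phase unwrapping then converts the wrapped phase to a continuous range map.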
Galindo, Enrique; Larralde-Corona, C Patricia; Brito, Teresa; Córdova-Aguilar, Ma Soledad; Taboada, Blanca; Vega-Alvarado, Leticia; Corkidi, Gabriel
2005-03-30
Fermentation bioprocesses typically involve two liquid phases (i.e. water and organic compounds) and one gas phase (air), together with suspended solids (i.e. biomass), which are the components to be dispersed. Characterization of multiphase dispersions is required because it determines mass transfer efficiency and bioreactor homogeneity. It is also needed for the appropriate design of contacting equipment, helping to establish optimum operational conditions. This work describes the development of image analysis based techniques, with advantages in terms of data acquisition and processing, for the characterization of oil drop and bubble diameters in complex simulated fermentation broths. The system consists of fully digital acquisition of in situ images obtained from the inside of a mixing tank using a CCD camera synchronized with a stroboscopic light source; the images are processed with versatile commercial software. To improve the automation of particle recognition and counting, the Hough transform (HT) was used, so bubbles and oil drops were automatically detected and the processing time was reduced by 55% without losing accuracy with respect to a fully manual analysis. The system has been used for the detailed characterization of a number of operational conditions, including oil content, biomass morphology, presence of surfactants (such as proteins) and viscosity of the aqueous phase.
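The circle Hough transform used above votes, for each edge pixel, along a circle of the sought radius; accumulator peaks mark likely bubble or drop centres. A minimal single-radius sketch (production systems, and presumably the one above, sweep a range of radii):

```python
import numpy as np

def hough_circle_centers(edges, radius, n_angles=64):
    """Accumulate Hough votes for circle centres of a known radius from a
    binary edge map. Peaks in the accumulator are candidate centres."""
    acc = np.zeros(edges.shape, dtype=int)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in zip(ys, xs):
        # Each edge pixel votes along a circle of the given radius around it.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc
```

Because votes from all edge pixels of a true circle coincide at its centre, the method tolerates partial occlusion and overlapping particles better than simple blob analysis, which is what makes it attractive for crowded dispersions.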
Reproducible MRI Measurement of Adipose Tissue Volumes in Genetic and Dietary Rodent Obesity Models
Johnson, David H.; Flask, Chris A.; Ernsberger, Paul R.; Wong, Wilbur C. K.; Wilson, David L.
2010-01-01
Purpose To develop ratio MRI [lipid/(lipid+water)] methods for assessing lipid depots and to compare measurement variability to biological differences among lean controls (spontaneously hypertensive rats, SHRs), dietary obese (SHR-DO), and genetic/dietary obese (SHROB) animals. Materials and Methods Images with and without CHESS water-suppression were processed using a semi-automatic method accounting for relaxometry, chemical shift, receive coil sensitivity, and partial volume. Results Partial volume correction improved results by 10–15%. Over six operators, volume variation was reduced to 1.9 ml from the 30.6 ml obtained for single-image analysis with intensity inhomogeneity. For three acquisitions on the same animal, volume reproducibility was <1%. SHROBs had 6× the visceral and 8× the subcutaneous adipose tissue of SHRs. SHR-DOs had enlarged visceral depots (3× SHRs). SHROBs had significantly more subcutaneous adipose tissue, indicating a strong genetic component to this fat depot. Liver ratios in SHR-DO and SHROB were higher than in SHR, indicating elevated fat content. Among SHROBs, evidence suggested a phenotype, SHROB*, having elevated liver ratios and visceral adipose tissue volumes. Conclusion The effects of diet and genetics on obesity were significantly larger than the variations due to image acquisition and analysis, indicating that these methods can be used to assess accumulation/depletion of lipid depots in animal models of obesity. PMID:18821617
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyer, W.B.
1979-09-01
This report describes both the hardware and software components of an automatic calibration and signal system (Autocal) for the data acquisition system for the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications and overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Kuan-Hao; Hu, Lingzhi; Traughber, Melanie
Purpose: MR-based pseudo-CT has an important role in MR-based radiation therapy planning and PET attenuation correction. The purpose of this study is to establish a clinically feasible approach, including image acquisition, correction, and CT formation, for pseudo-CT generation of the brain using a single-acquisition, undersampled ultrashort echo time (UTE)-mDixon pulse sequence. Methods: Nine patients were recruited for this study. For each patient, a 190-s, undersampled, single-acquisition UTE-mDixon sequence of the brain was acquired (TE = 0.1, 1.5, and 2.8 ms). A novel method of retrospective trajectory correction of the free induction decay (FID) signal was performed based on point-spread functions of three external MR markers. Two-point Dixon images were reconstructed using the first and second echo data (TE = 1.5 and 2.8 ms). R2* images (1/T2*) were then estimated and were used to provide bone information. Three image features, i.e., Dixon-fat, Dixon-water, and R2*, were used for unsupervised clustering. Five tissue clusters, i.e., air, brain, fat, fluid, and bone, were estimated using the fuzzy c-means (FCM) algorithm. A two-step, automatic tissue-assignment approach was proposed and designed according to the prior information of the given feature space. Pseudo-CTs were generated by a voxelwise linear combination of the membership functions of the FCM. A low-dose CT was acquired for each patient and was used as the gold standard for comparison. Results: The contrast and sharpness of the FID images were improved after trajectory correction was applied. The mean of the estimated trajectory delay was 0.774 μs (max: 1.350 μs; min: 0.180 μs). The FCM-estimated centroids of different tissue types showed a distinguishable pattern for different tissues, and significant differences were found between the centroid locations of different tissue types.
Pseudo-CT can provide additional skull detail and shows low voxelwise bias and absolute error in estimated CT numbers (−22 ± 29 HU and 130 ± 16 HU) when compared to low-dose CT. Conclusions: The MR features generated by the proposed acquisition, correction, and processing methods may provide representative clustering information and could thus be used for clinical pseudo-CT generation.
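The clustering-to-pseudo-CT step described in this record can be sketched in a few lines of NumPy. This is a minimal illustration of fuzzy c-means followed by a voxelwise linear combination of memberships, not the authors' implementation; the representative HU value per tissue class is an assumption for the example.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means. X: (n_voxels, n_features) feature matrix
    (e.g. Dixon-fat, Dixon-water, R2* per voxel); c: number of clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                          # memberships sum to 1 per voxel
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centroids[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)           # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centroids, U

def pseudo_ct(U, class_hu):
    """Voxelwise linear combination of FCM memberships with one
    representative CT number per tissue class (values assumed)."""
    return np.asarray(class_hu, dtype=float) @ U
```

With five clusters (air, brain, fat, fluid, bone) one might pass e.g. `class_hu = [-1000, 40, -100, 15, 700]`; in the study itself the mapping from memberships to CT numbers is derived from the data.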
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Lo, P; Hoffman, J
Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional recon methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture.
Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
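A configuration-driven batch driver of the kind this record describes might look like the following sketch. The dose levels, reconstruction names, and output naming are illustrative assumptions, not the pipeline's actual interface; each emitted job would be handed to the dose-simulation and reconstruction tools (e.g. FreeCT-wFBP) by the queue runner.

```python
import itertools
from pathlib import Path

# Assumed study matrix: dose fractions and reconstruction settings.
DOSE_LEVELS = [1.0, 0.5, 0.25, 0.1]
RECONS = ["fbp_smooth", "fbp_sharp", "iterative_cd"]   # hypothetical names

def build_jobs(raw_projection_file):
    """Enumerate one job per dose/reconstruction combination, so the
    batch queue can run them independently (and in parallel)."""
    stem = Path(raw_projection_file).stem
    for dose, recon in itertools.product(DOSE_LEVELS, RECONS):
        yield {
            "input": raw_projection_file,
            "dose": dose,                   # fraction of the original dose
            "recon": recon,
            "output": f"{stem}_d{int(dose * 100):03d}_{recon}.img",
        }
```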
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonaque, J; Bautista-Ballesteros, J; Ibanez-Rosello, B
Purpose: To estimate the sensitivity of the TrueBeam 2.0 imaging system's 6DoF automatic matching tool through the acquisition of cone-beam CT images of different phantoms, applying submillimeter translations and rotations of tenths of a degree, registered against the simulation CT images. Methods: To evaluate the overall imaging system, we consider two sources of uncertainty. The first is the uncertainty of the manual phantom displacement (ε-m), measured with a digital caliper (0.01 mm) for the vertical (Vrt), lateral (Lat) and longitudinal (Lng) directions, with a digital inclinometer (0.01°) for pitch and roll, and with the phantom's own scale for the rotation coordinate (Rtn). The second is the displacement detected by the system's matching algorithm (σ-d), which we obtain from the standard deviations of the different measurements. We use three different phantoms: the BrainLab radiosurgery mask-support system with an anthropomorphic dummy, adapted to allow displacements of 0.1 mm in the Vrt, Lat and Lng dimensions and rotations of 0.1° in pitch; and, for the analysis of the Rtn and Roll dimensions, two homemade phantoms (RinoRot and RinoRoll, La Fe Hospital, Valencia, Spain) that allow rotations of 0.3°. Results: For manual translations of 0.10 ± 0.03 mm, the system detected 0.10 ± 0.07 mm, 0.12 ± 0.07 mm and 0.13 ± 0.07 mm (mean ± SD) in Lat, Vrt and Lng, respectively. For the rotational dimensions, a manual displacement of 0.3 ± 0.1° was detected as 0.19 ± 0.06°, 0.29 ± 0.03° and 0.27 ± 0.06° in Pitch, Roll and Rtn. Conclusion: We conclude that the sensitivity of the automatic matching system is within 0.10 mm in translations and 0.3° in rotations. These values are below the software's own sensitivity.
NASA Astrophysics Data System (ADS)
Sirakov, Nikolay M.; Suh, Sang; Attardo, Salvatore
2011-06-01
This paper presents a further step in research toward the development of a quick and accurate weapon-identification methodology and system. A basic stage of this methodology is the automatic acquisition and updating of a weapons ontology as a source for deriving high-level weapons information. The present paper outlines the main ideas used to approach this goal. In the next stage, a clustering approach is suggested on the basis of a hierarchy of concepts. An inherent slot of every node of the proposed ontology is a low-level feature vector (LLFV), which facilitates the search through the ontology. Part of the LLFV is information about the object's parts. To partition an object, a new approach is presented that is capable of defining the object's concavities, used to mark the end points of weapon parts, which are considered as convexities. Further, an existing matching approach is optimized to determine whether an ontological object matches the objects from an input image. Objects from derived ontological clusters will be considered for the matching process. Image resizing is studied and applied to decrease the runtime of the matching approach and to investigate its rotational and scaling invariance. A set of experiments was performed to validate the theoretical concepts.
Near Real-Time Automatic Marine Vessel Detection on Optical Satellite Images
NASA Astrophysics Data System (ADS)
Máttyus, G.
2013-05-01
Vessel monitoring and surveillance is important for maritime safety and security, environmental protection and border control. Ship monitoring systems based on Synthetic Aperture Radar (SAR) satellite images are operational. On SAR images, ships, being made of metal with sharp edges, appear as bright dots and edges and can therefore be well distinguished from the water. Since radar is independent of sunlight and can acquire images even in cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend SAR-based systems by providing more frequent revisit times and overcoming some drawbacks of SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution, enabling the detection of smaller vessels and enhancing vessel type classification. The human interpretation of an optical image is also easier than that of a SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For detection, the classifier is slid along the image in various directions and scales. The detector has a cascade structure which rejects most of the background in the early stages, leading to faster execution. The detections are grouped together to avoid multiple detections. Finally, the position, size (i.e. length and width) and heading of the vessels are extracted from the contours of the vessel. The presented method is parallelized, so it runs fast (within minutes for a 16000 × 16000 pixel image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.
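The multi-scale sliding-window cascade described in this record can be sketched as follows. The window size, stride, downsampling factors, and stage classifiers are illustrative assumptions, not the paper's trained detector.

```python
import numpy as np

def cascade_detect(image, stages, win=24, stride=8, factors=(1, 2, 4)):
    """Slide a fixed-size window over downsampled copies of the image.
    `stages` is an ordered list of (score_fn, threshold) pairs; a window
    is rejected as background as soon as any stage scores below its
    threshold, so most of the image is discarded cheaply in early stages."""
    detections = []
    for f in factors:
        img = image[::f, ::f]                       # crude pyramid level
        rows, cols = img.shape
        for y in range(0, rows - win + 1, stride):
            for x in range(0, cols - win + 1, stride):
                patch = img[y:y + win, x:x + win]
                if all(score(patch) >= thr for score, thr in stages):
                    # map back to full-resolution coordinates
                    detections.append((x * f, y * f, win * f))
    return detections
```

A real detector would use trained stages (e.g. boosted classifiers over fast-to-compute features) and a final grouping step to merge overlapping detections.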
Herold, Volker; Herz, Stefan; Winter, Patrick; Gutjahr, Fabian Tobias; Andelovic, Kristina; Bauer, Wolfgang Rudolf; Jakob, Peter Michael
2017-10-16
Local aortic pulse wave velocity (PWV) is a measure of vascular stiffness and has predictive value for cardiovascular events. Ultra-high-field CMR scanners allow the quantification of local PWV in mice; however, these systems are as yet unable to monitor the distribution of local elasticities. In the present study we provide a new accelerated method to quantify local aortic PWV in mice with phase-contrast cardiovascular magnetic resonance imaging (PC-CMR) at 17.6 T. Based on a k-t BLAST (Broad-use Linear Acquisition Speed-up Technique) undersampling scheme, total measurement time could be reduced by a factor of 6. The fast data acquisition enables quantification of the local PWV at several locations along the aorta based on the evaluation of local temporal changes in blood flow and vessel cross-sectional area. To speed up post-processing and to eliminate operator bias, we introduce a new semi-automatic segmentation algorithm to quantify cross-sectional areas of the aortic vessel. The new methods were applied in 10 eight-month-old mice (4 C57BL/6J mice and 6 ApoE(-/-) mice) at 12 adjacent locations along the abdominal aorta. Accelerated data acquisition and semi-automatic post-processing delivered reliable measures of the local PWV, similar to those obtained with full data sampling and manual segmentation. No statistically significant differences of the mean values could be detected for the different measurement approaches. Mean PWV values were elevated in the ApoE(-/-) group compared to the C57BL/6J group (3.5 ± 0.7 m/s vs. 2.2 ± 0.4 m/s, p < 0.01). A more heterogeneous PWV distribution was observed in the ApoE(-/-) animals compared to the C57BL/6J mice, reflecting the local character of lesion development in atherosclerosis. In the present work, we showed that k-t BLAST PC-MRI enables the measurement of the local PWV distribution in the mouse aorta.
The semi-automatic segmentation method based on PC-CMR data allowed rapid determination of local PWV. The findings of this study demonstrate the ability of the proposed methods to non-invasively quantify the spatial variations in local PWV along the aorta of ApoE (-/-) -mice as a relevant model of atherosclerosis.
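The flow-area evaluation underlying such local PWV estimates can be sketched with the common QA formulation, where the local PWV is the slope dQ/dA during the reflection-free early-systolic upstroke. This is a generic sketch of that formulation, not the authors' exact implementation.

```python
import numpy as np

def local_pwv_qa(flow, area, upstroke):
    """Local PWV from the QA method: fit a line to flow Q versus
    cross-sectional area A over the reflection-free early-systolic
    upstroke; the slope dQ/dA is the local pulse wave velocity."""
    Q = np.asarray(flow, dtype=float)[upstroke]
    A = np.asarray(area, dtype=float)[upstroke]
    slope, _intercept = np.polyfit(A, Q, 1)
    return slope
```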
Towards native-state imaging in biological context in the electron microscope
Weston, Anne E.; Armer, Hannah E. J.
2009-01-01
Modern cell biology is reliant on light and fluorescence microscopy for analysis of cells, tissues and protein localisation. However, these powerful techniques are ultimately limited in resolution by the wavelength of light. Electron microscopes offer much greater resolution due to the shorter effective wavelength of electrons, allowing direct imaging of sub-cellular architecture. The harsh environment of the electron microscope chamber and the properties of the electron beam have led to complex chemical and mechanical preparation techniques, which distance biological samples from their native state and complicate data interpretation. Here we describe recent advances in sample preparation and instrumentation, which push the boundaries of high-resolution imaging. Cryopreparation, cryoelectron microscopy and environmental scanning electron microscopy strive to image samples in near native state. Advances in correlative microscopy and markers enable high-resolution localisation of proteins. Innovation in microscope design has pushed the boundaries of resolution to atomic scale, whilst automatic acquisition of high-resolution electron microscopy data through large volumes is finally able to place ultrastructure in biological context. PMID:19916039
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of the various factors of camera motion and optical blurring, facial expressions, gender, etc. Motion blurring can usually be presented on face images by the movement of the camera sensor and/or the movement of the face during image acquisition. Therefore, the facial feature in captured images can be transformed according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and its solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experiment results show that our method is more efficient for enhancing age estimation performance compared with systems that do not employ our method. PMID:26334282
First MRI application of an active breathing coordinator
NASA Astrophysics Data System (ADS)
Kaza, E.; Symonds-Tayler, R.; Collins, D. J.; McDonald, F.; McNair, H. A.; Scurr, E.; Koh, D.-M.; Leach, M. O.
2015-02-01
A commercial active breathing coordinator (ABC) device, employed to hold respiration at a specific level for a predefined duration, was successfully adapted for magnetic resonance imaging (MRI) use for the first time. Potential effects of the necessary modifications were assessed and taken into account. Automatic MR acquisition during ABC breath holding was achieved. The feasibility of MR-ABC thoracic and abdominal examinations together with the advantages of imaging in repeated ABC-controlled breath holds were demonstrated on healthy volunteers. Five lung cancer patients were imaged under MR-ABC, visually confirming the very good intra-session reproducibility of organ position in images acquired with the same patient positioning as used for computed tomography (CT). Using identical ABC settings, good MR-CT inter-modality registration was achieved. This demonstrates the value of ABC, since application of T1, T2 and diffusion weighted MR sequences provides a wider range of contrast mechanisms and additional diagnostic information compared to CT, thus improving radiotherapy treatment planning and assessment.
Optimization of PROPELLER reconstruction for free-breathing T1-weighted cardiac imaging.
Huang, Teng-Yi; Tseng, Yu-Shen; Tang, Yu-Wei; Lin, Yi-Ru
2012-08-01
Clinical cardiac MR imaging techniques generally require patients to hold their breath during the scanning process to minimize respiratory motion-related artifacts. However, some patients cannot hold their breath because of illness or limited breath-hold capacity. This study aims to optimize the PROPELLER reconstruction for free-breathing myocardial T1-weighted imaging. Eight healthy volunteers (8 men; mean age 26.4 years) participated in this study after providing institutionally approved consent. The PROPELLER encoding method can reconstruct a low-resolution image from every blade because of k-space center oversampling. This study investigated the feasibility of extracting a respiratory trace from the PROPELLER blades by implementing a fully automatic region of interest selection and introducing a best template index to account for the property of the human respiration cycle. Results demonstrated that the proposed algorithm significantly improves the contrast-to-noise ratio and the image sharpness (p < 0.05). The PROPELLER method is expected to provide a robust tool for clinical application in free-breathing myocardial T1-weighted imaging. It could greatly facilitate the acquisition procedures during such a routine examination.
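The best-template idea in this record can be illustrated with a small sketch: choose the blade whose low-resolution image is most correlated with all others as the template, then use each blade's correlation with it as a respiratory surrogate. The automatic ROI selection step is omitted, and this is an illustration rather than the authors' code.

```python
import numpy as np

def respiratory_trace(blades):
    """blades: (n_blades, h, w) low-resolution images reconstructed from
    the oversampled k-space centers of the PROPELLER blades. Returns a
    surrogate respiratory trace and the chosen best-template index."""
    X = blades.reshape(len(blades), -1).astype(float)
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    C = (X @ X.T) / X.shape[1]              # pairwise correlation matrix
    best = int(np.argmax(C.sum(axis=1)))    # blade most similar to all others
    return C[best], best
```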
Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P
2004-11-01
Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. 
High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minute) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.
Karbalaie, Abdolamir; Abtahi, Farhad; Fatemi, Alimohammad; Etehadtavakol, Mahnaz; Emrani, Zahra; Erlandsson, Björn-Erik
2017-09-01
Nailfold capillaroscopy is a practical method for identifying and capturing morphological changes in capillaries, which might reveal relevant information about disease and health. Capillaroscopy is harmless, simple and repeatable. However, there is a lack of established guidelines and instructions for acquisition, as well as for the interpretation of the obtained images, which can lead to various ambiguities. In addition, assessment and interpretation of the acquired images are very subjective. In an attempt to overcome some of these problems, in this study a new modified technique for assessment of nailfold capillary density is introduced. The new method, named the elliptic broken line (EBL) method, is an extension of two previously known methods; it defines clear criteria for finding the apex of capillaries in different scenarios by using a fitted ellipse. A graphical user interface (GUI) was developed for pre-processing, manual assessment of capillary apexes and automatic correction of selected apexes based on the 90° rule. Intra- and inter-observer reliability of EBL and corrected EBL is evaluated in this study. Four independent observers familiar with capillaroscopy performed the assessment for 200 nailfold videocapillaroscopy images, from healthy subjects and systemic lupus erythematosus patients, in two different sessions. The results show an improvement from moderate (ICC=0.691) and good (ICC=0.753) agreement to good (ICC=0.750) and good (ICC=0.801) agreement for intra- and inter-observer reliability after automatic correction of EBL. This clearly shows the potential of the method to improve the reliability and repeatability of assessment, which motivates further development of an automatic tool for the EBL method. Copyright © 2017 Elsevier Inc. All rights reserved.
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
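The threshold-based percent fibroglandular volume computation in this record can be sketched as follows. The threshold selection itself (and the FCM alternative) is a separate step; this simple global version is an illustration, not the study's exact procedure.

```python
import numpy as np

def percent_fgv(volume, threshold, breast_mask=None):
    """Percent fibroglandular volume by histogram thresholding: within
    the breast (optionally restricted by a mask), voxels above
    `threshold` count as fibroglandular tissue."""
    v = np.asarray(volume, dtype=float)
    if breast_mask is not None:
        v = v[np.asarray(breast_mask, dtype=bool)]
    return 100.0 * np.count_nonzero(v > threshold) / v.size
```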
Looking inside the Ocean: Toward an Autonomous Imaging System for Monitoring Gelatinous Zooplankton
Corgnati, Lorenzo; Marini, Simone; Mazzei, Luca; Ottaviani, Ennio; Aliani, Stefano; Conversi, Alessandra; Griffa, Annalisa
2016-01-01
Marine plankton abundance and dynamics in the open and interior ocean is still a largely unknown field. Knowledge of gelatinous zooplankton distribution is especially challenging, because this type of plankton has a very fragile structure and cannot be directly sampled using traditional net-based techniques. To overcome this shortcoming, computer vision techniques can be successfully used for the automatic monitoring of this group. This paper presents the GUARD1 imaging system, a low-cost stand-alone instrument for underwater image acquisition and recognition of gelatinous zooplankton, and discusses the performance of three different methodologies, Tikhonov Regularization, Support Vector Machines and Genetic Programming, which have been compared in order to select the one to be run onboard the system for the automatic recognition of gelatinous zooplankton. The performance comparison highlights the high accuracy of the three methods in gelatinous zooplankton identification, showing their good capability in robustly selecting relevant features. In particular, the Genetic Programming technique achieves the same performance as the other two methods using a smaller set of features, thus being the most efficient in avoiding computationally demanding preprocessing stages, which is a crucial requirement for running on an autonomous imaging system designed for long-lasting deployments, like the GUARD1. The Genetic Programming algorithm has been installed onboard the system, which has been operationally tested in a two-month survey in the Ligurian Sea, providing satisfactory results in terms of monitoring and recognition performance. PMID:27983638
Automatic thermographic scanning with the creation of 3D panoramic views of buildings
NASA Astrophysics Data System (ADS)
Ferrarini, G.; Cadelano, G.; Bortolin, A.
2016-05-01
Infrared thermography is widely applied to the inspection of buildings, enabling the identification of thermal anomalies due to the presence of hidden structures, air leakages and moisture. One of the main advantages of this technique is the possibility of rapidly acquiring a temperature map of a surface. However, due to the low resolution of current thermal cameras and the necessity of scanning surfaces with different orientations, it is necessary to take multiple images during a building survey. In this work a device based on quantitative infrared thermography, called aIRview, has been applied during building surveys to automatically acquire thermograms with a camera mounted on a robotized pan-tilt unit. The goal is to perform a first rapid survey of the building that can provide useful information for subsequent quantitative thermal investigations. For each data acquisition, the instrument covers a rotational field of view of 360° around the vertical axis and up to 180° around the horizontal one. The obtained images have been processed in order to create a full equirectangular projection of the environment. The images have then been integrated into a web visualization tool, working with web panorama viewers such as Google Street View, creating a webpage where it is possible to make a three-dimensional virtual visit of the building. The thermographic data are embedded with the visual imaging and with other sensor data, facilitating the understanding of the physical phenomena underlying the temperature distribution.
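The mapping from pan/tilt pointing angles to pixel coordinates in an equirectangular panorama, as used when stitching such scans, can be sketched as follows. The angle conventions (pan measured from 0° with wrap-around, tilt +90° at the zenith) are assumptions for the example, not the aIRview device's actual convention.

```python
def equirect_pixel(pan_deg, tilt_deg, width, height):
    """Map a pan/tilt direction (degrees) to (x, y) pixel coordinates in
    an equirectangular image covering 360 degrees horizontally and 180
    degrees vertically: x is proportional to pan, y to (90 - tilt)."""
    x = (pan_deg % 360.0) / 360.0 * width
    y = (90.0 - tilt_deg) / 180.0 * height
    return x, y
```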
NASA Astrophysics Data System (ADS)
Xu, Guoping; Udupa, Jayaram K.; Tong, Yubing; Cao, Hanqiang; Odhner, Dewey; Torigian, Drew A.; Wu, Xingyu
2018-03-01
Currently, many papers have been published on the detection and segmentation of lymph nodes from medical images. However, it remains a challenging problem owing to the low contrast of lymph nodes with surrounding soft tissues and the variations of lymph node size and shape on computed tomography (CT) images. This is particularly difficult on the low-dose CT of PET/CT acquisitions. In this study, we utilize our previous automatic anatomy recognition (AAR) framework to recognize the thoracic lymph node stations defined by the International Association for the Study of Lung Cancer (IASLC) lymph node map. The lymph node stations themselves are viewed as anatomic objects and are localized by using a one-shot method in the AAR framework. Two strategies have been taken in this paper for integration into the AAR framework. The first is to combine some lymph node stations into composite lymph node stations according to their geometrical nearness. The other is to find the optimal parent (organ or union of organs) as an anchor for each lymph node station based on the recognition error, and thereby find an overall optimal hierarchy arranging anchor organs and lymph node stations. Based on 28 contrast-enhanced thoracic CT image data sets for model building and 12 independent data sets for testing, our results show that thoracic lymph node stations can be localized to within 2-3 voxels of the ground truth.
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
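The phase-correlation step in this record can be illustrated with a minimal 2-D sketch based on the normalized cross-power spectrum. Subpixel refinement (e.g., interpolating around the correlation peak, as the paper's subpixel operation suggests) and the artifact simulation are omitted here.

```python
import numpy as np

def phase_correlate(ref, img):
    """Estimate the integer (row, col) shift of `img` relative to `ref`
    from the peak of the normalized cross-power spectrum."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    F /= np.maximum(np.abs(F), 1e-12)       # keep phase, discard magnitude
    corr = np.abs(np.fft.ifft2(F))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative values
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```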
Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino
2018-01-01
Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI with better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.01). Blood flow analysis from manual and automatically corrected segmentations did not differ significantly (P > 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
ERIC Educational Resources Information Center
Chinkina, Maria; Ruiz, Simón; Meurers, Detmar
2017-01-01
We integrate insights from research in Second Language Acquisition (SLA) and Computational Linguistics (CL) to generate text-based questions. We discuss the generation of wh- questions as functionally-driven input enhancement facilitating the acquisition of particle verbs and report the results of two crowdsourcing studies. The first study shows…
Panoramic optical-servoing for industrial inspection and repair
NASA Astrophysics Data System (ADS)
Sallinger, Christian; O'Leary, Paul; Retschnig, Alexander; Kammerhofer, Martin
2004-05-01
Recently, specialized robots were introduced to perform the task of inspection and repair in large cylindrical structures such as ladles, melting furnaces and converters. This paper reports on the image processing system and optical servoing for one such robot. A panoramic image of the vessel's inner surface is produced by performing a coordinated robot motion and image acquisition. The level of projective distortion is minimized by acquiring a high density of images. Normalized phase correlation calculated via the 2D Fourier transform is used to calculate the shift between the single images. The narrow strips from the dense image map are then stitched together to build the panorama. The mapping between the panoramic image and the positioning of the robot is established during the stitching of the images. This enables optical feedback. The robot's operator can locate a defect on the surface by selecting the area of the image. Calculation of the forward and inverse kinematics enables the robot to automatically move to the location on the surface requiring repair. Experimental results using a standard 6R industrial robot have shown the full functionality of the system concept. Finally, test measurements were carried out successfully in a ladle at a temperature of 1100 °C.
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying Joseph; Huang, Weidong
2009-02-01
Computed radiography (CR) is considered a drop-in addition or replacement for traditional screen-film (SF) systems in digital mammography. Unlike other technologies, CR has the advantage of being compatible with existing mammography units. One of the challenges, however, is to properly configure the automatic exposure control (AEC) on existing mammography units for CR use. Unlike analogue systems, the capture and display of digital CR images are decoupled. The function of AEC changes from ensuring proper and consistent optical density of the captured image on film to balancing image quality with the patient dose needed for CR. One of the preferences when acquiring CR images under AEC is to use the same patient dose as SF systems. The challenge is whether the existing AEC design and calibration processes (most of them proprietary to the X-ray system manufacturers and tailored specifically for SF response properties) can be adapted for CR cassettes, in order to compensate for their response and attenuation differences. This paper describes the methods for configuring the AEC of three different mammography unit models to match the patient dose used for CR with that used for a KODAK MIN-R 2000 SF System. Based on phantom test results, these methods provide the dose level under AEC for the CR systems to match the dose of SF systems. These methods can be used in clinical environments that require the acquisition of CR images under AEC at the same dose levels as those used for SF systems.
NASA Astrophysics Data System (ADS)
Ughi, Giovanni J.; Adriaenssens, Tom; Larsson, Matilda; Dubois, Christophe; Sinnaeve, Peter; Coosemans, Mark; Desmet, Walter; D'hooghe, Jan
2012-01-01
In the last decade a large number of new intracoronary devices (e.g., drug-eluting stents, DES) have been developed to reduce the risks related to bare metal stent (BMS) implantation. The use of this new generation of DES has been shown to substantially reduce, compared with BMS, the occurrence of restenosis and recurrent ischemia that would necessitate a second revascularization procedure. Nevertheless, safety issues with the use of DES persist, and full understanding of the mechanisms of adverse clinical events is still a matter of concern and debate. Intravascular Optical Coherence Tomography (IV-OCT) is an imaging technique able to visualize the microstructure of blood vessels with an axial resolution of <20 μm. Due to its very high spatial resolution, it enables detailed in-vivo assessment of implanted devices and the vessel wall. Currently, the aim of several major clinical trials is to observe and quantify the vessel response to DES implantation over time. However, image analysis is currently performed manually, and corresponding images belonging to different IV-OCT acquisitions can only be matched through a very labor-intensive and subjective procedure. The aim of this study is to develop and validate a new methodology for the automatic registration of IV-OCT datasets on an image level. Hereto, we propose a landmark-based rigid registration method exploiting the metallic stent framework as a feature. Such a tool would provide a better understanding of the behavior of different intracoronary devices in-vivo, giving unique insights into vessel pathophysiology and the performance of the new generation of intracoronary devices and different drugs.
Precision targeting with a tracking adaptive optics scanning laser ophthalmoscope
NASA Astrophysics Data System (ADS)
Hammer, Daniel X.; Ferguson, R. Daniel; Bigelow, Chad E.; Iftimia, Nicusor V.; Ustun, Teoman E.; Noojin, Gary D.; Stolarski, David J.; Hodnett, Harvey M.; Imholte, Michelle L.; Kumru, Semih S.; McCall, Michelle N.; Toth, Cynthia A.; Rockwell, Benjamin A.
2006-02-01
Precise targeting of retinal structures including retinal pigment epithelial cells, feeder vessels, ganglion cells, photoreceptors, and other cells important for light transduction may enable earlier disease intervention with laser therapies and advanced methods for vision studies. A novel imaging system based upon scanning laser ophthalmoscopy (SLO) with adaptive optics (AO) and active image stabilization was designed, developed, and tested in humans and animals. An additional port allows delivery of aberration-corrected therapeutic/stimulus laser sources. The system design includes simultaneous presentation of non-AO, wide-field (~40 deg) and AO, high-magnification (1-2 deg) retinal scans easily positioned anywhere on the retina in a drag-and-drop manner. The AO optical design achieves an error of <0.45 waves (at 800 nm) over +/-6 deg on the retina. A MEMS-based deformable mirror (Boston Micromachines Inc.) is used for wave-front correction. The third generation retinal tracking system achieves a bandwidth of greater than 1 kHz allowing acquisition of stabilized AO images with an accuracy of ~10 μm. Normal adult human volunteers and animals with previously-placed lesions (cynomolgus monkeys) were tested to optimize the tracking instrumentation and to characterize AO imaging performance. Ultrafast laser pulses were delivered to monkeys to characterize the ability to precisely place lesions and stimulus beams. Other advanced features such as real-time image averaging, automatic high-resolution mosaic generation, and automatic blink detection and tracking re-lock were also tested. The system has the potential to become an important tool to clinicians and researchers for early detection and treatment of retinal diseases.
A Fast Approach for Stitching of Aerial Images
NASA Astrophysics Data System (ADS)
Moussa, A.; El-Sheimy, N.
2016-06-01
The last few years have witnessed an increasing volume of aerial image data because of the extensive improvements of Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the achieved coverage and overlap of the acquired images of a UAV flight mission is of great help in saving the time and cost of the subsequent steps. A fast automatic stitching of the acquired images can help to visually assess the achieved coverage and overlap during the flight mission. This paper proposes an automatic image stitching approach that creates a single overview stitched image using the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images that are typically involved in such scenarios. A short flight mission with an image acquisition frequency of one second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for all the transformation parameters of all the photos together, to save the long computation time expected if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation helps to match only the neighboring images and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process.
The pre-estimated transformation parameters of the images are employed successively in a growing fashion to create the stitched image and the coverage image. The proposed approach is implemented and tested using the images acquired through a UAV flight mission and the achieved results are presented and discussed.
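The neighborhood-restricted matching idea can be illustrated with SciPy's Delaunay triangulation as a stand-in for the incremental constrained triangulation built in the paper; only image pairs sharing a triangulation edge would then be passed to the SIFT matching step. This is a sketch of the concept, not the authors' code.

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbor_pairs(image_positions):
    """Index pairs of images that share a Delaunay edge, given the initial
    image positions reported by the navigation sensors. Restricting feature
    matching to these pairs avoids the O(n^2) cost of matching every image
    against every other image."""
    tri = Delaunay(np.asarray(image_positions, dtype=float))
    pairs = set()
    for simplex in tri.simplices:
        for i in range(3):
            for j in range(i + 1, 3):
                a, b = sorted((int(simplex[i]), int(simplex[j])))
                pairs.add((a, b))
    return pairs
```

For n images the triangulation has O(n) edges, so the number of pairwise matching runs grows linearly rather than quadratically with the mission size.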
SU-G-TeP4-13: Interfraction Treatment Monitoring Using Integrated Invivo EPID Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Defoor, D; Papanikolaou, N; Stathakis, S
Purpose: To investigate inter-fraction differences of dose delivery by analyzing portal images acquired during treatment and implement an automated system to generate a report for each fraction. Large differences in images between fractions can alert the physicist to possible machine performance issues or patient set-up errors. Methods: A Varian Novalis Tx equipped with an HD120 MLC and an aS1000 electronic portal imaging device (EPID) was used in our study. EPID images were acquired in continuous acquisition mode for 32 volumetric modulated arc therapy (VMAT) patients. The images are summed to create an image for each arc and a single image for each fraction. The first fraction is designated as the reference unless a machine error prevented acquisition of all images. The images for each beam as well as the fraction image are compared using gamma analysis at 1%/1mm, 2%/2mm and 3%/3mm. A report is then generated using an in-house MatLab program containing the comparison for the current fraction as well as a history of previous fractions. The reports are automatically sent via email to the physicist for review. Fractions in which the total number of images was not within 5% of the reference number of images were not included in the results. Results: 91 of the 182 fractions recorded an image count within 5% of the reference. Gamma averages over all fractions and patients were 96.2% ±0.8% at 3%/3mm, 92.9% ±1% at 2%/2mm and 80.6% ±1.8% at 1%/1mm. The SD between fractions for each patient ranged from 0.004% to 10.4%. Of the 91 fractions, 3 were flagged due to low gamma values. After further investigation no significant errors were found. Conclusion: This toolkit can be used for in-vivo monitoring of treatment plan delivery and alert the physics staff of any inter-fraction discrepancies that may require further investigation.
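A brute-force version of the gamma comparison used here can be sketched as follows. The tolerances are expressed in pixels rather than millimetres (e.g. 3%/3 px standing in for 3%/3 mm with 1 mm pixels), and this simplified global 2D gamma is an illustration, not the clinical implementation.

```python
import numpy as np

def gamma_pass_rate(reference, evaluated, dose_tol=0.03, dist_tol_px=3, threshold=0.1):
    """Global 2D gamma pass rate between two dose images.

    dose_tol: dose-difference criterion, as a fraction of the maximum
    reference dose. dist_tol_px: distance-to-agreement criterion in pixels.
    Points below `threshold` of the maximum dose are excluded, as is
    conventional."""
    ref = np.asarray(reference, dtype=float)
    ev = np.asarray(evaluated, dtype=float)
    dmax = ref.max()
    ny, nx = ref.shape
    w = int(np.ceil(dist_tol_px))
    passed = total = 0
    for y in range(ny):
        for x in range(nx):
            if ref[y, x] < threshold * dmax:
                continue  # below the low-dose cutoff
            total += 1
            best = np.inf
            # Search a window of evaluated points around (y, x)
            for dy in range(-w, w + 1):
                for dx in range(-w, w + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < ny and 0 <= xx < nx):
                        continue
                    dose_term = ((ev[yy, xx] - ref[y, x]) / (dose_tol * dmax)) ** 2
                    dist_term = (dy * dy + dx * dx) / dist_tol_px ** 2
                    best = min(best, dose_term + dist_term)
            if best <= 1.0:  # gamma <= 1 means the point passes
                passed += 1
    return passed / total if total else 1.0
```

Production implementations interpolate the evaluated dose and vectorize the search; the quadratic brute-force loop above is only meant to make the gamma criterion itself explicit.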
Tuning of automatic exposure control strength in lumbar spine CT.
D'Hondt, A; Cornil, A; Bohy, P; De Maertelaer, V; Gevenois, P A; Tack, D
2014-05-01
To investigate the impact of tuning the automatic exposure control (AEC) strength curve (specific to Care Dose 4D®; Siemens Healthcare, Forchheim, Germany) from "average" to "strong" on image quality, radiation dose and operator dependency during lumbar spine CT examinations. Two hospitals (H1, H2), both using the same scanners, were considered for two time periods (P1 and P2). During P1, the AEC curve was "average" and radiographers had to select one of two protocols according to the body mass index (BMI): "standard" if BMI <30.0 kg m(-2) (120 kV-330 mAs) or "large" if BMI >30.0 kg m(-2) (140 kV-280 mAs). During P2, the AEC curve was changed to "strong", and all acquisitions were obtained with one protocol (120 kV and 270 mAs). Image quality was scored and patients' diameters calculated for both periods. 497 examinations were analysed. There was no significant difference in mean diameters according to hospitals and periods (p > 0.801) and in quality scores between periods (p > 0.172). There was a significant difference between hospitals regarding how often the "large" protocol was assigned [13 (10%)/132 patients in H1 vs 37 (28%)/133 in H2] (p < 0.001). During P1, volume CT dose index (CTDIvol) was higher in H2 (+13%; p = 0.050). In both hospitals, CTDIvol was reduced between periods (-19.2% in H1 and -29.4% in H2; p < 0.001). An operator dependency in protocol selection, unexplained by patient diameters or highlighted by image quality scores, has been observed. Tuning the AEC curve from average to strong enables suppression of the operator dependency in protocol selection and related dose increase, while preserving image quality. CT acquisition protocols based on weight are responsible for biases in protocol selection. Using an appropriate AEC strength curve reduces the number of protocols to one. Operator dependency of protocol selection is thereby eliminated.
Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan
2016-09-01
To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern during a continuous flexion and extension were collected for five patients prone to patellar luxation, both pre- and post-surgically. In the proposed method, an observer places landmarks in a single 3D volume, which are then automatically propagated to the other volumes in the time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, but with less evident differences for two of the patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods, but with the distinct advantage of supporting continuous motion during image acquisition.
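The landmark-derived measures are simple geometry. A minimal sketch, assuming hypothetical landmark pairs that define the femoral and tibial axes (the actual landmark definitions are the paper's), might look like:

```python
import numpy as np

def axis_angle_deg(axis_a, axis_b):
    """Angle in degrees between two 3D direction vectors, e.g. femoral and
    tibial shaft axes each defined by a pair of propagated landmarks."""
    a = np.asarray(axis_a, dtype=float)
    b = np.asarray(axis_b, dtype=float)
    cos_ang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

def landmark_displacement(p, q):
    """Euclidean distance between corresponding landmarks in two volumes,
    e.g. the patellar displacement between time points."""
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))
```

Evaluating these measures in every volume of the 4D sequence yields the motion-pattern curves that the pre- and post-surgical comparisons are based on.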
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang
The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its superiority of nonscanning operation, large field of view, high sensitivity, and high precision. However, achieving more efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging remains a problem. In this study, a novel LADAR system using four linear-mode APDs (LmAPDs) is developed for high-efficiency detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying the virtual instrumentation technique. The control system aims to achieve four functions: synchronization of laser emission and the rotating platform, multi-channel synchronous data acquisition, real-time Ethernet upper-level monitoring, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system utilized only 4 LmAPDs to achieve 256-pixel-per-frame detection by employing a 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and the range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.
Covert photo classification by fusing image features and visual attributes.
Lang, Haitao; Ling, Haibin
2015-10-01
In this paper, we study a novel problem of classifying covert photos, whose acquisition processes are intentionally concealed from the subjects being photographed. Covert photos are often privacy invasive and, if distributed over the Internet, can cause serious consequences. Automatic identification of such photos therefore serves as an important initial step toward further privacy protection operations. The problem is, however, very challenging due to the large semantic similarity between covert and noncovert photos, the enormous diversity in the photographing process and environment of covert photos, and the difficulty of collecting an effective data set for the study. Attacking these challenges, we make three consecutive contributions. First, we collect a large data set containing 2500 covert photos, each of which is verified rigorously and carefully. Second, we conduct a user study on how humans distinguish covert photos from noncovert ones. The user study not only provides an important evaluation baseline, but also suggests fusing heterogeneous information for an automatic solution. Our third contribution is a covert photo classification algorithm that fuses various image features and visual attributes in the multiple kernel learning framework. We evaluate the proposed approach on the collected data set in comparison with other modern image classifiers. The results show that our approach achieves an average classification rate (1-EER) of 0.8940, which significantly outperforms other competitors as well as human performance.
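The kernel-fusion idea can be illustrated with a fixed-weight combination of RBF kernels and kernel ridge regression; actual multiple kernel learning learns the kernel weights jointly with the classifier, and the feature sets, weights and hyperparameters below are placeholders rather than the paper's choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-wise sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_fused(feature_sets, y, weights, lam=1e-3):
    """Fit kernel ridge regression on a fixed-weight sum of RBF kernels,
    one kernel per feature representation (image features, attributes, ...)."""
    K = sum(w * rbf_kernel(F, F) for w, F in zip(weights, feature_sets))
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict_fused(train_sets, test_sets, alpha, weights):
    """Decision values for test samples under the same fused kernel."""
    K = sum(w * rbf_kernel(Ft, F)
            for w, F, Ft in zip(weights, train_sets, test_sets))
    return K @ alpha
```

Replacing the fixed `weights` with weights optimized under an MKL objective (and the ridge solver with an SVM) recovers the framework the paper uses.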
[Research on automatic external defibrillator based on DSP].
Jing, Jun; Ding, Jingyan; Zhang, Wei; Hong, Wenxue
2012-10-01
Electrical defibrillation is the most effective way to treat ventricular tachycardia (VT) and ventricular fibrillation (VF). An automatic external defibrillator based on DSP is introduced in this paper. The whole design consists of the signal collection module, the microprocessor control module, the display module, the defibrillation module and the automatic recognition algorithm for VF and non-VF, etc. This automatic external defibrillator achieves goals such as real-time ECG signal acquisition, synchronous ECG waveform display, data delivery to a USB disk, and automatic defibrillation when a shockable rhythm appears.
Automatic detection of blurred images in UAV image sets
NASA Astrophysics Data System (ADS)
Sieberth, Till; Wackrow, Rene; Chandler, Jim H.
2016-12-01
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing.
Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number by which to judge whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are visually correct, making the algorithm applicable for UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application for the algorithm.
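The re-blur comparison idea behind SIEDS can be approximated in a few lines of NumPy. This sketch uses a grayscale image, a simple box blur and a gradient-magnitude difference; it is an approximation of the concept, not the published SIEDS computation (which operates on the saturation channel).

```python
import numpy as np

def box_blur(gray):
    """3x3 box blur with edge padding; creates the internal comparison image."""
    g = np.asarray(gray, dtype=float)
    pad = np.pad(g, 1, mode='edge')
    return sum(pad[i:i + g.shape[0], j:j + g.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def blur_metric(gray):
    """Standard deviation of the edge-magnitude difference between an image
    and its re-blurred copy. Sharp images lose more edge energy when
    re-blurred, so larger values indicate a sharper input."""
    g = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(g)
    by, bx = np.gradient(box_blur(g))
    return float(np.std(np.hypot(gx, gy) - np.hypot(bx, by)))
```

As the abstract stresses, the metric is relative: a single value is only meaningful when compared against the values of the other images in the same dataset.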
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors such as their multiplicity, measurement precision and distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.
Thermal image analysis using the serpentine method
NASA Astrophysics Data System (ADS)
Koprowski, Robert; Wilczyński, Sławomir
2018-03-01
Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known methods of image analysis and processing together with new ones proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained from the watershed method and the hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis make it possible to determine the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method provides new quantitative information on thermal imaging. Since it allows for image segmentation and designation of the contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.
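The serpentine unfolding itself is straightforward. Below is a sketch of the boustrophedon flattening of a region of interest and the first two Fourier harmonics of the resulting 1D signal; the authors' full method also involves affine transformations of the image, which are omitted here, so this is an illustration of the principle only.

```python
import numpy as np

def serpentine_signal(roi):
    """Unfold a 2D region of interest row by row, reversing every other row
    (boustrophedon order), giving a continuous 1D temperature profile."""
    rows = [row if i % 2 == 0 else row[::-1]
            for i, row in enumerate(np.asarray(roi, dtype=float))]
    return np.concatenate(rows)

def first_harmonics(roi, n=2):
    """Magnitudes of the first n Fourier harmonics of the (mean-removed)
    serpentine signal; the paper uses the first and second harmonics to
    characterize the type of temperature change in the ROI."""
    s = serpentine_signal(roi)
    return np.abs(np.fft.rfft(s - s.mean()))[1:n + 1]
```

A smooth thermal gradient across the ROI concentrates energy in the lowest harmonic, while multiple separate heat sources push energy into higher harmonics, which is what makes the harmonic magnitudes diagnostic.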
MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.
2011-01-01
MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.
Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae
2009-01-01
In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle used to carry the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: the top view alignment and the side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
Automatic Dissection Of Plantlets
NASA Astrophysics Data System (ADS)
Batchelor, B. G.; Harris, I. P.; Marchant, J. A.; Tillett, R. D.
1989-03-01
Micropropagation is a technique used in horticulture for generating a monoclonal colony of plants. A tiny plantlet is cut into several parts, each of which is then replanted. At the moment, the cutting is performed manually. Automating this task would have significant economic benefits. A robot designed to dissect plants would need to be equipped with intelligent visual sensing. This article is concerned with the image acquisition and processing techniques which such a machine might use. A program, which can calculate where to cut a plant with an "open" structure, is presented. This is expressed in the ProVision language, which is described in another article presented at this conference. (Article 1002-65)
ORBS: A reduction software for SITELLE and SpiOMM data
NASA Astrophysics Data System (ADS)
Martin, Thomas
2014-09-01
ORBS merges, corrects, transforms and calibrates interferometric data cubes and produces a spectral cube of the observed region for analysis. It is a fully automatic data reduction software package for use with SITELLE (installed at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont Mégantic); these imaging Fourier transform spectrometers obtain a hyperspectral data cube which samples a 12 arc-minute field of view into 4 million visible spectra. ORBS is highly parallelized; its core classes (ORB) have been designed to be used in a suite of software for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS).
Evaluation of areas prepared for planting using LANDSAT data. M.S. Thesis; [Ribeirao Preto, Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Duarte, V.
1983-01-01
Three different algorithms (SINGLE-CELL, MAXVER and MEDIA K) were used to automatically interpret data from LANDSAT observations of an area of Ribeirao Preto, Brazil. Photographic transparencies were obtained, projected and visually interpreted. The results show that: (1) the MAXVER algorithm presented a better classification performance; (2) verification of the changes in cultivated areas using data from the three different acquisition dates was possible; (3) water bodies, degraded lands, urban areas, and fallow fields were frequently mistaken for cultivated soils; and (4) the use of projected photographic transparencies furnished satisfactory results, besides reducing the time spent on the Image-100 system.
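MAXVER-style classification is per-pixel Gaussian maximum likelihood. A compact sketch under the usual assumptions (class-conditional multivariate normals estimated from training pixels, equal priors) is given below; it illustrates the principle, not the Image-100 implementation.

```python
import numpy as np

class GaussianMLClassifier:
    """Per-class multivariate Gaussian maximum-likelihood classifier, the
    principle behind MAXVER-style supervised pixel classification."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Small ridge keeps the covariance invertible
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            sign, logdet = np.linalg.slogdet(cov)
            self.params_[c] = (mu, np.linalg.inv(cov), logdet)
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, inv_cov, logdet = self.params_[c]
            d = X - mu
            mahal = np.einsum('ij,jk,ik->i', d, inv_cov, d)
            scores.append(-0.5 * (mahal + logdet))  # log-likelihood + const
        return self.classes_[np.argmax(scores, axis=0)]
```

Each pixel's vector of band values is assigned to the class with the highest Gaussian log-likelihood, which is why spectrally similar covers (e.g. fallow fields and cultivated soils) are easily confused.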
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. 
A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
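The second-derivative rule that this study found most accurate can be sketched in a few lines: the threshold is the intensity at the position where the second derivative of the edge profile is most negative. This is an illustrative reconstruction, not the authors' code, and the function name is hypothetical.

```python
import numpy as np

def second_derivative_threshold(profile):
    # Numerical second derivative of a 1-D intensity profile across the
    # edge of a fluorescing object.
    d2 = np.gradient(np.gradient(profile))
    # Best-performing rule in the study: take the intensity at the
    # location of the minimum (most negative point) of the second derivative.
    return profile[np.argmin(d2)]
```

On a smooth rising edge this picks an intensity above the midpoint, which is consistent with the finding that midpoint and first-derivative rules tend to misplace the boundary.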
NASA Astrophysics Data System (ADS)
Jeon, P.-H.; Lee, C.-L.; Kim, D.-H.; Lee, Y.-J.; Jeon, S.-S.; Kim, H.-J.
2014-03-01
Multi-detector computed tomography (MDCT) can be used to easily and rapidly perform numerous acquisitions, possibly leading to a marked increase in the radiation dose to individual patients. Technical options dedicated to automatically adjusting the acquisition parameters according to the patient's size are of specific interest in pediatric radiology. A constant tube potential reduction can be achieved for adults and children, while maintaining a constant detector energy fluence. To evaluate radiation dose, the weighted CT dose index (CTDIw) was calculated based on the CT dose index (CTDI) measured using an ion chamber, and image noise and image contrast were measured from a scanned image to evaluate image quality. The dose-weighted contrast-to-noise ratio (CNRD) was calculated from the radiation dose, image noise, and image contrast measured from a scanned image. The noise derivative (ND) is a quality index for dose efficiency. X-ray spectra with tube voltages ranging from 80 to 140 kVp were used to compute the average photon energy. Image contrast and the corresponding contrast-to-noise ratio (CNR) were determined for lesions of soft tissue, muscle, bone, and iodine relative to a uniform water background, as the iodine contrast increases at lower energy (i.e., the k-edge of iodine at 33 keV lies closer to the beam energy), using mixed water-iodine contrast normalization (water 0, iodine 25, 100, 200, and 1000 HU, respectively). The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed. The potential benefit of lowering the tube voltage is an improved CNRD, resulting in a lower radiation dose and optimization of image quality. 
Adjusting the tube potential in abdominal CT would be useful in current pediatric radiography, where the choice of X-ray techniques generally takes into account the size of the patient as well as the need to balance the conflicting requirements of diagnostic image quality and radiation dose optimization.
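The figures of merit used above can be sketched with standard definitions. The CTDIw weighting (one third centre, two thirds periphery) is conventional; the sqrt-dose normalization for CNRD is one common convention and is an assumption here, as are the function names.

```python
import math

def ctdi_w(ctdi_center, ctdi_periphery):
    # Weighted CT dose index: 1/3 centre measurement + 2/3 periphery measurement.
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

def cnr(mean_lesion, mean_background, noise_sd):
    # Contrast-to-noise ratio: image contrast divided by image noise.
    return abs(mean_lesion - mean_background) / noise_sd

def cnrd(cnr_value, dose):
    # Dose-weighted CNR; normalizing by sqrt(dose) is one common convention.
    return cnr_value / math.sqrt(dose)
```

With these definitions, lowering tube voltage improves CNRD whenever the contrast gain outpaces the combined growth of noise and sqrt(dose).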
Ship Speed Retrieval From Single Channel TerraSAR-X Data
NASA Astrophysics Data System (ADS)
Soccorsi, Matteo; Lehner, Susanne
2010-04-01
A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine application and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single-channel image corresponds to an image time series with reduced resolution. This allows applying change detection techniques on the time series to evaluate the velocity components in range and azimuth of the ship. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. In order to apply the technique on TerraSAR-X Spot Light (SL) data a further processing step is needed. The phase has to be corrected as presented in Eineder et al. (2009) due to the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, when possible validated by the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.
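The azimuth component described above amounts to measuring the target's pixel displacement between consecutive single-look frames and dividing by the frame interval. A toy sketch using 1-D cross-correlation of azimuth profiles; all names and parameters are hypothetical, not the MARISS implementation.

```python
import numpy as np

def azimuth_shift(frame_a, frame_b):
    # Collapse each frame to a 1-D azimuth profile, then find the lag that
    # maximizes the cross-correlation of the mean-subtracted profiles.
    pa = frame_a.sum(axis=0)
    pb = frame_b.sum(axis=0)
    corr = np.correlate(pb - pb.mean(), pa - pa.mean(), mode="full")
    return int(np.argmax(corr)) - (len(pa) - 1)

def azimuth_velocity(shift_px, pixel_spacing_m, frame_dt_s):
    # Displacement in pixels -> metres -> metres per second.
    return shift_px * pixel_spacing_m / frame_dt_s
```

A bright target moving four azimuth pixels between frames, at 2 m pixel spacing and 1 s frame spacing, yields 8 m/s.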
Measurement of gastric meal and secretion volumes using magnetic resonance imaging
Hoad, C.L.; Parker, H.; Hudders, N.; Costigan, C.; Cox, E.F.; Perkins, A.C.; Blackshaw, P.E.; Marciani, L.; Spiller, R.C.; Fox, M.R.; Gowland, P.A.
2015-01-01
MRI can assess multiple gastric functions without ionizing radiation. However, time-consuming image acquisition and analysis of gastric volume data, plus confounding of gastric emptying measurements by gastric secretions mixed with the test meal, have limited its use to research centres. This study presents an MRI acquisition protocol and analysis algorithm suitable for the clinical measurement of gastric volume and secretion volume. Reproducibility of gastric volume measurements was assessed using data from 10 healthy volunteers following a liquid test meal with rapid MRI acquisition within one breath-hold and semi-automated analysis. Dilution of the ingested meal with gastric secretion was estimated using a respiratory-triggered T1 mapping protocol. Accuracy of the secretion volume measurements was assessed using data from 24 healthy volunteers following a mixed (liquid/solid) test meal, with MRI meal volumes compared to data acquired using gamma scintigraphy (GS) on the same subjects studied on a separate study day. The mean (SD) coefficient of variation among 3 observers for both total gastric contents (including meal, secretions and air) and just the gastric contents (meal and secretion only) was 3 (2) % at large gastric volumes (> 200 ml). Mean (SD) secretion volumes post meal ingestion were 64 (51) ml and 110 (40) ml at 15 and 75 minutes respectively. Comparison with GS meal volumes showed that MRI meal-only volumes (after correction for secretion volume) were similar to GS, with a linear regression gradient (std err) of 1.06 (0.10) and intercept −11 (24) ml. In conclusion, (i) rapid acquisition removed the requirement to image during a prolonged breath-hold; (ii) semi-automatic analysis greatly reduced the time required to derive measurements; and (iii) correction for secretion volumes provides accurate assessment of gastric meal volumes and emptying. Together these features provide the scientific basis of a protocol which would be suitable in clinical practice. 
PMID:25592405
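The inter-observer reproducibility figure quoted above is a coefficient of variation across observers, computed per subject. A minimal sketch of that computation; the names are hypothetical and this is not the study's analysis code.

```python
import numpy as np

def interobserver_cv(volumes):
    # volumes: rows = subjects, columns = observers (ml).
    # Per-subject coefficient of variation (sample SD / mean), in percent.
    v = np.asarray(volumes, dtype=float)
    return 100.0 * v.std(axis=1, ddof=1) / v.mean(axis=1)
```

Three observers reading one subject as 200, 210 and 205 ml gives a CV of about 2.4%, in the same range as the 3 (2) % reported above.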
Tropical Timber Identification using Backpropagation Neural Network
NASA Astrophysics Data System (ADS)
Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.
2017-01-01
Each type of wood has different characteristics. Identifying the type of wood properly is important, especially for industries that need to know the type of timber specifically. However, such identification requires expertise, and only a limited number of experts are available. In addition, manual identification, even by experts, is rather inefficient because it requires a lot of time and is prone to human error. To overcome these problems, a digital-image-based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray-level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
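The decimal-scaling normalization stage mentioned above has a standard definition: divide every value by the smallest power of ten that brings all magnitudes below one. A sketch (function name hypothetical):

```python
def decimal_scaling(values):
    # Find the smallest j such that max|v| / 10**j < 1, then rescale.
    m = max(abs(v) for v in values)
    j = 0
    while m / (10 ** j) >= 1:
        j += 1
    return [v / (10 ** j) for v in values]
```

For example, the feature vector [734, -120, 8] is divided by 10^3, giving [0.734, -0.12, 0.008], which keeps all inputs to the backpropagation network on a comparable scale.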
Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.
Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J
2017-09-01
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
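The modulation parameterization described above, a linear combination of Gaussian basis functions over gantry angle, can be sketched as follows; the names, basis placement and bandwidth are illustrative assumptions, not the authors' code.

```python
import numpy as np

def modulation_profile(angles, weights, centers, sigma):
    # Tube current at each gantry angle: weighted sum of Gaussian bumps.
    # angles: (A,) gantry angles; centers: (B,) basis centres; weights: (B,).
    basis = np.exp(-0.5 * ((angles[:, None] - centers[None, :]) / sigma) ** 2)
    return basis @ weights
```

An optimizer such as CMA-ES then searches over the weight vector (and penalty weights) to maximize the predicted detectability index d'.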
Application of automatic image analysis in wood science
Charles W. McMillin
1982-01-01
In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...
The ALICE-HMPID Detector Control System: Its evolution towards an expert and adaptive system
NASA Astrophysics Data System (ADS)
De Cataldo, G.; Franco, A.; Pastore, C.; Sgura, I.; Volpe, G.
2011-05-01
The High Momentum Particle IDentification (HMPID) detector is a proximity focusing Ring Imaging Cherenkov (RICH) for charged hadron identification. The HMPID is based on liquid C6F14 as the radiator medium and on a 10 m² CsI-coated, pad-segmented photocathode of MWPCs for UV Cherenkov photon detection. To ensure full remote control, the HMPID is equipped with a detector control system (DCS) responding to industrial standards for robustness and reliability. It has been implemented using PVSS as Slow Control And Data Acquisition (SCADA) environment, Programmable Logic Controllers as control devices and Finite State Machines for modular and automatic command execution. In the perspective of reducing human presence at the experiment site, this paper focuses on DCS evolution towards an expert and adaptive control system, providing, respectively, automatic error recovery and stable detector performance. HAL9000, the first prototype of the HMPID expert system, is then presented. Finally an analysis of the possible application of the adaptive features is provided.
Real-Time 3D Reconstruction from Images Taken from an UAV
NASA Astrophysics Data System (ADS)
Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.
2015-08-01
We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense and true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera, mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For its characteristics, the designed method is suitable for video-surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
Retrospective data-driven respiratory gating for PET/CT
NASA Astrophysics Data System (ADS)
Schleyer, Paul J.; O'Doherty, Michael J.; Barrington, Sally F.; Marsden, Paul K.
2009-04-01
Respiratory motion can adversely affect both PET and CT acquisitions. Respiratory gating allows an acquisition to be divided into a series of motion-reduced bins according to the respiratory signal, which is typically hardware acquired. In order that the effects of motion can potentially be corrected for, we have developed a novel, automatic, data-driven gating method which retrospectively derives the respiratory signal from the acquired PET and CT data. PET data are acquired in listmode and analysed in sinogram space, and CT data are acquired in cine mode and analysed in image space. Spectral analysis is used to identify regions within the CT and PET data which are subject to respiratory motion, and the variation of counts within these regions is used to estimate the respiratory signal. Amplitude binning is then used to create motion-reduced PET and CT frames. The method was demonstrated with four patient datasets acquired on a 4-slice PET/CT system. To assess the accuracy of the data-derived respiratory signal, a hardware-based signal was acquired for comparison. Data-driven gating was successfully performed on PET and CT datasets for all four patients. Gated images demonstrated respiratory motion throughout the bin sequences for all PET and CT series, and image analysis and direct comparison of the traces derived from the data-driven method with the hardware-acquired traces indicated accurate recovery of the respiratory signal.
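The data-driven pipeline above (spectral analysis to find respiration-affected regions, then amplitude binning of the recovered signal) can be illustrated with a simplified sketch. The band limits, the 50% in-band power criterion, and all names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def respiratory_signal(dyn, fs, band=(0.1, 0.5)):
    # dyn: (time, region) count series. Keep regions whose spectral power
    # is concentrated in the respiratory band, then average their courses.
    x = dyn - dyn.mean(axis=0)
    spec = np.abs(np.fft.rfft(x, axis=0)) ** 2
    freqs = np.fft.rfftfreq(dyn.shape[0], d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    frac = spec[in_band].sum(axis=0) / (spec.sum(axis=0) + 1e-12)
    mask = frac > 0.5          # region is "respiratory" if most power is in band
    return x[:, mask].mean(axis=1), mask

def amplitude_bins(signal, n_bins):
    # Equal-width amplitude binning between the signal extremes.
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
    return np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)
```

Each time frame is then assigned to the motion-reduced PET/CT bin indexed by its amplitude bin.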
Code of Federal Regulations, 2010 CFR
2010-07-01
... of which is not the acquisition, storage, manipulation, management, movement, control, display... of equipment, that is used in the automatic acquisition, storage, manipulation, management, movement... agency can demonstrate would result in a fundamental alteration in its nature; or (2) With respect to any...
Strategic and Nonstrategic Information Acquisition.
ERIC Educational Resources Information Center
Berger, Charles R.
2002-01-01
Uses dual-process theories and research concerned with automaticity and the role conceptual short-term memory plays in visual information processing to illustrate both the ubiquity of nonstrategic information acquisition during interpersonal communication and its potential consequences on judgments and behavior. Discusses theoretical and…
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura
2013-04-01
There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here an automatic flood mapping application is introduced. The latter is currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. It is worth mentioning that the method can be applied to both medium and high resolution SAR images. The flood mapping application consists of two main blocks: 1) A set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive. 2) An algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology, which combines histogram thresholding, region growing and change detection as an approach enabling the automatic, objective and reliable flood extent extraction from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reduce over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, considering as input data a flood image and a reference image. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate pre-flood reference image. Potential users will also be able to apply the implemented flood delineation algorithm. Case studies of several recent high magnitude flooding events (e.g. 
July 2007 Severn River flood, UK and March 2010 Red River flood, US) observed by high-resolution SAR sensors as well as airborne photography highlight advantages and limitations of the online application. A mid-term target is the exploitation of ESA SENTINEL 1 SAR data streams. In the long term it is foreseen to develop a potential extension of the application for systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis. On-going research activities investigate the usefulness of the method for mapping flood hazard at global scale using databases of historic SAR remote sensing-derived flood inundation maps.
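The hybrid thresholding / region-growing / change-detection idea can be illustrated with a deliberately simplified sketch on backscatter images in dB. The thresholds and function names are hypothetical, not the G-POD implementation.

```python
import numpy as np

def region_grow(seeds, allowed):
    # Grow seed pixels through 4-connected neighbours inside `allowed`.
    mask = seeds & allowed
    while True:
        nb = mask.copy()
        nb[1:, :] |= mask[:-1, :]
        nb[:-1, :] |= mask[1:, :]
        nb[:, 1:] |= mask[:, :-1]
        nb[:, :-1] |= mask[:, 1:]
        nb &= allowed
        if (nb == mask).all():
            return mask
        mask = nb

def flood_extent(crisis_db, reference_db, seed_thr=-18.0,
                 grow_thr=-14.0, drop_db=3.0):
    seeds = crisis_db < seed_thr                    # confident open-water pixels
    grown = region_grow(seeds, crisis_db < grow_thr)
    # Change detection vs. the pre-flood reference suppresses permanently
    # dark surfaces such as smooth roads and radar shadow.
    return grown & (crisis_db < reference_db - drop_db)
```

Dark regions that are not connected to any confident seed, or that were already dark in the reference image, are rejected.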
Real-time automatic registration in optical surgical navigation
NASA Astrophysics Data System (ADS)
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming
2016-05-01
An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
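Point-based patient-to-image registration of the localized fiducials is typically solved with a closed-form least-squares rigid transform (the Kabsch/Horn method). A sketch under that assumption, not the authors' code:

```python
import numpy as np

def rigid_register(src, dst):
    # Least-squares rigid transform: dst ≈ R @ src + t for paired 3-D points.
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t
```

With the markers matched between patient space and image space, this recovers the rotation and translation in one step; target registration error can then be evaluated at points away from the fiducials.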
A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes
2011-01-01
Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom.
In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284
A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-20
A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. 
In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.
Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images
NASA Astrophysics Data System (ADS)
Li, Yan; Shekhar, Raj; Huang, David
2002-05-01
Optical coherence tomography (OCT) provides a non-contact and non-invasive means to visualize the corneal anatomy at micron-scale resolution. We obtained corneal images from an arc-scanning (converging) OCT system operating at a wavelength of 830 nm and a fan-shaped-scanning high-speed OCT system with an operating wavelength of 1310 nm. Different scan protocols (arc/fan) and data acquisition rates, as well as wavelength-dependent bio-tissue backscatter contrast and optical absorption, make the images acquired using the two systems different. We developed image-processing algorithms to automatically detect the air-tear interface, epithelium-Bowman's layer interface, laser in-situ keratomileusis (LASIK) flap interface, and the cornea-aqueous interface in both kinds of images. The overall segmentation scheme for 830 nm and 1310 nm OCT images was similar, although different strategies were adopted for specific processing approaches. Ultrasound pachymetry measurements of the corneal thickness and Placido-ring-based corneal topography measurements of the corneal curvature were made on the same day as the OCT examination. Anterior/posterior corneal surface curvature measurement with OCT was also investigated. Results showed that automated segmentation of OCT images could evaluate the anatomic outcome of LASIK surgery.
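A common building block for such interface detection is picking the strongest axial intensity gradients along each A-scan, with a minimum separation between interfaces. A simplified 1-D sketch; parameters and names are hypothetical, not the authors' algorithm.

```python
import numpy as np

def detect_interfaces(ascan, n_interfaces=2, min_sep=5):
    # Interfaces show up as strong axial intensity gradients.
    g = np.abs(np.gradient(ascan.astype(float)))
    picked = []
    for idx in np.argsort(g)[::-1]:        # strongest gradients first
        if all(abs(int(idx) - p) >= min_sep for p in picked):
            picked.append(int(idx))
        if len(picked) == n_interfaces:
            break
    return sorted(picked)
```

Running this along every A-scan and smoothing the detected depths across the B-scan yields continuous interface boundaries.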
NASA Astrophysics Data System (ADS)
Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.
2014-01-01
Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.
48 CFR 245.608-72 - Screening excess automatic data processing equipment (ADPE).
Code of Federal Regulations, 2010 CFR
2010-10-01
... data processing equipment (ADPE). 245.608-72 Section 245.608-72 Federal Acquisition Regulations System... Reporting, Redistribution, and Disposal of Contractor Inventory 245.608-72 Screening excess automatic data... Agency, Defense Automation Resources Management Program Division (DARMP). DARMP does all required...
Automatic Contour Tracking in Ultrasound Images
ERIC Educational Resources Information Center
Li, Min; Kambhamettu, Chandra; Stone, Maureen
2005-01-01
In this paper, a new automatic contour tracking system, EdgeTrak, for the ultrasound image sequences of human tongue is presented. The images are produced by a head and transducer support system (HATS). The noise and unrelated high-contrast edges in ultrasound images make it very difficult to automatically detect the correct tongue surfaces. In…
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs must not only be unaltered and authentic, capture context-relevant images, and meet certain minimum requirements for image sharpness and information density; color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color varies subjectively from person to person; moreover, as a discrete property of an image, color in digital photos is to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from the simplicity and speed of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend considering the use of a color management tool for the acquisition of all images that demand high true-color accuracy (in particular in the setting of injury documentation).
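Chart-based calibration of the kind described above is commonly modeled as a linear color-correction matrix fitted by least squares to the measured patch values against their reference values. A sketch under that assumption; the names are hypothetical and this is not the SpyderCheckr software.

```python
import numpy as np

def color_correction_matrix(measured, reference):
    # Solve measured @ M ≈ reference (both N x 3 RGB) in least squares.
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def correct(rgb, M):
    # Apply the fitted 3x3 correction to RGB rows.
    return rgb @ M
```

Once fitted from the chart, the same matrix is applied to every photograph taken under the same lighting and camera settings.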
48 CFR 370.601 - Funding and sponsorship.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Funding and sponsorship... SUPPLEMENTATIONS SPECIAL PROGRAMS AFFECTING ACQUISITION Conference Funding and Sponsorship 370.601 Funding and sponsorship. Funding a conference through an HHS contract does not automatically imply HHS (OPDIV/STAFFDIV...
NASA Astrophysics Data System (ADS)
Lesage, F.; Castonguay, A.; Tardif, P. L.; Lefebvre, J.; Li, B.
2015-09-01
A combined serial OCT/confocal scanner was designed to image large sections of biological tissue at microscopic resolution. Serial imaging of organs embedded in agarose blocks is performed with a vibratome that sequentially cuts slices to reveal fresh tissue to image, overcoming the limited light penetration encountered in microscopy. Two linear stages move the tissue with respect to the microscope objective, acquiring a 2D grid of volumes (1x1x0.3 mm) with OCT and a 2D grid of images (1x1 mm) with the confocal arm. This process is repeated automatically until the entire sample is imaged. Raw data are then post-processed to stitch the individual acquisitions and obtain a reconstructed volume of the imaged tissue. This design is being used to investigate correlations between white matter and microvasculature changes with aging and with the increase in pulse pressure following transaortic constriction in mice. The dual imaging capability of the system reveals complementary contrast: OCT imaging reveals changes in refractive index, giving contrast between white and grey matter in the mouse brain, while transcardial perfusion of FITC or pre-sacrifice injection of Evans Blue shows microvasculature properties in the brain with confocal imaging.
Efficient image acquisition design for a cancer detection system
NASA Astrophysics Data System (ADS)
Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet
2013-09-01
Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images require a huge amount of storage, thereby necessitating a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome in transmitting them efficiently. In addition, recent studies have raised concerns about the risks of low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.
Very Large Scale Integrated Circuits for Military Systems.
1981-01-01
ABBREVIATIONS A/D Analog-to-digital C AGC Automatic Gain Control A A/J Anti-jam ASP Advanced Signal Processor AU Arithmetic Units C.AD Computer-Aided...ESM) equipments (Ref. 23); in lieu of an adequate automatic processing capability, the function is now performed manually (Ref. 24), which involves...a human operator, displays, etc., and a sacrifice in performance (acquisition speed, saturation signal density). Various automatic processing
A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.
ERIC Educational Resources Information Center
Paul, James E., Jr.
Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…
Automatic dynamic range adjustment for ultrasound B-mode imaging.
Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo
2015-02-01
In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the displayed signal, and it is one of the most essential parameters determining image quality. Typically, DR is given a fixed value and adjusted manually by operators, which leads to low clinical productivity and high user dependency. Furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image based on that of a reference image, can be applied to determine the DR value. However, it can lead to an over-contrasted image. In this paper, a new Automatic Dynamic Range Adjustment (ADRA) method is presented that adaptively adjusts the DR value so that input images become similar to a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared to the fixed DR case. In addition, the proposed ADRA method was shown to outperform the histogram matching method on in vivo liver and kidney data. When using 3D abdominal data with 70 frames, although the CC value from the ADRA method increased only slightly (i.e., by 0.6%), the proposed method showed improved image quality in the c-plane compared to its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user.
Copyright © 2014 Elsevier B.V. All rights reserved.
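The distance-ratio idea behind ADRA can be sketched in a few lines. The abstract does not give the exact update rule, so the window-placement formula below (placing the display window around the input's log average using the reference image's average-to-extreme ratios) is an illustrative assumption; the Pearson correlation coefficient is the similarity measure the study reports.

```python
import math

def pearson_cc(a, b):
    # Correlation coefficient used to score similarity between
    # reference and adjusted input images.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def adra_window(reference_db, input_db):
    """Hypothetical form of the distance-ratio rule: the reference image
    fixes the ratio between its log average and its minimum; the input
    display window is then placed around the input's log average so that
    the same ratio holds."""
    ref_avg = sum(reference_db) / len(reference_db)
    ref_min, ref_max = min(reference_db), max(reference_db)
    r_low = (ref_avg - ref_min) / (ref_max - ref_min)
    in_avg = sum(input_db) / len(input_db)
    width = max(input_db) - min(input_db)
    low = in_avg - r_low * width
    return low, low + width  # display window [low, high] in dB
```

Feeding the reference image back in as the input recovers the reference's own extremes, which is a quick sanity check on the ratio construction.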
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required; the threshold can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is performed entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume (p=0.93) and root canal surface area (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
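The vendor's "Automatic Threshold Tool" is proprietary; a common stand-in for automatic threshold selection is Otsu's method, which picks the grey level that maximizes the between-class variance of the histogram. A minimal sketch, assuming 8-bit grey values:

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level maximizing between-class variance (Otsu).
    `pixels` is a flat list of integer grey values in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0
    for t in range(levels):
        w0 += hist[t]           # weight of the background class
        if w0 == 0:
            continue
        w1 = total - w0         # weight of the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0          # class means
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal distribution (e.g. dark dentin vs. bright canal lumen), the returned level falls between the two modes.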
Automatic RST-based system for a rapid detection of man-made disasters
NASA Astrophysics Data System (ADS)
Tramutoli, Valerio; Corrado, Rosita; Filizzola, Carolina; Livia Grimaldi, Caterina Sara; Mazzeo, Giuseppe; Marchese, Francesco; Pergola, Nicola
2010-05-01
Man-made disasters may cause injuries to citizens and damage to critical infrastructures. When it is not possible to prevent or foresee such disasters, it is desirable at least to detect the accident rapidly in order to intervene as soon as possible and minimize damage. In this context, the combination of a Robust Satellite Technique (RST), able to reliably identify actual accidents (i.e., without false alarms), and satellite sensors with high temporal resolution seems to assure both a reliable and a timely detection of abrupt Thermal Infrared (TIR) transients related to dangerous explosions. A processing chain based on the RST approach, suitable for automatically identifying harmful events in MSG-SEVIRI images, has been developed by the DIFA-UNIBAS team in the framework of the GMOSS and G-MOSAIC projects. Maps of thermal anomalies are generated every 15 minutes (i.e., the SEVIRI temporal repetition rate) over a selected area, together with kml files (containing information on the latitude and longitude of the "thermally" anomalous SEVIRI pixel centre, time of image acquisition, relative intensity of anomalies, etc.) for rapid visualization of the accident position, even on Google Earth. Results achieved in the cases of gas pipelines that recently exploded or were attacked in Russia and in Iraq are presented in this work.
NASA Astrophysics Data System (ADS)
Liu, Xiaoqi; Wang, Chengliang; Bai, Jianying; Liao, Guobin
2018-02-01
Portal hypertensive gastropathy (PHG) is common in gastrointestinal (GI) diseases, and the severe stage of PHG (S-PHG) is a source of active gastrointestinal bleeding. Generally, the diagnosis of PHG is made visually during endoscopic examination; compared with traditional endoscopy, wireless capsule endoscopy (WCE), which is noninvasive and painless, has become a prevalent tool for visual observation of PHG. However, accurate assessment of WCE images with PHG is a difficult task for physicians due to faint contrast and confusing variations in the background gastric mucosal tissue. Therefore, this paper proposes a comprehensive methodology to automatically detect S-PHG images in WCE video to help physicians accurately diagnose S-PHG. Firstly, a rough dominant-color-tone extraction approach is proposed to better describe the global color distribution information of the gastric mucosa. Secondly, a hybrid two-layer texture acquisition model is designed by integrating a co-occurrence matrix into the local binary pattern to depict the complex and unique local variation of the gastric mucosal microstructure. Finally, features of mucosal color and microstructure texture are merged into a linear support vector machine to accomplish this automatic classification task. Experiments were implemented on an annotated data set including 1,050 S-PHG and 1,370 normal images collected from 36 real patients of different nationalities, ages and genders. By comparison with three traditional texture extraction methods, our method performs best in the detection of S-PHG images in WCE video: the maxima of accuracy, sensitivity and specificity reach 0.90, 0.92 and 0.92, respectively.
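The paper's texture model integrates a co-occurrence matrix into the local binary pattern (LBP); the abstract does not specify the integration, but the plain LBP building block can be sketched as follows (8-neighbour variant, bit set when a neighbour is at least as bright as the centre):

```python
def lbp_code(img, y, x):
    """Basic 8-neighbour local binary pattern for an interior pixel of a
    2D list-of-lists grey image."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],
            img[y][x+1], img[y+1][x+1], img[y+1][x],
            img[y+1][x-1], img[y][x-1]]
    code = 0
    for i, n in enumerate(nbrs):
        if n >= c:          # each brighter-or-equal neighbour sets one bit
            code |= 1 << i
    return code

def lbp_histogram(img):
    # 256-bin histogram of LBP codes over all interior pixels.
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

The resulting histogram is the kind of texture descriptor that can be concatenated with color features and fed to a linear SVM.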
Research of spectacle frame measurement system based on structured light method
NASA Astrophysics Data System (ADS)
Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin
2016-10-01
Automatic eyeglass lens edging systems are now widely used to automatically cut and polish uncut lenses based on the spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing around the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light is proposed to measure the three-dimensional (3D) shape of the spectacle frame. First, we focus on the processing approach that addresses the deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposures and high dynamic range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, the Gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish between the object and the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. Through further fringe center extraction and depth information acquisition using a look-up table method, the 3D shape of the spectacle frame is recovered.
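Two of the enhancement steps named above, the Gamma transform and the median filter, can be sketched directly. The gamma value and window size below are illustrative placeholders, not the paper's actual settings:

```python
def gamma_transform(img, gamma=0.5, max_val=255):
    # Power-law intensity mapping; gamma < 1 brightens dark stripe regions.
    return [[int(max_val * (p / max_val) ** gamma) for p in row]
            for row in img]

def median_filter3(img):
    # 3x3 median filter on interior pixels (borders copied unchanged);
    # removes salt-and-pepper noise between fringe stripes.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(window)[4]   # median of 9 values
    return out
```

A lone bright impulse surrounded by dark pixels is removed by the median step, while the gamma step leaves the extremes 0 and 255 fixed.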
Compact multi-band fluorescent microscope with an electrically tunable lens for autofocusing
Wang, Zhaojun; Lei, Ming; Yao, Baoli; Cai, Yanan; Liang, Yansheng; Yang, Yanlong; Yang, Xibin; Li, Hui; Xiong, Daxi
2015-01-01
Autofocusing is a routine technique for correcting the focus drift that occurs in time-lapse microscopic image acquisition. To date, most automatic microscopes are designed around a distance-detection scheme to fulfill the autofocusing operation, which may suffer from the low contrast of the reflected signal due to the refractive index mismatch at the water/glass interface. To achieve high autofocusing speed with minimal motion artifacts, we developed a compact multi-band fluorescent microscope with an electrically tunable lens (ETL) device for autofocusing. A modified searching algorithm based on equidistant scanning and curve fitting is proposed, which no longer requires a single-peak focus curve and thus efficiently restrains the impact of external disturbance. This technique enables us to achieve an autofocusing time as short as 170 ms and a reproducibility of over 97%. The imaging head of the microscope has dimensions of 12 cm × 12 cm × 6 cm. This portable instrument can easily fit inside standard incubators for real-time imaging of living specimens. PMID:26601001
Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras
NASA Astrophysics Data System (ADS)
Holdener, D.; Nebiker, S.; Blaser, S.
2017-11-01
The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data yields maximum deviations of 3 cm over typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.
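The distance measurements of such a stereo system rest on the standard pinhole triangulation relation Z = f·B/d. A minimal sketch with illustrative numbers (not the paper's calibration values):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic rectified-stereo relation: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: stereo base in metres;
    disparity_px: horizontal pixel offset of the same point between the
    left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With a hypothetical 1000 px focal length and 10 cm baseline, a 50 px disparity corresponds to a point 2 m away, squarely in the 2-8 m indoor range the paper evaluates.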
Papadakis, Antonios E; Perisinakis, Kostas; Damilakis, John
2014-10-01
To study the effect of patient size, body region and modulation strength on tube current and image quality in CT examinations that use automatic tube current modulation (ATCM). Ten physical anthropomorphic phantoms that simulate individuals ranging from neonate, 1-, 5- and 10-year-old children to adults of various body habitus were employed. CT acquisition of head, neck, thorax and abdomen/pelvis was performed with ATCM activated at weak, average and strong modulation strength. The mean modulated mAs (mAsmod) values were recorded. Image noise was measured at selected anatomical sites. From the neonate to the 10-year-old phantom, the recorded mAsmod increased by 30 %, 14 %, 6 % and 53 % for head, neck, thorax and abdomen/pelvis, respectively (P < 0.05). The mAsmod was lower than the preselected mAs, with the exception of the 10-year-old phantom. In paediatric and adult phantoms, the mAsmod ranged from 44 and 53 for weak to 117 and 93 for strong modulation strength, respectively. At the same exposure parameters, image noise increased with body size (P < 0.05). The ATCM system studied here may affect dose differently for different patient habitus. Dose may decrease for overweight adults but increase for children older than 5 years. Care should be taken when implementing ATCM protocols to ensure that image quality is maintained. • ATCM efficiency is related to the size of the patient's body. • ATCM may be activated without caution in overweight adult individuals. • ATCM may increase radiation dose in children older than 5 years old. • ATCM efficiency depends on the protocol selected for a specific anatomical region. • Modulation strength may be appropriately tuned to enhance ATCM efficiency.
Initial Experience With Ultra High-Density Mapping of Human Right Atria.
Bollmann, Andreas; Hilbert, Sebastian; John, Silke; Kosiuk, Jedrzej; Hindricks, Gerhard
2016-02-01
Recently, an automatic, high-resolution mapping system has been presented that accurately and quickly identifies right atrial (RA) geometry and activation patterns in animals, but human data are lacking. This study aims to assess the clinical feasibility and accuracy of high-density electroanatomical mapping of various RA arrhythmias. Electroanatomical maps of the RA (35 partial and 24 complete) were created in 23 patients using a novel mini-basket catheter with 64 electrodes and automatic electrogram annotation. Median acquisition time was 6:43 minutes (0:39-23:05 minutes), with shorter times for partial (4.03 ± 4.13 minutes) than for complete maps (9.41 ± 4.92 minutes). During mapping, 3,236 (710-16,306) data points were automatically annotated without manual correction. Maps obtained during sinus rhythm created geometry consistent with CT imaging and demonstrated activation originating at the middle to superior crista terminalis, while maps during CS pacing showed right atrial activation beginning at the infero-septal region. Activation patterns were consistent with cavotricuspid isthmus-dependent atrial flutter (n = 4), complex reentry tachycardia (n = 1), or ectopic atrial tachycardia (n = 2). His bundle and fractionated potentials in the slow pathway region were automatically detected in all patients. Ablation of the cavotricuspid isthmus (n = 9), the atrio-ventricular node (n = 2), atrial ectopy (n = 2), and the slow pathway (n = 3) was performed successfully and safely. RA mapping with this automatic high-density mapping system is fast, feasible, and safe. It is possible to reproducibly identify propagation of atrial activation during sinus rhythm, various tachycardias, and also complex reentrant arrhythmias. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Mazur, T; Yang, D
Purpose: To investigate an approach for automatically recognizing anatomical sites and imaging views (the orientation of the image acquisition) in 2D X-ray images. Methods: A hierarchical (binary tree) multiclass recognition model was developed to recognize the treatment sites and views in X-ray images. From the top to the bottom of the tree, the treatment sites are grouped hierarchically from more general to more specific. Each node in the hierarchical model was designed to assign images to one of two categories of anatomical sites. The binary image classification function of each node is implemented using a PCA transformation and a support vector machine (SVM) model. The optimal PCA transformation matrices and SVM models are obtained by learning from a set of sample images. Alternatives of the hierarchical model were developed to support three scenarios of site recognition that may occur in radiotherapy clinics, including two or one X-ray images with or without view information. The performance of the approach was tested with images of 120 patients from six treatment sites (brain, head-neck, breast, lung, abdomen and pelvis), with 20 patients per site and two views (AP and RT) per patient. Results: Given two images in known orthogonal views (AP and RT), the hierarchical model achieved a 99% average F1 score in recognizing the six sites. Site-specific view recognition models achieved 100% accuracy. The computation time to process a new patient case (preprocessing, site and view recognition) is 0.02 seconds. Conclusion: The proposed hierarchical model of site and view recognition is effective and computationally efficient. It could be useful for automatically and independently confirming the treatment sites and views in daily-setup 2D X-ray images. It could also be applied to guide subsequent image processing tasks, e.g., site- and view-dependent contrast enhancement and image registration.
The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
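The hierarchical dispatch described above can be sketched as a binary tree whose internal nodes hold a two-way decision function (a stub here standing in for the paper's PCA + SVM step) and whose leaves hold site labels. The thresholds, feature vectors and site subsets below are toy placeholders, not the paper's learned models:

```python
class Node:
    """One node of the hierarchical recognition tree. `classify` is a
    binary decision function; leaf nodes carry the final site label."""
    def __init__(self, label=None, classify=None, left=None, right=None):
        self.label, self.classify = label, classify
        self.left, self.right = left, right

def recognize(node, image):
    # Walk from the root; each binary classifier picks the next subtree,
    # moving from general site groups toward a specific site.
    while node.label is None:
        node = node.left if node.classify(image) else node.right
    return node.label

# Toy tree: "images" are feature vectors; stub classifiers threshold one
# feature where the real model would apply PCA projection + SVM.
tree = Node(
    classify=lambda img: img[0] < 0.5,                      # general split
    left=Node(classify=lambda img: img[1] < 0.5,            # more specific
              left=Node(label="brain"), right=Node(label="head-neck")),
    right=Node(classify=lambda img: img[1] < 0.5,
               left=Node(label="thorax"), right=Node(label="pelvis")))
```

Swapping each lambda for a trained PCA + SVM pair recovers the structure the paper describes, with one binary model per internal node.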
Leiner, Tim; Vink, Eva E.; Blankestijn, Peter J.; van den Berg, Cornelis A.T.
2017-01-01
Purpose: Renal dynamic contrast‐enhanced (DCE) MRI provides information on renal perfusion and filtration. However, clinical implementation is hampered by challenges in postprocessing as a result of misalignment of the kidneys due to respiration. We propose to perform automated image registration using the fat‐only images derived from a modified Dixon reconstruction of a dual‐echo acquisition, because these provide consistent contrast over the dynamic series. Methods: DCE data of 10 hypertensive patients were used. Dual‐echo images were acquired at 1.5 T with a temporal resolution of 3.9 s during contrast agent injection. Dixon fat, water, in‐phase and opposed‐phase (OP) images were reconstructed. Postprocessing was automated. Registration was performed both to fat images and to OP images for comparison. Perfusion and filtration values were extracted from a two‐compartment model fit. Results: Automatic registration to fat images performed better than automatic registration to OP images with visible contrast enhancement. Median vertical misalignment of the kidneys was 14 mm prior to registration, compared to 3 mm and 5 mm with registration to fat images and OP images, respectively (P = 0.03). Mean perfusion values and MR‐based glomerular filtration rates (GFR) were 233 ± 64 mL/100 mL/min and 60 ± 36 mL/minute, respectively, based on fat‐registered images. MR‐based GFR correlated with creatinine‐based GFR (P = 0.04) for fat‐registered images. For unregistered and OP‐registered images, this correlation was not significant. Conclusion: Absence of contrast changes on Dixon fat images improves registration in renal DCE MRI and enables automated postprocessing, resulting in a more accurate estimation of GFR. Magn Reson Med 80:66–76, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29134673
Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki
2016-11-28
A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
Fuzzy C-means classification for corrosion evolution of steel images
NASA Astrophysics Data System (ADS)
Trujillo, Maite; Sadki, Mustapha
2004-05-01
An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies, which minimize project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory to the classification of steel surfaces according to rust exposure time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm, and we analyze two kinds of data to study the classification performance. Firstly, we investigate the use of raw image pixels, without any pre-processing, and of neighborhood pixels. Secondly, we apply Gaussian noise with different standard deviations to the images to study the tolerance of the FCM method to Gaussian noise. The noisy images simulate possible perturbations of the images due to the weather or rust deposits on the steel surfaces during typical on-site acquisition procedures.
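A minimal one-dimensional fuzzy C-means loop (two clusters, the standard inverse-distance membership update with fuzzifier m) illustrates the algorithm applied here to pixel data; the initialisation and iteration count are simplifying assumptions:

```python
def fcm_1d(data, m=2.0, iters=50):
    """Minimal two-cluster, one-dimensional fuzzy C-means: alternate the
    membership update u_ij = 1 / sum_k (d_ij/d_ik)^(2/(m-1)) and the
    weighted centre update."""
    centers = [min(data), max(data)]          # simple initialisation
    U = []
    for _ in range(iters):
        U = []
        for x in data:
            row = []
            for cj in centers:
                d = abs(x - cj) or 1e-12      # guard against zero distance
                s = sum((d / (abs(x - ck) or 1e-12)) ** (2 / (m - 1))
                        for ck in centers)
                row.append(1.0 / s)           # u in [0, 1]; rows sum to 1
            U.append(row)
        centers = [sum(U[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(U[i][j] ** m for i in range(len(data)))
                   for j in range(len(centers))]
    return centers, U
```

On well-separated data, the centres converge near the two group means, and the soft memberships express how strongly each pixel belongs to each rust class.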
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is also clearly visualized, so ground features can readily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are their strong dependence on light conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. Then, three-dimensional affine coordinate transformation is used to compare the data increment. Finally, given thresholds on height and color information are used for classification.
Texture analysis with statistical methods for wheat ear extraction
NASA Astrophysics Data System (ADS)
Bakhouche, M.; Cointault, F.; Gouton, P.
2007-01-01
In the agronomic domain, the simplification of crop counting, necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to conceive a mobile robot for natural image acquisition directly in the field, Arvalis first proposed that we detect the number of wheat ears in images by image processing before counting them, which will allow us to obtain the first component of the yield. In this paper we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods, which have been applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image. The K-means algorithm is implemented before the choice of a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the evaluation of the quality of the detection is currently done visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
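The unsupervised classification step can be sketched as two-class K-means over scalar texture features, followed by a midpoint threshold to highlight ear pixels. This is a simplification: the paper's features come from first- and higher-order statistics, and the class count and threshold rule here are assumptions.

```python
def kmeans_1d(values, iters=20):
    """Two-class Lloyd's algorithm over per-pixel scalar texture
    features, returning both class centres and a midpoint threshold."""
    c0, c1 = min(values), max(values)         # simple initialisation
    for _ in range(iters):
        # Assignment step: each value joins its nearest centre.
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        # Update step: recompute centres as group means.
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    # A threshold midway between the centres separates ear pixels
    # (high-texture class) from background.
    return c0, c1, (c0 + c1) / 2
```

Applying the returned threshold to the per-pixel feature map yields the binary ear mask whose connected components would then be counted.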
Coded aperture solution for improving the performance of traffic enforcement cameras
NASA Astrophysics Data System (ADS)
Masoudifar, Mina; Pourreza, Hamid Reza
2016-10-01
A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.
NASA Astrophysics Data System (ADS)
Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao
2018-01-01
Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbance during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing by a factor of 2.17 the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm). A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential for applications in biological imaging.
Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph
NASA Astrophysics Data System (ADS)
Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri
2018-01-01
The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at the high contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.
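VIP's principal component analysis of the PSF is the full method; as a baseline for the same idea, a per-pixel median over the frame stack can serve as the stellar reference to subtract before derotation and stacking. This sketch is a simplified stand-in, not actual VIP code:

```python
def median_subtract(cube):
    """Baseline ADI-style reference subtraction: build a reference frame
    as the per-pixel median over the stack of frames and subtract it
    from every frame, leaving residuals where a planet signal (which
    rotates relative to the stellar PSF) can survive.
    cube: list of frames, each a 2D list of floats."""
    n = len(cube)
    h, w = len(cube[0]), len(cube[0][0])
    ref = [[sorted(cube[f][y][x] for f in range(n))[n // 2]
            for x in range(w)] for y in range(h)]
    return [[[cube[f][y][x] - ref[y][x] for x in range(w)]
             for y in range(h)] for f in range(n)]
```

PCA generalizes this by modelling the PSF with several principal components instead of a single median frame, which is what the VIP pipeline above performs.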
Acquisition of Automatic Imitation Is Sensitive to Sensorimotor Contingency
ERIC Educational Resources Information Center
Cook, Richard; Press, Clare; Dickinson, Anthony; Heyes, Cecilia
2010-01-01
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror…
Haneder, Stefan; Siedek, Florian; Doerner, Jonas; Pahn, Gregor; Grosse Hokamp, Nils; Maintz, David; Wybranski, Christian
2018-01-01
Background A novel multi-energy, dual-layer spectral detector computed tomography (SDCT) system is now commercially available, with the vendor's claim that it yields the same or better quality of polychromatic, conventional CT images as modern single-energy CT scanners, without any radiation dose penalty. Purpose To intra-individually compare the quality of conventional polychromatic CT images acquired with a dual-layer spectral detector CT (SDCT) and the latest-generation 128-row single-energy-detector CT (CT128) from the same manufacturer. Material and Methods Fifty patients underwent portal-venous phase, thoracic-abdominal CT scans with the SDCT and prior CT128 imaging. The SDCT scanning protocol was adapted to yield a similar estimated dose length product (DLP) as the CT128. Patient dose optimization by automatic tube current modulation and CT image reconstruction with a state-of-the-art iterative algorithm were identical on both scanners. CT image contrast-to-noise ratio (CNR) was compared between the SDCT and CT128 in different anatomic structures. Image quality and noise were assessed independently by two readers with 5-point Likert scales. Volume CT dose index (CTDIvol) and DLP were recorded and normalized to 68 cm acquisition length (DLP68). Results The SDCT yielded higher mean CNR values of 30.0% ± 2.0% (26.4-32.5%) in all anatomic structures (P < 0.001) and excellent scores for qualitative parameters surpassing the CT128 (all P < 0.0001) with substantial inter-rater agreement (κ ≥ 0.801). Despite adapted scan protocols the SDCT yielded lower values for CTDIvol (-10.1 ± 12.8%), DLP (-13.1 ± 13.9%), and DLP68 (-15.3 ± 16.9%) than the CT128 (all P < 0.0001). Conclusion The SDCT scanner yielded better CT image quality than the CT128 at lower radiation dose parameters.
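The contrast-to-noise ratio compared between the two scanners has several common definitions; the abstract does not state which variant was used, so the following sketch assumes the widely used form (ROI/background contrast divided by background noise), with hypothetical Hounsfield-unit samples:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background).

    One common definition; published variants differ in the noise term.
    """
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

# Hypothetical HU samples from a liver ROI and an adjacent fat background ROI
liver = [118, 122, 120, 119, 121]
fat = [-102, -98, -100, -101, -99]
print(round(cnr(liver, fat), 1))
```

A 30% CNR advantage at matched dose, as reported, would correspond to either lower noise in the denominator or higher iodine contrast in the numerator.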
NASA Astrophysics Data System (ADS)
Revuelto, Jesús; Jonas, Tobias; López-Moreno, Juan Ignacio
2015-04-01
Snow distribution in mountain areas plays a key role in many processes such as runoff dynamics, ecological cycles or erosion rates. Nevertheless, the acquisition of high-resolution snow depth (SD) data in space and time is a complex task that requires remote sensing techniques such as Terrestrial Laser Scanning (TLS). These techniques demand intense field work to capture high-quality snowpack evolution over a specific time period. Combining TLS data with other remote sensing techniques (satellite images, photogrammetry…) and in-situ measurements could improve the available information on a variable subject to rapid topographic change. The aim of this study is to reconstruct daily SD distribution from lapse-rate images from a webcam and data from two to three TLS acquisitions during the snow melting periods of 2012, 2013 and 2014. This information is obtained at the Izas Experimental Catchment in the Central Spanish Pyrenees, a catchment of 33 ha with an elevation ranging from 2050 to 2350 m a.s.l. The lapse-rate images provide the Snow Covered Area (SCA) evolution at the study site, while TLS provides high-resolution information on SD distribution. With ground control points, the lapse-rate images are georectified and their information is rasterized onto a 1-meter resolution Digital Elevation Model. Subsequently, for each snow season, the Melt-Out Date (MOD) of each pixel is obtained. The reconstruction accumulates the estimated SD loss for each time step (day) in a distributed manner, starting for each grid cell at its MOD (note the reverse time evolution). To do so, the reconstruction is first adjusted in time and space as follows. Firstly, the degree-day factor (SD loss per positive average temperature) is calculated from measurements at an automatic weather station (AWS) located in the catchment. Afterwards, comparing the SD loss at the AWS during a specific time period (i.e. between two TLS acquisitions) to that melted in each grid cell, a coefficient is obtained for spatially distributing the SD losses. For 2012 and 2013, three TLS acquisition campaigns were available during each melting period, so the first acquisition of each melting period was reserved for validation while the other two were used for adjusting the reconstruction. Validation revealed a very good performance of the reconstructed SD distribution when compared with the TLS data (r2 values between 0.74 and 0.8). When no calibration with TLS data was applied for distributing melt rates, that is, using the distribution coefficients of preceding years, rather similar accuracy was reached: with the spatial calibration of 2012 and 2013, the reconstructions for the two TLS acquisition dates in 2014 obtained r2 values between 0.73 and 0.76. This shows the usefulness of lapse-rate images to estimate not only SCA but also the spatial distribution of SD when combined with TLS acquisitions and point information on temperature and SD, and demonstrates the effectiveness of combining two remote sensing techniques for obtaining distributed information on snow depth.
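The degree-day reconstruction described above can be sketched as a backward loop per grid cell: starting from zero depth at the cell's melt-out date, each day's snow-depth loss (degree-day factor times positive mean temperature, scaled by the cell's distribution coefficient) is added back in reverse time. Function and variable names are assumptions for illustration; the study's actual implementation is raster-based:

```python
def reconstruct_sd(melt_out_day, daily_temps, ddf, cell_coeff):
    """Reconstruct daily snow depth (SD) for one grid cell, backwards in time.

    melt_out_day: index of the cell's melt-out date (SD = 0 that day).
    daily_temps: mean daily air temperature at the AWS, degrees C.
    ddf: degree-day factor (SD loss per positive degree-day).
    cell_coeff: coefficient distributing AWS melt rates to this cell.
    Returns a list sd where sd[d] is the reconstructed depth on day d.
    """
    sd = [0.0] * len(daily_temps)
    depth = 0.0
    for day in range(melt_out_day, -1, -1):
        sd[day] = depth
        # Stepping one day back in time, add back the melt of that day
        depth += ddf * cell_coeff * max(daily_temps[day], 0.0)
    return sd

temps = [2.0, 4.0, -1.0, 3.0, 5.0]   # deg C, days 0..4; day 2 has no melt
sd = reconstruct_sd(melt_out_day=4, daily_temps=temps, ddf=0.01, cell_coeff=1.5)
print(sd)
```

Note that days with non-positive temperature contribute no melt, so the reconstructed depth stays flat across them, as between days 1 and 2 here.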
NASA Astrophysics Data System (ADS)
He, Youmin; Qu, Yueqiao; Zhang, Yi; Ma, Teng; Zhu, Jiang; Miao, Yusi; Humayun, Mark; Zhou, Qifa; Chen, Zhongping
2017-02-01
Age-related macular degeneration (AMD) is an eye condition that is considered to be one of the leading causes of blindness among people over 50. Recent studies suggest that the mechanical properties in retina layers are affected during the early onset of disease. Therefore, it is necessary to identify such changes in the individual layers of the retina so as to provide useful information for disease diagnosis. In this study, we propose using an acoustic radiation force optical coherence elastography (ARF-OCE) system to dynamically excite the porcine retina and detect the vibrational displacement with phase resolved Doppler optical coherence tomography. Due to the vibrational mechanism of the tissue response, the image quality is compromised during elastogram acquisition. In order to properly analyze the images, all signals, including the trigger and control signals for excitation, as well as detection and scanning signals, are synchronized within the OCE software and are kept consistent between frames, making it possible for easy phase unwrapping and elasticity analysis. In addition, a combination of segmentation algorithms is used to accommodate the compromised image quality. An automatic 3D segmentation method has been developed to isolate and measure the relative elasticity of every individual retinal layer. Two different segmentation schemes based on random walker and dynamic programming are implemented. The algorithm has been validated using a 3D region of the porcine retina, where individual layers have been isolated and analyzed using statistical methods. The errors compared to manual segmentation will be calculated.
Oudman, Erik; Van der Stigchel, Stefan; Nijboer, Tanja C W; Wijnia, Jan W; Seekles, Maaike L; Postma, Albert
2016-03-01
Korsakoff's syndrome (KS) is characterized by explicit amnesia, but relatively spared implicit memory. The aim of this study was to assess to what extent KS patients can acquire spatial information while performing a spatial navigation task. Furthermore, we examined whether residual spatial acquisition in KS was based on automatic or effortful coding processes. Therefore, 20 KS patients and 20 matched healthy controls performed six tasks on spatial navigation after they navigated through a residential area. Ten participants per group were instructed to pay close attention (intentional condition), while 10 received mock instructions (incidental condition). KS patients showed hampered performance on a majority of tasks, yet their performance was superior to chance level on the route time and distance estimation tasks, a map drawing task and a route walking task. Performance was relatively spared on the route distance estimation task, but there were large variations between participants. Acquisition in KS was automatic rather than effortful, since no significant differences were obtained between the intentional and incidental conditions on any task, whereas for the healthy controls the intention to learn was beneficial for the map drawing task and the route walking task. The results of this study suggest that KS patients are still able to acquire spatial information during navigation in multiple domains despite the presence of explicit amnesia. Residual acquisition is most likely based on automatic coding processes. © 2014 The British Psychological Society.
Attention to Action: Willed and Automatic Control of Behavior.
1980-12-15
component is in need of supervisory assistance, has been suggested by LaBerge (1975), LaBerge and Samuels (1974), and Klein (1976). It is related to… (Ed.), Motor Control: Issues and Trends, New York: Academic Press, 1976. LaBerge, D., & Samuels, S.J. Toward a theory of automatic information processing in reading. Cognitive Psychology, 1974, 6, 293-323. LaBerge, D. Acquisition of automatic processing in perceptual and associative learning. In
Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.
1984-07-01
the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically… the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to… glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system
Satellite Imagery Via Personal Computer
NASA Technical Reports Server (NTRS)
1989-01-01
Automatic Picture Transmission (APT) was incorporated by NASA in the Tiros 8 weather satellite. APT included an advanced satellite camera that immediately transmitted a picture as well as low cost receiving equipment. When an advanced scanning radiometer was later introduced, ground station display equipment would not readily adjust to the new format until GSFC developed an APT Digital Scan Converter that made them compatible. A NASA Technical Note by Goddard's Vermillion and Kamoski described how to build a converter. In 1979, Electro-Services, using this technology, built the first microcomputer weather imaging system in the U.S. The company changed its name to Satellite Data Systems, Inc. and now manufactures the WeatherFax facsimile display graphics system which converts a personal computer into a weather satellite image acquisition and display workstation. Hardware, antennas, receivers, etc. are also offered. Customers include U.S. Weather Service, schools, military, etc.
NASA Astrophysics Data System (ADS)
Schwenk, Kurt; Willburger, Katharina; Pless, Sebastian
2017-10-01
Motivated by politics and economy, the monitoring of the world wide ship traffic is a field of high topicality. To detect illegal activities like piracy, illegal fishery, ocean dumping and refugee transportation is of great value. The analysis of satellite images on the ground delivers a great contribution to situation awareness. However, for many applications the up-to-dateness of the data is crucial. With ground based processing, the time between image acquisition and delivery of the data to the end user is in the range of several hours. The highest influence to the duration of ground based processing is the delay caused by the transmission of the large amount of image data from the satellite to the processing centre on the ground. One expensive solution to this issue is the usage of data relay satellites systems like EDRS. Another approach is to analyse the image data directly on-board of the satellite. Since the product data (e.g. ship position, heading, velocity, characteristics) is very small compared to the input image data, real-time connections provided by satellite telecommunication services like Iridium or Orbcomm can be used to send small packets of information directly to the end user without significant delay. The AMARO (Autonomous real-time detection of moving maritime objects) project at DLR is a feasibility study of an on-board ship detection system involving a real-time low bandwidth communication. The operation of a prototype on-board ship detection system will be demonstrated on an airborne platform. In this article, the scope, aim and design of a flight experiment for an on-board ship detection system scheduled for mid of 2018 is presented. First, the scope and the constraints of the experiment are explained in detail. The main goal is to demonstrate the operability of an automatic ship detection system on board of an airplane. For data acquisition the optical high resolution DLR MACS-MARE camera (VIS/NIR) is used. 
The system will be able to send product data, like position, size and a small image of the ship, directly to the user's smart-phone by email. The time between the acquisition of the image data and the delivery of the product data to the end user is aimed to be less than three minutes. For communication, the SMS-like Iridium Short Burst Data (SBD) service was chosen, providing a message size of around 300 bytes. Under optimal sending/receiving conditions, messages can be transmitted bidirectionally every 20 seconds. Due to the very small data bandwidth, not all product data may be transmittable at once, for instance when flying over busy ship traffic zones. Therefore the system offers two services: a query service and a push service. With the query service the end user can explicitly request data for a defined location and fixed time period by posting queries in an SQL-like language. With the push service, events can be predefined and messages are received automatically if and when the event occurs. Finally, the hardware set-up, details of the ship detection algorithms and the current status of the experiment are presented.
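Fitting a ship-detection product into a ~300-byte Iridium SBD payload is essentially a binary packing problem. The following record layout is entirely hypothetical (the AMARO message format is not published in this abstract); it only illustrates how many fixed-size detections fit into one message:

```python
import struct

# Hypothetical fixed-size record: timestamp, lat/lon (1e-6 deg integers),
# heading (deg), speed (dm/s), ship length (m) -> 18 bytes, big-endian
RECORD = struct.Struct(">I i i H H H")

def pack_ship(ts, lat_deg, lon_deg, heading_deg, speed_dms, length_m):
    """Pack one detection; lat/lon scaled to micro-degree integers."""
    return RECORD.pack(ts, int(lat_deg * 1e6), int(lon_deg * 1e6),
                       heading_deg, speed_dms, length_m)

def pack_message(ships, limit=300):
    """Concatenate as many records as fit into one SBD payload."""
    msg = b""
    for s in ships:
        rec = pack_ship(*s)
        if len(msg) + len(rec) > limit:
            break
        msg += rec
    return msg

# 20 detections over a busy traffic zone; only 16 fit in one 300-byte message
ships = [(1700000000 + i, 54.3, 10.1 + i * 0.01, 270, 52, 120) for i in range(20)]
msg = pack_message(ships)
print(len(msg), len(msg) // RECORD.size)
```

This is why the abstract distinguishes a query service from a push service: with 18-byte records, a single message holds at most 16 detections, so busy scenes must be spread over several transmissions or filtered by query.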
Automatic morphological classification of galaxy images
Shamir, Lior
2009-01-01
We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically as spiral, elliptical and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
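The classification scheme described here, Fisher scores used as weights in a weighted nearest-neighbor rule, can be sketched in a few lines of numpy (feature extraction from images is omitted; the synthetic data below stands in for extracted features):

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class over within-class variance."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def wnn_classify(X_train, y_train, x, weights):
    """Weighted nearest neighbor: Fisher scores weight the distance metric."""
    d = np.sqrt((weights * (X_train - x) ** 2).sum(axis=1))
    return y_train[np.argmin(d)]

rng = np.random.default_rng(1)
# Feature 0 separates the two classes, feature 1 is pure noise
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal([5, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w = fisher_scores(X, y)
print(w[0] > w[1], wnn_classify(X, y, np.array([4.8, 0.2]), w))
```

Weighting distances by Fisher score down-weights uninformative features, which is what lets a simple nearest-neighbor rule cope with a large, mostly redundant feature set.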
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report, December 1989. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.
An Auto-flag Method of Radio Visibility Data Based on Support Vector Machine
NASA Astrophysics Data System (ADS)
Dai, Hui-mei; Mei, Ying; Wang, Wei; Deng, Hui; Wang, Feng
2017-01-01
The Mingantu Ultrawide Spectral Radioheliograph (MUSER) has entered a test observation stage. After the construction of the data acquisition and storage system, it is urgent to automatically flag and eliminate the abnormal visibility data so as to improve the imaging quality. In this paper, according to the observational records, we create a credible visibility set, and further obtain the corresponding flag model of visibility data by using the support vector machine (SVM) technique. The results show that the SVM is a robust approach to flag the MUSER visibility data, and can attain an accuracy of about 86%. Meanwhile, this method will not be affected by solar activities, such as flare eruptions.
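An SVM-based flagger of this kind can be sketched with scikit-learn. The per-record features below (mean and spread of visibility amplitudes) and the synthetic interference model are assumptions for illustration; the paper's actual feature set is not specified in this abstract:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def features(vis):
    """Summary features of one visibility record: |V| mean and std."""
    amp = np.abs(vis)
    return [amp.mean(), amp.std()]

# Hypothetical training set: normal records vs interference-corrupted ones
normal = [rng.normal(1.0, 0.1, 64) + 1j * rng.normal(0, 0.1, 64)
          for _ in range(100)]
corrupt = [rng.normal(1.0, 0.1, 64) + rng.choice([0, 20], 64, p=[0.9, 0.1])
           for _ in range(100)]
X = np.array([features(v) for v in normal + corrupt])
y = np.array([0] * 100 + [1] * 100)  # 1 = flag as abnormal

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy
```

In production such a model would be trained on the curated "credible visibility set" the authors describe and evaluated on held-out observations, which is where their ~86% accuracy figure comes from.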
Kawakami, Shogo; Ishiyama, Hiromichi; Satoh, Takefumi; Tsumura, Hideyasu; Sekiguchi, Akane; Takenaka, Kouji; Tabata, Ken-Ichi; Iwamura, Masatsugu; Hayakawa, Kazushige
2017-08-01
To compare prostate contours on conventional stepping transverse image acquisitions with those on twister-based sagittal image acquisitions. Twenty prostate cancer patients who were planned to have permanent interstitial prostate brachytherapy were prospectively accrued. A transrectal ultrasonography probe was inserted, with the patient in lithotomy position. Transverse images were obtained with stepping movement of the transverse transducer. In the same patient, sagittal images were also obtained through rotation of the sagittal transducer using the "Twister" mode. The differences in prostate size between the two types of image acquisitions were compared. The relationships among the differences between the two types of image acquisitions, dose-volume histogram (DVH) parameters on the post-implant computed tomography (CT) analysis, and other factors were analyzed. The sagittal image acquisitions showed a larger prostate size compared to the transverse image acquisitions, especially in the anterior-posterior (AP) direction (p < 0.05). Interestingly, the relative size of the prostate apex in the AP direction in the sagittal image acquisitions compared to that in the transverse image acquisitions correlated with DVH parameters such as D90 (R = 0.518, p = 0.019) and V100 (R = 0.598, p = 0.005). There were small but significant differences in the prostate contours between the transverse and the sagittal planning image acquisitions. Furthermore, our study suggested that the differences between the two types of image acquisitions might correlate with dosimetric results on CT analysis.
Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.
2013-01-01
This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
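Recovering a fixed rotation between two rigidly mounted reference frames from corresponding direction vectors is classically solved with the Kabsch/Procrustes method. The sketch below shows only that core step under idealized, noiseless correspondences; the flight software described above is more involved, since it must first derive those correspondences from image fixation data and INS telemetry:

```python
import numpy as np

def kabsch(A, B):
    """Best-fit rotation R with R @ A[i] ~= B[i] for paired unit vectors."""
    H = A.T @ B                      # 3x3 covariance of the correspondences
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])       # guard against an improper reflection
    return Vt.T @ D @ U.T

rng = np.random.default_rng(3)
# Ground-truth camera-to-INS rotation: 30 degrees about the z axis
t = np.deg2rad(30)
R_true = np.array([[np.cos(t), -np.sin(t), 0],
                   [np.sin(t),  np.cos(t), 0],
                   [0,          0,         1]])
A = rng.normal(size=(20, 3))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # directions in camera frame
B = A @ R_true.T                                # same directions in INS frame
R_est = kabsch(A, B)
print(np.allclose(R_est, R_true))
```

With noisy real-world correspondences the same construction returns the least-squares optimal rotation rather than an exact one.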
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei
2014-12-01
The rapid development of remote sensing technology has facilitated the acquisition of remote sensing images with ever higher spatial resolution, but how to automatically understand the image contents is still a big challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on a collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so the collection of them provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discover distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
NASA Astrophysics Data System (ADS)
Frikha, Mayssa; Fendri, Emna; Hammami, Mohamed
2017-09-01
Using semantic attributes such as gender, clothes, and accessories to describe people's appearance is an appealing modeling method for video surveillance applications. We proposed a midlevel appearance signature based on extracting a list of nameable semantic attributes describing the body in uncontrolled acquisition conditions. Conventional approaches extract the same set of low-level features to learn the semantic classifiers uniformly. Their critical limitation is the inability to capture the dominant visual characteristics for each trait separately. The proposed approach consists of extracting low-level features in an attribute-adaptive way by automatically selecting the most relevant features for each attribute separately. Furthermore, relying on a small training-dataset would easily lead to poor performance due to the large intraclass and interclass variations. We annotated large scale people images collected from different person reidentification benchmarks covering a large attribute sample and reflecting the challenges of uncontrolled acquisition conditions. These annotations were gathered into an appearance semantic attribute dataset that contains 3590 images annotated with 14 attributes. Various experiments prove that carefully designed features for learning the visual characteristics for an attribute provide an improvement of the correct classification accuracy and a reduction of both spatial and temporal complexities against state-of-the-art approaches.
Identification using face regions: application and assessment in forensic scenarios.
Tome, Pedro; Fierrez, Julian; Vera-Rodriguez, Ruben; Ramos, Daniel
2013-12-10
This paper reports an exhaustive analysis of the discriminative power of the different regions of the human face in various forensic scenarios. In practice, when forensic examiners compare two face images, they do not focus their attention only on the overall similarity of the two faces; they carry out an exhaustive morphological comparison region by region (e.g., nose, mouth, eyebrows, etc.). In this scenario it is very important to establish, using scientific methods, to what extent each facial region can help in identifying a person. This knowledge, obtained using quantitative and statistical methods on given populations, can then be used by the examiner to support or tune his observations. In order to generate such scientific knowledge useful for the expert, several methodologies are compared, such as manual and automatic facial landmark extraction, different facial region extractors, and various distances between the subject and the acquisition camera. Also, three scenarios of interest for forensics are considered, comparing mugshot and Closed-Circuit TeleVision (CCTV) face images using the MORPH and SCface databases. One of the findings is that the discriminative power of the facial regions changes depending on the acquisition distance, in some cases yielding better performance than the full face. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
Achille, Cristiana; Adami, Andrea; Chiarini, Silvia; Cremonesi, Stefano; Fassi, Francesco; Fregonese, Luigi; Taffurelli, Laura
2015-01-01
This paper examines the survey of tall buildings in an emergency context, such as post-seismic events. The after-earthquake survey has to guarantee time savings, high precision and security during the operational stages. The main goal is to optimize the application of methodologies based on acquisition and automatic elaboration of photogrammetric data, including the use of Unmanned Aerial Vehicle (UAV) systems, in order to provide fast and low-cost operations. The suggested methods integrate new technologies with commonly used ones such as TLS and topographic acquisition. The value of the photogrammetric application is demonstrated by a test case based on the comparison of acquisition, calibration and 3D modeling results obtained with a laser scanner, a metric camera and an amateur reflex camera. The test helps demonstrate the efficiency of image-based methods in the acquisition of complex architecture. The case study is the Santa Barbara bell tower in Mantua. The applied survey solution allows a complete 3D database of the complex architectural structure to be obtained for the extraction of all the information needed for significant intervention. This demonstrates the applicability of photogrammetry using UAVs for the survey of vertical structures, complex buildings and difficult-to-access architectural parts, providing high-precision results. PMID:26134108
NASA Astrophysics Data System (ADS)
Luther, Ed; Mendes, Livia; Pan, Jiayi; Costa, Daniel; Sarisozen, Can; Torchilin, Vladimir
2018-02-01
We rely on in vitro cellular cultures to evaluate the effects of the components of multifunctional nano-based formulations under development. We employ an incubator-adapted, label-free holographic imaging cytometer, HoloMonitor M4® (Phase Holographic Imaging, Lund, Sweden), to obtain multi-day time-lapse sequences at 5-minute intervals. An automated stage allows hands-free acquisition of multiple fields of view. Our system is based on the Mach-Zehnder interferometry principle to create interference patterns which are deconvolved to produce images of the optical thickness of the field of view. These images are automatically segmented, resulting in a full complement of quantitative morphological features, such as optical volume, thickness, and area, amongst many others. Precise XY cell locations and the time of acquisition are also recorded. Visualization is best achieved by novel 4-dimensional plots, where XY position is plotted over time (Z-direction) and cell thickness is coded as color or gray-scale brightness. Fundamental events of interest, i.e., cells undergoing mitosis or mitotic dysfunction, cell death, cell-to-cell interactions, and motility, are discernible. We use both 2D and 3D models of the tumor microenvironment. We report our new analysis method to track feature changes over time based on a 4-sample version of the Kolmogorov-Smirnov test. Feature A is compared to Control A, and Feature B is compared to Control B, to give a 2D probability plot of the feature changes over time. As a result, we efficiently obtain vectors quantifying feature changes over time in various sample conditions, e.g., changing compound concentrations or multi-compound combinations.
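The two-sample Kolmogorov-Smirnov comparison underlying this analysis can be sketched with SciPy. Here one treated/control feature comparison at a single time point is shown with hypothetical optical-volume readings; the authors' 4-sample variant pairs two such comparisons into a 2D probability plot:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

# Hypothetical optical-volume readings: control vs compound-treated cells
control = rng.normal(100, 10, 200)
treated = rng.normal(112, 10, 200)    # treatment shifts the distribution

# KS statistic: maximum distance between the two empirical CDFs
stat, p = ks_2samp(treated, control)
print(stat > 0.3, p < 0.01)
```

Because the KS statistic is distribution-free, it suits morphological features whose distributions are skewed or multimodal, as segmented cell populations often are.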
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
Scholtz, Jan-Erik; Wichmann, Julian L; Kaup, Moritz; Fischer, Sebastian; Kerl, J Matthias; Lehnert, Thomas; Vogl, Thomas J; Bauer, Ralf W
2015-03-01
To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. 77 patients (28 women, 49 men, mean age 65.3±14.4 years) with known or suspected spinal disorders (degenerative spine disease n=32; disc herniation n=36; traumatic vertebral fractures n=9) underwent 64-slice MDCT with thin-slab reconstruction. Time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images of both reconstruction methods were assessed by two observers regarding accuracy of symmetric depiction of anatomical structures. In 33 cases double-angulated axial images were created for 1 vertebra, in 28 cases for 2 vertebrae and in 16 cases for 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p<0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p<0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time-saving when reconstructions of 2 or more vertebrae are performed. Checking the results of automatic labeling is necessary to prevent labeling errors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauch, Phillip; Lin, Pei-Jan Paul; Balter, Stephen
2012-05-15
Task Group 125 (TG 125) was charged with investigating the functionality of fluoroscopic automatic dose rate and image quality control logic in modern angiographic systems, paying specific attention to the spectral shaping filters and variations in the selected radiologic imaging parameters. The task group was also charged with describing the operational aspects of the imaging equipment for the purpose of assisting the clinical medical physicist with clinical set-up and performance evaluation. Although there are clear distinctions between the fluoroscopic operation of an angiographic system and its acquisition modes (digital cine, digital angiography, digital subtraction angiography, etc.), the scope of this work was limited to the fluoroscopic operation of the systems studied. The use of spectral shaping filters in cardiovascular and interventional angiography equipment has been shown to reduce patient dose. If the imaging control algorithm were programmed to work in conjunction with the selected spectral filter, and if the generator parameters were optimized for the selected filter, then image quality could also be improved. Although assessment of image quality was not included as part of this report, it was recognized that for fluoroscopic imaging the parameters that influence radiation output, differential absorption, and patient dose are also the same parameters that influence image quality. Therefore, this report will utilize the terminology "automatic dose rate and image quality" (ADRIQ) when describing the control logic in modern interventional angiographic systems and, where relevant, will describe the influence of controlled parameters on the subsequent image quality. A total of 22 angiography units were investigated by the task group and of these one each was chosen as representative of the equipment manufactured by GE Healthcare, Philips Medical Systems, Shimadzu Medical USA, and Siemens Medical Systems.
All equipment, for which measurement data were included in this report, was manufactured within the three year period from 2006 to 2008. Using polymethylmethacrylate (PMMA) plastic to simulate patient attenuation, each angiographic imaging system was evaluated by recording the following parameters: tube potential in units of kilovolts peak (kVp), tube current in units of milliamperes (mA), pulse width (PW) in units of milliseconds (ms), spectral filtration setting, and patient air kerma rate (PAKR) as a function of the attenuator thickness. Data were graphically plotted to reveal the manner in which the ADRIQ control logic responded to changes in object attenuation. There were similarities in the manner in which the ADRIQ control logic operated that allowed the four chosen devices to be divided into two groups, with two of the systems in each group. There were also unique approaches to the ADRIQ control logic that were associated with some of the systems, and these are described in the report. The evaluation revealed relevant information about the testing procedure and also about the manner in which different manufacturers approach the utilization of spectral filtration, pulsed fluoroscopy, and maximum PAKR limitation. This information should be particularly valuable to the clinical medical physicist charged with acceptance testing and performance evaluation of modern angiographic systems.
Rauch, Phillip; Lin, Pei-Jan Paul; Balter, Stephen; Fukuda, Atsushi; Goode, Allen; Hartwell, Gary; LaFrance, Terry; Nickoloff, Edward; Shepard, Jeff; Strauss, Keith
2012-05-01
Task Group 125 (TG 125) was charged with investigating the functionality of fluoroscopic automatic dose rate and image quality control logic in modern angiographic systems, paying specific attention to the spectral shaping filters and variations in the selected radiologic imaging parameters. The task group was also charged with describing the operational aspects of the imaging equipment for the purpose of assisting the clinical medical physicist with clinical set-up and performance evaluation. Although there are clear distinctions between the fluoroscopic operation of an angiographic system and its acquisition modes (digital cine, digital angiography, digital subtraction angiography, etc.), the scope of this work was limited to the fluoroscopic operation of the systems studied. The use of spectral shaping filters in cardiovascular and interventional angiography equipment has been shown to reduce patient dose. If the imaging control algorithm were programmed to work in conjunction with the selected spectral filter, and if the generator parameters were optimized for the selected filter, then image quality could also be improved. Although assessment of image quality was not included as part of this report, it was recognized that for fluoroscopic imaging the parameters that influence radiation output, differential absorption, and patient dose are also the same parameters that influence image quality. Therefore, this report will utilize the terminology "automatic dose rate and image quality" (ADRIQ) when describing the control logic in modern interventional angiographic systems and, where relevant, will describe the influence of controlled parameters on the subsequent image quality. A total of 22 angiography units were investigated by the task group and of these one each was chosen as representative of the equipment manufactured by GE Healthcare, Philips Medical Systems, Shimadzu Medical USA, and Siemens Medical Systems. 
All equipment, for which measurement data were included in this report, was manufactured within the three year period from 2006 to 2008. Using polymethylmethacrylate (PMMA) plastic to simulate patient attenuation, each angiographic imaging system was evaluated by recording the following parameters: tube potential in units of kilovolts peak (kVp), tube current in units of milliamperes (mA), pulse width (PW) in units of milliseconds (ms), spectral filtration setting, and patient air kerma rate (PAKR) as a function of the attenuator thickness. Data were graphically plotted to reveal the manner in which the ADRIQ control logic responded to changes in object attenuation. There were similarities in the manner in which the ADRIQ control logic operated that allowed the four chosen devices to be divided into two groups, with two of the systems in each group. There were also unique approaches to the ADRIQ control logic that were associated with some of the systems, and these are described in the report. The evaluation revealed relevant information about the testing procedure and also about the manner in which different manufacturers approach the utilization of spectral filtration, pulsed fluoroscopy, and maximum PAKR limitation. This information should be particularly valuable to the clinical medical physicist charged with acceptance testing and performance evaluation of modern angiographic systems.
ERIC Educational Resources Information Center
Letmanyi, Helen
Developed to identify and qualitatively assess computer system evaluation techniques for use during acquisition of general purpose computer systems, this document presents several criteria for comparison and selection. An introduction discusses the automatic data processing (ADP) acquisition process and the need to plan for uncertainty through…
Automatization and Orthographic Development in Second Language Visual Word Recognition
ERIC Educational Resources Information Center
Kida, Shusaku
2016-01-01
The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…
Study of the Acquisition of Peripheral Equipment for Use with Automatic Data Processing Systems.
ERIC Educational Resources Information Center
Comptroller General of the U.S., Washington, DC.
The General Accounting Office (GAO) performed this study because: preliminary indications showed that significant savings could be achieved in the procurement of selected computer components; the Federal Government is investing increasing amounts of money in Automatic Data Processing (ADP) equipment; and there is a widespread congressional…
Feasibility study ASCS remote sensing/compliance determination system
NASA Technical Reports Server (NTRS)
Duggan, I. E.; Minter, T. C., Jr.; Moore, B. H.; Nosworthy, C. T.
1973-01-01
A short-term technical study was performed by the MSC Earth Observations Division to determine the feasibility of the proposed Agricultural Stabilization and Conservation Service Automatic Remote Sensing/Compliance Determination System. For the study, the term automatic was interpreted as applying to an automated remote-sensing system that includes data acquisition, processing, and management.
Automatic programming for critical applications
NASA Technical Reports Server (NTRS)
Loganantharaj, Raj L.
1988-01-01
The important phases of a software life cycle include verification and maintenance, and execution performance is usually an expected requirement in the development process. Unfortunately, verification and maintenance are the time-consuming and frustrating aspects of software engineering. Verification cannot be waived for programs used in critical applications such as military, space, and nuclear-plant systems. As a consequence, synthesis of programs from specifications, an alternative way of developing correct programs, is becoming popular. What is understood by automatic programming has changed along with our expectations. At present, the goal of automatic programming is the automation of the programming process. Specifically, it means applying artificial intelligence to software engineering in order to define techniques and create environments that help in the creation of high-level programs. The automatic programming process may be divided into two phases: the problem acquisition phase and the program synthesis phase. In the problem acquisition phase, an informal specification of the problem is transformed into an unambiguous specification, while in the program synthesis phase that specification is further transformed into a concrete, executable program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mijnheer, B; Mans, A; Olaciregui-Ruiz, I
Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
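The halt criterion described above, an RMS threshold on the difference between the cumulative planned and reconstructed dose, can be sketched as follows. The 20 cGy figure comes from the abstract; the array shapes, function names and the use of a plain volume-wide RMS are simplifying assumptions.

```python
import numpy as np

def rms_dose_difference(planned, reconstructed):
    """Root-mean-square difference between the cumulative planned and
    EPID-reconstructed 3D dose distributions (both in cGy)."""
    diff = reconstructed - planned
    return float(np.sqrt(np.mean(diff ** 2)))

def should_halt(planned, reconstructed, tolerance_cgy=20.0):
    """Trigger a linac halt when the RMS deviation exceeds tolerance."""
    return rms_dose_difference(planned, reconstructed) > tolerance_cgy

planned = np.full((4, 4, 4), 200.0)                          # uniform 200 cGy plan
ok  = planned + np.random.default_rng(0).normal(0, 1, planned.shape)  # noise only
bad = planned + 50.0                                         # gross delivery error
```

In the online setting this comparison would run once per portal image (under 200 ms per frame, per the abstract), accumulating dose between checks.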
Measuring Leaf Area in Soy Plants by HSI Color Model Filtering and Mathematical Morphology
NASA Astrophysics Data System (ADS)
Benalcázar, M.; Padín, J.; Brun, M.; Pastore, J.; Ballarin, V.; Peirone, L.; Pereyra, G.
2011-12-01
There has lately been significant progress in automating tasks for the agricultural sector. One of the advances is the development of robots, based on computer vision, applied to the care and management of soy crops. Digital image processing plays an important role in this task, but must solve some important problems, such as those associated with variations in lighting conditions during image acquisition. Such variations directly influence the brightness level of the images to be processed. In this paper we propose an algorithm to automatically segment and measure the leaf area of soy plants. This information is used by specialists to evaluate and compare the growth of different soy genotypes. The algorithm, based on color filtering using the HSI model, detects green objects against the image background. The segmentation of leaves (foliage) was performed applying Mathematical Morphology, and the foliage area was estimated by counting the pixels belonging to the segmented leaves. In several experiments, consisting of applying the algorithm to measure the foliage of about fifty plants of various soy genotypes at different growth stages, we obtained successful results despite the high brightness variations and shadows in the processed images.
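The processing chain of this abstract, green filtering in a hue-based color space followed by morphological cleanup and pixel counting, can be sketched in pure NumPy. The hue band (60°–180°) and the 3x3 opening are illustrative stand-ins for the paper's HSI filter and Mathematical Morphology operators, not its actual parameters.

```python
import numpy as np

def rgb_to_hue(img):
    """Hue (degrees, 0-360) of an RGB image with channels in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(-1), img.min(-1)
    d = np.where(mx == mn, 1e-9, mx - mn)        # avoid divide-by-zero
    h = np.where(mx == r, (60 * (g - b) / d) % 360, 0.0)
    h = np.where(mx == g, 60 * (b - r) / d + 120, h)
    h = np.where(mx == b, 60 * (r - g) / d + 240, h)
    return h

def binary_opening(mask):
    """3x3 erosion then dilation: removes specks smaller than the window."""
    def shifts(m):
        p = np.pad(m, 1)
        return [p[i:i + m.shape[0], j:j + m.shape[1]]
                for i in range(3) for j in range(3)]
    eroded = np.logical_and.reduce(shifts(mask))
    return np.logical_or.reduce(shifts(eroded))

def leaf_area(img, hue_lo=60, hue_hi=180):
    """Count foliage pixels: keep hues in the green band, clean with opening."""
    hue = rgb_to_hue(img)
    green = (hue > hue_lo) & (hue < hue_hi)
    return int(binary_opening(green).sum())

# Synthetic plant image: 5x5 green leaf on red soil, plus one noise pixel.
img = np.zeros((10, 10, 3)); img[..., 0] = 1.0   # red background
img[2:7, 2:7] = [0.0, 1.0, 0.0]                  # green leaf
img[0, 9] = [0.0, 1.0, 0.0]                      # isolated green speck
```

Filtering on hue rather than raw RGB is what gives the brightness invariance the abstract emphasizes, and the opening discards isolated misclassified pixels before counting.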
Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.
2009-01-01
Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured on screen-film mammographic images by a labor-intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters used in acquiring the screening mammograms (e.g. breast compression thickness, radiological thickness, radiation dose, compression force) and the image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R² = 0.93) and %-density (R² = 0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80 using the regression-model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively and automatically, bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343
Toward dynamic magnetic resonance imaging of the vocal tract during speech production.
Ventura, Sandra M Rua; Freitas, Diamantino Rui S; Tavares, João Manuel R S
2011-07-01
The most recent and significant magnetic resonance imaging (MRI) improvements allow for the visualization of the vocal tract during speech production, which has been revealed to be a powerful tool in dynamic speech research. However, a synchronization technique with enhanced temporal resolution is still required. The study design was transversal in nature. Throughout this work, a technique for the dynamic study of the vocal tract with MRI by using the heart's signal to synchronize and trigger the imaging-acquisition process is presented and described. The technique in question is then used in the measurement of four speech articulatory parameters to assess three different syllables (articulatory gestures) of European Portuguese Language. The acquired MR images are automatically reconstructed so as to result in a variable sequence of images (slices) of different vocal tract shapes in articulatory positions associated with Portuguese speech sounds. The knowledge obtained as a result of the proposed technique represents a direct contribution to the improvement of speech synthesis algorithms, thereby allowing for novel perceptions in coarticulation studies, in addition to providing further efficient clinical guidelines in the pursuit of more proficient speech rehabilitation processes. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Evaluation of experimental UAV video change detection
NASA Astrophysics Data System (ADS)
Bartelsen, J.; Saur, G.; Teutsch, C.
2016-10-01
During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terrorist activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition with respect to flight path, weather conditions and objects within the scene, and to obtain synthetic videos. Video frames which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects.
The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition and based on our approach, we illustrate the results showing the change detection in short, but real video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material, capable of change detection, will be acquired.
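The register-then-difference scheme underlying this approach can be sketched with a known homography and a fixed threshold. In the actual pipeline the homography is estimated from frame correspondences and the difference analysis additionally suppresses registration and parallax errors; the names, the nearest-neighbour warp and the threshold value below are illustrative assumptions.

```python
import numpy as np

def warp_homography(img, H):
    """Register img into the reference frame defined by homography H using
    inverse nearest-neighbour mapping (pixels leaving the frame become 0)."""
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts                              # source coords per pixel
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    ok = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)
    out = np.zeros_like(img)
    out.ravel()[ok] = img[sy[ok], sx[ok]]
    return out

def detect_changes(before, after, H, thresh=0.2):
    """Pixel-wise difference analysis after frame registration."""
    return np.abs(before - warp_homography(after, H)) > thresh

# 'After' frame: same scene shifted one pixel right, plus one new object.
before = np.zeros((10, 10)); before[4:6, 4:6] = 1.0
after = np.zeros((10, 10)); after[4:6, 5:7] = 1.0; after[0, 8] = 1.0
H = np.array([[1.0, 0, -1], [0, 1, 0], [0, 0, 1]])   # undo the 1-px shift
changes = detect_changes(before, after, H)
```

Because the static scene content aligns after warping, only the genuinely new object survives the threshold; a wrong homography would instead flag every object edge, which is exactly the registration-error failure mode the paper evaluates.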
NASA Astrophysics Data System (ADS)
Oniga, E.; Chirilă, C.; Stătescu, F.
2017-02-01
Nowadays, Unmanned Aerial Systems (UASs) are a widely used technique for image acquisition when creating 3D building models, providing a high number of images at very high resolution, or video sequences, in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created using these platforms must be evaluated. As test object, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" of Iasi, Romania, was chosen: a complex-shaped building whose roof is formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them used as GCPs and the remaining four as check points (CPs) for accuracy assessment. Additionally, the coordinates of 10 natural CPs representing building characteristic points were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud automatically generated from the digital images acquired with the low-cost UASs, using image matching algorithms and different software packages: 3DF Zephyr, VisualSFM, PhotoModeler Scanner and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated by the above-mentioned software, and on GNSS data respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least squares method and statistical blunder detection. Then, in order to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof against reference data considered to have minimal errors: a TLS mesh for the facades and a GNSS mesh for the roof.
Finally, the front facade of the building was created in 3D based on its characteristic points using the PhotoModeler Scanner software, resulting in a CAD (Computer Aided Design) model. The results showed the high potential of using low-cost UASs for building 3D model creation, and that if the building 3D model is created based on its characteristic points, the accuracy is significantly improved.
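The least-squares estimation of the hyperbolic-paraboloid roof parameters reduces to a linear fit of a quadric surface to the point cloud. The sketch below uses the general quadric z = a x² + b y² + c xy + d x + e y + f (a hyperbolic paraboloid when the quadratic part is indefinite); this parameterization and the helper names are assumptions, since the paper does not state its exact surface model, and its statistical blunder detection is reduced to a comment.

```python
import numpy as np

def fit_quadric(points):
    """Least-squares surface z = a x^2 + b y^2 + c xy + d x + e y + f.
    points is an (N, 3) array of (x, y, z) point-cloud coordinates."""
    x, y, z = points.T
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    # Blunder detection would drop points with |residual| > 3*std and refit.
    return coef, A @ coef - z          # coefficients, per-point residuals

# Synthetic roof points sampled from z = 0.5 x^2 - 0.25 y^2.
xs, ys = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
pts = np.column_stack([xs.ravel(), ys.ravel(),
                       0.5 * xs.ravel()**2 - 0.25 * ys.ravel()**2])
coef, resid = fit_quadric(pts)
```

On real UAS point clouds the residuals also serve as the accuracy measure when comparing against the TLS and GNSS reference meshes.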
Web camera as low cost multispectral sensor for quantification of chlorophyll in soybean leaves
NASA Astrophysics Data System (ADS)
Adhiwibawa, Marcelinus A.; Setiawan, Yonathan E.; Prilianti, Kestrilia R.; Brotosudarmo, Tatas H. P.
2015-01-01
Soybean is one of the main crops in Indonesia, but the demand for soybeans is not matched by an increase in national production. One limiting factor is the availability of lush cultivation areas for soybean plantations. Indonesian farmers usually grow soybeans in marginal cultivation areas, which requires soybean varieties tolerant of environmental stresses such as drought, nutrient limitation, pests and diseases. Chlorophyll content in the leaf is a plant health indicator that can be used to identify stress-tolerant soybean varieties. However, soybean breeding research is hampered by manual data acquisition, which is time-consuming and labour-intensive. In this paper the authors propose an automatic system for quantifying soybean leaf area and chlorophyll, based on a low-cost multispectral sensor using a web camera, as an indicator of soybean plant tolerance to environmental stress, particularly drought stress. The system acquires an image of the plant, placed in an acquisition box, from above. The image is segmented using NDVI (Normalized Difference Vegetation Index) and quantified to yield an average NDVI value and the leaf area. The acquired NDVI value showed a strong relationship with the SPAD value (r-squared 0.70), while the leaf area prediction had an error of 18.41%. Thus the automated system can quantify plant data with good results.
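The NDVI segmentation and quantification step can be sketched directly. The 0.3 threshold, the function names and the use of separate NIR/red arrays are assumptions; the paper derives its index from a modified web camera rather than calibrated spectral bands.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel: (NIR-R)/(NIR+R)."""
    nir = nir.astype(float); red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def leaf_stats(nir, red, ndvi_thresh=0.3):
    """Segment vegetation by NDVI threshold; return the leaf area in
    pixels and the mean NDVI over the segmented leaves."""
    idx = ndvi(nir, red)
    leaves = idx > ndvi_thresh
    return int(leaves.sum()), float(idx[leaves].mean())

# Toy frame: a 2x2 leaf (high NIR, low red) on a soil background.
nir = np.full((4, 4), 50.0); nir[0:2, 0:2] = 200.0
red = np.full((4, 4), 100.0); red[0:2, 0:2] = 50.0
area, mean_ndvi = leaf_stats(nir, red)
```

Healthy foliage reflects strongly in NIR and absorbs red, so its NDVI is high and positive while soil and background fall near or below zero, which is what makes a single threshold sufficient for segmentation.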
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order, L0-regularized variational model for joint bias correction and brain extraction. The model is composed of a data fitting term, a piecewise-constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate automatic processing of large-scale brain studies.
Qi, Yuan-Hua; Wang, Hui; Zhang, Xiao-Bo; Jin, Yan; Ge, Xiao-Guang; Jing, Zhi-Xian; Wang, Ling; Zhao, Yu-Ping; Guo, Lan-Ping; Huang, Lu-Qi
2017-11-01
In this paper, a data acquisition system based on a mobile terminal was designed and implemented, combining GPS, offset correction, automatic speech recognition and networked database technology. It can quickly record latitude, longitude and elevation information and conveniently capture various types of photos, such as Chinese herbal plant photos, specimen photos and habitat photos. The mobile system automatically associates these data with Chinese medicine source information; through the voice recognition function it records plant characteristics and environmental characteristics, as well as the relevant plant specimen information. The data processing platform, based on the Chinese medicine resources survey data reporting client, effectively assists indoor data processing and exports the mobile terminal data to the computer terminal. The established data acquisition system provides strong technical support for the fourth national survey of the Chinese materia medica resources (CMMR). Copyright© by the Chinese Pharmaceutical Association.
Managing multiple image stacks from confocal laser scanning microscopy
NASA Astrophysics Data System (ADS)
Zerbe, Joerg; Goetze, Christian H.; Zuschratter, Werner
1999-05-01
A major goal in neuroanatomy is to obtain precise information about the functional organization of neuronal assemblies and their interconnections. Therefore, the analysis of histological sections frequently requires high-resolution images in combination with an overview of the structure. To overcome this conflict we have previously introduced software for the automatic acquisition of multiple image stacks (3D-MISA) in confocal laser scanning microscopy. Here, we describe Windows NT based software for fast and easy navigation through the multiple image stacks (MIS browser), the visualization of individual channels and layers, and the selection of user-defined subregions. In addition, the MIS browser provides useful tools for the visualization and evaluation of the data volume, such as brightness and contrast corrections of individual layers and channels. Moreover, it includes a maximum intensity projection, panning and zoom in/out functions within selected channels or focal planes (x/y), and tracking along the z-axis. The import module accepts any TIFF format and reconstructs the original image arrangement after the user has defined the sequence of images in x/y and z and the number of channels. The export module allows storage of user-defined subregions (new single image stacks) for further 3D reconstruction and evaluation.
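Two of the browser tools mentioned above, the maximum intensity projection and per-layer brightness/contrast correction, are simple array operations over a (z, y, x) stack; the function names and 8-bit clipping range below are illustrative assumptions.

```python
import numpy as np

def max_intensity_projection(stack, axis=0):
    """Collapse a (z, y, x) confocal stack into one 2-D image by keeping
    the brightest voxel along the chosen axis (z by default)."""
    return stack.max(axis=axis)

def adjust_layer(layer, gain=1.0, offset=0.0):
    """Per-layer brightness/contrast correction, clipped to 8-bit range."""
    return np.clip(gain * layer.astype(float) + offset, 0, 255)

# Tiny 3-layer stack with one bright voxel in each of two layers.
stack = np.zeros((3, 2, 2)); stack[1, 0, 0] = 7.0; stack[2, 1, 1] = 3.0
mip = max_intensity_projection(stack)
```

The projection is what lets the operator see structure spanning many focal planes at once, while per-layer gain/offset compensates depth-dependent signal loss before evaluation.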
Automatic visibility retrieval from thermal camera images
NASA Astrophysics Data System (ADS)
Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan
2017-10-01
This study presents automatic visibility retrieval from a FLIR A320 Stationary Thermal Imager installed on a measurement tower on the mountain Lagern, located in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected in the thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal contrast to their surroundings, are used to derive the maximum visibility distance detectable in the image. To allow stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24,000 thermal images of the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (Vaisala FD12, working in the NIR spectrum), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from the European Centre for Medium-Range Weather Forecasts (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrieval from the systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and the defined target size.
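The edge-based visibility logic can be sketched as follows: each predefined target region is tested for remaining contrast edges, and the farthest target that still shows edges sets the visibility distance. The finite-difference gradient test, the threshold and the target encoding are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def has_edges(region, grad_thresh=10.0):
    """True when the image region still shows contrast edges (simple
    finite-difference gradient magnitude test)."""
    gy, gx = np.gradient(region.astype(float))
    return bool(np.hypot(gx, gy).max() > grad_thresh)

def visibility(image, targets):
    """targets: list of (distance_km, (row_slice, col_slice)) pairs.
    Return the farthest target distance whose edges are still detectable."""
    vis = 0.0
    for dist, (rs, cs) in targets:
        if has_edges(image[rs, cs]):
            vis = max(vis, dist)
    return vis

# Toy frame: a near target with a sharp step edge, a far target lost in fog.
image = np.zeros((20, 20)); image[0:10, 0:5] = 100.0
targets = [(5.0, (slice(0, 10), slice(0, 10))),     # near silhouette
           (12.0, (slice(10, 20), slice(10, 20)))]  # far, washed out
```

In the real system the target regions are fixed once per camera installation, which is why the automatic image alignment step matters: a shifted frame would test the wrong pixels.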
ERIC Educational Resources Information Center
Department of Commerce, Washington, DC.
This volume, the second of two, presents and analyzes the information technology acquisition plans of the Federal Government by agency and component. A brief description covers the outlays planned for major information technology acquisitions of general purpose data processing and telecommunications systems, facilities, and related services for 6…
Li, Xueming; Zheng, Shawn; Agard, David A.; Cheng, Yifan
2015-01-01
Newly developed direct electron detection cameras have a high image output frame rate that enables recording dose-fractionated image stacks of frozen hydrated biological samples by electron cryomicroscopy (cryoEM). Such novel image acquisition schemes provide opportunities to analyze cryoEM data in ways that were previously impossible. The file size of a dose-fractionated image stack is 20–60 times larger than that of a single image. Thus, efficient data acquisition and on-the-fly analysis of a large number of dose-fractionated image stacks become a serious challenge to any cryoEM data acquisition system. We have developed a computer-assisted system, named UCSFImage4, for semi-automated cryoEM image acquisition that implements an asynchronous data acquisition scheme. This facilitates efficient acquisition, on-the-fly motion correction, and CTF analysis of dose-fractionated image stacks with a total time of ~60 seconds/exposure. Here we report the technical details and configuration of this system. PMID:26370395
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y; Sharp, G
2014-06-15
Purpose: Gain calibration for X-ray imaging systems with movable flat panel detectors (FPD) and intrinsic crosshairs is a challenge due to the geometry dependence of the heel effect and crosshair artifact. This study aims to develop a gain correction method for such systems by implementing the multi-acquisition gain image correction (MAGIC) technique. Methods: Raw flat-field images containing crosshair shadows and heel effect were acquired in 4 different FPD positions with fixed exposure parameters. The crosshair region was automatically detected and substituted with interpolated values from nearby exposed regions, generating a conventional single-image gain-map for each FPD position. Large kernel-based correction was applied to these images to correct the heel effect. A mask filter was used to invalidate the original cross-hair regions previously filled with the interpolated values. A final, seamless gain-map was created from the processed images by either the sequential filling (SF) or selective averaging (SA) techniques developed in this study. Quantitative evaluation was performed based on detective quantum efficiency improvement factor (DQEIF) for gain-corrected images using the conventional and proposed techniques. Results: Qualitatively, the MAGIC technique was found to be more effective in eliminating crosshair artifacts compared to the conventional single-image method. The mean DQEIF over the range of frequencies from 0.5 to 3.5 mm⁻¹ were 1.09±0.06, 2.46±0.32, and 3.34±0.36 in the crosshair-artifact region and 2.35±0.31, 2.33±0.31, and 3.09±0.34 in the normal region, for the conventional, MAGIC-SF, and MAGIC-SA techniques, respectively. Conclusion: The introduced MAGIC technique is appropriate for gain calibration of an imaging system associated with a moving FPD and an intrinsic crosshair.
The technique showed advantages over a conventional single image-based technique by successfully reducing residual crosshair artifacts, and higher image quality with respect to DQE.« less
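The selective-averaging (SA) merge of per-position gain maps can be sketched as below: at each pixel, only the acquisitions in which that pixel was not shadowed contribute to the average. This is a simplified stand-in for the full MAGIC pipeline (no heel-effect correction), and the mask convention is an assumption.

```python
import numpy as np

def seamless_gain_map(flat_fields, valid_masks):
    """Selective-averaging merge: at each pixel, average only those
    flat-field acquisitions in which the pixel is marked valid
    (i.e. outside the crosshair shadow)."""
    stack = np.stack(flat_fields).astype(float)
    masks = np.stack(valid_masks).astype(float)
    counts = masks.sum(axis=0)
    if np.any(counts == 0):
        raise ValueError("some pixel is shadowed in every acquisition")
    mean = (stack * masks).sum(axis=0) / counts
    return mean / mean.mean()   # normalise to unit mean gain
```

Because the FPD is moved between acquisitions, the shadowed regions differ per position, so every pixel is covered by at least one valid flat field.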
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Kim, Ye-seul; Choi, Sunghoon; Lee, Haenghwa; Jo, Byungdu; Choi, Seungyeon; Shin, Jungwook; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a recently developed imaging technique with several advantages for diagnosing lung disease. For example, CDT provides depth information at a relatively low radiation dose compared to computed tomography (CT). However, a major problem with CDT is the image artifacts associated with data incompleteness, resulting from the limited-angle data acquisition of CDT geometry. For this reason, its sensitivity for lung disease has remained unclear compared to CT. In this study, to improve the sensitivity of lung disease detection in CDT, we developed a computer-aided diagnosis (CAD) system based on machine learning. To design the CAD system, we used 100 cropped images of lung nodules and 100 cropped images of normal lesions, acquired with lung man phantoms and a prototype CDT system. We used machine learning techniques based on a support vector machine (SVM) and Gabor filters. The Gabor filter was used to extract characteristics of lung nodules, and we compared its feature extraction performance across various scale and orientation parameters: 3, 4, or 5 scales and 4, 6, or 8 orientations. After feature extraction, an SVM was used to classify the lesion features. The linear, polynomial, and Gaussian kernels of the SVM were compared to determine the best SVM conditions for CDT reconstruction images. The resulting CAD system proved capable of automatic lung lesion detection, and detection performance was best when a Gabor filter with 5 scales and 8 orientations and an SVM with a Gaussian kernel were used. In conclusion, our CAD system improved the sensitivity of lung lesion detection in CDT, and we determined the Gabor filter and SVM conditions that achieve its highest detection performance.
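A Gabor filter bank over several scales and orientations, as used here for feature extraction, can be sketched as below. The wavelength-per-scale and bandwidth relations are common defaults, not necessarily the study's parameters, and the fixed kernel size and circular FFT convolution are simplifications.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real Gabor kernel: Gaussian envelope times a cosine carrier
    of wavelength `lam` oriented at angle `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_scales=5, n_orients=8):
    """Mean absolute filter response per (scale, orientation) pair.
    Assumes img is larger than the 15x15 kernel."""
    feats = []
    for s in range(n_scales):
        lam = 4.0 * 2**s            # wavelength doubles per scale (assumption)
        sigma = 0.56 * lam          # typical bandwidth ratio (assumption)
        for o in range(n_orients):
            theta = o * np.pi / n_orients
            k = gabor_kernel(15, sigma, theta, lam)
            # circular convolution via FFT is adequate for a sketch
            resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
            feats.append(resp.mean())
    return np.array(feats)
```

With the paper's best setting (5 scales, 8 orientations) each cropped image yields a 40-dimensional feature vector, which is then fed to the SVM classifier.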
Greffier, Joël; Pereira, Fabricio; Macri, Francesco; Beregi, Jean-Paul; Larbi, Ahmed
2016-04-01
To evaluate the impact of Automatic Exposure Control (AEC) on radiation dose and image quality in paediatric chest MDCT scans, with or without iterative reconstruction (IR). Three anthropomorphic phantoms representing children aged one, five and 10 years were explored using the AEC system (CARE Dose 4D) with five modulation strength options. For each phantom, six acquisitions were carried out: one with fixed mAs (without AEC) and five, each with a different modulation strength. Raw data were reconstructed with Filtered Back Projection (FBP) and with two distinct levels of IR using soft and strong kernels. Dose reduction and image quality indices (noise, SNR, CNR) were measured in lung and soft tissues. The Noise Power Spectrum (NPS) was evaluated with a Catphan 600 phantom. The use of AEC produced a significant dose reduction (p<0.01) for all phantom sizes. Depending on the modulation strength applied, the dose delivered was reduced by 43% to 91%. This led to significantly increased noise (p<0.01) and reduced SNR and CNR (p<0.01); however, IR was able to improve these indices. The use of AEC/IR preserved image quality indices at a lower delivered dose. Doses were reduced by 39% to 58% for the one-year-old phantom, 46% to 63% for the five-year-old phantom, and 58% to 74% for the 10-year-old phantom. In addition, AEC/IR changed the patterns of the NPS curves in amplitude and spatial frequency. In paediatric chest MDCT, the use of AEC with IR allows a significant dose reduction while maintaining constant image quality indices. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
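The SNR and CNR indices reported here are typically computed from region-of-interest (ROI) statistics; a minimal sketch of the standard definitions (ROI placement is up to the user, and this is not the study's analysis code):

```python
import numpy as np

def snr_cnr(roi_signal, roi_background):
    """SNR of the signal ROI (mean/std) and contrast-to-noise ratio
    between the two ROIs, using the background std as the noise term."""
    s = np.asarray(roi_signal, float)
    b = np.asarray(roi_background, float)
    snr = s.mean() / s.std()
    cnr = abs(s.mean() - b.mean()) / b.std()
    return snr, cnr
```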
Automatic Intra-Operative Stitching of Non-Overlapping Cone-Beam CT Acquisitions
Fotouhi, Javad; Fuerst, Bernhard; Unberath, Mathias; Reichenstein, Stefan; Lee, Sing Chun; Johnson, Alex A.; Osgood, Greg M.; Armand, Mehran; Navab, Nassir
2018-01-01
Purpose Cone-Beam Computed Tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and non-overlapping CBCT volumes to enable 3D measurements on large anatomical structures. Methods A CBCT-capable mobile C-arm is augmented with a Red-Green-Blue-Depth (RGBD) camera. An off-line co-calibration of the two imaging modalities results in co-registered video, infrared, and X-ray views of the surgical scene. Then, automatic stitching of multiple small, non-overlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. Results On an animal cadaver, we show stitching errors as low as 0.33 mm, 0.91 mm, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. Conclusions The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures. PMID:29569728
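Once the relative C-arm poses are recovered by one of the three trackers, stitching reduces to mapping each volume into a common patient frame; the bookkeeping can be sketched as below. This is an illustrative geometry sketch, not the authors' implementation.

```python
import itertools
import numpy as np

def rigid(t, angle_z=0.0):
    """4x4 rigid transform: rotation about z by angle_z, then translation t (mm)."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

def stitched_bounds(extents_mm, poses):
    """Axis-aligned bounding box of several volumes after mapping each
    volume's corners through its tracked pose (one 4x4 matrix per volume)."""
    all_pts = []
    for ext, T in zip(extents_mm, poses):
        # 8 corners of the volume in homogeneous coordinates
        corners = np.array([c + (1.0,)
                            for c in itertools.product(*[(0.0, e) for e in ext])])
        all_pts.append((T @ corners.T).T[:, :3])
    pts = np.vstack(all_pts)
    return pts.min(axis=0), pts.max(axis=0)
```

For example, two 100 mm cubes acquired with a 100 mm C-arm translation along x stitch into a 200 x 100 x 100 mm volume, which is what permits 3D measurements across structures longer than a single acquisition.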
NASA Astrophysics Data System (ADS)
Keshavamurthy, Krishna N.; Leary, Owen P.; Merck, Lisa H.; Kimia, Benjamin; Collins, Scott; Wright, David W.; Allen, Jason W.; Brock, Jeffrey F.; Merck, Derek
2017-03-01
Traumatic brain injury (TBI) is a major cause of death and disability in the United States. Time to treatment is often related to patient outcome. Access to cerebral imaging data in a timely manner is a vital component of patient care. Current methods of detecting and quantifying intracranial pathology can be time-consuming and require careful review of 2D/3D patient images by a radiologist. Additional time is needed for image protocoling, acquisition, and processing. These steps often occur in series, adding more time to the process and potentially delaying time-dependent management decisions for patients with traumatic brain injury. Our team adapted machine learning and computer vision methods to develop a technique that rapidly and automatically detects CT-identifiable lesions. Specifically, we use the scale invariant feature transform (SIFT) and deep convolutional neural networks (CNN) to identify important image features that can distinguish TBI lesions from background data. Our learning algorithm is a linear support vector machine (SVM). Further, we also employ tools from topological data analysis (TDA) for gleaning insights into the correlation patterns between healthy and pathological data. The technique was validated using 409 CT scans of the brain, acquired via the Progesterone for the Treatment of Traumatic Brain Injury phase III clinical trial (ProTECT_III), which studied patients with moderate to severe TBI. CT data were annotated by a central radiologist and included patients with positive and negative scans. Additionally, the largest lesion on each positive scan was manually segmented. We reserved 80% of the data for training the SVM and used the remaining 20% for testing. Preliminary results are promising with 92.55% prediction accuracy (sensitivity = 91.15%, specificity = 93.45%), indicating the potential usefulness of this technique in clinical scenarios.
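The 80/20 train/test protocol with a linear SVM can be sketched as below. To stay self-contained this trains the SVM with a simple hinge-loss sub-gradient method rather than the solver the study used, and the data are synthetic, so this is an illustration of the protocol, not of the study's features or results.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.01, epochs=100, seed=0):
    """Linear SVM fitted by stochastic sub-gradient descent on the
    L2-regularised hinge loss; labels y must be -1 or +1."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:       # margin violated
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                               # only shrink (regularise)
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0.0, 1, -1)
```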
WE-G-BRF-09: Force- and Image-Adaptive Strategies for Robotised Placement of 4D Ultrasound Probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhlemann, I; Graduate School for Computing in Life Science, University of Luebeck, Luebeck; Bruder, R
2014-06-15
Purpose: To allow continuous acquisition of high quality 4D ultrasound images for non-invasive live tracking of tumours for IGRT, image- and force-adaptive strategies for robotised placement of 4D ultrasound probes are developed and evaluated. Methods: The developed robotised ultrasound system is based on a 6-axis industrial robot (Adept Viper s850) carrying a 4D ultrasound transducer with a mounted force-torque sensor. The force-adaptive placement strategies include probe position control using artificial potential fields and contact pressure regulation by a PD controller strategy. The basis for live target tracking is a continuous minimum contact pressure to ensure good image quality and high patient comfort. This contact pressure can be significantly disturbed by respiratory movements and has to be compensated. All measurements were performed on human subjects under realistic conditions. When performing cardiac ultrasound, rib and lung shadows are a common source of interference and can disrupt the tracking. To ensure continuous tracking, these artefacts had to be detected to automatically realign the probe. The detection is realised by multiple algorithms based on entropy calculations as well as a determination of the image quality. Results: Through active contact pressure regulation it was possible to reduce the variance of the contact pressure by 89.79% despite respiratory motion of the chest. The results regarding the image processing clearly demonstrate the feasibility of detecting image artefacts like rib shadows in real-time. Conclusion: In all cases, it was possible to stabilise the image quality by active contact pressure control and automatically detected image artefacts. This enables the possibility to compensate for such interferences by realigning the probe and thus continuously optimising the ultrasound images. This is a huge step towards fully automated transducer positioning and opens the possibility for stable target tracking in ultrasound-guided radiation therapy requiring contact pressure of 5–10 N. This work was supported by the Graduate School for Computing in Medicine and Life Sciences funded by Germany's Excellence Initiative [DFG GSC 235/1].
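The PD contact-pressure regulation can be sketched as below against a toy elastic-contact model (force proportional to indentation depth). The gains, tissue stiffness, and command-to-depth scaling are illustrative assumptions, not the system's values; the target of 7.5 N sits in the quoted 5-10 N range.

```python
def pd_step(force, target, prev_error, dt, kp=0.8, kd=0.05):
    """One PD update; returns the velocity command along the contact
    normal and the error to carry into the next cycle."""
    error = target - force
    cmd = kp * error + kd * (error - prev_error) / dt
    return cmd, error

def settle(target=7.5, k_tissue=500.0, dt=0.02, steps=300):
    """Toy simulation: contact force = stiffness x indentation depth.
    The 0.01 command-to-depth scaling is an assumed actuator gain."""
    depth, err = 0.0, 0.0
    for _ in range(steps):
        force = k_tissue * depth
        cmd, err = pd_step(force, target, err, dt)
        depth += cmd * dt * 0.01
    return k_tissue * depth
```

The derivative term damps the oscillation that respiratory motion would otherwise excite; in the real system the same loop also has to reject the moving chest wall as a disturbance.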
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod
Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process currently used in clinical 2D image review software tools.
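Two building blocks of this approach, the entropy objective being maximised and histogram equalization (which CLAHE extends with per-tile histograms and a clip limit), can be sketched as below. This is a minimal sketch of the building blocks, not the optimised pipeline.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram: the quantity the
    parameter search maximises."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def equalize(img):
    """Plain global histogram equalisation via the cumulative histogram.
    CLAHE applies this per tile, with the histogram clipped at a limit
    before the CDF is built, then blends tiles bilinearly."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    return (cdf[img.astype(int)] * 255.0).astype(np.uint8)
```

The paper's contribution is to drive the three free parameters (smoothing weight, CLAHE block size and clip limit) through a constrained optimiser with `entropy` as the objective, instead of asking the user to tune them.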
ERIC Educational Resources Information Center
Barela, Jose A.; Dias, Josenaldo L.; Godoi, Daniela; Viana, Andre R.; de Freitas, Paulo B.
2011-01-01
Difficulty with literacy acquisition is only one of the symptoms of developmental dyslexia. Dyslexic children also show poor motor coordination and postural control. Those problems could be associated with automaticity, i.e., difficulty in performing a task without expending a fair amount of conscious effort. If this is the case, dyslexic…
Optical Radiation: Susceptibility and Countermeasures
1998-12-01
1995). "Early Visual Acuity Side Effects After Laser Retinal Photocoagulation in Diabetic Retinopathy," W.D. Kosnik, L. Marouf, and M. Myers...tests. The automatic positioner (AMPS) coupled with the automatic optical test system (PEATS) permits timely and consistent evaluation of candidate...Science and Engineering OR:S&C Optical Radiation: Susceptibility and Countermeasures OSADS Optical Signature, Acquisition, and Detection System
Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission
NASA Technical Reports Server (NTRS)
Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve
2009-01-01
In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.
A new generation scanning system for the high-speed analysis of nuclear emulsions
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Buonaura, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; Di Crescenzo, A.; Galati, G.; Lauria, A.; Montesi, M. C.; Tioukov, V.; Vladymyrov, M.
2016-06-01
The development of automatic scanning systems was a fundamental issue for large-scale neutrino detectors exploiting nuclear emulsions as particle trackers. Such systems significantly speed up event analysis in emulsion, making experiments with unprecedented statistics feasible. In the early 1990s, R&D programs were carried out by Japanese and European laboratories, leading to increasingly efficient automatic scanning systems. Recent progress in digital signal processing and image acquisition technology enables new systems with higher performance. In this paper we describe the design and performance of a new-generation scanning system able to operate at the record speed of 84 cm2/hour, based on the Large Angle Scanning System for OPERA (LASSO) software infrastructure developed by the Naples scanning group. This improvement reduces the scanning time by a factor of 4 with respect to available systems, allowing the readout of huge amounts of nuclear emulsion in reasonable time. This opens new perspectives for the employment of such detectors in a wider variety of applications.
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM produces robust and accurate segmentation results and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
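The Dice coefficient used for this evaluation is a standard overlap measure between two binary masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks:
    2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```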
NASA Astrophysics Data System (ADS)
Nagarajan, Sounderya; Pioche-Durieu, Catherine; Tizei, Luiz H. G.; Fang, Chia-Yi; Bertrand, Jean-Rémi; Le Cam, Eric; Chang, Huan-Cheng; Treussart, François; Kociak, Mathieu
2016-06-01
Light and Transmission Electron Microscopies (LM and TEM) hold potential in bioimaging owing to the advantages of fast imaging of multiple cells with LM and ultrastructure resolution offered by TEM. Integrated or correlated LM and TEM are the current approaches to combine the advantages of both techniques. Here we propose an alternative in which the electron beam of a scanning TEM (STEM) is used to excite concomitantly the luminescence of nanoparticle labels (a process known as cathodoluminescence, CL), and image the cell ultrastructure. This CL-STEM imaging allows obtaining luminescence spectra and imaging ultrastructure simultaneously. We present a proof of principle experiment, showing the potential of this technique in image cytometry of cell vesicular components. To label the vesicles we used fluorescent diamond nanocrystals (nanodiamonds, NDs) of size ~150 nm coated with different cationic polymers, known to trigger different internalization pathways. Each polymer was associated with a type of ND with a different emission spectrum. With CL-STEM, for each individual vesicle, we were able to measure (i) their size with nanometric resolution, (ii) their content in different ND labels, and realize intracellular component cytometry. In contrast to the recently reported organelle flow cytometry technique that requires cell sonication, CL-STEM-based image cytometry preserves the cell integrity and provides a much higher resolution in size. Although this novel approach is still limited by a low throughput, the automatization of data acquisition and image analysis, combined with improved intracellular targeting, should facilitate applications in cell biology at the subcellular level. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr01908k
Application of image recognition-based automatic hyphae detection in fungal keratitis.
Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi
2018-03-01
The purpose of this study is to evaluate the accuracy of two methods for diagnosing fungal keratitis: automatic hyphae detection based on image recognition, and corneal smear examination. We evaluate the sensitivity and specificity of automatic hyphae detection in the diagnosis of fungal keratitis, analyze the consistency between clinical symptoms and hyphal density, and quantify hyphal density with the same method. Our study included 56 cases of fungal keratitis (single eye only) and 23 cases of bacterial keratitis. All cases underwent routine slit lamp biomicroscopy, corneal smear examination, microorganism culture, and assessment of in vivo confocal microscopy images before starting medical treatment. We then applied automatic hyphae detection based on image recognition to the in vivo confocal microscopy images to evaluate its sensitivity and specificity and compare it with corneal smear examination. Next, we used a density index to assess the severity of infection, examined its correlation with the patients' clinical symptoms, and evaluated the consistency between them. The accuracy of automatic hyphae detection was superior to corneal smear examination (p < 0.05): its sensitivity was 89.29% and its specificity 95.65%, with an area under the ROC curve of 0.946. The correlation coefficient between severity grading by automatic hyphae detection and clinical grading was 0.87. Automatic hyphae detection based on image recognition thus identified fungal keratitis with high sensitivity and specificity, outperforming corneal smear examination. Compared with conventional manual reading of confocal microscope corneal images, it has the advantages of being accurate and stable and of not relying on human expertise, which makes it most useful to medical experts who are not familiar with fungal keratitis. The method can also quantify and grade hyphal density. Being noninvasive, it can provide an evaluation criterion for fungal keratitis in a timely, accurate, objective and quantitative manner.
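The reported sensitivity and specificity follow from the standard confusion-matrix definitions. The counts below (50 of 56 fungal cases detected, 22 of 23 bacterial cases correctly negative) are inferred as consistent with the reported percentages and case numbers, not stated in the source.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```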
Measurement of gastric meal and secretion volumes using magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Hoad, C. L.; Parker, H.; Hudders, N.; Costigan, C.; Cox, E. F.; Perkins, A. C.; Blackshaw, P. E.; Marciani, L.; Spiller, R. C.; Fox, M. R.; Gowland, P. A.
2015-02-01
MRI can assess multiple gastric functions without ionizing radiation. However, time-consuming image acquisition and analysis of gastric volume data, plus confounding of gastric emptying measurements by gastric secretions mixed with the test meal, have limited its use to research centres. This study presents an MRI acquisition protocol and analysis algorithm suitable for the clinical measurement of gastric volume and secretion volume. Reproducibility of gastric volume measurements was assessed using data from 10 healthy volunteers following a liquid test meal, with rapid MRI acquisition within one breath-hold and semi-automated analysis. Dilution of the ingested meal with gastric secretion was estimated using a respiratory-triggered T1 mapping protocol. Accuracy of the secretion volume measurements was assessed using data from 24 healthy volunteers following a mixed (liquid/solid) test meal, with MRI meal volumes compared to data acquired using gamma scintigraphy (GS) in the same subjects studied on a separate day. The mean ± SD coefficient of variance between 3 observers for both total gastric contents (including meal, secretions and air) and gastric contents alone (meal and secretion only) was 3 ± 2% at large gastric volumes (>200 ml). Mean ± SD secretion volumes post meal ingestion were 64 ± 51 ml and 110 ± 40 ml at 15 and 75 min, respectively. Comparison with GS showed that MRI meal-only volumes (after correction for secretion volume) were similar, with a linear regression gradient ± std err of 1.06 ± 0.10 and intercept -11 ± 24 ml. In conclusion, (i) rapid volume acquisition and respiratory-triggered T1 mapping removed the requirement to image during prolonged breath-holds, (ii) semi-automatic analysis greatly reduced the time required to derive measurements, and (iii) correction for secretion volumes provided accurate assessment of gastric meal volumes and emptying. Together these features provide the scientific basis of a protocol that would be suitable for clinical practice.
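The secretion correction relies on dilution shifting the T1 of the gastric contents. One common simplification, used here as an assumption rather than the paper's exact calibration, is that relaxation rates (1/T1) mix linearly with volume fraction (fast-exchange two-compartment model):

```python
def secretion_volume(total_ml, t1_mix, t1_meal, t1_secretion):
    """Secretion volume from the measured T1 of the mixed gastric
    contents, assuming R1 = 1/T1 mixes linearly with volume fraction."""
    r_mix, r_meal, r_sec = 1 / t1_mix, 1 / t1_meal, 1 / t1_secretion
    f_meal = (r_mix - r_sec) / (r_meal - r_sec)
    f_meal = min(max(f_meal, 0.0), 1.0)   # clamp to the physical range
    return total_ml * (1.0 - f_meal)
```

Pure-meal and pure-secretion T1 values bracket the measurement: a mixed T1 equal to the meal T1 implies no secretion, while one equal to the secretion T1 implies the contents are all secretion.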
Automatic color preference correction for color reproduction
NASA Astrophysics Data System (ADS)
Tsukada, Masato; Funayama, Chisato; Tajima, Johji
2000-12-01
The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one method available to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.
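The correction step, pulling pixels near a category's representative color toward a preferred color, might be sketched as below. The prototype/preferred colors, radius, and strength are illustrative assumptions, and a linear RGB falloff stands in for the paper's parameter selection.

```python
import numpy as np

def correct_category(img, proto, preferred, radius=40.0, strength=0.7):
    """Shift pixels whose RGB lies within `radius` of the category
    prototype toward the preferred colour; the weight falls off linearly
    with distance so corrected and untouched regions blend smoothly."""
    img = img.astype(float)
    dist = np.linalg.norm(img - proto, axis=-1, keepdims=True)
    w = strength * np.clip(1.0 - dist / radius, 0.0, 1.0)
    out = img + w * (np.asarray(preferred, float) - img)
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a full system this would run once per category (skin, grass, sky) with its own representative color and correction target.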
Large-scale Rectangular Ruler Automated Verification Device
NASA Astrophysics Data System (ADS)
Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie
2018-03-01
This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design covers the optical axis, the drive unit, the fixture device, and the wheels. The control system design covers hardware and software: the hardware is based on a single-chip microcontroller, and the software implements the photoelectric autocollimation and automatic data acquisition processes. The device can automatically acquire verticality measurement data. Its reliability was verified by experimental comparison, and the results meet the requirements of the right angle test procedure.
Knowledge acquisition to qualify Unified Medical Language System interconceptual relationships.
Le Duff, F.; Burgun, A.; Cleret, M.; Pouliquen, B.; Barac'h, V.; Le Beux, P.
2000-01-01
Automatically adding relations between concepts from a database to a knowledge base such as the Unified Medical Language System (UMLS) can be very useful to increase the consistency of the latter, and the transfer of qualified relationships is even more valuable: these new acquisitions make the UMLS more consistent and medically pertinent for use in different medical applications. This paper describes how medical inter-conceptual relationship qualifiers can be automatically inherited from a disease description included in a database and integrated into the UMLS knowledge base. The paper focuses on the transmission of knowledge from a French medical database to an English one. PMID:11079930
Automatic segmentation of MR brain images of preterm infants using supervised classification.
Moeskops, Pim; Benders, Manon J N L; Chiţ, Sabina M; Kersbergen, Karina J; Groenendaal, Floris; de Vries, Linda S; Viergever, Max A; Išgum, Ivana
2015-09-01
Preterm birth is often associated with impaired brain development. The state and expected progression of preterm brain development can be evaluated using quantitative assessment of MR images. Such measurements require accurate segmentation of different tissue types in those images. This paper presents an algorithm for the automatic segmentation of unmyelinated white matter (WM), cortical grey matter (GM), and cerebrospinal fluid in the extracerebral space (CSF). The algorithm uses supervised voxel classification in three subsequent stages. In the first stage, voxels that can easily be assigned to one of the three tissue types are labelled. In the second stage, dedicated analysis of the remaining voxels is performed. The first and the second stages both use two-class classification for each tissue type separately. Possible inconsistencies that could result from these tissue-specific segmentation stages are resolved in the third stage, which performs multi-class classification. A set of T1- and T2-weighted images was analysed, but the optimised system performs automatic segmentation using a T2-weighted image only. We have investigated the performance of the algorithm when using training data randomly selected from completely annotated images as well as when using training data from only partially annotated images. The method was evaluated on images of preterm infants acquired at 30 and 40 weeks postmenstrual age (PMA). When the method was trained using random selection from the completely annotated images, the average Dice coefficients were 0.95 for WM, 0.81 for GM, and 0.89 for CSF on an independent set of images acquired at 30 weeks PMA. When the method was trained using only the partially annotated images, the average Dice coefficients were 0.95 for WM, 0.78 for GM and 0.87 for CSF for the images acquired at 30 weeks PMA, and 0.92 for WM, 0.80 for GM and 0.85 for CSF for the images acquired at 40 weeks PMA. Even though the segmentations obtained using training data from the partially annotated images resulted in slightly lower Dice coefficients, the performance in all experiments was close to that of a second human expert (0.93 for WM, 0.79 for GM and 0.86 for CSF for the images acquired at 30 weeks, and 0.94 for WM, 0.76 for GM and 0.87 for CSF for the images acquired at 40 weeks). These results show that the presented method is robust to age and acquisition protocol and that it performs accurate segmentation of WM, GM, and CSF when the training data is extracted from complete annotations as well as when the training data is extracted from partial annotations only. This extends the applicability of the method by reducing the time and effort necessary to create training data in a population with different characteristics. Copyright © 2015 Elsevier Inc. All rights reserved.
Enhancement of breast periphery region in digital mammography
NASA Astrophysics Data System (ADS)
Menegatti Pavan, Ana Luiza; Vacavant, Antoine; Petean Trindade, Andre; Quini, Caio Cesar; Rodrigues de Pina, Diana
2018-03-01
Volumetric breast density has been shown to be one of the strongest risk factors for breast cancer diagnosis. This metric can be estimated using digital mammograms. During mammography acquisition, the breast is compressed and part of it loses contact with the paddle, resulting in an uncompressed region in the periphery with thickness variation. Therefore, reliable density estimation in the breast periphery region is a problem, which affects the accuracy of volumetric breast density measurement. The aim of this study was to enhance the breast periphery to solve the problem of thickness variation. Herein, we present an automatic algorithm to correct breast periphery thickness without changing pixel values in the internal breast region. The correction of pixel values in the periphery was based on mean values over iso-distance lines from the breast skin-line, using only adipose tissue information. The algorithm automatically detects the periphery region where thickness should be corrected. A correction factor was applied to the breast periphery image to enhance the region. We also compare our contribution with two other state-of-the-art algorithms, and we show its accuracy by means of different quality measures. Experienced radiologists subjectively evaluated the resulting images from the three methods in relation to the original mammogram. The mean pixel value, skewness and kurtosis from the histograms of the three methods were used as comparison metrics. As a result, the methodology presented herein proved to be a good approach to apply before calculating volumetric breast density.
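The iso-distance correction described above can be illustrated with a short sketch: pixels are binned by their distance from the skin line and each band is rescaled so its mean matches the fully compressed interior. The function name, band width and scaling rule are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def correct_periphery(image, breast_mask, periphery_width=20, band=2):
    """Rescale iso-distance bands near the skin line so each band's mean
    matches the mean of the fully compressed interior region.
    Hypothetical sketch; parameters and banding scheme are illustrative."""
    dist = distance_transform_edt(breast_mask)   # distance to the skin line
    interior = dist > periphery_width            # fully compressed region
    ref_mean = image[interior].mean()
    out = image.astype(float).copy()
    for d0 in range(0, periphery_width, band):
        sel = breast_mask & (dist > d0) & (dist <= d0 + band)
        m = out[sel].mean() if sel.any() else 0.0
        if m > 0:
            out[sel] *= ref_mean / m             # brighten the thin periphery
    return out
```

Only pixels inside the detected periphery are touched, mirroring the paper's constraint that internal breast pixel values remain unchanged.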
Zeng, Jinle; Chang, Baohua; Du, Dong; Wang, Li; Chang, Shuhe; Peng, Guodong; Wang, Wenzhu
2018-01-05
Multi-layer/multi-pass welding (MLMPW) technology is widely used in the energy industry to join thick components. During automatic welding using robots or other actuators, it is very important to recognize the actual weld pass position using visual methods, which can then be used not only to perform reasonable path planning for actuators, but also to correct any deviations between the welding torch and the weld pass position in real time. However, due to the small geometrical differences between adjacent weld passes, existing weld position recognition technologies such as structured light methods are not suitable for weld position detection in MLMPW. This paper proposes a novel method for weld position detection, which fuses various kinds of information in MLMPW. First, a synchronous acquisition method is developed to obtain various kinds of visual information when directional light and structured light sources are on, respectively. Then, interferences are eliminated by fusing adjacent images. Finally, the information from directional and structured light images is fused to obtain the 3D positions of the weld passes. Experimental results show that each process can be done in 30 ms and the deviation is less than 0.6 mm. The proposed method can be used for automatic path planning and seam tracking in the robotic MLMPW process as well as the electron beam freeform fabrication process.
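The abstract does not specify the exact fusion rule used to eliminate interference; a per-pixel temporal median over adjacent frames is one common way to suppress transients such as spatter and arc flashes that appear in only one frame, and is sketched here as an assumption:

```python
import numpy as np

def fuse_adjacent(frames):
    """Per-pixel median across adjacent frames of the same scene.
    A transient bright interference present in a minority of the frames
    is rejected, while the stable weld-pass structure is preserved."""
    stack = np.stack(frames, axis=0)
    return np.median(stack, axis=0)

# three frames of the same scene; frame 1 carries a bright spatter spike
f0 = np.full((4, 4), 10.0)
f1 = f0.copy(); f1[2, 2] = 255.0
f2 = f0.copy()
fused = fuse_adjacent([f0, f1, f2])
```

With three or more synchronised frames per light source, this kind of fusion is cheap enough to fit in the 30 ms per-process budget reported above.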
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
Partanen, Eino; Leminen, Alina; de Paoli, Stine; Bundgaard, Anette; Kingo, Osman Skjold; Krøjgaard, Peter; Shtyrov, Yury
2017-07-15
Children learn new words and word forms with ease, often acquiring a new word after very few repetitions. Recent neurophysiological research on word form acquisition in adults indicates that novel words can be acquired within minutes of repetitive exposure to them, regardless of the individual's focused attention on the speech input. Although it is well-known that children surpass adults in language acquisition, the developmental aspects of such rapid and automatic neural acquisition mechanisms remain unexplored. To address this open question, we used magnetoencephalography (MEG) to scrutinise brain dynamics elicited by spoken words and word-like sounds in healthy monolingual (Danish) children throughout a 20-min repetitive passive exposure session. We found rapid neural dynamics manifested as an enhancement of early (~100 ms) brain activity over the short exposure session, with distinct spatiotemporal patterns for different novel sounds. For novel Danish word forms, signs of such enhancement were seen in the left temporal regions only, suggesting reliance on pre-existing language circuits for acquisition of novel word forms with native phonology. In contrast, exposure both to novel word forms with non-native phonology and to novel non-speech sounds led to activity enhancement in both left and right hemispheres, suggesting that more wide-spread cortical networks contribute to the build-up of memory traces for non-native and non-speech sounds. Similar studies in adults have previously reported more sluggish (~15-25 min, as opposed to 4 min in the present study) or non-existent neural dynamics for non-native sound acquisition, which might be indicative of a higher degree of plasticity in the children's brain. Overall, the results indicate a rapid and highly plastic mechanism for a dynamic build-up of memory traces for novel acoustic information in the children's brain that operates automatically and recruits bilateral temporal cortical circuits.
Copyright © 2017 Elsevier Inc. All rights reserved.
Research-oriented image registry for multimodal image integration.
Tanaka, M; Sadato, N; Ishimori, Y; Yonekura, Y; Yamashita, Y; Komuro, H; Hayahsi, N; Ishii, Y
1998-01-01
To provide multimodal biomedical images automatically, we constructed a research-oriented image registry, the Data Delivery System (DDS). DDS was constructed on the campus local area network. Machines which generate images (imagers: DSA, ultrasound, PET, MRI, SPECT and CT) were connected to the campus LAN. Once a patient is registered, all of that patient's images are automatically picked up by DDS as they are generated, transferred through the gateway server to the intermediate server, and copied into the directory of the user who registered the patient. DDS informs the user through e-mail that new data have been generated and transferred. The data format is automatically converted into one chosen by the user. Data inactive for a certain period in the intermediate server are automatically archived into the final and permanent data server based on compact disk. As a soft link is automatically generated through this step, a user has access to all (old or new) image data of the patient of interest. As DDS runs with minimal maintenance, cost and time for data transfer are significantly saved. By making the complex process of data transfer and conversion invisible, DDS has made it easy for computer-naive researchers to concentrate on their biomedical interest.
Indicator Species Population Monitoring in Antarctica with UAV
NASA Astrophysics Data System (ADS)
Zmarz, A.; Korczak-Abshire, M.; Storvold, R.; Rodzewicz, M.; Kędzierska, I.
2015-08-01
A program to monitor bird and pinniped species in the vicinity of Arctowski Station, King George Island, South Shetlands, Antarctica, has been conducted over the past 38 years. Annual monitoring of these indicator species includes estimations of breeding population sizes of three Pygoscelis penguin species: Adélie, gentoo and chinstrap. Six penguin colonies situated on the western shores of two bays, Admiralty and King George, are investigated. To study changes in penguin populations, unmanned aerial vehicles (UAVs) were used for the first time in the 2014/15 austral summer season. During photogrammetric flights, high-resolution images of eight penguin breeding colonies were taken. The obtained high-resolution images were used for estimation of breeding population size and compared with the results of measurements taken at the same time from the ground. During this Antarctic expedition, eight successful photogrammetry missions (total distance 1500 km) were performed. Images were taken with digital SLR Canon 700D, Nikon D5300 and Nikon D5100 cameras with a 35 mm objective lens. A flight altitude of 350-400 m AGL allowed images to be taken with a ground sample distance (GSD) of less than 5 cm. The ImageJ software analysis method was tested to provide automatic population estimates from the obtained images. The use of UAVs for monitoring indicator species enabled data acquisition from areas inaccessible by ground methods.
A Novel Image Acquisition and Processing Procedure for Fast Tunnel DSM Production
NASA Astrophysics Data System (ADS)
Roncella, R.; Umili, G.; Forlani, G.
2012-07-01
In mining operations the evaluation of the stability condition of the excavated front is critical to ensure a safe and correct planning of the subsequent activities. The procedure currently used for this purpose has some shortcomings: safety for the geologist, completeness of data collection and objective documentation of the results. In the last decade it has been shown that the geostructural parameters necessary for the stability analysis can be derived from high resolution digital surface models (DSM) of rock faces. With the objective to overcome the limitations of the traditional survey and to minimize data capture times, thus reducing delays in mining site operations, a photogrammetric system to generate high resolution DSMs of tunnels has been realized. A fast, effective and complete data capture method has been developed, and the orientation and restitution phases have been largely automated. The survey operations take no longer than the traditional ones; no additional topographic measurements other than those already available are required. To make the data processing fast and economic, our Structure from Motion procedure has been slightly modified to adapt to the peculiar block geometry, while the DSM of the tunnel is created using automatic image correlation techniques. The geomechanical data are sampled on the DSM by using the acquired images in a GUI and a segmentation procedure to select discontinuity planes. An orthophoto of the tunnel is also produced, again using an automatic procedure, to allow an easier and faster identification of relevant features of the tunnel surface. A case study in which a tunnel section of ca. 130 m has been surveyed is presented.
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes prohibitively long. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed by adding a correction step on the edge points. The improved algorithm can greatly enhance the calculation accuracy. In the improved method, an iterative manner is applied. In the first iteration, the affinity computation strategy is changed and a look-up table is employed for memory reduction. In the second iteration, voxels mislabelled because of asynchronism are updated again. Three hepatic vascular CT sequences of different sizes were used in the experiments, with three different seeds. An NVIDIA Tesla C2075 was used to evaluate our improved method over these three data sets. Experimental results show that the improved algorithm achieves a faster segmentation compared to the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that it corrects the edge point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors compared to the original CUDA-kFOE, as demonstrated in the experimental results. In the future, we will focus on automatic acquisition methods and automatic processing.
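Fuzzy connectedness rests on a local affinity between neighbouring voxels and a weakest-link path strength that is propagated from seed voxels. A minimal sketch of these two ingredients (the Gaussian intensity affinity below is one standard choice; the actual affinity computation in CUDA-kFOE may differ):

```python
import numpy as np

def affinity(i_c, i_d, sigma=10.0):
    """Gaussian intensity affinity between two adjacent voxels:
    similar intensities give affinity near 1, dissimilar near 0."""
    return float(np.exp(-((i_c - i_d) ** 2) / (2.0 * sigma ** 2)))

def path_strength(intensities, sigma=10.0):
    """Connectedness of a path is its weakest link: the minimum
    affinity over consecutive voxel pairs along the path."""
    pairs = zip(intensities[:-1], intensities[1:])
    return min(affinity(a, b, sigma) for a, b in pairs)
```

An object's fuzzy connectedness to a seed is the maximum of `path_strength` over all paths, which is exactly the quantity the GPU blocks must agree on at their shared edges; a block boundary that drops a strong path explains the edge-point miscalculation the correction step fixes.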
Feature detection on 3D images of dental imprints
NASA Astrophysics Data System (ADS)
Mokhtari, Marielle; Laurendeau, Denis
1994-09-01
A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
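The multi-scale tracking of local minima can be sketched as follows; the window sizes are hypothetical stand-ins for the four scales mentioned above, and keeping only minima that persist across all scales is one plausible reading of the final map:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def local_minima(depth_map, size):
    """Boolean mask of pixels that are the minimum of their size x size window."""
    return depth_map == minimum_filter(depth_map, size=size)

def multiscale_minima(depth_map, sizes=(3, 5, 7, 9)):
    """Keep only minima that persist across all scales (cf. the four-scale
    tracking described in the abstract; window sizes are illustrative)."""
    mask = np.ones_like(depth_map, dtype=bool)
    for s in sizes:
        mask &= local_minima(depth_map, s)
    return mask
```

Requiring persistence across scales suppresses spurious shallow minima from acquisition noise while retaining deep features such as cusps and fissure points.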
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Zam, Azhar; Jian, Yifan; Wang, Xinlei; Burns, Marie E.; Sarunic, Marinko V.; Pugh, Edward N.; Zawadzki, Robert J.
2015-03-01
A compact, non-invasive multi-modal system has been developed for in vivo mouse retina imaging. It is configured for simultaneously detecting green and red fluorescent protein signals with scanning laser ophthalmoscopy (SLO), back-scattered light from the SLO illumination beam, and depth information about different retinal layers by means of Optical Coherence Tomography (OCT). Simultaneous assessment of retinal characteristics with different modalities can provide a wealth of information about the structural and functional changes in the retinal neural tissue and chorio-retinal vasculature in vivo. Additionally, simultaneous acquisition of multiple channels facilitates analysis of the data of different modalities by automatic temporal and structural co-registration. As an example of the instrument's performance we imaged the retina of a mouse with constitutive expression of GFP in microglia cells (Cx3cr1GFP/+), and which also expressed the red fluorescent protein mCherry in Müller glial cells by means of adeno-associated virus delivery (AAV2) of an mCherry cDNA driven by the GFAP (glial fibrillary acid protein) promoter.
Visual stimulus presentation using fiber optics in the MRI scanner.
Huang, Ruey-Song; Sereno, Martin I
2008-03-30
Imaging the neural basis of visuomotor actions using fMRI is a topic of increasing interest in the field of cognitive neuroscience. One challenge is to present realistic three-dimensional (3-D) stimuli in the subject's peripersonal space inside the MRI scanner. The stimulus generating apparatus must be compatible with strong magnetic fields and must not interfere with image acquisition. Virtual 3-D stimuli can be generated with a stereo image pair projected onto screens or via binocular goggles. Here, we describe designs and implementations for automatically presenting physical 3-D stimuli (point-light targets) in peripersonal and near-face space using fiber optics in the MRI scanner. The feasibility of fiber-optic based displays was demonstrated in two experiments. The first presented a point-light array along a slanted surface near the body, and the second presented multiple point-light targets around the face. Stimuli were presented using phase-encoded paradigms in both experiments. The results suggest that fiber-optic based displays can be a complementary approach for visual stimulus presentation in the MRI scanner.
An Open Source Low-Cost Automatic System for Image-Based 3D Digitization
NASA Astrophysics Data System (ADS)
Menna, F.; Nocerino, E.; Morabito, D.; Farella, E. M.; Perini, M.; Remondino, F.
2017-11-01
3D digitization of heritage artefacts, reverse engineering of industrial components or rapid prototyping-driven design are key topics today. Indeed, millions of archaeological finds all over the world need to be surveyed in 3D either to allow convenient investigations by researchers or because they are inaccessible to visitors and scientists or, unfortunately, because they are seriously endangered by wars and terrorist attacks. On the other hand, in the case of industrial and design components there is often the need for deformation analyses or physical replicas starting from reality-based 3D digitisations. The paper is aligned with these needs and presents the realization of the ORION (arduinO Raspberry pI rOtating table for image based 3D reconstructioN) prototype system, with its hardware and software components, providing critical insights about its modular design. ORION is an image-based 3D reconstruction system based on automated photogrammetric acquisitions and processing. The system is being developed under a collaborative educational project between FBK Trento, the University of Trento and internship programs with high schools in the Trentino province (Italy).
NASA Astrophysics Data System (ADS)
Yan, Yue
2018-03-01
A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on convolutional neural networks (CNN) trained by augmented training samples is proposed. To enhance the robustness of the CNN to various extended operating conditions (EOCs), the original training images are used to generate noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. Then, the generated images together with the original ones are used to train a designed CNN for target recognition. The augmented training samples can correspondingly improve the robustness of the trained CNN to the covered EOCs, i.e., noise corruption, resolution variance, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For experimental evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
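The augmentation strategy described above can be sketched as follows. The SNR values and occlusion patch geometry are illustrative, and additive Gaussian noise is a simplification of the paper's SAR noise model:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise_at_snr(image, snr_db):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), image.shape)
    return image + noise

def occlude(image, top, left, height, width):
    """Zero out a rectangular patch to simulate partial occlusion."""
    out = image.copy()
    out[top:top + height, left:left + width] = 0.0
    return out

def augment(image, snrs_db=(0, 5, 10), patches=((0, 0, 16, 16),)):
    """Original chip plus noisy and occluded variants for CNN training."""
    variants = [image]
    variants += [add_noise_at_snr(image, s) for s in snrs_db]
    variants += [occlude(image, *p) for p in patches]
    return variants
```

Each target chip thus yields several training samples, which is how the "significantly larger training set" mentioned above is obtained without new acquisitions.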
Smartphone-based analysis of biochemical tests for health monitoring support at home.
Velikova, Marina; Smeets, Ruben L; van Scheltinga, Josien Terwisscha; Lucas, Peter J F; Spaanderman, Marc
2014-09-01
In the context of home-based healthcare monitoring systems, it is desirable that the results obtained from biochemical tests - tests of various body fluids such as blood and urine - are objective and automatically generated to reduce the number of man-made errors. The authors present the StripTest reader - an innovative smartphone-based interpreter of biochemical tests based on paper-based strip colour using image processing techniques. The working principles of the reader include image acquisition of the colour strip pads using the camera phone, analysing the images within the phone and comparing them with reference colours provided by the manufacturer to obtain the test result. The detection of kidney damage was used as a scenario to illustrate the application of, and test, the StripTest reader. An extensive evaluation using laboratory and human urine samples demonstrates the reader's accuracy and precision of detection, indicating the successful development of a cheap, mobile and smart reader for home-monitoring of kidney functioning, which can facilitate the early detection of health problems and a timely treatment intervention.
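Comparing a pad's colour with the manufacturer's reference colours amounts to a nearest-colour search. A sketch with a hypothetical reference chart and a plain RGB Euclidean distance (the actual chart values and colour space used by the StripTest reader are not stated in the abstract):

```python
import numpy as np

# hypothetical manufacturer reference chart: result category -> mean RGB
REFERENCE = {
    "negative": (250, 240, 150),
    "trace":    (230, 210, 120),
    "high":     (140, 100, 160),
}

def read_pad(mean_rgb):
    """Assign the strip pad to the reference colour with the smallest
    Euclidean distance in RGB space."""
    rgb = np.asarray(mean_rgb, float)
    return min(REFERENCE,
               key=lambda k: np.linalg.norm(rgb - np.asarray(REFERENCE[k], float)))
```

In practice the mean RGB of each pad would be extracted from the camera image after locating the strip, and illumination correction (e.g. against a white patch on the strip) would precede the comparison.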
Using flatbed scanners in the undergraduate optics laboratory—An example of frugal science
NASA Astrophysics Data System (ADS)
Koopman, Thomas; Gopal, Venkatesh
2017-05-01
We describe the use of a low-cost commercial flatbed scanner in the undergraduate teaching laboratory to image large (˜25 cm) interference and diffraction patterns in two dimensions. Such scanners usually have an 8-bit linear photosensor array that can scan large areas (˜28 cm × 22 cm) at very high spatial resolutions (≥100 Megapixels), which makes them versatile large-format imaging devices. We describe how the scanner can be used to image interference and diffraction from rectangular single-slit, double-slit, and circular apertures. The experiments are very simple to set up and require no specialized components besides a small laser and a flatbed scanner. Due to the presence of automatic gain control in the scanner, which we were not able to override, we were unable to get an excellent fit to the data. Interestingly, we found that the less-than-ideal data were actually pedagogically superior, as they forced the students to think about the process of data acquisition in much greater detail instead of simply performing the experiment mechanically.
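The single-slit patterns imaged this way follow the Fraunhofer sinc-squared envelope, which students can compare against the scanned intensity profile. A sketch (the slit width, wavelength and slit-to-scanner distance are example values, not those of the paper):

```python
import numpy as np

def single_slit_intensity(x, slit_width, wavelength, distance):
    """Fraunhofer single-slit intensity, normalised to 1 at the centre.
    x is the position on the scanner bed (m) from the central maximum."""
    beta = np.pi * slit_width * x / (wavelength * distance)
    return np.sinc(beta / np.pi) ** 2   # np.sinc(t) = sin(pi*t)/(pi*t)

# example profile across ±10 cm of the scan for a 100 µm slit, 650 nm laser, 1 m away
x = np.linspace(-0.1, 0.1, 2001)
I = single_slit_intensity(x, slit_width=100e-6, wavelength=650e-9, distance=1.0)
```

Minima fall at x = m·λ·L/a (6.5 mm spacing for these values), so a pattern spanning the full ˜25 cm bed contains many orders, which is exactly what makes the large scan area useful.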
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques on cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting the coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%.
Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of the myocardial ischemia.
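The ROC analysis used above reduces to the Mann-Whitney statistic: the AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (function name illustrative):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs in which the positive case scores higher; ties count as one half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Applied to a heterogeneity index per patient against the PCI gold standard, this yields the reported AUC of 0.82; sensitivity and specificity then follow from choosing an operating threshold on the same scores.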
The image acquisition system design of floor grinder
NASA Astrophysics Data System (ADS)
Wang, Yang-jiang; Liu, Wei; Liu, Hui-qin
2018-01-01
A real-time, high-resolution image acquisition system based on a linear CCD was designed for a floor grinder through calculation of the optical imaging system. The entire image acquisition system can collect images of the ground before and after the work of the floor grinder, and the data are transmitted through a Bluetooth system to a computer and compared, realizing real-time monitoring of the grinder's working condition. The system provides technical support for the design of unmanned floor grinders.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-22
... (see FAR 25.408). II. Discussion and Analysis This interim rule adds Colombia to the definition of...,000 for construction. Because the Colombia FTA construction threshold of $7,777,000 is the same as the... states that acquisitions that do not exceed $150,000 (with some exceptions) are automatically reserved...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jutras, Jean-David
MRI-only Radiation Treatment Planning (RTP) is becoming increasingly popular because of a simplified work-flow, and less inconvenience to the patient who avoids multiple scans. The advantages of MRI-based RTP over traditional CT-based RTP lie in its superior soft-tissue contrast, and absence of ionizing radiation dose. The lack of electron-density information in MRI can be addressed by automatic tissue classification. To distinguish bone from air, which both appear dark in MRI, an ultra-short echo time (UTE) pulse sequence may be used. Quantitative MRI parametric maps can provide improved tissue segmentation/classification and better sensitivity in monitoring disease progression and treatment outcome than standard weighted images. Superior tumor contrast can be achieved on pure T1 images compared to conventional T1-weighted images acquired in the same scan duration and voxel resolution. In this study, we have developed a robust and fast quantitative MRI acquisition and post-processing work-flow that integrates these latest advances into the MRI-based RTP of brain lesions. Using 3D multi-echo FLASH images at two different optimized flip angles (both acquired in under 9 min, and 1 mm isotropic resolution), parametric maps of T1, proton-density (M0), and T2* are obtained with high contrast-to-noise ratio, and negligible geometrical distortions, water-fat shifts and susceptibility effects. An additional 3D UTE MRI dataset is acquired (in under 4 min) and post-processed to classify tissues for dose simulation. The pipeline was tested on four healthy volunteers and a clinical trial on brain cancer patients is underway.
ERIC Educational Resources Information Center
Sauval, Karinne; Perre, Laetitia; Casalis, Séverine
2017-01-01
The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…
ERIC Educational Resources Information Center
Kandel, Sonia; Perret, Cyril
2015-01-01
Learning how to write involves the automation of grapho-motor skills. One of the factors that determine automaticity is "motor anticipation." This is the ability to write a letter while processing information on how to produce following letters. It is essential for writing fast and smoothly. We investigated how motor anticipation…
Improvements in Speed and Functionality of a 670-GHz Imaging Radar
NASA Technical Reports Server (NTRS)
Dengler, Robert J.; Cooper, Ken B.; Mehdi, Imran; Siegel, Peter H.; Tarsala, Jan A.; Bryllert, Thomas E.
2011-01-01
Significant improvements have been made in the instrument originally described in a prior NASA Tech Briefs article: Improved Speed and Functionality of a 580-GHz Imaging Radar (NPO-45156), Vol. 34, No. 7 (July 2010), p. 51. First, the wideband YIG oscillator has been replaced with a JPL-designed and built phase-locked, low-noise chirp source. Second, further refinements to the data acquisition and signal processing software have been performed by moving critical code sections to C code, and compiling those sections to Windows DLLs, which are then invoked from the main LabVIEW executive. This system is an active, single-pixel scanned imager operating at 670 GHz. The actual chirp signals for the RF and LO chains were generated by a pair of MITEQ 2.5-3.3 GHz chirp sources. Agilent benchtop synthesizers operating at fixed frequencies around 13 GHz were then used to up-convert the chirp sources to 15.5-16.3 GHz. The resulting signals were then multiplied 36 times in frequency by a combination of off-the-shelf millimeter-wave components, and JPL-built 200-GHz doublers and 300- and 600-GHz triplers. The power required to drive the submillimeter-wave multipliers was provided by JPL-built W-band amplifiers. The receive and transmit signal paths were combined using a thin, high-resistivity silicon wafer as a beam splitter. While the results at present are encouraging, the system still lacks sufficient speed to be usable for practical applications in contraband detection. Ideally, an image acquisition speed of ten seconds, or a factor of 30 improvement, is desired. However, the system improvements to date have resulted in a factor of five increase in signal acquisition speed, as well as enhanced signal processing algorithms, permitting clearer imaging of contraband objects hidden underneath clothing. In particular, advances in three distinct areas have enabled these performance enhancements: base source phase noise reduction, chirp rate, and signal processing.
Additionally, a second pixel was added, automatically reducing the imaging time by a factor of two. Although adding a second pixel to the system doubles the amount of submillimeter components required, some savings in microwave hardware can be realized by using a common low-noise source.
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
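Iterative reconstruction treats the acquisition as a linear system Ax = b of projection equations and refines the voxel estimate repeatedly, which is why it costs more time than FBP. A toy Kaczmarz (ART) sketch on a four-voxel "object" illustrates the idea (this is a generic textbook scheme, not the authors' self-developed algorithm):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """ART/Kaczmarz iteration: project the current estimate onto each
    projection equation a_i . x = b_i in turn; converges for consistent systems."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# toy "object" of four voxels measured by row, column and diagonal sums
A = np.array([[1., 1., 0., 0.],   # top row sum
              [0., 0., 1., 1.],   # bottom row sum
              [1., 0., 1., 0.],   # left column sum
              [0., 1., 0., 1.],   # right column sum
              [1., 0., 0., 1.]])  # main diagonal sum
true_x = np.array([1., 2., 3., 4.])
b = A @ true_x                    # simulated projection data
x = kaczmarz(A, b)
```

The same structure scales to real SPECT data, where A encodes the scanner geometry and each sweep over all projections is one iteration, making the method a natural fit for a remote compute service like the one described.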
Automatic Identification and Quantification of Extra-Well Fluorescence in Microarray Images.
Rivera, Robert; Wang, Jie; Yu, Xiaobo; Demirkan, Gokhan; Hopper, Marika; Bian, Xiaofang; Tahsin, Tasnia; Magee, D Mitchell; Qiu, Ji; LaBaer, Joshua; Wallstrom, Garrick
2017-11-03
In recent studies involving NAPPA microarrays, extra-well fluorescence is used as a key measure for identifying disease biomarkers because there is evidence to support that it is better correlated with strong antibody responses than statistical analysis involving intraspot intensity. Because this feature is not well quantified by traditional image analysis software, identification and quantification of extra-well fluorescence is performed manually, which is both time-consuming and highly susceptible to variation between raters. A system that could automate this task efficiently and effectively would greatly improve the process of data acquisition in microarray studies, thereby accelerating the discovery of disease biomarkers. In this study, we experimented with different machine learning methods, as well as novel heuristics, for identifying spots exhibiting extra-well fluorescence (rings) in microarray images and assigning each ring a grade of 1-5 based on its intensity and morphology. The sensitivity of our final system for identifying rings was found to be 72% at 99% specificity and 98% at 92% specificity. Our system performs this task significantly faster than a human, while maintaining high performance, and therefore represents a valuable tool for microarray image analysis.
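A simple heuristic in the spirit of the ring detection described above (not the authors' actual feature set) is to compare the mean intensity of an annulus just outside the spot against the local background; the function name, radii, and intensities below are illustrative.

```python
import numpy as np

def ring_score(img, cy, cx, r_spot, r_out):
    """Score extra-well fluorescence: mean intensity in the annulus just
    outside the spot, relative to a local background estimate."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    d = np.hypot(yy - cy, xx - cx)
    annulus = (d >= r_spot) & (d < r_out)            # just outside the well
    background = (d >= r_out) & (d < 2 * r_out)      # surrounding background
    return img[annulus].mean() / (img[background].mean() + 1e-9)

# Synthetic spot with a bright ring of extra-well fluorescence around it
img = np.full((64, 64), 10.0)
yy, xx = np.mgrid[:64, :64]
d = np.hypot(yy - 32, xx - 32)
img[d < 5] = 100.0                # the spot itself
img[(d >= 5) & (d < 9)] = 60.0    # extra-well "ring"
score = ring_score(img, 32, 32, 5, 9)
```

A score well above 1 flags a candidate ring; intensity and morphology features like this could then feed the 1-5 grading step.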
Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe
2013-08-01
Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transient estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
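The model-fitting idea can be sketched with SciPy's nonlinear least squares. The mono-exponential transient and the parameter names below are illustrative assumptions, not the paper's exact model or noise description.

```python
import numpy as np
from scipy.optimize import curve_fit

def transient(t, background, amplitude, tau):
    """Toy fluorescence model: constant background plus a decaying transient."""
    return background + amplitude * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 200)
true = (50.0, 20.0, 1.2)                     # background, amplitude, tau
noisy = transient(t, *true) + rng.normal(0, 0.5, t.size)

# Fit the model to a single noisy trial; pcov gives parameter uncertainties
popt, pcov = curve_fit(transient, t, noisy, p0=(40.0, 10.0, 1.0))
background_hat = popt[0]
# Background-subtracted, normalised transient (dF/F0)
dff = (noisy - background_hat) / background_hat
```

The fitted background replaces the independent background measurement, which is the key point of the paper's approach.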
[State of the art and future trends in technology for computed tomography dose reduction].
Calzado Cantera, A; Hernández-Girón, I; Salvadó Artells, M; Rodríguez González, R
2013-12-01
The introduction of helical and multislice acquisition in CT scanners, together with decreased image reconstruction times, has had a tremendous impact on radiological practice. Technological developments in the last 10 to 12 years have enabled very high quality images to be obtained in a very short time. Improved image quality has led to an increase in the number of indications for CT. In parallel to this development, radiation exposure in patients has increased considerably. Concern about the potential health risks posed by CT imaging, reflected in diverse initiatives and actions by official bodies and scientific societies, has prompted the search for ways to reduce radiation exposure in patients without compromising diagnostic efficacy. To this end, good practice guidelines have been established, special applications have been developed for scanners, and research has been undertaken to optimize the clinical use of CT. Noteworthy technical developments incorporated in scanners include the different modes of X-ray tube current modulation, automatic selection of voltage settings, selective organ protection, adaptive collimation, and iterative reconstruction. The appropriate use of these tools to reduce radiation doses requires thorough knowledge of how they work. Copyright © 2013 SERAM. Published by Elsevier Espana. All rights reserved.
Automatic tissue image segmentation based on image processing and deep learning
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for quantitative visualization of treatment light distribution in the human body when incorporated with 3D light transport simulation methods. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic deep-learning-based segmentation of the images, and introduced parallel computing. These approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as a criterion when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning automatic tissue segmentation in personalized medicine, especially in monitoring and treatment.
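The classical preprocessing half of such a pipeline (thresholding plus morphological cleanup) can be sketched with scipy.ndimage; the threshold and structuring-element sizes are illustrative, and the deep learning stage is omitted.

```python
import numpy as np
from scipy import ndimage

def extract_tissue_mask(img, threshold):
    """Threshold, clean the mask with morphological opening/closing,
    and keep the largest connected component."""
    mask = img > threshold
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Synthetic "tissue" region with background noise and salt artefacts
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                      # the tissue region
img += rng.random((64, 64)) * 0.3            # low-level background noise
img[rng.random((64, 64)) < 0.02] = 1.0       # isolated salt pixels
mask = extract_tissue_mask(img, 0.5)
```

The opening removes isolated salt pixels, the closing fills small holes, and the largest-component step discards any surviving clutter.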
SPRAT: Spectrograph for the Rapid Acquisition of Transients
NASA Astrophysics Data System (ADS)
Piascik, A. S.; Steele, Iain A.; Bates, Stuart D.; Mottram, Christopher J.; Smith, R. J.; Barnsley, R. M.; Bolton, B.
2014-07-01
We describe the development of a low-cost, low-resolution (R ~ 350), high-throughput, long-slit spectrograph covering visible (4000-8000 Å) wavelengths. The spectrograph has been developed for fully robotic operation with the Liverpool Telescope (La Palma). The primary aim is to provide rapid spectral classification of faint (V ~ 20) transient objects detected by projects such as Gaia, iPTF (intermediate Palomar Transient Factory), LOFAR, and a variety of high-energy satellites. The design employs a volume phase holographic (VPH) transmission grating as the dispersive element, combined with a prism pair (grism) in a linear optical path. One of two peak spectral sensitivities is selectable by rotating the grism. The VPH-and-prism combination and the entrance slit are deployable; when removed from the beam, they allow the collimator/camera pair to re-image the target field onto the detector. This mode of operation provides automatic acquisition of the target onto the slit prior to spectrographic observation, through World Coordinate System fitting. The selection and characterisation of optical components to maximise photon throughput is described, together with performance predictions.
Giordan, Daniele; Allasia, Paolo; Dematteis, Niccolò; Dell’Anese, Federico; Vagliasindi, Marco; Motta, Elena
2016-01-01
In this work, we present the results of a low-cost optical monitoring station designed for monitoring the kinematics of glaciers in an Alpine environment. We developed a complete hardware/software data acquisition and processing chain that automatically acquires, stores and co-registers images. The system was installed in September 2013 to monitor the evolution of the Planpincieux glacier, within the open-air laboratory of the Grandes Jorasses, Mont Blanc massif (NW Italy), and collected data with an hourly frequency. The acquisition equipment consists of a high-resolution DSLR camera operating in the visible band. The data are processed with a Pixel Offset algorithm based on normalized cross-correlation, to estimate the deformation of the observed glacier. We propose a method for the pixel-to-metric conversion and present the results of the projection on the mean slope of the glacier. The method's performance is compared with measurements obtained by GB-SAR and exhibits good agreement. The system provides good support for the analysis of the glacier evolution and allows the creation of daily displacement maps. PMID:27775652
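The pixel-offset idea, normalized cross-correlation between co-registered image patches to find displacement, can be sketched with NumPy alone; the patch box, search radius, and synthetic shift below are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

def pixel_offset(ref, cur, template_box, search=5):
    """Find the integer (dy, dx) maximising NCC between a template from the
    reference image and shifted windows in the current image."""
    y0, y1, x0, x1 = template_box
    template = ref[y0:y1, x0:x1]
    best, best_off = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            score = ncc(template, win)
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off

rng = np.random.default_rng(2)
ref = rng.random((80, 80))
cur = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # "glacier" moved 3 px down, 2 px left
offset = pixel_offset(ref, cur, (30, 50, 30, 50))
```

With the camera geometry known, the recovered per-patch offsets are then converted from pixels to metric displacement on the glacier surface.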
Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data
NASA Astrophysics Data System (ADS)
Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.
2016-06-01
The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. The data from remote sensing satellites are received as raw data frames at the ground station. These data may be corrupted by losses due to interference during data transmission, data acquisition, and sensor anomalies. It is therefore important to assess the quality of the raw data before product generation, for early anomaly detection, faster corrective actions, and minimized product rejection. Manual screening of raw images is a time-consuming process and not very accurate. In this paper, an automated process for identification and quantification of losses in raw data, such as pixel dropout, line loss, and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
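Losses such as dead lines or dropped pixels can be flagged with simple per-row statistics on the raw frame; the fill value and the function name below are illustrative, not the paper's actual criteria.

```python
import numpy as np

def assess_raw_frame(frame, dropout_value=0):
    """Flag fully lost lines and count isolated pixel dropouts in a raw frame."""
    dropped = frame == dropout_value
    line_lost = dropped.all(axis=1)                 # rows entirely at fill value
    lost_lines = np.where(line_lost)[0]
    pixel_dropouts = int(dropped.sum()) - int(line_lost.sum()) * frame.shape[1]
    loss_fraction = dropped.mean()                  # overall loss for quality grading
    return lost_lines, pixel_dropouts, loss_fraction

rng = np.random.default_rng(3)
frame = rng.integers(1, 255, size=(100, 120))       # valid data is nonzero
frame[17, :] = 0                                    # a lost line
frame[40, 5] = 0                                    # isolated pixel dropouts
frame[63, 99] = 0
lost, drops, frac = assess_raw_frame(frame)
```

A per-scene quality grade can then be derived from the loss fraction before the scene is offered for product ordering.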
Modelling of Indoor Environments Using Lindenmayer Systems
NASA Astrophysics Data System (ADS)
Peter, M.
2017-09-01
Documentation of the "as-built" state of building interiors has gained a lot of interest in recent years. Various data acquisition methods exist, e.g. extraction from photographed evacuation plans using image processing or, most prominently, indoor mobile laser scanning. Due to clutter or data gaps, as well as errors during data acquisition and processing, automatic reconstruction of CAD/BIM-like models from these data sources is not a trivial task. Reconstruction is thus often supported by general rules for the perpendicularity and parallelism that are predominant in man-made structures. Indoor environments of large public buildings, however, often also follow higher-level rules like symmetry and repetition of e.g. room sizes and corridor widths. In the context of reconstruction of city elements (e.g. street networks) or building elements (e.g. façade layouts), formal grammars have been put to use. In this paper, we describe the use of Lindenmayer systems - which were originally developed for computer-based modelling of plant growth - to model and reproduce the layout of indoor environments in 2D.
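At its core a Lindenmayer system is parallel string rewriting; a minimal sketch, using Lindenmayer's classic algae rules as a stand-in for the paper's indoor-layout grammar:

```python
def lsystem(axiom, rules, iterations):
    """Apply all production rules in parallel, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
rules = {"A": "AB", "B": "A"}
generations = [lsystem("A", rules, n) for n in range(6)]
```

For layout modelling, the symbols would instead encode rooms and corridors, and an interpreter would map the rewritten string to 2D geometry.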
Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M
1999-01-01
To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.
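The indexing idea, using coded acquisition-context descriptors as retrieval keys, can be sketched with a plain inverted index; the attribute names and codes below are made up for illustration and are not actual DICOM data elements or SNOMED codes.

```python
from collections import defaultdict

class ImageIndex:
    """Inverted index from (attribute, code) pairs to image identifiers."""
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, image_id, acquisition_context):
        """Register an image under every coded descriptor in its context."""
        for attribute, code in acquisition_context.items():
            self.index[(attribute, code)].add(image_id)

    def query(self, **criteria):
        """Return image ids matching all given attribute=code criteria."""
        sets = [self.index[(a, c)] for a, c in criteria.items()]
        return set.intersection(*sets) if sets else set()

idx = ImageIndex()
idx.add("img-001", {"modality": "US", "body_part": "FETUS"})    # codes illustrative
idx.add("img-002", {"modality": "US", "body_part": "ABDOMEN"})
hits = idx.query(modality="US", body_part="FETUS")
```

In the paper's architecture the keys would come from coded descriptors embedded in DICOM headers, with the SNOMED DICOM microglossary mapping terminology to data elements.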
Improved automatic adjustment of density and contrast in FCR system using neural network
NASA Astrophysics Data System (ADS)
Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo
1994-05-01
The FCR system performs automatic adjustment of image density and contrast by analyzing the histogram of the image data in the radiation field. The advanced image recognition methods proposed in this paper, based on neural network technology, can improve this automatic adjustment performance. There are two methods, both of which use a 3-layer neural network with back propagation. In one method, the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest on the histogram changes with differences in positioning, and the latter is effective for imaging menus such as the pediatric chest, in which the histogram shape changes with differences in positioning. We experimentally confirm the validity of these methods with respect to automatic adjustment performance, as compared with conventional histogram analysis methods.
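The conventional histogram-analysis baseline can be sketched as a percentile-based window: estimate the useful intensity range from the histogram and rescale. The percentile choices below are illustrative, not the FCR system's actual analysis.

```python
import numpy as np

def auto_window(img, low_pct=1.0, high_pct=99.0):
    """Adjust density/contrast by stretching the low..high percentile
    range of the image histogram to [0, 1]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

rng = np.random.default_rng(4)
img = rng.normal(1000.0, 50.0, size=(128, 128))   # raw detector values
out = auto_window(img)
```

The neural-network methods in the paper replace this fixed rule with a mapping learned from image or histogram inputs per imaging menu.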
Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei
2017-02-01
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method combining a deep learning method and multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use a convolutional neural network (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels in order to obtain preliminary segmentation results. The CNN can automatically learn deep features adapted to the data, in contrast to handcrafted features. Finally, we select similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
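The evaluation metric quoted above, the Dice similarity coefficient, is simple to compute from two binary masks; a sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2*|A and B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy automatic vs manual segmentations: equal-size squares, shifted by 1 px
auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool); manual[3:9, 3:9] = True
score = dice(auto, manual)
```

A value of 1 means perfect overlap; the paper's 86.80% was computed against manual contours in the same way.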
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling, and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation, and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of 200 kV portal images were detected correctly, a success rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It requires only the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and thus improve patient safety. In addition, the auto-detection results, such as the patient treatment site, position, and orientation, could be used to guide subsequent image processing procedures, e.g. verification of patient daily setup accuracy.
This work was partially supported by a research grant from Varian Medical Systems.
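Detecting orientation by template matching can be sketched as scoring a portal image against flipped variants of a template; the correlation scoring and the four variants below are an illustrative simplification, not the authors' exact pipeline.

```python
import numpy as np

def corr(a, b):
    """Normalized correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

def detect_orientation(image, template):
    """Score the image against four flip states of the template and
    return the best-matching orientation label."""
    variants = {
        "head-first": template,
        "feet-first": np.flipud(template),
        "mirrored": np.fliplr(template),
        "rotated-180": np.flipud(np.fliplr(template)),
    }
    return max(variants, key=lambda k: corr(image, variants[k]))

rng = np.random.default_rng(5)
template = rng.random((60, 40))
portal = np.flipud(template) + rng.normal(0, 0.05, (60, 40))  # noisy feet-first image
label = detect_orientation(portal, template)
```

In the paper, the template is a fused whole-body DRR and the match additionally localizes the treatment site along the body axis.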
A Method for the Seamline Network Automatic Selection Based on Building Vectors
NASA Astrophysics Data System (ADS)
Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.
2018-04-01
In order to improve the efficiency of large-scale orthophoto production for cities, this paper presents a method for automatic selection of the seamline network in large-scale orthophotos based on building vectors. First, a simple model of each building is built by combining the building's vector, its height, and the DEM, and the imaging area of the building on the single DOM is obtained. Then, the initial Voronoi network of the measurement area is automatically generated based on the positions of the bottoms of all images. Finally, the final seamline network is obtained by automatically optimizing all nodes and seamlines in the network based on the imaging areas of the buildings. The experimental results show that the proposed method can not only quickly route the seamline network around buildings, but also retain the minimum-projection-distortion property of the Voronoi network, which effectively solves the problem of automatic seamline network selection in image mosaicking.
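The initial-network step, a Voronoi diagram over image footprint positions, is directly available in SciPy; the point coordinates below are illustrative.

```python
import numpy as np
from scipy.spatial import Voronoi

# Illustrative ground positions: the centre of each image's coverage
centres = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                    [10.0, 10.0], [5.0, 5.0]])
vor = Voronoi(centres)
# Each Voronoi ridge separates two adjacent images; these are the
# candidate seamlines of the initial network
ridge_pairs = [tuple(sorted(p)) for p in vor.ridge_points]
```

The optimization stage would then displace these candidate seamlines so that they avoid the computed building imaging areas.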
Assessing UAV platform types and optical sensor specifications
NASA Astrophysics Data System (ADS)
Altena, B.; Goedemé, T.
2014-05-01
Photogrammetric acquisition with unmanned aerial vehicles (UAVs) has grown extensively over the last couple of years. Such mobile platforms and their processing software have matured, resulting in a market that offers off-the-shelf mapping solutions to surveying companies and geospatial enterprises. Different approaches in platform type and optical instruments exist, though the resulting products have similar specifications. To demonstrate differences in acquisition practice, a case study over an open mine was flown with two different off-the-shelf UAVs (a fixed-wing and a multi-rotor). The resulting imagery is analyzed to clarify the differences in collection quality. We look at image settings and stress the importance of photographic experience if manual settings are applied; for mapping production it might be safest to set the camera to automatic. Furthermore, we try to estimate whether blur is present due to image motion. A subtle trend seems to be present for the fast-flying platform, though its extent is of a similar order to that of the slow-moving one; both systems operate at their limits. Finally, the lens distortion is assessed, with special attention to chromatic aberration. Here we see that calibration indicates such aberrations can be present, although detecting this phenomenon directly in the imagery is not straightforward. For such effects a normal lens is sufficient, though a better lens and collimator do give significant improvement.
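Expected motion blur can be estimated from flight speed, exposure time, and ground sampling distance; a small sketch under the usual nadir-looking assumptions (the numbers below are illustrative, not the case study's values).

```python
def motion_blur_pixels(speed_m_s, exposure_s, gsd_m):
    """Ground distance travelled during the exposure, expressed in pixels."""
    return speed_m_s * exposure_s / gsd_m

# Fixed-wing at 20 m/s vs multi-rotor at 5 m/s, 1/1000 s exposure, 5 cm GSD
fixed_wing = motion_blur_pixels(20.0, 1e-3, 0.05)
multi_rotor = motion_blur_pixels(5.0, 1e-3, 0.05)
```

Keeping this value well below one pixel is the usual rule of thumb, which is why the faster platform needs shorter exposures for the same GSD.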
TU-FG-BRB-11: Design and Evaluation of a Robotic C-Arm CBCT System for Image-Guided Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hua, C; Yao, W; Farr, J
Purpose: To describe the design and performance of a ceiling-mounted robotic C-arm CBCT system for image-guided proton therapy. Methods: Uniquely different from traditional C-arm CBCT used in interventional radiology, the imaging system was designed to provide volumetric image guidance for patients treated on a 190-degree proton gantry system and a 6 degree-of-freedom (DOF) robotic patient positioner. The mounting of the robotic arms to ceiling rails, rather than to the gantry or nozzle, provides flexibility in imaging locations (isocenter, iso+27cm in X, iso+100cm in Y) in the room and easier upgrades as technology advances. A kV X-ray tube and a 43×43cm flat-panel imager were mounted to a rotating C-ring (87cm diameter), which is coupled to the C-arm concentrically. Both the C-arm and the robotic arm remain stationary during imaging to maintain high positional accuracy. Source-to-axis distance and source-to-imager distance are 100 and 150cm, respectively. A 14:1 focused anti-scatter grid and a bowtie filter are used for image acquisition. A unique automatic collimator device with 4 independent blades for adjusting the field of view and reducing patient dose has also been developed. Results: Sub-millimeter position accuracy and repeatability of the robotic C-arm were measured with a laser tracker. High quality CBCT images for positioning can be acquired with a weighted CTDI of 3.6mGy (head in 200° full fan mode: 100kV, 20mA, 20ms, 10fps) to 8.7mGy (pelvis in 360° half fan mode: 125kV, 42mA, 20ms, 10fps). Image guidance accuracy achieved <1mm (3D vector) with automatic 3D-3D registration for anthropomorphic head and pelvis phantoms. Since November 2015, 22 proton therapy patients have undergone daily CBCT imaging for 6 DOF positioning. Conclusion: Decoupled from the gantry and nozzle, this CBCT system provides a unique solution for volumetric image guidance with half/partial proton gantry systems.
We demonstrated that daily CBCT can be integrated into proton therapy for pre-treatment position verification.
Automatic Co-Registration of QuickBird Data for Change Detection Applications
NASA Technical Reports Server (NTRS)
Bryant, Nevin A.; Logan, Thomas L.; Zobrist, Albert L.
2006-01-01
This viewgraph presentation reviews the use of the Automatic Fusion of Image Data System (AFIDS) for automatic co-registration of QuickBird data, to ascertain whether changes have occurred in images. The process is outlined, and views from Iraq and Los Angeles are shown to illustrate the process.
An automatic frequency control loop using overlapping DFTs (Discrete Fourier Transforms)
NASA Technical Reports Server (NTRS)
Aguirre, S.
1988-01-01
An automatic frequency control (AFC) loop is introduced and analyzed in detail. The new scheme is a generalization of the well known Cross Product AFC loop that uses running overlapping discrete Fourier transforms (DFTs) to create a discriminator curve. Linear analysis is included and supported with computer simulations. The algorithm is tested in a low carrier to noise ratio (CNR) dynamic environment, and the probability of loss of lock is estimated via computer simulations. The algorithm discussed is a suboptimum tracking scheme with a larger frequency error variance compared to an optimum strategy, but offers simplicity of implementation and a very low operating threshold CNR. This technique can be applied during the carrier acquisition and re-acquisition process in the Advanced Receiver.
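The cross-product discriminator at the heart of such AFC loops can be sketched in NumPy: take DFTs of two overlapping signal blocks and use the imaginary part of their conjugate product at the peak bin as a frequency-error signal. The block sizes and test offsets below are illustrative.

```python
import numpy as np

def cross_product_discriminator(x, block, overlap_step):
    """Frequency-error discriminator from two overlapping DFT blocks.
    Its sign tracks the sign of the residual carrier frequency offset."""
    X1 = np.fft.fft(x[:block])
    X2 = np.fft.fft(x[overlap_step:overlap_step + block])
    k = np.argmax(np.abs(X1))            # strongest bin = coarse carrier estimate
    # Cross product Im{X2[k] X1[k]*}: phase advance between the two blocks
    return np.imag(X2[k] * np.conj(X1[k]))

n = 256
t = np.arange(n + 64)
small_offset = 0.0005                     # cycles/sample, well inside a DFT bin
x_pos = np.exp(2j * np.pi * small_offset * t)
x_neg = np.exp(-2j * np.pi * small_offset * t)
d_pos = cross_product_discriminator(x_pos, n, 64)
d_neg = cross_product_discriminator(x_neg, n, 64)
```

A loop filter would drive this error signal to zero by steering the local oscillator, closing the AFC loop.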
NASA Technical Reports Server (NTRS)
Arteaga, Ricardo A. (Inventor)
2016-01-01
The present invention proposes an automatic dependent surveillance broadcast (ADS-B) architecture and process in which priority aircraft and ADS-B IN traffic information are included in the transmission of data through telemetry communications to a remote ground control station. The present invention further proposes methods for displaying general aviation traffic information as three- and/or four-dimensional trajectories using an industry-standard Earth browser for increased situational awareness and enhanced visual acquisition of traffic for conflict detection. The present invention enables the applications of enhanced visual acquisition of traffic, traffic alerts, and en-route and terminal surveillance, used to augment pilot situational awareness through ADS-B IN display and information in three or four dimensions for self-separation awareness.
Towards intelligent diagnostic system employing integration of mathematical and engineering model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isa, Nor Ashidi Mat
The development of medical diagnostic systems has been one of the main research fields over the years. The goal of a medical diagnostic system is to provide a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve on conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 1960s. Attempts to improve their performance have been made utilizing the fields of artificial intelligence, statistical analysis, mathematical modelling, and engineering theory. With the availability of microcomputers and software development, as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification. The data acquisition stage plays an important role in converting inputs measured from real-world physical conditions into digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, and magnetic resonance images (MRI), as well as medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, scientists or doctors have to deal with a myriad of redundant data to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in a mammogram image, will be extracted and considered in the subsequent stages. Mathematical models and statistical analyses will be performed to select the most significant features to be classified.
Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical models of clustering techniques, have been widely used in developing medical diagnostic systems. The selected features will be classified using mathematical models that embed engineering theory, such as artificial intelligence, support vector machines, neural networks, and neuro-fuzzy systems. These classifiers provide the diagnostic results without human intervention. Among many published researches, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cancerous cervical cells. Meanwhile, the Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on the images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.
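The feature-selection and classification stages can be sketched with PCA (via SVD) feeding a nearest-centroid classifier; this is a toy sketch on synthetic features, not any of the named prototypes.

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and top-k principal directions of feature matrix X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def nearest_centroid(train_z, train_y, z):
    """Classify a projected sample by its closest class centroid."""
    classes = np.unique(train_y)
    cents = np.array([train_z[train_y == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(cents - z, axis=1))]

rng = np.random.default_rng(6)
# Two synthetic classes ("normal" vs "abnormal") in a 10-D feature space
X0 = rng.normal(0.0, 1.0, (50, 10))
X1 = rng.normal(3.0, 1.0, (50, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

mu, W = pca_fit(X, 2)                       # feature selection: keep 2 components
Z = (X - mu) @ W.T
new_sample = rng.normal(3.0, 1.0, 10)       # unseen "abnormal" sample
pred = nearest_centroid(Z, y, (new_sample - mu) @ W.T)
```

Real systems would replace the centroid rule with an SVM, neural network, or neuro-fuzzy classifier, as the abstract describes.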
Towards intelligent diagnostic system employing integration of mathematical and engineering model
NASA Astrophysics Data System (ADS)
Isa, Nor Ashidi Mat
2015-05-01
The development of medical diagnostic system has been one of the main research fields during years. The goal of the medical diagnostic system is to place a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essentials and requires broad knowledge in order to improve conventional diagnostic system. Several approaches on developing the medical diagnostic system have been designed and tested since the earliest 60s. Attempts on improving their performance have been made which utilizes the fields of artificial intelligence, statistical analyses, mathematical model and engineering theories. With the availability of the microcomputer and software development as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, the medical diagnostic system consists of several stages, namely the 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classifications stages. Data acquisition stage plays an important role in converting the inputs measured from the real world physical conditions to the digital numeric values that can be manipulated by the computer system. One of the common medical inputs could be medical microscopic images, radiographic images, magnetic resonance image (MRI) as well as medical signals such as electrocardiogram (ECG) and electroencephalogram (EEG). Normally, the scientist or doctors have to deal with myriad of data and redundant to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data such as peak value of the ECG signal or size of lesion in the mammogram images will be extracted and considered in the subsequent stages. Mathematical models and statistical analyses will be performed to select the most significant features to be classified. 
Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical clustering techniques, have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theories such as artificial intelligence, support vector machines, neural networks and fuzzy-neuro systems. These classifiers provide the diagnostic results without human intervention. Among many published studies, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first system (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cancerous cervical cells. Meanwhile, the Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapping rectangle is computed from the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapping region only, using a scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of the two images. Finally, the two images are fused into a seamless panoramic image, for example by simple linear weighted fusion. The proposed method was implemented in C++ using OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method detects feature points efficiently and mosaics images automatically.
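The linear weighted fusion step mentioned above lends itself to a compact sketch. Below is a minimal, hypothetical illustration of feathered blending over a 1-D overlap; the function name, ramp shape and pixel values are illustrative assumptions, not taken from the paper:

```python
# Linear weighted (feathered) fusion across an overlap region:
# each overlap pixel is a convex combination of the two images,
# with the reference weight ramping linearly from 1 down to 0.
def linear_weighted_fuse(ref_row, mos_row, overlap):
    """Fuse two 1-D image rows that overlap by `overlap` pixels."""
    n_ref = len(ref_row)
    fused = list(ref_row[:n_ref - overlap])    # reference-only part
    for i in range(overlap):                   # blended overlap part
        w = 1.0 - (i + 1) / (overlap + 1)      # weight of reference image
        fused.append(w * ref_row[n_ref - overlap + i] + (1.0 - w) * mos_row[i])
    fused.extend(mos_row[overlap:])            # mosaic-only part
    return fused

row_a = [10, 10, 10, 10]
row_b = [20, 20, 20, 20]
print(linear_weighted_fuse(row_a, row_b, 2))
```

In a real mosaic the same ramp is applied per row (or per distance to the seam) after the RANSAC-estimated transform has aligned the two images.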
Image-based surface reconstruction in geomorphometry - merits, limits and developments
NASA Astrophysics Data System (ADS)
Eltner, Anette; Kaiser, Andreas; Castillo, Carlos; Rock, Gilles; Neugirg, Fabian; Abellán, Antonio
2016-05-01
Photogrammetry and the geosciences have been closely linked since the late 19th century through the acquisition of high-quality 3-D data sets of the environment, but the approach was long restricted to a limited circle of remote sensing specialists because of the considerable cost of metric systems for acquiring and processing airborne imagery. Today, a wide range of commercial and open-source software tools enables geoscientists and other non-expert users to generate 3-D and 4-D models of complex geomorphological features. In addition, very recent rapid developments in unmanned aerial vehicle (UAV) technology allow for flexible, high-quality aerial surveying and ortho-photography at relatively low cost. The increase in computing capabilities over the last decade, together with the development of high-performance digital sensors and important software innovations from the computer vision and visual perception research fields, has extended the rigorous processing of stereoscopic image data to 3-D point cloud generation from a series of non-calibrated images. Structure-from-motion (SfM) workflows are based on algorithms for efficient and automatic orientation of large image sets without additional data acquisition information, for example robust feature detectors such as the scale-invariant feature transform for 2-D imagery. Nevertheless, well-established fieldwork strategies, including proper camera settings, ground control points and ground truth for understanding the different sources of error, still need to be adopted in common scientific practice. This review intends not only to summarise the current state of the art of SfM workflows in geomorphometry but also to give an overview of terms and fields of application. 
Furthermore, this article aims to quantify already achieved accuracies and used scales, using different strategies in order to evaluate possible stagnations of current developments and to identify key future challenges. It is our belief that some lessons learned from former articles, scientific reports and book chapters concerning the identification of common errors or "bad practices" and some other valuable information may help in guiding the future use of SfM photogrammetry in geosciences.
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. Next, the two vessel-enhanced images are converted to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images yields a significant improvement in blood vessel extraction performance compared with using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated on the publicly available DRIVE database.
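The "automatic thresholding" step in pipelines like this is commonly realized with Otsu's method, which picks the gray level maximizing between-class variance. The sketch below is a generic stdlib implementation under that assumption, not the authors' exact code:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level that maximizes between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0      # running intensity sum of the background class
    w_bg = 0          # running pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the threshold lands between them,
# so binarization (pixel > t) separates vessel-like from background pixels.
pixels = [10] * 50 + [12] * 50 + [200] * 30 + [210] * 20
print(otsu_threshold(pixels))
```

Applied to the vessel-enhanced green-channel and Gabor feature images, such a threshold produces the two binary maps that are then combined.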
Rieger, Michael; Czermak, Benedikt; El Attal, Rene; Sumann, Günther; Jaschke, Werner; Freund, Martin
2009-03-01
The objective of this study was to assess time management and diagnostic quality when using a 64-multidetector-row computed tomography (MDCT) whole-body scanner to evaluate polytraumatized patients in an emergency department. Eighty-eight consecutive polytraumatized patients with injury severity score (ISS) ≥ 18 (mean ISS = 29) were included in this study. Documented and evaluated data were crash history, trauma mechanism, number and pattern of injuries, injury severity, diagnostics, time flow, and missed diagnoses. Data were stored in our hospital information system. Seven time intervals were evaluated. In particular, attention was paid to the "acquisition interval," the "reformatting and evaluation time," as well as the "CT time" (time from CT start to preliminary diagnosis). A standardized whole-body CT was performed. The acquired CT data, together with automatically generated multiplanar reformatted images ("direct MPR"), were transferred to a 3D rendering workstation. Diagnostic quality was determined on the basis of missed diagnoses. Head-to-toe scout images were possible because volume coverage was up to 2 m. Experienced radiologists at an affiliated workstation performed radiologic evaluation of the acquired datasets immediately after acquisition. The "acquisition interval" was 12 ± 4.9 minutes, the "reformatting and evaluation interval" 7.0 ± 2.1 minutes, and the "CT time" 19 ± 6.1 minutes. Altogether, 7 of 486 lesions were recognized but not communicated in the "reformatting and evaluation interval," and 10 injuries were initially missed and detected during follow-up. This study indicates that 64-MDCT saves time, especially in the "reformatting and evaluation interval." Diagnostic quality is high, as reflected by the small number of missed diagnoses.
Development of a method to estimate organ doses for pediatric CT examinations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papadakis, Antonios E., E-mail: apapadak@pagni.gr; Perisinakis, Kostas; Damilakis, John
Purpose: To develop a method for estimating doses to primarily exposed organs in pediatric CT by taking into account patient size and automatic tube current modulation (ATCM). Methods: A Monte Carlo CT dosimetry software package, which creates patient-specific voxelized phantoms, accurately simulates CT exposures, and generates dose images depicting the energy imparted on the exposed volume, was used. Routine head, thorax, and abdomen/pelvis CT examinations in 92 pediatric patients, ranging from 1 month to 14 years old (49 boys and 43 girls), were simulated on a 64-slice CT scanner. Two sets of simulations were performed in each patient using (i) a fixed tube current (FTC) value over the entire examination length and (ii) the ATCM profile extracted from the DICOM header of the reconstructed images. CTDIvol-normalized organ dose was derived for all primarily irradiated radiosensitive organs. Normalized dose data were correlated to the patient's water equivalent diameter using log-transformed linear regression analysis. Results: The maximum percent difference in normalized organ dose between FTC and ATCM acquisitions was 10% for the eyes in head, 26% for the thymus in thorax, and 76% for the kidneys in abdomen/pelvis examinations. In most organs, the correlation between dose and water equivalent diameter was significantly improved in ATCM compared to FTC acquisitions (P < 0.001). Conclusions: The proposed method employs size-specific CTDIvol-normalized organ dose coefficients for ATCM-activated and FTC acquisitions in pediatric CT. These coefficients are substantially different between the ATCM and FTC modes of operation and enable a more accurate assessment of patient-specific organ dose in the clinical setting.
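A log-transformed linear regression of the kind used here amounts to an ordinary least-squares fit of ln(dose) against diameter. The sketch below uses purely synthetic coefficients and data for illustration; it is not the study's fitted model:

```python
import math

def fit_log_linear(diam, dose):
    """Fit ln(dose) = a + b * diam by ordinary least squares."""
    y = [math.log(d) for d in dose]
    n = len(diam)
    mx = sum(diam) / n
    my = sum(y) / n
    b = sum((x - mx) * (v - my) for x, v in zip(diam, y)) / \
        sum((x - mx) ** 2 for x in diam)
    a = my - b * mx
    return a, b

# Synthetic data: normalized dose falling exponentially with diameter.
diam = [10, 15, 20, 25, 30]                      # cm, illustrative values
dose = [math.exp(1.0 - 0.05 * d) for d in diam]  # exact exponential law
a, b = fit_log_linear(diam, dose)
print(round(a, 3), round(b, 3))
```

Because the synthetic points lie exactly on an exponential curve, the fit recovers the generating coefficients; with real patient data the fit would also yield residuals and a correlation coefficient.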
Quality assurance and quality control in mammography: a review of available guidance worldwide.
Reis, Cláudia; Pascoal, Ana; Sakellaris, Taxiarchis; Koutalonis, Manthos
2013-10-01
Review of available guidance for quality assurance (QA) in mammography and discussion of its contribution to harmonising practices worldwide. A literature search was performed on different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared for type of guidance (clinical/technical), technology and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment. Fourteen protocols (targeting conventional and digital mammography) were reviewed. All included recommendations for testing the acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and of testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies, focusing on assessment of low-contrast detection, spatial resolution and noise. QC of image display is recommended following the American Association of Physicists in Medicine guidelines. The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite variations in detail and methodology. Studies reporting QA data should detail the experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA program may select/prioritise tests depending on the available technology and resources. • An effective QA program should be practical to implement in a clinical setting. • QA should address the various stages of the imaging chain: acquisition, processing and display. • AEC system QC testing is simple to implement and provides information on equipment performance.
An evaluation of inexpensive methods for root image acquisition when using rhizotrons.
Mohamed, Awaz; Monnier, Yogan; Mao, Zhun; Lobet, Guillaume; Maeght, Jean-Luc; Ramel, Merlin; Stokes, Alexia
2017-01-01
Belowground processes play an essential role in ecosystem nutrient cycling and the global carbon budget cycle. Quantifying fine root growth is crucial to the understanding of ecosystem structure and function and in predicting how ecosystems respond to climate variability. A better understanding of root system growth is necessary, but choosing the best method of observation is complex, especially in the natural soil environment. Here, we compare five methods of root image acquisition using inexpensive technology that is currently available on the market: flatbed scanner, handheld scanner, manual tracing, a smartphone application scanner and a time-lapse camera. Using the five methods, root elongation rate (RER) was measured for three months, on roots of hybrid walnut (Juglans nigra × Juglans regia L.) in rhizotrons installed in agroforests. When all methods were compared together, there were no significant differences in relative cumulative root length. However, the time-lapse camera and the manual tracing method significantly overestimated the relative mean diameter of roots compared to the three scanning methods. The smartphone scanning application was found to perform best overall when considering image quality and ease of use in the field. The automatic time-lapse camera was useful for measuring RER over several months without any human intervention. Our results show that inexpensive scanning and automated methods provide correct measurements of root elongation and length (but not diameter when using the time-lapse camera). These methods are capable of detecting fine roots to a diameter of 0.1 mm and can therefore be selected by the user depending on the data required.
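Root elongation rate (RER) from repeated rhizotron images is simply the length increment per unit time between acquisitions. A tiny sketch with hypothetical measurements (the readings and interval are invented for illustration):

```python
def elongation_rate(lengths_mm, interval_days):
    """Mean root elongation rate (mm/day) from successive length readings
    taken at a fixed interval."""
    increments = [b - a for a, b in zip(lengths_mm, lengths_mm[1:])]
    return sum(increments) / (interval_days * len(increments))

# Hypothetical weekly root-length readings from one rhizotron window.
lengths = [12.0, 19.5, 26.0, 34.5]   # mm, one reading every 7 days
print(elongation_rate(lengths, 7))
```

The same computation applies regardless of whether the lengths come from a flatbed scanner, a traced image or a time-lapse frame, which is why RER is comparable across the five acquisition methods.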
Review of automatic detection of pig behaviours by using image analysis
NASA Astrophysics Data System (ADS)
Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao
2017-06-01
Automatic detection of the lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can save staff observation effort. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study describes progress in pig behaviour detection based on image analysis and advances in image segmentation of the pig body, segmentation of adhering (touching) pigs and extraction of pig behaviour characteristic parameters. Challenges for achieving automatic detection of pig behaviours are summarized.
Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.
Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A
2011-01-01
Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.
NASA Astrophysics Data System (ADS)
Jende, Phillipp; Nex, Francesco; Gerke, Markus; Vosselman, George
2018-07-01
Mobile Mapping (MM) solutions have become a significant extension of traditional data acquisition methods over the last years. Independently of the sensor carried by a platform, be it laser scanners or cameras, high-resolution data postings are offset by poor absolute localisation accuracy in urban areas due to GNSS occlusions and multipath effects. Potentially inaccurate position estimates are propagated by IMUs, which are furthermore prone to drift. Thus, reliable and accurate absolute positioning on a par with MM's high-quality data remains an open issue. Multiple and diverse approaches have shown promising potential to mitigate GNSS errors in urban areas, but they cannot achieve decimetre accuracy, require manual effort, or have limitations with respect to cost and availability. This paper presents a fully automatic approach to support the correction of MM imaging data based on correspondences with airborne nadir images. These correspondences can be employed to correct the MM platform's orientation through an adjustment solution. Unlike MM data, aerial images do not suffer from GNSS occlusions, and their accuracy is usually verified with well-established methods using ground control points. However, registration between MM and aerial images is a non-standard matching scenario and requires several strategies to yield reliable and accurate correspondences. Scale, perspective and content vary strongly between the two image sources, so traditional feature matching methods may fail. To this end, the registration process is designed to focus on common and clearly distinguishable elements, such as road markings, manholes, or kerbstones. With a registration accuracy of about 98%, reliable tie information between MM and aerial data can be derived. Even though the adjustment strategy is not covered in its entirety in this paper, accuracy results after adjustment are presented. It is shown that decimetre accuracy is well achievable in a real-data test scenario.
Gudjonsdottir, J; Svensson, J R; Campling, S; Brennan, P C; Jonsdottir, B
2009-11-01
Image quality and radiation dose to the patient are important factors in computed tomography (CT). To provide constant image quality, tube current modulation (TCM) performed by automatic exposure control (AEC) adjusts the tube current to the patient's size and shape. The aim was to evaluate the effects of patient centering on the tube current-time product (mAs) and image noise. An oval-shaped acrylic phantom was scanned in various off-center positions, at 30-mm intervals within a 500-mm field of view, using three different CT scanners. Acquisition parameters were similar to those of routine abdomen examinations at each site. The mAs was recorded and noise measured in the images. The correlation of mAs and noise with position was calculated using Pearson correlation. In all three scanners, the mAs delivered by the AEC changed with the y-position of the phantom (P<0.001), with correlation values of 0.98 for scanners A and B and -0.98 for scanner C. With x-position, mAs changes were 4.9% or less. As the phantom moved to off-center y-positions, the mAs varied by up to +70%, -34%, and +56% relative to the iso-center in scanners A, B, and C, respectively. For scanners A and B, noise in two regions of interest in the lower part of the phantom decreased with elevation, with correlation factors from -0.95 to -0.86 (P<0.02). In the x-direction, significant noise relationships (P<0.005) were seen only in scanner A. This study demonstrates that patient centering markedly affects the efficacy of AEC function and that tube current changes vary between scanners. The tube position when acquiring the scout projection radiograph determines the direction of the mAs change. Off-center patient positions cause errors in tube current modulation that can outweigh the dose reduction gained by AEC use, and image quality is affected.
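The Pearson correlation used to relate phantom position to delivered mAs is straightforward to compute. The values below are synthetic, chosen only to illustrate a strong positive correlation like the 0.98 reported for scanners A and B:

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic example: mAs rising almost linearly with vertical offset (mm).
y_offset = [-60, -30, 0, 30, 60]
mAs      = [140, 168, 205, 231, 262]
print(round(pearson(y_offset, mAs), 3))
```

A value near +1 indicates the AEC delivers more mAs as the phantom is raised, as observed for two of the three scanners; scanner C's -0.98 corresponds to the opposite sign.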
Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.
2017-01-01
Purpose: To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods: A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results: Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion: A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613
Graphical user interface for image acquisition and processing
Goldberg, Kenneth A.
2002-01-01
An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the need to first save images in one program and then import the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity to implement IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.
Automatic system for ionization chamber current measurements.
Brancaccio, Franco; Dias, Mauro S; Koskinas, Marina F
2004-12-01
The present work describes an automatic system developed for current integration measurements at the Laboratório de Metrologia Nuclear of the Instituto de Pesquisas Energéticas e Nucleares. The system includes software (graphical user interface and control) and a module connected to a microcomputer by means of a commercial data acquisition card. Measurements were performed to check performance and validate the proposed design.
Schmidt, Rita; Seginer, Amir; Frydman, Lucio
2016-05-01
Single-shot imaging by spatiotemporal encoding (SPEN) can provide higher immunity to artifacts than its echo planar imaging-based counterparts. Further improvements in resolution and signal-to-noise ratio could be made by rescinding the sequence's single-scan nature. To explore this option, an interleaved SPEN version was developed that was capable of delivering optimized images due to its use of a referenceless correction algorithm. A characteristic element of SPEN encoding is the absence of aliasing when its signals are undersampled along the low-bandwidth dimension. This feature was exploited in this study to segment a SPEN experiment into a number of interleaved shots whose inaccuracies were automatically compared and corrected as part of a navigator-free image reconstruction analysis. This could account for normal phase noises, as well as for object motions during the signal collection. The ensuing interleaved SPEN method was applied to phantoms and human volunteers and delivered high-quality images even in inhomogeneous or mobile environments. Submillimeter functional MRI activation maps confined to gray matter regions as well as submillimeter diffusion coefficient maps of human brains were obtained. We have developed an interleaved SPEN approach for the acquisition of high-definition images that promises a wider range of functional and diffusion MRI applications even in challenging environments. © 2015 Wiley Periodicals, Inc.
MilxXplore: a web-based system to explore large imaging datasets.
Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J
2013-01-01
As large-scale medical imaging studies become more common, there is an increasing reliance on automated software to extract quantitative information from these images. As cohort sizes keep increasing in large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open-source visualization platform that provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared with existing software solutions, which often provide an overview of results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time and comparison of the results against the rest of the population. MilxXplore is fast and flexible, allows remote quality checks of processed imaging data, facilitates data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend towards open data and open science, such a tool will become increasingly important for sharing and publishing results of imaging analysis.
Landsat 8 on-orbit characterization and calibration system
Micijevic, Esad; Morfitt, Ron; Choate, Michael J.
2011-01-01
The Landsat Data Continuity Mission (LDCM) is planning to launch the Landsat 8 satellite in December 2012, which continues an uninterrupted record of consistently calibrated globally acquired multispectral images of the Earth started in 1972. The satellite will carry two imaging sensors: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI will provide visible, near-infrared and short-wave infrared data in nine spectral bands while the TIRS will acquire thermal infrared data in two bands. Both sensors have a pushbroom design and consequently, each has a large number of detectors to be characterized. Image and calibration data downlinked from the satellite will be processed by the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center using the Landsat 8 Image Assessment System (IAS), a component of the Ground System. In addition to extracting statistics from all Earth images acquired, the IAS will process and trend results from analysis of special calibration acquisitions, such as solar diffuser, lunar, shutter, night, lamp and blackbody data, and preselected calibration sites. The trended data will be systematically processed and analyzed, and calibration and characterization parameters will be updated using both automatic and customized manual tools. This paper describes the analysis tools and the system developed to monitor and characterize on-orbit performance and calibrate the Landsat 8 sensors and image data products.
NASA Astrophysics Data System (ADS)
Hueso, R.; Mendikoa, I.; Sánchez-Lavega, A.; Pérez-Hoyos, S.; Rojas, J. F.; García-Melendo, E.
2015-10-01
PlanetCam UPV/EHU [1] is an astronomical instrument designed for high-resolution observations of Solar System planets. The main scientific themes are atmospheric dynamics and the vertical cloud structure of Jupiter and Saturn. The instrument uses a dichroic mirror to separate the light into two beams with spectral ranges from 380 nm to 1 micron (visible channel) and from 1 to 1.7 microns (short-wave infrared, SWIR channel), and two detectors working simultaneously with fast acquisition modes. High-resolution images are built using lucky imaging techniques [2]. Several hundred short-exposure images are obtained and stored in FITS files. Images are automatically reduced by a pipeline called PLAYLIST (written in IDL and requiring no interaction by the user), which selects the best frames and co-registers them using image correlation over several tie-points. The result is a high signal-to-noise-ratio image that can be processed to show the faint structures in the data. PlanetCam is a visiting instrument built mainly for the 1.23 m and 2.2 m telescopes at Calar Alto Observatory in Spain, but it has also been tested on the 1.5 m Carlos Sanchez Telescope in Tenerife and the 1.05 m telescope at the Pic du Midi observatory.
Intraoperative computed tomography.
Tonn, J C; Schichor, C; Schnell, O; Zausinger, S; Uhl, E; Morhard, D; Reiser, M
2011-01-01
Intraoperative computed tomography (iCT) has gained increasing importance among modern neurosurgical techniques. Multislice CT with a sliding gantry in the OR provides excellent diagnostic image quality in the visualization of vascular lesions as well as bony structures, including the skull base and spine. Thanks to short acquisition times and high spatial and temporal resolution, various modalities such as iCT angiography, iCT cerebral perfusion and the integration of intraoperative navigation with automatic re-registration after scanning can be performed. This allows a variety of applications, e.g. intraoperative angiography, intraoperative cerebral perfusion studies, updating of cerebral and spinal navigation, stereotactic procedures, as well as resection control in tumour surgery. Its versatility promotes its use in a multidisciplinary setting. Radiation exposure is comparable to that of standard CT systems outside the OR. For neurosurgical purposes, however, new hardware components (e.g. a radiolucent headholder system) had to be developed. Having a different range of applications compared to intraoperative MRI, it is an attractive modality for intraoperative imaging, being comparatively easy to install and cost-efficient.
Hu, Chenghuan; Huang, Feizhou; Zhang, Rui; Zhu, Shaihong; Nie, Wanpin; Liu, Xunyang; Liu, Yinglong; Li, Peng
2015-01-01
Using optics combined with automatic control and computer real-time image detection technology, a novel noninvasive method of noncontact pressure manometry was developed in this study, based on airflow and laser detection technology. The new esophageal venous pressure measurement system was tested in in-vitro experiments. A stable and adjustable pulse stream was produced by a self-developed pump, and a laser-emitting apparatus generated optical signals that could be captured by the image acquisition and analysis program. A synchronization system simultaneously measured the changes in air pressure and the deformation of the vein wall, capturing the vascular deformation while recording the current pressure value. The results of this study indicated that the pressure values obtained by the new method correlate well with the actual pressure values in animal experiments. The new method of noninvasive pressure measurement based on airflow and laser detection technology is accurate, feasible, repeatable and has good application prospects.
Autonomous Onboard Science Data Analysis for Comet Missions
NASA Technical Reports Server (NTRS)
Thompson, David R.; Tran, Daniel Q.; McLaren, David; Chien, Steve A.; Bergman, Larry; Castano, Rebecca; Doyle, Richard; Estlin, Tara; Lenda, Matthew
2012-01-01
Coming years will bring several comet rendezvous missions. The Rosetta spacecraft arrives at Comet 67P/Churyumov-Gerasimenko in 2014. Subsequent rendezvous might include a mission such as the proposed Comet Hopper with multiple surface landings, as well as Comet Nucleus Sample Return (CNSR) and Coma Rendezvous and Sample Return (CRSR). These encounters will begin to shed light on a population that, despite several previous flybys, remains mysterious and poorly understood. Scientists still have little direct knowledge of interactions between the nucleus and coma, their variation across different comets or their evolution over time. Activity may change on short timescales, so it is challenging to characterize with scripted data acquisition. Here we investigate automatic onboard image analysis that could act faster than the round-trip light time to capture unexpected outbursts and plume activity. We describe an edge-based method for detecting comet nuclei and plumes, and test the approach on an existing catalog of comet images. Finally, we quantify the benefits to specific measurement objectives by simulating a basic plume monitoring campaign.
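Edge-based detectors of this kind typically start from a gradient-magnitude map, which is then thresholded to find the limb of the nucleus or a plume boundary. A minimal central-difference sketch (not the authors' actual algorithm; the grid and threshold are illustrative):

```python
def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D grayscale grid."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = (img[r][c + 1] - img[r][c - 1]) / 2.0
            gy = (img[r + 1][c] - img[r - 1][c]) / 2.0
            mag[r][c] = (gx * gx + gy * gy) ** 0.5
    return mag

# A bright patch (the "nucleus") on a dark background: large gradients
# appear only at the brightness transition, which a threshold picks out.
img = [[0] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(2, 4):
        img[r][c] = 100
mag = gradient_magnitude(img)
edges = [(r, c) for r in range(6) for c in range(6) if mag[r][c] > 10]
print(edges)
```

Onboard, such a cheap per-pixel operation matters: it must run within spacecraft compute budgets, faster than the round-trip light time that rules out ground-in-the-loop triggering.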
Code of Federal Regulations, 2013 CFR
2013-10-01
... automatic data processing equipment, word processing, and similar types of commercially available equipment... CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Purchase Orders 1513.507 Clauses. (a) It is the general...
NASA Astrophysics Data System (ADS)
Martel, Anne L.
2004-04-01
In order to extract quantitative information from dynamic contrast-enhanced MR images (DCE-MRI) it is usually necessary to identify an arterial input function. This is not a trivial problem if there are no major vessels present in the field of view. Most existing techniques rely on operator intervention or use various curve parameters to identify suitable pixels, but these are often specific to the anatomical region or the acquisition method used. They also require the signal from several pixels to be averaged in order to improve the signal-to-noise ratio; however, this introduces errors due to partial volume effects. We have described previously how factor analysis can be used to automatically separate arterial and venous components from DCE-MRI studies of the brain, but although that method works well for single-slice images through the brain when the blood-brain barrier is intact, it runs into problems for multi-slice images with more complex dynamics. This paper will describe a factor analysis method that is more robust in such situations and is relatively insensitive to the number of physiological components present in the data set. The technique is very similar to that used to identify spectral end-members from multispectral remote sensing images.
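The core idea of separating arterial and venous components can be illustrated with a much simpler stand-in for the paper's factor analysis: an SVD/PCA on synthetic time-intensity curves. Everything below (the two Gaussian enhancement curves, the mixing weights, the noise level) is a made-up toy, shown only to demonstrate that a small number of components captures the mixed pixel time courses.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
arterial = np.exp(-((t - 0.2) ** 2) / 0.005)   # early, sharp bolus peak
venous = np.exp(-((t - 0.5) ** 2) / 0.02)      # later, broader peak

# Each "pixel" time course mixes the two physiological curves plus noise
mix = rng.uniform(0, 1, size=(200, 2))
data = mix @ np.vstack([arterial, venous]) + 0.01 * rng.normal(size=(200, 50))

# SVD on mean-centred time courses: the leading factors span the mixing space
centred = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
```

With two underlying physiological components, the first two factors explain almost all of the variance; the paper's contribution is a factor rotation that remains robust when the number of such components is unknown.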
ICA-based artefact removal and accelerated fMRI acquisition for improved Resting State Network imaging
Griffanti, Ludovica; Salimi-Khorshidi, Gholamreza; Beckmann, Christian F.; Auerbach, Edward J.; Douaud, Gwenaëlle; Sexton, Claire E.; Zsoldos, Enikő; Ebmeier, Klaus P; Filippini, Nicola; Mackay, Clare E.; Moeller, Steen; Xu, Junqian; Yacoub, Essa; Baselli, Giuseppe; Ugurbil, Kamil; Miller, Karla L.; Smith, Stephen M.
2014-01-01
The identification of resting state networks (RSNs) and the quantification of their functional connectivity in resting-state fMRI (rfMRI) are seriously hindered by the presence of artefacts, many of which overlap spatially or spectrally with RSNs. Moreover, recent developments in fMRI acquisition yield data with higher spatial and temporal resolutions, but may increase artefacts both spatially and/or temporally. Hence the correct identification and removal of non-neural fluctuations is crucial, especially in accelerated acquisitions. In this paper we investigate the effectiveness of three data-driven cleaning procedures, compare standard against higher (spatial and temporal) resolution accelerated fMRI acquisitions, and investigate the combined effect of different acquisitions and different cleanup approaches. We applied single-subject independent component analysis (ICA), followed by automatic component classification with FMRIB's ICA-based X-noiseifier (FIX) to identify artefactual components. We then compared two first-level (within-subject) cleaning approaches for removing those artefacts and motion-related fluctuations from the data. The effectiveness of the cleaning procedures was assessed using time series (amplitude and spectra), network matrix and spatial map analyses. For time series and network analyses we also tested the effect of a second-level cleaning (informed by group-level analysis). Comparing these approaches, the preferable balance between noise removal and signal loss was achieved by regressing out of the data the full space of motion-related fluctuations and only the unique variance of the artefactual ICA components. Using similar analyses, we also investigated the effects of different cleaning approaches on data from different acquisition sequences.
With the optimal cleaning procedures, functional connectivity results from accelerated data were statistically comparable or significantly better than the standard (unaccelerated) acquisition, and, crucially, with higher spatial and temporal resolution. Moreover, we were able to perform higher dimensionality ICA decompositions with the accelerated data, which is very valuable for detailed network analyses. PMID:24657355
Griffanti, Ludovica; Salimi-Khorshidi, Gholamreza; Beckmann, Christian F; Auerbach, Edward J; Douaud, Gwenaëlle; Sexton, Claire E; Zsoldos, Enikő; Ebmeier, Klaus P; Filippini, Nicola; Mackay, Clare E; Moeller, Steen; Xu, Junqian; Yacoub, Essa; Baselli, Giuseppe; Ugurbil, Kamil; Miller, Karla L; Smith, Stephen M
2014-07-15
The identification of resting state networks (RSNs) and the quantification of their functional connectivity in resting-state fMRI (rfMRI) are seriously hindered by the presence of artefacts, many of which overlap spatially or spectrally with RSNs. Moreover, recent developments in fMRI acquisition yield data with higher spatial and temporal resolutions, but may increase artefacts both spatially and/or temporally. Hence the correct identification and removal of non-neural fluctuations is crucial, especially in accelerated acquisitions. In this paper we investigate the effectiveness of three data-driven cleaning procedures, compare standard against higher (spatial and temporal) resolution accelerated fMRI acquisitions, and investigate the combined effect of different acquisitions and different cleanup approaches. We applied single-subject independent component analysis (ICA), followed by automatic component classification with FMRIB's ICA-based X-noiseifier (FIX) to identify artefactual components. We then compared two first-level (within-subject) cleaning approaches for removing those artefacts and motion-related fluctuations from the data. The effectiveness of the cleaning procedures was assessed using time series (amplitude and spectra), network matrix and spatial map analyses. For time series and network analyses we also tested the effect of a second-level cleaning (informed by group-level analysis). Comparing these approaches, the preferable balance between noise removal and signal loss was achieved by regressing out of the data the full space of motion-related fluctuations and only the unique variance of the artefactual ICA components. Using similar analyses, we also investigated the effects of different cleaning approaches on data from different acquisition sequences. 
With the optimal cleaning procedures, functional connectivity results from accelerated data were statistically comparable or significantly better than the standard (unaccelerated) acquisition, and, crucially, with higher spatial and temporal resolution. Moreover, we were able to perform higher dimensionality ICA decompositions with the accelerated data, which is very valuable for detailed network analyses. Copyright © 2014 Elsevier Inc. All rights reserved.
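The preferred cleanup the abstract describes, removing the full space of motion confounds but only the unique variance of the artefactual ICA components, can be sketched as follows. This is a minimal numpy illustration on synthetic one-voxel data, assuming the ICA timecourses are already split into noise and signal sets; it is not the FIX implementation itself.

```python
import numpy as np

def soft_clean(y, motion, noise_tc, signal_tc):
    """Regress out motion fully; then fit noise and signal ICA timecourses
    jointly and subtract only the noise part, so variance shared with
    signal components is preserved ('non-aggressive' cleanup)."""
    y = y - motion @ np.linalg.lstsq(motion, y, rcond=None)[0]
    x = np.column_stack([noise_tc, signal_tc])
    beta = np.linalg.lstsq(x, y, rcond=None)[0]
    k = noise_tc.shape[1]
    return y - noise_tc @ beta[:k]

# Synthetic voxel: signal + structured noise + motion-related fluctuations
rng = np.random.default_rng(1)
T = 100
signal = rng.normal(size=(T, 1))
noise = rng.normal(size=(T, 1))
motion = rng.normal(size=(T, 2))
y = 2.0 * signal + 1.5 * noise + motion @ np.array([[0.5], [0.3]])
cleaned = soft_clean(y, motion, noise, signal)
```

After cleanup the voxel timecourse should track the signal component and be nearly uncorrelated with the artefactual one, which is the balance between noise removal and signal loss the study evaluates.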
The algorithm for automatic detection of the calibration object
NASA Astrophysics Data System (ADS)
Artem, Kruglov; Irina, Ugfeld
2017-06-01
The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required the application of digital image processing methods and algorithms such as morphology, filtering, edge detection and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests, the calibration object was isolated automatically in 86.1% of cases on average, with no type I errors. The algorithm was implemented in the automatic calibration module within the mobile software for log deck volume measurement.
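A common building block of such a detection pipeline, thresholding followed by keeping the largest connected region, can be sketched as below. This is a generic illustration under assumed values (the synthetic scene, the threshold of 128), not the authors' algorithm, which additionally uses morphology and shape approximation.

```python
import numpy as np
from collections import deque

def largest_blob(mask):
    """Label 4-connected foreground regions; return a mask of the largest."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    best, best_size, current = -1, 0, 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                size = 0
                q = deque([(i, j)])
                labels[i, j] = current
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
                if size > best_size:
                    best, best_size = current, size
    return labels == best

# Synthetic scene: a bright calibration object plus a small bright speck
img = np.full((40, 40), 30.0)
img[10:20, 10:20] = 220.0   # calibration object
img[30, 5] = 220.0          # isolated noise pixel
blob = largest_blob(img > 128)
```

Keeping only the largest region rejects small bright specks such as highlights on the log cuts, which is one way false detections (type I errors) can be suppressed.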