NASA Astrophysics Data System (ADS)
Almeida, Isabel P.; Schyns, Lotte E. J. R.; Vaniqui, Ana; van der Heyden, Brent; Dedes, George; Resch, Andreas F.; Kamp, Florian; Zindler, Jaap D.; Parodi, Katia; Landry, Guillaume; Verhaegen, Frank
2018-06-01
Proton beam ranges derived from dual-energy computed tomography (DECT) images from a dual-spiral radiotherapy (RT)-specific CT scanner were assessed using Monte Carlo (MC) dose calculations. Images from a dual-source and a twin-beam DECT scanner were also used to establish a comparison to the RT-specific scanner. Proton range calculations based on conventional single-energy CT (SECT) were additionally performed to benchmark against literature values. Using two phantoms, a DECT methodology was tested as input for GEANT4 MC proton dose calculations. Proton ranges were calculated for different mono-energetic proton beams irradiating both phantoms; the results were compared to the ground truth based on the phantom compositions. The same methodology was applied in a head-and-neck cancer patient using both SECT and dual-spiral DECT scans from the RT-specific scanner. A pencil-beam-scanning plan was designed and subsequently optimized by MC dose calculations, and differences in proton range for the different image-based simulations were assessed. For phantoms, the DECT method yielded overall better material segmentation, with >86% of the voxels correctly assigned for the dual-spiral and dual-source scanners but only 64% for the twin-beam scanner. For the calibration phantom, the dual-spiral scanner yielded range errors below 1.2 mm (0.6% of range), similar to the errors yielded by the dual-source scanner (<1.1 mm, <0.5%). With the validation phantom, the dual-spiral scanner yielded errors below 0.8 mm (0.9%), whereas SECT yielded errors up to 1.6 mm (2%). For the patient case, where the absolute truth was missing, proton range differences between DECT and SECT were on average ‑1.2 ± 1.2 mm (‑0.5% ± 0.5%). MC dose calculations were successfully performed on DECT images, with the dual-spiral scanner yielding media segmentation and range accuracy as good as the dual-source CT. In the patient, the various methods showed relevant range differences.
2009-01-01
Background Structural Magnetic Resonance Imaging (sMRI) of the brain is employed in the assessment of a wide range of neuropsychiatric disorders. In order to improve statistical power in such studies it is desirable to pool scanning resources from multiple centres. The CaliBrain project was designed to provide for an assessment of scanner differences at three centres in Scotland, and to assess the practicality of pooling scans from multiple-centres. Methods We scanned healthy subjects twice on each of the 3 scanners in the CaliBrain project with T1-weighted sequences. The tissue classifier supplied within the Statistical Parametric Mapping (SPM5) application was used to map the grey and white tissue for each scan. We were thus able to assess within scanner variability and between scanner differences. We have sought to correct for between scanner differences by adjusting the probability mappings of tissue occupancy (tissue priors) used in SPM5 for tissue classification. The adjustment procedure resulted in separate sets of tissue priors being developed for each scanner and we refer to these as scanner specific priors. Results Voxel Based Morphometry (VBM) analyses and metric tests indicated that the use of scanner specific priors reduced tissue classification differences between scanners. However, the metric results also demonstrated that the between scanner differences were not reduced to the level of within scanner variability, the ideal for scanner harmonisation. Conclusion Our results indicate the development of scanner specific priors for SPM can assist in pooling of scan resources from different research centres. This can facilitate improvements in the statistical power of quantitative brain imaging studies. PMID:19445668
NASA Astrophysics Data System (ADS)
Ravnik, Domen; Jerman, Tim; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2018-03-01
Performance of a convolutional neural network (CNN) based white-matter lesion segmentation in magnetic resonance (MR) brain images was evaluated under various conditions involving different levels of image preprocessing and augmentation and different compositions of the training dataset. On images of sixty multiple sclerosis patients, half acquired on one scanner and half on another scanner from a different vendor, we first created highly accurate multi-rater consensus-based lesion segmentations, which were used in several experiments to evaluate the CNN segmentation results. First, the CNN was trained and tested without preprocessing the images and with various combinations of preprocessing techniques, namely histogram-based intensity standardization, normalization by whitening, and training-dataset augmentation by flipping the images across the midsagittal plane. Then, the CNN was trained and tested on images of the same, different or interleaved scanner datasets using a cross-validation approach. The results indicate that image preprocessing has little impact on performance in a same-scanner situation, while between-scanner performance benefits most from intensity standardization and normalization, and further from incorporating heterogeneous multi-scanner datasets in the training phase. Under such conditions the between-scanner performance of the CNN approaches that of the ideal situation, in which the CNN is trained and tested on the same scanner dataset.
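As a rough illustration of the preprocessing steps this abstract names (histogram-based intensity standardization, normalization by whitening, and midsagittal flipping for augmentation), here is a minimal NumPy sketch; the landmark percentiles and flip axis are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def standardize_histogram(img, ref_landmarks, pcts=(1, 25, 50, 75, 99)):
    """Piecewise-linear intensity standardization: map the image's own
    percentile landmarks onto a set of reference landmarks."""
    src = np.percentile(img, pcts)
    return np.interp(img, src, ref_landmarks)

def whiten(img):
    """Zero-mean, unit-variance normalization over the whole volume."""
    return (img - img.mean()) / (img.std() + 1e-8)

def augment_flip(volume):
    """Augment by flipping across the midsagittal plane (assumed axis 0)."""
    return np.flip(volume, axis=0)
```

A pipeline in this spirit would standardize each scan to shared landmarks, whiten it, and double the training set with flipped copies.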
Optimal retinal cyst segmentation from OCT images
NASA Astrophysics Data System (ADS)
Oguz, Ipek; Zhang, Li; Abramoff, Michael D.; Sonka, Milan
2016-03-01
Accurate and reproducible segmentation of cysts and fluid-filled regions from retinal OCT images is an important step allowing quantification of the disease status, longitudinal disease progression, and response to therapy in wet-pathology retinal diseases. However, segmentation of fluid-filled regions from OCT images is a challenging task due to their inhomogeneous appearance, the unpredictability of their number, size and location, as well as the intensity profile similarity between such regions and certain healthy tissue types. While machine learning techniques can be beneficial for this task, they require large training datasets and are often over-fitted to the appearance models of specific scanner vendors. We propose a knowledge-based approach that leverages a carefully designed cost function and graph-based segmentation techniques to provide a vendor-independent solution to this problem. We illustrate the results of this approach on two publicly available datasets with a variety of scanner vendors and retinal disease status. Compared to a previous machine-learning based approach, the volume similarity error was dramatically reduced from 81.3 ± 56.4% to 22.2 ± 21.3% (paired t-test, p << 0.001).
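The volume similarity error quoted above can be computed, under one common definition (absolute volume difference as a percentage of the reference volume; the paper's exact formula may differ), as:

```python
import numpy as np

def volume_similarity_error(seg, ref, voxel_volume=1.0):
    """Absolute volume difference between a segmentation and a reference
    mask, expressed as a percentage of the reference volume."""
    v_seg = seg.sum() * voxel_volume
    v_ref = ref.sum() * voxel_volume
    return 100.0 * abs(v_seg - v_ref) / v_ref
```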
A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.
Pandis, Petros; Bull, Anthony MJ
2017-11-01
Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
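A voxel-based sketch of how mass, centre of mass, and a moment of inertia might be derived from a scan of a homogeneous segment, as validated above with a standard-sized homogeneous object; the density and voxel size here are illustrative assumptions, not the study's values.

```python
import numpy as np

def segment_inertia(mask, density=1000.0, voxel=0.01):
    """Mass (kg), centre of mass (m), and moment of inertia about the
    z-axis through the COM for a homogeneous voxelized segment.
    `density` in kg/m^3, `voxel` edge length in m."""
    dv = voxel ** 3
    idx = np.argwhere(mask) * voxel              # voxel-centre coordinates (m)
    mass = density * dv * len(idx)
    com = idx.mean(axis=0)
    r2 = (idx[:, 0] - com[0]) ** 2 + (idx[:, 1] - com[1]) ** 2
    izz = density * dv * r2.sum()                # kg·m^2 about the z-axis
    return mass, com, izz
```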
Liu, Hon-Man; Chen, Shan-Kai; Chen, Ya-Fang; Lee, Chung-Wei; Yeh, Lee-Ren
2016-01-01
Purpose To assess the inter-session reproducibility of automatically segmented MRI-derived measures by FreeSurfer in a group of subjects with normal-appearing MR images. Materials and Methods After retrospectively reviewing a brain MRI database from our institute consisting of 14,758 adults, those subjects who had repeat scans and had no history of neurodegenerative disorders were selected for morphometry analysis using FreeSurfer. A total of 34 subjects were grouped by MRI scanner model. After automatic segmentation using FreeSurfer, label-wise comparison (involving area, thickness, and volume) was performed on all segmented results. An intraclass correlation coefficient was used to estimate the agreement between sessions. The Wilcoxon signed rank test was used to assess the population mean rank differences across sessions. Mean-difference analysis was used to evaluate the difference intervals across scanners. Absolute percent difference was used to estimate the reproducibility errors across the MRI models. The Kruskal-Wallis test was used to determine the across-scanner effect. Results The agreement in segmentation results for area, volume, and thickness measurements of all segmented anatomical labels was generally higher in the Signa Excite and Verio models than in the Sonata and TrioTim models. There were significant rank differences across sessions in some labels of different measures. Smaller difference intervals in global volume measurements were noted on images acquired by the Signa Excite and Verio models. For some brain regions, significant MRI model effects were observed on certain segmentation results. Conclusions Short-term scan-rescan reliability of automatic brain MRI morphometry is feasible in the clinical setting. However, since repeatability of software performance is contingent on the reproducibility of the scanner performance, the scanner must be calibrated before conducting such studies or before using such software for retrospective review. PMID:26812647
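Two of the agreement measures used above, absolute percent difference and the intraclass correlation coefficient, can be sketched as follows; this uses the two-way random, absolute-agreement ICC(2,1) form, which may differ from the study's exact variant.

```python
import numpy as np

def abs_percent_diff(sess1, sess2):
    """Scan-rescan reproducibility error per label: absolute difference
    as a percentage of the two-session mean (one common definition)."""
    sess1, sess2 = np.asarray(sess1, float), np.asarray(sess2, float)
    return 200.0 * np.abs(sess1 - sess2) / (sess1 + sess2)

def icc_agreement(sess1, sess2):
    """Two-way random, absolute-agreement ICC(2,1) for two sessions."""
    y = np.stack([sess1, sess2], axis=1).astype(float)
    n, k = y.shape
    grand = y.mean()
    ms_r = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions
    ss_e = ((y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfectly repeated sessions give an absolute percent difference of 0 and an ICC of 1, the ideal for scanner harmonisation.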
A feasibility study of limb volume measuring systems
NASA Technical Reports Server (NTRS)
Lafferty, J. F.; Carter, W. M.
1974-01-01
Evaluation of the various techniques by which limb volume can be measured indicates that the odometric (electromechanical) method and the reflective scanner (optical) have a high probability of meeting the specifications of the LBNP experiments. Both of these methods provide segmental measurements from which the cross sectional area of the limb can be determined.
Andreini, Daniele; Mushtaq, Saima; Pontone, Gianluca; Conte, Edoardo; Guglielmo, Marco; Annoni, Andrea; Baggiano, Andrea; Formenti, Alberto; Ditali, Valentina; Mancini, Maria Elisabetta; Zanchi, Simone; Melotti, Eleonora; Trabattoni, Daniela; Montorsi, Piero; Ravagnani, Paolo Mario; Fiorentini, Cesare; Bartorelli, Antonio L; Pepi, Mauro
2018-04-15
Aim of the study was to evaluate image quality, radiation exposure and diagnostic accuracy of coronary CT angiography (CCTA) performed with a novel cardiac CT scanner in patients with very high heart rate (HR). We prospectively enrolled 202 patients (111 men, mean age 66 ± 8 years) with suspected coronary artery disease who underwent CCTA with a whole-organ volumetric CT scanner. The HR during the scan was ≥80 bpm in 100 patients (Group 1), while it was ≤65 bpm in the remaining 102 patients (Group 2). In all patients, image quality score and coronary interpretability were evaluated and effective dose (ED) was recorded. In 86 of the 202 enrolled patients (40 in Group 1, 46 in Group 2) who were referred for a clinically indicated invasive coronary angiography (ICA) within 6 months, diagnostic accuracy of CCTA vs. ICA was evaluated. Mean image quality and coronary interpretability were very high in both groups (Likert score 3.35 vs. 3.39; interpretability 97.3% [1542/1584 segments] vs. 98% [1569/1600 segments] in Group 1 and Group 2, respectively). Mean ED was lower in Group 2 (1.1 ± 0.5 mSv) compared to Group 1 (2.9 ± 1.6 mSv). In Group 1, sensitivity and specificity of CCTA for detection of >50% stenosis vs. ICA were 95.2% and 98.9% in a segment-based analysis and 100% and 81.8% in a patient-based analysis, respectively. The whole-organ high-definition CT scanner allows evaluation of the coronary arteries in patients with high HR with excellent image quality, coronary interpretability and low radiation exposure. Copyright © 2017 Elsevier B.V. All rights reserved.
Collaborative SDOCT Segmentation and Analysis Software.
Yun, Yeyi; Carass, Aaron; Lang, Andrew; Prince, Jerry L; Antony, Bhavna J
2017-02-01
Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool capable of visualizing, segmenting, and analyzing the data. This is especially noteworthy in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to an SQL database, thereby allowing for the collation of data from a number of collaborating sites.
Fischbach, Katharina; Kosiek, Otrud; Friebe, Björn; Wybranski, Christian; Schnackenburg, Bernhard; Schmeisser, Alexander; Smid, Jan; Ricke, Jens; Pech, Maciej
2017-01-01
Cardiac magnetic resonance imaging (cMRI) has become the non-invasive reference standard for the evaluation of cardiac function and viability. The introduction of open, high-field, 1.0T (HFO) MR scanners offers advantages for examinations of obese, claustrophobic and paediatric patients. The aim of our study was to compare standard cMRI sequences from an HFO scanner and those from a cylindrical, 1.5T MR system. Fifteen volunteers underwent cMRI both in an open HFO and in a cylindrical MR system. The protocol consisted of cine and unenhanced tissue sequences. The signal-to-noise ratio (SNR) for each sequence and blood-myocardium contrast for the cine sequences were assessed. Image quality and artefacts were rated. The location and number of non-diagnostic segments were determined. Volunteers' tolerance to examinations in both scanners was investigated. SNR was significantly lower in the HFO scanner (all p<0.001). However, the contrast of the cine sequence was significantly higher in the HFO platform compared to the 1.5T MR scanner (0.685±0.41 vs. 0.611±0.54; p<0.001). Image quality was comparable for all sequences (all p>0.05). Overall, only a few non-diagnostic myocardial segments were recorded: 6/960 (0.6%) with the HFO system and 17/960 (1.8%) with the cylindrical system. The volunteers expressed a preference for the open MR system (p<0.01). Standard cardiac MRI sequences in an HFO platform offer high image quality comparable to that of images acquired in a cylindrical 1.5T MR scanner. An open scanner design may potentially improve tolerance of cardiac MRI and therefore allow examination of an even broader patient spectrum.
Dessery, Yoann; Pallari, Jari
2018-01-01
Use of additive manufacturing is growing rapidly in the orthotics field. This technology allows orthotics to be designed directly on digital scans of limbs. However, little information is available about scanners and 3D scans. The aim of this study is to look at the agreement between manual measurements and high-level and low-cost handheld 3D scanners. We took two manual measurements and three 3D scans with each scanner from 14 lower limbs. The lower limbs were divided into 17 sections of 30 mm each, from 180 mm above the mid-patella to 300 mm below. Times to record and to process the three 3D scans were compared between scanner methods with the Student t-test, while Bland-Altman plots were used to study agreement between the circumferences of each section from the three methods. The record time was 97 s shorter with the high-level scanner than with the low-cost one (p = .02), while the process time was nine times quicker with the low-cost scanner (p < .01). An overestimation of 2.5 mm was found for the high-level scanner compared to manual measurement, but with better repeatability between measurements. The low-cost scanner tended to overestimate the circumferences by 0.1% to 1.5%, the overestimation being greater for smaller circumferences. In conclusion, 3D scanners provide more information about the shape of the lower limb, but the reliability depends on the 3D scanner and the size of the scanned segment. Low-cost scanners could be useful for clinicians because of the simple and fast process, but attention should be focused on accuracy, which depends on the scanned body segment.
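The Bland-Altman agreement analysis used above reduces to a bias (mean difference between methods) and 95% limits of agreement; a minimal sketch:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two
    measurement methods applied to the same items."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A constant scanner overestimation (like the 2.5 mm reported) shows up as the bias, while the spread of the limits reflects repeatability.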
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras
Peyer, Kathrin E.; Morris, Mark; Sellers, William I.
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
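The convex-hulling step can be sketched with SciPy's `ConvexHull`; the uniform density value below is an illustrative assumption, not the study's.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_segment_parameters(points, density=1050.0):
    """Approximate a body segment by the convex hull of its point cloud
    (N x 3 array, metres) and return hull volume (m^3) and mass under a
    uniform-density assumption (kg/m^3)."""
    hull = ConvexHull(points)
    return hull.volume, hull.volume * density
```

In practice the segment's point cloud would come from the photogrammetric reconstruction; subdividing a segment and hulling each slice tightens the approximation for concave regions.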
Gopakumar, Gopalakrishna Pillai; Swetha, Murali; Sai Siva, Gorthi; Sai Subrahmanyam, Gorthi R K
2018-03-01
The present paper introduces a focus stacking-based approach for automated quantitative detection of Plasmodium falciparum malaria from blood smears. For the detection, a custom-designed convolutional neural network (CNN) operating on a focus stack of images is used. The cell counting problem is addressed as a segmentation problem, and we propose a 2-level segmentation strategy. Use of a CNN operating on a focus stack for the detection of malaria is the first of its kind; it not only improved the detection accuracy (both in terms of sensitivity [97.06%] and specificity [98.50%]) but also favored processing on cell patches and avoided the need for hand-engineered features. The slide images are acquired with a custom-built portable slide scanner made from low-cost, off-the-shelf components that is suitable for point-of-care diagnostics. The proposed approach of employing sophisticated algorithmic processing together with inexpensive instrumentation can potentially benefit clinicians by enabling malaria diagnosis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Yarnykh, V.; Korostyshevskaya, A.
2017-08-01
Macromolecular proton fraction (MPF) is a biophysical parameter describing the amount of macromolecular protons involved in magnetization exchange with water protons in tissues. MPF is of significant interest as a magnetic resonance imaging (MRI) biomarker of myelin for clinical applications. A recent fast MPF mapping method enabled clinical translation of MPF measurements due to time-efficient acquisition based on the single-point constrained fit algorithm. However, previous MPF mapping applications utilized only 3 Tesla MRI scanners and modified pulse sequences, which are not commonly available. This study aimed to test the feasibility of MPF mapping implementation on a 1.5 Tesla clinical scanner using standard manufacturer's sequences and compare the performance of this method between 1.5 and 3 Tesla scanners. MPF mapping was implemented on 1.5 and 3 Tesla MRI units of one manufacturer with either optimized custom-written or standard product pulse sequences. Whole-brain three-dimensional MPF maps obtained from a single volunteer were compared between field strengths and implementation options. MPF maps demonstrated similar quality at both field strengths. MPF values in segmented brain tissues and specific anatomic regions appeared in close agreement. This experiment demonstrates the feasibility of fast MPF mapping using standard sequences on 1.5 T and 3 T clinical scanners.
San José, Verónica; Bellot-Arcís, Carlos; Tarazona, Beatriz; Zamora, Natalia; O Lagravère, Manuel
2017-01-01
Background To compare the reliability and accuracy of direct and indirect dental measurements derived from two types of 3D virtual models, generated by intraoral laser scanning (ILS) and segmented cone beam computed tomography (CBCT), comparing these with a 2D digital model. Material and Methods One hundred patients were selected. All patients' records included initial plaster models, an intraoral scan and a CBCT. Patients' dental arches were scanned with the iTero® intraoral scanner while the CBCTs were segmented to create three-dimensional models. To obtain 2D digital models, plaster models were scanned using a conventional 2D scanner. When digital models had been obtained using these three methods, direct dental measurements were measured and indirect measurements were calculated. Differences between methods were assessed by means of paired t-tests and regression models. Intra- and inter-observer error were analyzed using Dahlberg's d and coefficients of variation. Results Intraobserver and interobserver error for the ILS model was less than 0.44 mm, while for segmented CBCT models the error was less than 0.97 mm. ILS models provided statistically and clinically acceptable accuracy for all dental measurements, while CBCT models showed a tendency to underestimate measurements in the lower arch, although within the limits of clinical acceptability. Conclusions ILS and CBCT segmented models are both reliable and accurate for dental measurements. Integration of ILS with CBCT scans would provide dental and skeletal information together. Key words: CBCT, intraoral laser scanner, 2D digital models, 3D models, dental measurements, reliability. PMID:29410764
NASA Technical Reports Server (NTRS)
Cook, M.
1990-01-01
Qualification testing of Combustion Engineering's AMDATA Intraspect/98 Data Acquisition and Imaging System, as applied to redesigned solid rocket motor (RSRM) case membrane and case-to-insulation bondline inspection, was performed. Testing was performed at M-67, the Thiokol Corp. RSRM Assembly Facility. The purpose of the inspection was to verify the integrity of the case membrane and the case-to-insulation bondline. The case membrane scanner was calibrated on the RSRM case segment calibration standard, which had an intentional 1.0 by 1.0 in. case-to-insulation unbond. The case membrane scanner was then used to scan a 20 by 20 in. membrane area of the case segment. Calibration of the scanner was then rechecked on the calibration standard to ensure that the calibration settings did not change during the case membrane scan. This procedure was successfully performed five times to qualify the unbond detection capability of the case membrane scanner.
Jung, Kwan-Jin; Prasad, Parikshit; Qin, Yulin; Anderson, John R.
2013-01-01
A method to extract the subject's overt verbal response from the obscuring acoustic noise in an fMRI scan was developed by applying active noise cancellation with a conventional MRI microphone. Since EPI scanning and its accompanying acoustic noise in fMRI are repetitive, the acoustic noise in one time segment was used as a reference noise for suppressing the acoustic noise in subsequent segments. However, the acoustic noise from the scanner was affected by the subject's movements, so the reference noise was adaptively adjusted as the scanner's acoustic properties varied in time. This method was successfully applied to a cognitive fMRI experiment with overt verbal responses. PMID:15723385
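The adaptive reference-noise subtraction described here can be illustrated with a standard least-mean-squares (LMS) adaptive filter; the tap count and step size below are illustrative, and this is a generic sketch rather than the authors' exact algorithm.

```python
import numpy as np

def lms_cancel(primary, reference, mu=0.01, taps=16):
    """LMS adaptive noise cancellation: estimate the repetitive scanner
    noise in `primary` from the `reference` noise segment and subtract it;
    the filter weights track slow changes in the noise path."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]   # most recent reference samples
        y = w @ x                         # current noise estimate
        e = primary[n] - y                # error = cleaned signal sample
        out[n] = e
        w += 2 * mu * e * x               # LMS weight update
    return out
```

With a purely repetitive noise source the residual decays toward zero; speech in the primary channel passes through as the uncorrelated error term.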
Transfer learning improves supervised image segmentation across imaging protocols.
van Opbroek, Annegreet; Ikram, M Arfan; Vernooij, Meike W; de Bruijne, Marleen
2015-05-01
The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter/MS-lesion segmentation. The experiments showed that when there is only a small amount of representative training data available, transfer learning can greatly outperform common supervised-learning approaches, minimizing classification errors by up to 60%.
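One of the simplest transfer strategies in this spirit, pooling source and target data but up-weighting the small representative target set, can be sketched with a weighted SVM; the toy data, labels, and weight of 10 are all illustrative assumptions, not the paper's classifiers or settings.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: many samples from other scanners (source) and a
# small representative set from the target scanner, with a shifted boundary.
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, (200, 2)); y_src = (X_src[:, 0] > 0.0).astype(int)
X_tgt = rng.normal(0.3, 1.0, (20, 2));  y_tgt = (X_tgt[:, 0] > 0.3).astype(int)

# Pool both sets, but give each target sample 10x the weight of a source sample.
X = np.vstack([X_src, X_tgt])
y = np.concatenate([y_src, y_tgt])
w = np.concatenate([np.ones(len(y_src)), 10.0 * np.ones(len(y_tgt))])
clf = SVC(kernel="linear").fit(X, y, sample_weight=w)
```

The weighting pulls the decision boundary toward the target distribution without discarding the abundant source data.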
Okur, Aylin; Kantarcı, Mecit; Kızrak, Yeşim; Yıldız, Sema; Pirimoğlu, Berhan; Karaca, Leyla; Oğul, Hayri; Sevimli, Serdar
2014-01-01
PURPOSE We aimed to use a noninvasive method for quantifying T1 values of chronic myocardial infarction scar by cardiac magnetic resonance imaging (MRI), and determine its diagnostic performance. MATERIALS AND METHODS We performed cardiac MRI on 29 consecutive patients with known coronary artery disease (CAD) on a 3.0 Tesla MRI scanner. An unenhanced T1 mapping technique was used to calculate the T1 relaxation time of myocardial scar tissue, and its diagnostic performance was evaluated. Chronic scar tissue was identified by delayed contrast-enhancement (DE) MRI and T2-weighted images. Sensitivity, specificity, and accuracy values were calculated for T1 mapping using DE images as the gold standard. RESULTS Four hundred and forty-two segments were analyzed in 26 patients. While myocardial chronic scar was demonstrated in 45 segments on DE images, T1 mapping MRI showed a chronic scar area in 54 segments. T1 relaxation time was higher in chronic scar tissue compared with remote areas (1314±98 ms vs. 1099±90 ms, P < 0.001). Therefore, increased T1 values were shown in areas of myocardium colocalized with areas of DE and normal signal on T2-weighted images. There was a significant correlation between T1 mapping and DE images in evaluation of myocardial wall injury extent (P < 0.05). We calculated sensitivity, specificity, and accuracy as 95.5%, 97%, and 96%, respectively. CONCLUSION The results of the present study reveal that T1 mapping MRI combined with T2-weighted images might be a feasible imaging modality for detecting chronic myocardial infarction scar tissue. PMID:25010366
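The reported sensitivity, specificity, and accuracy follow from a standard 2×2 confusion table against the gold standard:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from true/false
    positive/negative counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, acc
```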
Automatic liver volume segmentation and fibrosis classification
NASA Astrophysics Data System (ADS)
Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit
2018-02-01
In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed-tomography (CT) portal phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, acquired with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis; the second group contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
Automatic segmentation and volumetry of multiple sclerosis brain lesions from MR images
Jain, Saurabh; Sima, Diana M.; Ribbens, Annemie; Cambron, Melissa; Maertens, Anke; Van Hecke, Wim; De Mey, Johan; Barkhof, Frederik; Steenwijk, Martijn D.; Daams, Marita; Maes, Frederik; Van Huffel, Sabine; Vrenken, Hugo; Smeets, Dirk
2015-01-01
The location and extent of white matter lesions on magnetic resonance imaging (MRI) are important criteria for diagnosis, follow-up and prognosis of multiple sclerosis (MS). Clinical trials have shown that quantitative values, such as lesion volumes, are meaningful in MS prognosis. Manual delineation for the segmentation of lesions is, however, time-consuming and suffers from observer variability. In this paper, we propose MSmetrix, an accurate and reliable automatic method for lesion segmentation based on MRI, independent of scanner or acquisition protocol and without requiring any training data. In MSmetrix, 3D T1-weighted and FLAIR MR images are used in a probabilistic model to detect white matter (WM) lesions as outliers to normal brain while segmenting the brain tissue into grey matter, WM and cerebrospinal fluid. The actual lesion segmentation is performed based on prior knowledge about the location (within WM) and the appearance (hyperintense on FLAIR) of lesions. The accuracy of MSmetrix is evaluated by comparing its output with expert reference segmentations of 20 MRI datasets of MS patients. The spatial overlap (Dice) between the MSmetrix and the expert lesion segmentation is 0.67 ± 0.11. The intraclass correlation coefficient (ICC) equals 0.8, indicating good volumetric agreement between the MSmetrix and expert labelling. The reproducibility of MSmetrix's lesion volumes is evaluated based on 10 MS patients, scanned twice with a short interval on three different scanners. The agreement between the first and the second scan on each scanner is evaluated through the spatial overlap and the absolute lesion volume difference between them. The spatial overlap was 0.69 ± 0.14 and the absolute total lesion volume difference between the two scans was 0.54 ± 0.58 ml. Finally, the accuracy and reproducibility of MSmetrix compare favourably with other publicly available MS lesion segmentation algorithms, applied to the same data using default parameter settings.
PMID:26106562
Mishra, Atul; Jain, Narendra; Bhagwat, Anand
2017-07-01
Peripheral arterial occlusive disease (PAOD) may cause disabling claudication or critical limb ischemia. Multidetector computed tomography (CT) technology has evolved to the level of 256-slice CT scanners, which have significantly improved the spatial and temporal resolution of the images. This provides the capability of chasing the contrast bolus at high speed, enabling angiographic imaging of long segments of the body. These images can be reconstructed in various planes and modes for detailed analysis of peripheral vascular disease, which helps in making treatment decisions. The aim of this retrospective study was to compare the CT angiograms (CTAs) of all cases of PAOD performed on a 256-slice CT scanner at a tertiary care vascular center with the digital subtraction angiograms (DSAs) of the same patients. The study included 53 patients who underwent both CTA and DSA at our center over a period of 3 years, from March 2013 to March 2016. The CTA showed high sensitivity (93%) and specificity (92.7%) for overall assessment of the degree of stenosis in a vascular segment in cases of aortic and lower limb occlusive disease. The assessment of lesions of the infrapopliteal segment was comparatively inferior (sensitivity 91.6%, accuracy 73.3%, and positive predictive value 78.5%), more so in the presence of significant calcification. The advantages of CTA were its noninvasive nature, ability to image a large area of the body, almost no adverse effects to the patients, and better assessment of vessel wall disease. However, the CTA assessment of collaterals was inferior, with a sensitivity of only 62.7% as compared to DSA. Overall, 256-slice CTA provides fast and accurate imaging of the vascular tree, which can restrict DSA to a few selected cases as a problem-solving tool where clinico-radiological mismatch is present.
Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research
NASA Astrophysics Data System (ADS)
Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.
2010-12-01
Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize scanning, image reconstruction/segmentation and pore-space analysis capabilities of a new generation benchtop MCT scanner and associated software package. To demonstrate applicability to soil research the project was focused on determination of porosities and pore size distributions of two Brazilian Oxisols from segmented MCT-data. Effects of metal filters and various acquisition parameters (e.g. total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
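Porosity and a simple pore-size distribution can be read off a segmented binary MCT volume by counting pore voxels and connected pore bodies. A self-contained sketch using face connectivity (6-connected in 3D, 4-connected in 2D); the scanner's bundled analysis software will differ in details such as connectivity and filtering:

```python
import numpy as np
from collections import deque

def pore_size_distribution(pores):
    """Porosity and per-pore voxel counts for a binary mask
    (True = pore space), using face connectivity."""
    pores = np.asarray(pores, dtype=bool)
    phi = pores.sum() / pores.size          # porosity
    visited = np.zeros_like(pores)
    # Face-neighbour offsets: +/-1 along each axis.
    offsets = []
    for axis in range(pores.ndim):
        for step in (-1, 1):
            off = [0] * pores.ndim
            off[axis] = step
            offsets.append(tuple(off))
    sizes = []
    for start in zip(*np.nonzero(pores)):
        if visited[start]:
            continue
        visited[start] = True
        queue, count = deque([start]), 0
        while queue:                        # BFS over one pore body
            voxel = queue.popleft()
            count += 1
            for off in offsets:
                nb = tuple(v + o for v, o in zip(voxel, off))
                if all(0 <= n < s for n, s in zip(nb, pores.shape)) \
                        and pores[nb] and not visited[nb]:
                    visited[nb] = True
                    queue.append(nb)
        sizes.append(count)
    return phi, sorted(sizes)
```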
Client/server approach to image capturing
NASA Astrophysics Data System (ADS)
Tuijn, Chris; Stokes, Earle
1998-01-01
The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum-scanners with photo multiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile).
Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can have. In the last section, we briefly summarize the presented material and point out trends for future development.
Automatic segmentation of vessels in in-vivo ultrasound scans
NASA Astrophysics Data System (ADS)
Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen
2017-03-01
Ultrasound has become highly popular for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. Automatic segmentation of the vessel lumen can enable determination of the lumen diameter. This paper presents a fully automatic algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps and performs vessel segmentation by use of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically and applied to a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineation performed by two experienced users. The results showed a sensitivity and specificity of 90.41 ± 11.2% and 97.93 ± 5.7% (mean ± standard deviation), respectively. The overlap between automatic and manual segmentation, measured by the Dice similarity coefficient, was 91.25 ± 11.6%. The empirical results demonstrated the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.
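The marker-controlled watershed at the core of such a pipeline can be sketched as a priority flood from labelled seed pixels. This toy 2D version is not the authors' implementation (which also includes preprocessing and power-Doppler fusion); it simply floods pixels in order of increasing intensity from the markers:

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Minimal marker-controlled watershed (priority flood) on a 2D image.
    `markers` holds positive integer labels at seed pixels, 0 elsewhere.
    Unlabelled pixels inherit the label of the cheapest flooding front."""
    labels = markers.copy()
    heap = []
    for seed in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (image[seed], seed))
    while heap:
        _, (r, c) = heapq.heappop(heap)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] \
                    and labels[nr, nc] == 0:
                labels[nr, nc] = labels[r, c]
                heapq.heappush(heap, (image[nr, nc], (nr, nc)))
    return labels
```

On a B-mode image the markers would come from the preprocessing stage (e.g. dark-lumen and bright-wall seeds); here they are just user-supplied labels.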
Ahmed, Abdella M; Tashima, Hideaki; Yamaya, Taiga
2018-03-01
The dominant factor limiting the intrinsic spatial resolution of a positron emission tomography (PET) system is the size of the crystal elements in the detector. To increase sensitivity and achieve high spatial resolution, it is essential to use advanced depth-of-interaction (DOI) detectors and arrange them close to the subject. The DOI detectors help maintain high spatial resolution by mitigating the parallax error caused by the thickness of the scintillator near the peripheral regions of the field-of-view. As an optimal geometry for a brain PET scanner, with high sensitivity and spatial resolution, we proposed and developed the helmet-chin PET scanner using 54 four-layered DOI detectors consisting of a 16 × 16 × 4 array of GSOZ scintillator crystals with dimensions of 2.8 × 2.8 × 7.5 mm³. All the detectors used in the helmet-chin PET scanner had the same spatial resolution. In this study, we conducted a feasibility study of a new add-on detector arrangement for the helmet PET scanner by replacing the chin detector with a segmented crystal cube, having high spatial resolution in all directions, which can be placed inside the mouth. The crystal cube (which we have named the mouth-insert detector) has an array of 20 × 20 × 20 LYSO crystal segments with dimensions of 1 × 1 × 1 mm³. Thus, the scanner is formed by the combination of the helmet and mouth-insert detectors, and is referred to as the helmet-mouth-insert PET scanner. The results show that the helmet-mouth-insert PET scanner has comparable sensitivity and improved spatial resolution near the center of the hemisphere, compared to the helmet-chin PET scanner.
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required; the threshold can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aim was to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume (p=0.93) and root canal surface area (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
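Once a threshold is fixed, visually or automatically, canal volume reduces to a voxel count times the voxel volume. A hypothetical sketch; the threshold polarity (canal darker than dentin) and the voxel size are our assumptions, as the abstract does not specify them:

```python
import numpy as np

def canal_volume_mm3(gray, threshold, voxel_volume_mm3):
    """Root-canal volume from a grayscale microCT stack, assuming canal
    voxels are darker than dentin (intensity below the threshold)."""
    return np.count_nonzero(gray < threshold) * voxel_volume_mm3

# Comparing a visually and an automatically chosen threshold
# on the same stack (names hypothetical):
#   v_visual = canal_volume_mm3(stack, t_visual, vox)
#   v_auto   = canal_volume_mm3(stack, t_auto, vox)
```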
Hess, M A; Duncan, R F
1996-01-01
Preferential translation of Drosophila heat shock protein 70 (Hsp70) mRNA requires only the 5'-untranslated region (5'-UTR). The sequence of this region suggests that it has relatively little secondary structure, which may facilitate efficient protein synthesis initiation. To determine whether minimal 5'-UTR secondary structure is required for preferential translation during heat shock, the effect of introducing stem-loops into the Hsp70 mRNA 5'-UTR was measured. Stem-loops of -11 kcal/mol abolished translation during heat shock, but did not reduce translation in non-heat-shocked cells. A -22 kcal/mol stem-loop was required to comparably inhibit translation during growth at normal temperatures. To investigate whether specific sequence elements are also required for efficient preferential translation, deletion and mutation analyses were conducted in a truncated Hsp70 5'-UTR containing only the cap-proximal and AUG-proximal segments. Linker-scanner mutations in the cap-proximal segment (+1 to +37) did not impair translation. Re-ordering the segments reduced mRNA translational efficiency by 50%. Deleting the AUG-proximal segment severely inhibited translation. A 5'-extension of the full-length leader specifically impaired heat shock translation. These results indicate that heat shock reduces the capacity to unwind 5'-UTR secondary structure, allowing only mRNAs with minimal 5'-UTR secondary structure to be efficiently translated. A function for specific sequences is also suggested. PMID:8710519
A functional-based segmentation of human body scans in arbitrary postures.
Werghi, Naoufel; Xiao, Yijun; Siebert, Jan Paul
2006-02-01
This paper presents a general framework that aims to address the task of segmenting three-dimensional (3-D) scan data representing the human form into subsets which correspond to functional human body parts. Such a task is challenging due to the articulated and deformable nature of the human body. A salient feature of this framework is that it is able to cope with various body postures and is in addition robust to noise, holes, irregular sampling and rigid transformations. Although whole human body scanners are now capable of routinely capturing the shape of the whole body in machine readable format, they have not yet realized their potential to provide automatic extraction of key body measurements. Automated production of anthropometric databases is a prerequisite to satisfying the needs of certain industrial sectors (e.g., the clothing industry). This implies that in order to extract specific measurements of interest, whole body 3-D scan data must be segmented by machine into subsets corresponding to functional human body parts. However, previously reported attempts at automating the segmentation process suffer from various limitations, such as being restricted to a standard specific posture and being vulnerable to scan data artifacts. Our human body segmentation algorithm advances the state of the art to overcome the above limitations and we present experimental results obtained using both real and synthetic data that confirm the validity, effectiveness, and robustness of our approach.
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
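The Ridler method referenced above is the classical Ridler-Calvard (ISODATA) clustering threshold: iterate the threshold to the midpoint of the means of the two classes it currently induces, until it stops moving. A minimal sketch on raw intensity values (production implementations usually work on the histogram instead):

```python
import numpy as np

def ridler_threshold(values, tol=1e-6, max_iter=100):
    """Ridler-Calvard (ISODATA) threshold: fixed point of
    t -> (mean(values <= t) + mean(values > t)) / 2."""
    values = np.asarray(values, dtype=float)
    t = values.mean()                      # initial guess: global mean
    for _ in range(max_iter):
        lo = values[values <= t]
        hi = values[values > t]
        if lo.size == 0 or hi.size == 0:   # degenerate split, stop
            break
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < tol:
            t = new_t
            break
        t = new_t
    return t
```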
Storelli, L; Pagani, E; Rocca, M A; Horsfield, M A; Gallo, A; Bisecco, A; Battaglini, M; De Stefano, N; Vrenken, H; Thomas, D L; Mancini, L; Ropele, S; Enzinger, C; Preziosa, P; Filippi, M
2016-07-21
The automatic segmentation of MS lesions could reduce the time required for image processing, together with inter- and intraoperator variability, for research and clinical trials. A multicenter validation of a proposed semiautomatic method for hyperintense MS lesion segmentation on dual-echo MR imaging is presented. The classification technique used is based on a region-growing approach starting from manual lesion identification by an expert observer, with a final segmentation-refinement step. The method was validated in a cohort of 52 patients with relapsing-remitting MS, with dual-echo images acquired in 6 different European centers. We found a mathematic expression that made the optimization of the method independent of the need for a training dataset. The automatic segmentation was in good agreement with the manual segmentation (Dice similarity coefficient = 0.62 and root mean square error = 2 mL). Assessment of the segmentation errors showed no significant differences in algorithm performance between the different MR scanner manufacturers (P > .05). The method proved to be robust, and no center-specific training of the algorithm was required, offering the possibility of application in a clinical setting. Adoption of the method should lead to improved reliability and less operator time required for image analysis in research and clinical trials in MS. © 2016 American Society of Neuroradiology.
An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks
NASA Astrophysics Data System (ADS)
Jia, F.; Lichti, D. D.
2018-05-01
Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity in reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problem. If the object to be scanned is simplified into discretized wall segments, any possible viewpoint can be evaluated by a score table representing its visible segments under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. Efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search over the entire workspace. The experiment environments in this paper were simulated from two buildings on the University of Calgary campus. Compared with the "brute force" strategy in terms of solution quality and runtime, the proposed strategy is shown to provide a scanning network of comparable quality with more than a 70% time saving.
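The core selection step, finding a minimum set of viewpoints whose score tables jointly cover all wall segments, is an instance of set cover, which is commonly approximated greedily; the paper's contribution is the hierarchical densification of candidates around such a selection. A greedy sketch under that reading (names and data layout are ours):

```python
def greedy_viewpoints(coverage):
    """Greedy set cover: repeatedly pick the candidate viewpoint whose
    score table covers the most still-unseen wall segments.
    `coverage` maps viewpoint id -> set of visible segment ids."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda v: len(coverage[v] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:
            break  # defensive: remaining segments visible from nowhere
        chosen.append(best)
        uncovered -= gain
    return chosen
```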
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khatonabadi, Maryam; Kim, Hyun J.; Lu, Peiyun
Purpose: In AAPM Task Group 204, the size-specific dose estimate (SSDE) was developed by providing size adjustment factors which are applied to the computed tomography (CT) standardized dose metric, CTDIvol. However, that work focused on fixed tube current scans and did not specifically address tube current modulation (TCM) scans, which currently make up the majority of clinical scans. The purpose of this study was to extend the SSDE concept to account for TCM by investigating the feasibility of using anatomic and organ-specific regions of scanner output to improve the accuracy of dose estimates. Methods: Thirty-nine adult abdomen/pelvis and 32 chest scans from clinically indicated CT exams acquired on a multidetector CT using TCM were obtained with Institutional Review Board approval for generating voxelized models. Along with image data, raw projection data were obtained to extract TCM functions for use in Monte Carlo simulations. Patient size was calculated using the effective diameter described in TG 204. In addition, the scanner-reported CTDIvol (CTDIvol,global) was obtained for each patient, which is based on the average tube current across the entire scan. For the abdomen/pelvis scans, liver, spleen, and kidneys were manually segmented from the patient datasets; for the chest scans, lungs and, for female models only, glandular breast tissue were segmented. For each patient, organ doses were estimated using Monte Carlo methods. To investigate the utility of regional measures of scanner output, regional and organ anatomic boundaries were identified from image data and used to calculate regional and organ-specific average tube current values. From these regional and organ-specific averages, CTDIvol values, referred to as regional and organ-specific CTDIvol, were calculated for each patient.
Using an approach similar to TG 204, all CTDIvol values were used to normalize simulated organ doses, and the ability of each normalized dose to correlate with patient size was investigated. Results: For all five organs, the correlations with patient size increased when organ doses were normalized by regional and organ-specific CTDIvol values. For example, when estimating dose to the liver, CTDIvol,global yielded an R² value of 0.26, which improved to 0.77 and 0.86 when using the regional and organ-specific CTDIvol for abdomen and liver, respectively. For breast dose, the global CTDIvol yielded an R² value of 0.08, which improved to 0.58 and 0.83 when using the regional and organ-specific CTDIvol for chest and breasts, respectively. The R² values also increased once the thoracic models were separated into females and males for the analysis, indicating differences between genders in this region not explained by a simple measure of effective diameter. Conclusions: This work demonstrated the utility of regional and organ-specific CTDIvol as normalization factors when using TCM. It was demonstrated that CTDIvol,global is not an effective normalization factor in TCM exams where attenuation (and therefore tube current) varies considerably throughout the scan, such as abdomen/pelvis and even thorax. These exams can be more accurately assessed for dose using regional CTDIvol descriptors that account for local variations in scanner output present when TCM is employed.
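The R² values above measure how well CTDIvol-normalized organ dose tracks patient size. A minimal sketch of that correlation check using a linear fit; note this is a simplification, since TG 204 itself fits an exponential of effective diameter:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares line of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Usage (names hypothetical): normalized dose vs effective diameter
#   r2 = r_squared(effective_diameter, organ_dose / regional_ctdivol)
```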
A 3D ultrasound scanner: real time filtering and rendering algorithms.
Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M
1997-01-01
The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical and user-friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation and rendering for real-time, PC-based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several algorithms for digital filtering have been compared with respect to processing time and final image quality. Three-dimensional data segmentation and rendering have been carried out with special reference to user-friendly features for foreseeable applications and to reconstruction speed.
Validation of automated white matter hyperintensity segmentation.
Smart, Sean D; Firbank, Michael J; O'Brien, John T
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we obtained an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion.
Catana, Ciprian; van der Kouwe, Andre; Benner, Thomas; Michel, Christian J.; Hamm, Michael; Fenchel, Matthias; Fischl, Bruce; Rosen, Bruce; Schmand, Matthias; Sorensen, A. Gregory
2013-01-01
A number of factors have to be considered for implementing an accurate attenuation correction (AC) in a combined MR-PET scanner. In this work, some of these challenges were investigated and an AC method based entirely on the MR data obtained with a single dedicated sequence was developed and used for neurological studies performed with the MR-PET human brain scanner prototype. Methods: The focus was on the bone/air segmentation problem, the bone linear attenuation coefficient selection and the RF coil positioning. The impact of these factors on the PET data quantification was studied in simulations and experimental measurements performed on the combined MR-PET scanner. A novel dual-echo ultra-short echo time (DUTE) MR sequence was proposed for head imaging. Simultaneous MR-PET data were acquired and the PET images reconstructed using the proposed MR-DUTE-based AC method were compared with the PET images reconstructed using a CT-based AC. Results: Our data suggest that incorrectly accounting for the bone tissue attenuation can lead to large underestimations (>20%) of the radiotracer concentration in the cortex. Assigning a linear attenuation coefficient of 0.143 or 0.151 cm−1 to bone tissue appears to give the best trade-off between bias and variability in the resulting images. Not identifying the internal air cavities introduces large overestimations (>20%) in adjacent structures. Based on these results, the segmented CT AC method was established as the “silver standard” for the segmented MR-based AC method. Particular to an integrated MR-PET scanner, ignoring the RF coil attenuation can cause large underestimations (i.e. up to 50%) in the reconstructed images. Furthermore, the coil location in the PET field of view has to be accurately known. Good quality bone/air segmentation can be performed using the DUTE data. The PET images obtained using the MR-DUTE- and CT-based AC methods compare favorably in most of the brain structures.
Conclusion: An MR-DUTE-based AC method was implemented considering all these factors and our preliminary results suggest that this method could potentially be as accurate as the segmented CT method and it could be used for quantitative neurological MR-PET studies. PMID:20810759
Catana, Ciprian; van der Kouwe, Andre; Benner, Thomas; Michel, Christian J; Hamm, Michael; Fenchel, Matthias; Fischl, Bruce; Rosen, Bruce; Schmand, Matthias; Sorensen, A Gregory
2010-09-01
Several factors have to be considered for implementing an accurate attenuation-correction (AC) method in a combined MR-PET scanner. In this work, some of these challenges were investigated, and an AC method based entirely on the MRI data obtained with a single dedicated sequence was developed and used for neurologic studies performed with the MR-PET human brain scanner prototype. The focus was on the problem of bone-air segmentation, selection of the linear attenuation coefficient for bone, and positioning of the radiofrequency coil. The impact of these factors on PET data quantification was studied in simulations and experimental measurements performed on the combined MR-PET scanner. A novel dual-echo ultrashort echo time (DUTE) MRI sequence was proposed for head imaging. Simultaneous MR-PET data were acquired, and the PET images reconstructed using the proposed DUTE MRI-based AC method were compared with the PET images that had been reconstructed using a CT-based AC method. Our data suggest that incorrectly accounting for the bone tissue attenuation can lead to large underestimations (>20%) of the radiotracer concentration in the cortex. Assigning a linear attenuation coefficient of 0.143 or 0.151 cm(-1) to bone tissue appears to give the best trade-off between bias and variability in the resulting images. Not identifying the internal air cavities introduces large overestimations (>20%) in adjacent structures. On the basis of these results, the segmented CT AC method was established as the silver standard for the segmented MRI-based AC method. For an integrated MR-PET scanner, in particular, ignoring the radiofrequency coil attenuation can cause large underestimations (i.e., up to 50%) in the reconstructed images.
Lettau, Michael; Kotter, Elmar; Bendszus, Martin; Hähnel, Stefan
2014-10-01
CT angiography (CTA) is an increasingly used method for evaluating stented vessel segments. Our aim was to compare the in vitro appearance of different carotid artery stents on CTA using different CT scanners. Of particular interest was the measurement of artificial lumen narrowing (ALN) caused by the stent material within the stented vessel segment, to determine whether CTA can be used to detect in-stent restenosis. CTA appearances of 16 carotid artery stents of different designs and sizes (4.0 to 11.0 mm) were investigated in vitro. CTA was performed using 16-, 64- and 320-row CT scanners. For each stent, ALN was calculated. ALN ranged from 18.77% to 59.86% and differed significantly between stents. In most stents, ALN decreased with increasing stent diameter. In all but one stent, ALN using sharp image kernels was significantly lower than ALN using medium image kernels. Considering all stents, ALN did not differ significantly between CT scanners or imaging protocols. CTA evaluation of vessel patency after stent placement is possible but is considerably impaired by ALN. Investigators should be informed about the method of choice for every stent, and stent manufacturers should be aware of potential artifacts caused by their stents during noninvasive diagnostic methods such as CTA. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Assessment of the impact of the scanner-related factors on brain morphometry analysis with Brainvisa
2011-01-01
Background Brain morphometry is extensively used in cross-sectional studies. However, the difference in the estimated values of the morphometric measures between patients and healthy subjects may be small and hence overshadowed by scanner-related variability, especially in multicentre and longitudinal studies. It is therefore important to investigate the variability and reliability of morphometric measurements between different scanners and between different sessions of the same scanner. Methods We assessed the variability and reliability of the grey matter, white matter, cerebrospinal fluid and cerebral hemisphere volumes as well as the global sulcal index, sulcal surface and mean geodesic depth using Brainvisa. We used datasets obtained across multiple MR scanners at 1.5 T and 3 T from the same groups of 13 and 11 healthy volunteers, respectively. For each morphometric measure, we conducted an ANOVA and verified whether the estimated values were significantly different across different scanners or different sessions of the same scanner. The between-centre and between-visit reliabilities were estimated from their contributions to the total variance, using a random-effects ANOVA model. To identify the main processes responsible for low reliability, the results of brain segmentation were compared to those obtained using FAST within FSL. Results In a considerable number of cases, the main effects of both the centre and visit factors were found to be significant. Moreover, both between-centre and between-visit reliabilities ranged from poor to excellent for most morphometric measures. A comparison between segmentation using Brainvisa and FAST revealed that FAST improved the reliabilities in most cases, suggesting that morphometry could benefit from improved bias correction. However, the results were still significantly different across different scanners or different visits.
Conclusions Our results confirm that, for morphometry analysis with the current version of Brainvisa using data from multicentre or longitudinal studies, scanner-related variability must be taken into account and, where possible, corrected for. We also suggest making Brainvisa more flexible for step-by-step assessment of the reproducibility of its results, for example by allowing bias-corrected images to be imported from other packages so that the bias-correction step can be skipped. PMID:22189342
Semantic Segmentation of Indoor Point Clouds Using Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Babacan, K.; Chen, L.; Sohn, G.
2017-11-01
As Building Information Modelling (BIM) thrives, geometry alone is no longer sufficient; an ever increasing variety of semantic information is needed to express an indoor model adequately. For existing buildings, however, automatically generating semantically enriched BIM from point cloud data is in its infancy. Previous research on enhancing semantic content relies on frameworks built around specific rules and/or features hand-coded by specialists. These methods inherently lack generalization and break easily in different circumstances. A generalized framework is therefore urgently needed to generate semantic information automatically and accurately. We propose to employ deep learning techniques for the semantic segmentation of point clouds into meaningful parts. More specifically, we build a volumetric data representation in order to efficiently generate the large number of training samples needed to train a convolutional neural network architecture. Feedforward propagation is used to perform classification at the voxel level, achieving semantic segmentation. The method is tested on both a mobile laser scanner point cloud and larger-scale synthetically generated data. We also demonstrate a case study in which our method can be effectively used to leverage the extraction of planar surfaces in challenging cluttered indoor environments.
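The volumetric representation step described above can be sketched as a simple occupancy-grid voxelization. This is an illustrative assumption of how such an input might be built; the function name, grid size, and binary-occupancy choice are not details from the paper.

```python
import numpy as np

def voxelize(points, grid_shape=(32, 32, 32)):
    """Map an (N, 3) point cloud into a binary occupancy grid.

    Each voxel is 1 if at least one point falls inside it; this is the
    kind of volumetric representation a 3D CNN can consume directly.
    """
    points = np.asarray(points, dtype=float)
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0  # avoid division by zero for degenerate clouds
    # Normalize coordinates to [0, 1), then scale to voxel indices.
    idx = ((points - mins) / spans * (np.array(grid_shape) - 1e-9)).astype(int)
    idx = np.clip(idx, 0, np.array(grid_shape) - 1)
    grid = np.zeros(grid_shape, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```

Each occupied voxel then becomes a candidate for voxel-level classification.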
EXPLORER: Changing the molecular imaging paradigm with total-body PET/CT (Conference Presentation)
NASA Astrophysics Data System (ADS)
Cherry, Simon R.; Badawi, Ramsey D.; Jones, Terry
2016-04-01
Positron emission tomography (PET) is the highest-sensitivity technique for human whole-body imaging studies. However, current clinical PET scanners do not make full use of the available signal, as they only permit imaging of a 15-25 cm segment of the body at one time. Given the limited sensitive region, whole-body imaging with clinical PET scanners requires relatively long scan times and subjects the patient to higher than necessary radiation doses. The EXPLORER initiative aims to build a 2-meter axial length PET scanner to allow imaging the entire subject at once, capturing nearly the entire available PET signal. EXPLORER will acquire data with ~40-fold greater sensitivity, leading to a six-fold increase in reconstructed signal-to-noise ratio for imaging the total body. Alternatively, total-body images with the EXPLORER scanner can be acquired in ~30 seconds or with an injected dose of ~0.15 mSv, while maintaining current PET image quality. The superior sensitivity will open many new avenues for biomedical research. Specifically for cancer applications, high-sensitivity PET will enable detection of smaller lesions. Additionally, greater sensitivity will allow imaging out to 10 half-lives of positron-emitting radiotracers. This will enable 1) metabolic ultra-staging with FDG by extending the uptake and clearance time to 3-5 hours to significantly improve contrast and 2) improved kinetic imaging with short-lived radioisotopes such as C-11, crucial for drug development studies. Frequent imaging studies of the same subject to study disease progression or to track response to therapy will be possible with the low-dose capabilities of the EXPLORER scanner. The low-dose capabilities will also open up new imaging possibilities in pediatrics and adolescents to better study developmental disorders. This talk will review the basis for developing total-body PET, potential applications, and progress to date in developing EXPLORER, the first total-body PET scanner.
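The quoted six-fold gain in reconstructed signal-to-noise ratio follows from counting statistics, since PET SNR scales roughly with the square root of the detected counts (a sketch using only the numbers stated above):

```latex
\mathrm{SNR} \propto \sqrt{N_{\mathrm{counts}}},
\qquad
\frac{\mathrm{SNR}_{\mathrm{EXPLORER}}}{\mathrm{SNR}_{\mathrm{conventional}}}
= \sqrt{\frac{40\,N}{N}} = \sqrt{40} \approx 6.3
```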
Validation of Automated White Matter Hyperintensity Segmentation
Smart, Sean D.; Firbank, Michael J.; O'Brien, John T.
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we achieved an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion. PMID:21904678
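The voxel-overlap comparison described above is conventionally quantified with the Dice coefficient. A minimal sketch, assuming binary masks; this is illustrative, not the authors' exact implementation:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 0.63 corresponds to the between-observer agreement reported above; identical masks give 1.0, disjoint masks 0.0.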
NASA Astrophysics Data System (ADS)
Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.
2012-12-01
Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals
NASA Astrophysics Data System (ADS)
Peter, M.; Jafri, S. R. U. N.; Vosselman, G.
2017-09-01
Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle has proven to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and consequently the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), which will cause the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, like Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ / √n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
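The residual-versus-threshold idea (expected mean residual σ / √n for n points) can be sketched as a greedy scanline splitter. The split logic and the `factor` constant are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def split_scanline(points, sigma=0.01, factor=3.0):
    """Greedily split an ordered 2D scanline into line segments.

    A segment is grown point by point; a least-squares line is refit and
    the mean absolute residual is compared against factor * sigma / sqrt(n),
    where sigma / sqrt(n) is the expected mean residual of n points with
    noise std sigma. 'factor' is an assumed tuning constant.
    Returns (start, end) index pairs usable as slices.
    """
    points = np.asarray(points, dtype=float)
    segments, start = [], 0
    for end in range(2, len(points) + 1):
        seg = points[start:end]
        n = len(seg)
        # Fit y = a*x + b and measure the mean absolute residual.
        a, b = np.polyfit(seg[:, 0], seg[:, 1], 1)
        resid = np.abs(seg[:, 1] - (a * seg[:, 0] + b)).mean()
        if resid > factor * sigma / np.sqrt(n):
            segments.append((start, end - 1))  # close segment before this point
            start = end - 1
    segments.append((start, len(points)))
    return segments
```

On a scanline covering a wall and an open door, the residual jump at the depth discontinuity triggers a split, keeping the wall line untilted.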
Automated estimation of leaf distribution for individual trees based on TLS point clouds
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus
2017-04-01
Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning, TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High resolution point clouds from TLS already represent single leaves, which can be used for more precise estimation of Leaf Area Index (LAI) and more accurate biomass estimation. However, a methodology for extracting single leaves of individual trees from unclassified point clouds is still missing. The aim of this study is to present a novel segmentation approach to extract single leaves and derive features related to leaf morphology (such as area, slope, length and width) for each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned with a discrete-return Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign a reference dataset was collected in parallel: 230 leaves were randomly sampled from the lower branches of the tree and photographed. The developed workflow comprises the following steps: first, normal vectors and eigenvalues were calculated based on a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e. ghost points) were removed. Next, region growing segmentation based on curvature and the angles between normal vectors was applied to the filtered point cloud. A RANSAC plane fitting algorithm was applied to each segment in order to extract segment-based normal vectors. Using the resulting segment features, the stem and branches were labeled as non-leaf and the remaining segments were classified as leaf.
The different segmentation parameter sets were validated as follows: i) the total area of the collected leaves was compared with that of the point cloud segments; ii) the length-width ratios of the segmented leaves were compared with the reference; and iii) the distributions of leaf area for the segmented and reference leaves were compared, and the best parameter set was identified. The results show that the leaves can be captured with the developed workflow and that slope can be determined robustly for the segmented leaves. However, area, length and width values depend systematically on the angle and distance from the scanner. Correcting this systematic underestimation will require more systematic measurements or LiDAR simulation in further detailed analysis. The results of the leaf segmentation algorithm show high potential for generating more precise tree models with correctly located leaves, providing more precise input models for biological modeling of LAI or for atmospheric correction studies. The presented workflow can also be used to monitor changes in leaf angle due to sun irradiation, water balance, and day-night rhythm.
Patient-specific CT dosimetry calculation: a feasibility study.
Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W
2011-11-15
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms and on calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation), for a GE LightSpeed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by the National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans.
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation.
Application of Quantitative MRI for Brain Tissue Segmentation at 1.5 T and 3.0 T Field Strengths
West, Janne; Blystad, Ida; Engström, Maria; Warntjes, Jan B. M.; Lundberg, Peter
2013-01-01
Background Brain tissue segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF) is important in neuroradiological applications. Quantitative MRI (qMRI) allows segmentation based on physical tissue properties, removing the dependencies on MR scanner settings. Brain tissue forms clusters in the three-dimensional space spanned by the qMRI parameters R1, R2 and PD, and partial-volume voxels lie intermediate in this space. The qMRI parameters, however, depend on the main magnetic field strength. Therefore, longitudinal studies can be seriously limited by system upgrades. The aim of this work was to apply a recently described brain tissue segmentation method, based on qMRI, at both 1.5 T and 3.0 T field strengths, and to investigate similarities and differences. Methods In vivo qMRI measurements were performed on 10 healthy subjects using both 1.5 T and 3.0 T MR scanners. The brain tissue segmentation method was applied at both field strengths, and volumes of WM, GM, CSF and brain parenchymal fraction (BPF) were calculated. Repeatability was calculated for each scanner and a General Linear Model was used to examine the effect of field strength. Voxel-wise t-tests were also performed to evaluate regional differences. Results Statistically significant differences were found between 1.5 T and 3.0 T for WM, GM, CSF and BPF (p<0.001). Analyses of main effects showed that WM was underestimated, while GM and CSF were overestimated, at 1.5 T compared to 3.0 T. The mean differences between 1.5 T and 3.0 T were -66 mL WM, 40 mL GM, 29 mL CSF and -1.99% BPF. Voxel-wise t-tests revealed regional differences in WM and GM in deep brain structures, cerebellum and brain stem. Conclusions Most of the brain was identically classified at the two field strengths, although some regional differences were observed. PMID:24066153
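Classifying voxels by their position in (R1, R2, PD) parameter space can be illustrated with a nearest-centroid assignment. This is a generic sketch of clustering in qMRI feature space, not the specific method of the cited work; the centroid values would in practice come from training data:

```python
import numpy as np

def assign_tissue(voxels, centroids):
    """Assign each voxel, described by its (R1, R2, PD) triple, to the
    nearest tissue-class centroid (e.g. WM, GM, CSF).

    Illustrative nearest-centroid classification in qMRI parameter
    space; real methods also model partial-volume voxels lying
    between the clusters.
    """
    v = np.asarray(voxels, dtype=float)     # shape (N, 3)
    c = np.asarray(centroids, dtype=float)  # shape (K, 3)
    # Pairwise Euclidean distances, shape (N, K), via broadcasting.
    d = np.linalg.norm(v[:, None, :] - c[None, :, :], axis=2)
    return d.argmin(axis=1)                 # class index per voxel
```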
Experimental flat-panel high-spatial-resolution volume CT of the temporal bone.
Gupta, Rajiv; Bartling, Soenke H; Basu, Samit K; Ross, William R; Becker, Hartmut; Pfoh, Armin; Brady, Thomas; Curtin, Hugh D
2004-09-01
A CT scanner employing a digital flat-panel detector is capable of very high spatial resolution compared with a multi-section CT (MSCT) scanner. Our purpose was to determine how well a prototypical volume CT (VCT) scanner with a flat-panel detector system defines fine structures in the temporal bone. Four partially manipulated temporal-bone specimens were imaged using a prototypical cone-beam VCT scanner with a flat-panel detector system at an isotropic resolution of 150 μm at the isocenter. These specimens were also imaged with state-of-the-art MSCT. Forty-two structures imaged by both scanners were qualitatively assessed and rated, and scores assigned to VCT findings were compared with those of MSCT. Qualitative assessment of anatomic structures, lesions, cochlear implants, and middle-ear hearing aids indicated that image quality was significantly better with VCT (P < .001). Structures near the spatial-resolution limit of MSCT (e.g., the bony covering of the tympanic segment of the facial canal, the incudo-stapedial joint, the proximal vestibular aqueduct, the interscalar septum, and the modiolus) had higher contrast and less partial-volume effect with VCT. The flat-panel prototype provides better definition of fine osseous structures of the temporal bone than currently available MSCT scanners. This study provides impetus for further research into increasing spatial resolution beyond that offered by the current state-of-the-art scanners.
Use of Landsat-derived temporal profiles for corn-soybean feature extraction and classification
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Carnes, J. G.; Austin, W. W.
1982-01-01
A physical model derived from multitemporal-multispectral data acquired by Landsat satellites is presented to describe crop-specific behavior and new features. A feasibility study over 40 sites was performed to classify the segment pixels into corn, soybeans, and others using the new features and a linear classifier. Results agree well with other existing methods, and it is shown that multitemporal-multispectral scanner data can be transformed into two parameters that are closely related to the target of interest and thus can be used in classification. The approach is less time-intensive than other techniques and requires labeling of only pure pixels.
Application of polymer sensitive MRI sequence to localization of EEG electrodes.
Butler, Russell; Gilbert, Guillaume; Descoteaux, Maxime; Bernier, Pierre-Michel; Whittingstall, Kevin
2017-02-15
The growing popularity of simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) opens up the possibility of imaging EEG electrodes while the subject is in the scanner. Such information could be useful for improving the fusion of EEG-fMRI datasets. Here, we report for the first time how an ultra-short echo time (UTE) MR sequence can image the materials of an MR-compatible EEG cap, finding that electrodes and some parts of the wiring are visible in a high resolution UTE. Using these images, we developed a segmentation procedure to obtain electrode coordinates based on voxel intensity from the raw UTE, using hand labeled coordinates as the starting point. We were able to visualize and segment 95% of EEG electrodes using a short (3.5min) UTE sequence. We provide scripts and template images so this approach can now be easily implemented to obtain precise, subject-specific EEG electrode positions while adding minimal acquisition time to the simultaneous EEG-fMRI protocol. T1 gel artifacts are not robust enough to localize all electrodes across subjects, the polymers composing Brainvision cap electrodes are not visible on a T1, and adding T1 visible materials to the EEG cap is not always possible. We therefore consider our method superior to existing methods for obtaining electrode positions in the scanner, as it is hardware free and should work on a wide range of materials (caps). EEG electrode positions are obtained with high precision and no additional hardware. Copyright © 2016 Elsevier B.V. All rights reserved.
Developments in holographic-based scanner designs
NASA Astrophysics Data System (ADS)
Rowe, David M.
1997-07-01
Holographic-based scanning systems have been used for years in the high resolution prepress markets where monochromatic lasers are generally utilized. However, until recently, due to the dispersive properties of holographic optical elements (HOEs), along with the high cost associated with recording 'master' HOEs, holographic scanners have not been able to penetrate major scanning markets such as the laser printer and digital copier markets, low to mid-range imagesetter markets, and the non-contact inspection scanner market. Each of these markets has developed cost effective laser diode based solutions using conventional scanning approaches such as polygon/f-theta lens combinations. In order to penetrate these markets, holographic-based systems must exhibit low cost and immunity to wavelength shifts associated with laser diodes. This paper describes recent developments in the design of holographic scanners in which multiple HOEs, each possessing optical power, are used in conjunction with one curved mirror to passively correct focal plane position errors and spot size changes caused by the wavelength instability of laser diodes. This paper also describes recent advancements in low cost production of high quality HOEs and curved mirrors. Together these developments allow holographic scanners to be economically competitive alternatives to conventional devices in every segment of the laser scanning industry.
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT- and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for DUTE-based μ-maps, and the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
MEMS temperature scanner: principles, advances, and applications
NASA Astrophysics Data System (ADS)
Otto, Thomas; Saupe, Ray; Stock, Volker; Gessner, Thomas
2010-02-01
Contactless measurement of temperature has gained enormous significance in many application fields, ranging from climate protection and quality control to object recognition in public places or of military objects. Measurement of linear or spatial temperature distributions is often necessary. For this purpose, mostly thermographic cameras or motor-driven temperature scanners are used today. Both are relatively expensive, and the motor-driven devices are additionally limited in their scanning rate. An economical alternative is a temperature scanner based on micro mirrors. The micro mirror, mounted in a simple optical setup, reflects the radiation emitted by the observed hot object onto an adapted detector. A line scan of the target object is obtained by periodic deflection of the micro scanner; a planar temperature distribution is achieved by moving the target object or the scanner device perpendicularly. The temperature of the object is calculated using Planck's radiation law. The device can be adapted to different temperature ranges and resolutions by using different detectors, cooled or uncooled, and configurable scanner parameters. With the basic configuration, 40 spatially distributed measuring points can be determined at temperatures in the range of 350°C to 1000°C. The achieved miniaturization of such scanners permits deployment in complex plants with high building density or in direct proximity to the measuring point. The price advantage enables many applications, especially new applications in the low-price market segment. This paper presents the principle, setup and applications of a temperature measurement system based on micro scanners working in the near-infrared range. Packaging issues and measurement results are discussed as well.
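The temperature calculation from Planck's radiation law amounts to single-band pyrometry: invert the blackbody spectral radiance at the detector wavelength. A minimal sketch, assuming unit emissivity and ideal optics (the real system must also calibrate for emissivity and detector response):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (Planck's law), W * m^-3 * sr^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 2 * H * C**2 / wavelength_m**5 / (math.exp(x) - 1)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law: the temperature that reproduces the measured
    radiance at a single wavelength (idealized single-band pyrometry)."""
    return (H * C / (wavelength_m * KB)
            / math.log(1 + 2 * H * C**2 / (wavelength_m**5 * radiance)))
```

For a near-infrared band (e.g. 1.5 μm) the two functions round-trip exactly, which is the basis of the scanner's temperature readout.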
Valero, Enrique; Adán, Antonio; Cerrada, Carlos
2012-01-01
In this paper we present a method that automatically yields Boundary Representation (B-rep) models of indoor environments after processing dense point clouds collected by laser scanners from key locations throughout an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationships of primitive structural elements of inhabited scenarios, such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoor environments in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data from laser scanners, yielding promising results. We have evaluated the results in depth by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369
Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan
2016-01-01
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading image quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method performed well for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation.
Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating effective restoration of the blurred images; for segmentation, the proposed method achieved average Dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
Real-time Awake Animal Motion Tracking System for SPECT Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon
Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur for each set of images. The system, using the three cameras, automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
Comparison of working efficiency of terrestrial laser scanner in day and night conditions
NASA Astrophysics Data System (ADS)
Arslan, A. E.; Kalkan, K.
2013-10-01
Terrestrial Laser Scanning is a popular and widely used technique to scan existing objects, document historical sites and items, and remodel them when needed. The ability of these scanners to collect thousands of points per second makes them an invaluable tool in many areas, from engineering to historical reconstruction. There are many scanners on the market with different technical specifications; two key specifications are range and illumination. In this study, we determined the optimal working times of a laser scanner and tested the scanner's consistency with its specification sheet. To conduct this work, a series of GNSS measurements, connected to the national reference network, was carried out at Istanbul Technical University to determine the precise positions of the target points and the scanner, which makes it possible to define a precise distance between the scanner and the targets. These ground surveys were used for calibration and registration purposes. Two scan campaigns were conducted, at 12 am and 11 pm, to compare the working efficiency of the laser scanner under different illumination conditions, and the targets were measured with a handheld spectro-radiometer to determine their reflective characteristics. The obtained results are compared and their accuracies analysed.
Quantitative Neuroimaging Software for Clinical Assessment of Hippocampal Volumes on MR Imaging
Ahdidan, Jamila; Raji, Cyrus A.; DeYoe, Edgar A.; Mathis, Jedidiah; Noe, Karsten Ø.; Rimestad, Jens; Kjeldsen, Thomas K.; Mosegaard, Jesper; Becker, James T.; Lopez, Oscar
2015-01-01
Background: Multiple neurological disorders including Alzheimer’s disease (AD), mesial temporal sclerosis, and mild traumatic brain injury manifest with volume loss on brain MRI. Subtle volume loss is particularly seen early in AD. While prior research has demonstrated the value of this additional information from quantitative neuroimaging, very few applications have been approved for clinical use. Here we describe a US FDA cleared software program, NeuroreaderTM, for assessment of clinical hippocampal volume on brain MRI. Objective: To present the validation of hippocampal volumetrics on a clinical software program. Method: Subjects were drawn (n = 99) from the Alzheimer Disease Neuroimaging Initiative study. Volumetric brain MR imaging was acquired in both 1.5 T (n = 59) and 3.0 T (n = 40) scanners in participants with manual hippocampal segmentation. Fully automated hippocampal segmentation and measurement was done using a multiple atlas approach. The Dice Similarity Coefficient (DSC) measured the level of spatial overlap between NeuroreaderTM and gold standard manual segmentation from 0 to 1 with 0 denoting no overlap and 1 representing complete agreement. DSC comparisons between 1.5 T and 3.0 T scanners were done using standard independent samples T-tests. Results: In the bilateral hippocampus, mean DSC was 0.87 with a range of 0.78–0.91 (right hippocampus) and 0.76–0.91 (left hippocampus). Automated segmentation agreement with manual segmentation was essentially equivalent at 1.5 T (DSC = 0.879) versus 3.0 T (DSC = 0.872). Conclusion: This work provides a description and validation of a software program that can be applied in measuring hippocampal volume, a biomarker that is frequently abnormal in AD and other neurological disorders. PMID:26484924
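The Dice Similarity Coefficient used for validation above is simple to compute from two segmentations; a minimal illustrative sketch (hypothetical helper, with masks represented as sets of voxel coordinates rather than the software's actual data structures):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two segmentations,
    each given as a set of voxel coordinates.
    0 means no overlap, 1 means complete agreement."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy example: automated vs. manual masks sharing 2 of their 3 voxels each
auto = {(0, 1), (0, 2), (1, 1)}
manual = {(0, 2), (1, 1), (1, 2)}
print(dice_coefficient(auto, manual))  # 2*2/(3+3) ≈ 0.667
```

The same formula applies per structure (left and right hippocampus) before averaging, as in the DSC ranges reported above.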
Automatic segmentation of cortical vessels in pre- and post-tumor resection laser range scan images
NASA Astrophysics Data System (ADS)
Ding, Siyi; Miga, Michael I.; Thompson, Reid C.; Garg, Ishita; Dawant, Benoit M.
2009-02-01
Measurement of intra-operative cortical brain movement is necessary to drive mechanical models developed to predict sub-cortical shift. At our institution, this is done with a tracked laser range scanner. This device acquires both 3D range data and 2D photographic images. 3D cortical brain movement can be estimated if 2D photographic images acquired over time can be registered. Previously, we developed a method that permits this registration using vessels visible in the images, but vessel segmentation required the localization of starting and ending points for each vessel segment. Here, we propose a method that automates the segmentation process further. This method involves several steps: (1) correction of lighting artifacts, (2) vessel enhancement, and (3) vessel centerline extraction. Results obtained on five images acquired in the operating room suggest that our method is robust and segments vessels reliably.
LANDSAT-4 horizon scanner full orbit data averages
NASA Technical Reports Server (NTRS)
Stanley, J. P.; Bilanow, S.
1983-01-01
Averages taken over full-orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full-orbit averages over representative data throughout the year is analyzed to demonstrate the long-term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2- to 4-week intervals, each roughly 24 hours in length. The variation of the full-orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full-orbit average for that segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed, and their variation with day of year is also examined.
Lung lobe modeling and segmentation with individualized surface meshes
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Barschdorf, Hans; von Berg, Jens; Dries, Sebastian; Franz, Astrid; Klinder, Tobias; Lorenz, Cristian; Renisch, Steffen; Wiemker, Rafael
2008-03-01
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT scanners, their contrast in the CT image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.
LANDSAT-D data format control book. Volume 5: (Payload)
NASA Technical Reports Server (NTRS)
Andrew, H.
1981-01-01
The LANDSAT-D flight segment payload is the thematic mapper and the multispectral scanner. Narrative and visual descriptions of the LANDSAT-D payload data handling hardware and data flow paths from the sensing instruments through to the GSFC LANDSAT-D data management system are provided. Key subsystems are examined.
Patient‐specific CT dosimetry calculation: a feasibility study
Xie, Huchen; Cheng, Jason Y.; Ning, Holly; Zhuge, Ying; Miller, Robert W.
2011-01-01
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation) for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by the National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans.
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%–20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient‐specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation. PACS numbers: 87.55.D‐, 87.57.Q‐, 87.53.Bn, 87.55.K‐ PMID:22089016
LANDSAT-D ground segment operations plan, revision A
NASA Technical Reports Server (NTRS)
Evans, B.
1982-01-01
The basic concept for the utilization of LANDSAT ground processing resources is described. Only the steady state activities that support normal ground processing are addressed. This ground segment operations plan covers all processing of the multispectral scanner and the processing of thematic mapper through data acquisition and payload correction data generation for the LANDSAT 4 mission. The capabilities embedded in the hardware and software elements are presented from an operations viewpoint. The personnel assignments associated with each functional process and the mechanisms available for controlling the overall data flow are identified.
Mell, Matthew; Tefera, Girma; Thornton, Frank; Siepman, David; Turnipseed, William
2007-03-01
The diagnostic accuracy of magnetic resonance angiography (MRA) in the infrapopliteal arterial segment is not well defined. This study evaluated the clinical utility and diagnostic accuracy of time-resolved imaging of contrast kinetics (TRICKS) MRA compared with digital subtraction contrast angiography (DSA) in planning for percutaneous interventions of popliteal and infrapopliteal arterial occlusive disease. Patients who underwent percutaneous lower extremity interventions for popliteal or tibial occlusive disease were identified for this study. Preprocedural TRICKS MRA was performed with 1.5 Tesla (GE Healthcare, Waukesha, Wis) magnetic resonance imaging scanners with a flexible peripheral vascular coil, using the TRICKS technique with gadodiamide injection. DSA was performed using standard techniques in an angiography suite with a 15-inch image intensifier. DSA was considered the gold standard. The MRA and DSA were then evaluated in a blinded fashion by a radiologist and a vascular surgeon. The popliteal artery and tibioperoneal trunk were evaluated separately, and the tibial arteries were divided into proximal, mid, and distal segments. Each segment was interpreted as normal (0% to 49% stenosis), stenotic (50% to 99% stenosis), or occluded (100%). Lesion morphology was classified according to the TransAtlantic Inter-Society Consensus (TASC). We calculated concordance between the imaging studies and the sensitivity and specificity of MRA. The clinical utility of MRA was also assessed in terms of identifying the arterial access site as well as predicting technical success of the percutaneous treatment. Comparisons were done on 150 arterial segments in 30 limbs of 27 patients. When evaluated by TASC classification, TRICKS MRA correlated with DSA in 83% of the popliteal and in 88% of the infrapopliteal segments.
MRA correctly identified significant disease of the popliteal artery with a sensitivity of 94% and a specificity of 92%, and of the tibial arteries with a sensitivity of 100% and specificity of 84%. When used to evaluate for stenosis vs occlusion, MRA interpretation agreed with DSA 90% of the time. Disagreement occurred in 15 arterial segments, most commonly in distal tibioperoneal arteries. MRA misdiagnosed occlusion for stenosis in 11 of 15 segments, and stenosis for occlusion in four of 15 segments. Arterial access was accurately planned based on preprocedural MRA findings in 29 of 30 patients. MRA predicted technical success 83% of the time. Five technical failures were due to inability to cross arterial occlusions, all accurately identified by MRA. TRICKS MRA is an accurate method of evaluating patients for popliteal and infrapopliteal arterial occlusive disease and can be used for planning percutaneous interventions.
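The sensitivity and specificity figures above come directly from a per-segment confusion matrix against the DSA gold standard; a minimal sketch, using toy counts chosen only to reproduce the popliteal percentages (the abstract does not give the raw counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    TP/FN/TN/FP are counts of diseased/healthy segments
    correctly or incorrectly classified by the index test."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts, NOT the study's data: 50 diseased and 100
# normal segments, yielding the reported 94% / 92% for the popliteal.
sens, spec = sensitivity_specificity(tp=47, fn=3, tn=92, fp=8)
print(round(sens, 2), round(spec, 2))  # 0.94 0.92
```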
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computational efficiency, and that it segments pole-like objects particularly well.
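Geometric features of the kind fed to such an SVM are commonly derived from the eigenvalues of each point's neighborhood covariance matrix (linearity, planarity, scattering). A 2D sketch of one such feature, assuming a fixed neighborhood rather than the paper's optimal neighborhood size (names and the 2D simplification are illustrative, not the authors' implementation):

```python
def covariance_eigenvalues_2d(points):
    """Eigenvalues (descending) of the 2x2 covariance of a 2D point set,
    via the closed form for a symmetric 2x2 matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = max(tr * tr / 4 - det, 0.0) ** 0.5
    return tr / 2 + disc, tr / 2 - disc

def linearity(points):
    """Close to 1 for stretched, line-like neighborhoods (e.g. pole
    cross-sections, wires); near 0 for isotropic blobs."""
    l1, l2 = covariance_eigenvalues_2d(points)
    return (l1 - l2) / l1 if l1 > 0 else 0.0

line_like = [(x, 2 * x) for x in range(10)]
print(linearity(line_like))  # ≈ 1.0 for exactly collinear points
```

In 3D the same idea uses the three covariance eigenvalues, and the feature vector per point is what the SVM classifies.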
Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images
NASA Astrophysics Data System (ADS)
Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim
2014-04-01
Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. The paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation, which must cope with some challenging characteristics of the hippocampus, such as small size, wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for additional inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate significant improvement in segmentation performance using the multiple-initialisation technique, yielding more accurate segmentation results for the hippocampus.
NASA Astrophysics Data System (ADS)
Berndt, Bianca; Landry, Guillaume; Schwarz, Florian; Tessonnier, Thomas; Kamp, Florian; Dedes, George; Thieke, Christian; Würl, Matthias; Kurz, Christopher; Ganswindt, Ute; Verhaegen, Frank; Debus, Jürgen; Belka, Claus; Sommer, Wieland; Reiser, Maximilian; Bauer, Julia; Parodi, Katia
2017-03-01
The purpose of this work was to evaluate the ability of single and dual energy computed tomography (SECT, DECT) to estimate tissue composition and density for usage in Monte Carlo (MC) simulations of irradiation induced β + activity distributions. This was done to assess the impact on positron emission tomography (PET) range verification in proton therapy. A DECT-based brain tissue segmentation method was developed for white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF). The elemental composition of reference tissues was assigned to closest CT numbers in DECT space (DECTdist). The method was also applied to SECT data (SECTdist). In a validation experiment, the proton irradiation induced PET activity of three brain equivalent solutions (BES) was compared to simulations based on different tissue segmentations. Five patients scanned with a dual source DECT scanner were analyzed to compare the different segmentation methods. A single magnetic resonance (MR) scan was used for comparison with an established segmentation toolkit. Additionally, one patient with SECT and post-treatment PET scans was investigated. For BES, DECTdist and SECTdist reduced differences to the reference simulation by up to 62% when compared to the conventional stoichiometric segmentation (SECTSchneider). In comparison to MR brain segmentation, Dice similarity coefficients for WM, GM and CSF were 0.61, 0.67 and 0.66 for DECTdist and 0.54, 0.41 and 0.66 for SECTdist. MC simulations of PET treatment verification in patients showed important differences between DECTdist/SECTdist and SECTSchneider for patients with large CSF areas within the treatment field but not in WM and GM. Differences could be misinterpreted as PET derived range shifts of up to 4 mm. DECTdist and SECTdist yielded comparable activity distributions, and comparison of SECTdist to a measured patient PET scan showed improved agreement when compared to SECTSchneider. 
The agreement between predicted and measured PET activity distributions was improved by employing a brain specific segmentation applicable to both DECT and SECT data.
Spatio-Temporal Regularization for Longitudinal Registration to Subject-Specific 3d Template
Guizard, Nicolas; Fonov, Vladimir S.; García-Lorenzo, Daniel; Nakamura, Kunio; Aubert-Broche, Bérengère; Collins, D. Louis
2015-01-01
Neurodegenerative diseases such as Alzheimer's disease present subtle anatomical brain changes before the appearance of clinical symptoms. Manual structure segmentation is long and tedious and although automatic methods exist, they are often performed in a cross-sectional manner where each time-point is analyzed independently. With such analysis methods, bias, error and longitudinal noise may be introduced. Noise due to MR scanners and other physiological effects may also introduce variability in the measurement. We propose to use 4D non-linear registration with spatio-temporal regularization to correct for potential longitudinal inconsistencies in the context of structure segmentation. The major contribution of this article is the use of individual template creation with spatio-temporal regularization of the deformation fields for each subject. We validate our method with different sets of real MRI data, compare it to available longitudinal methods such as FreeSurfer, SPM12, QUARC, TBM, and KNBSI, and demonstrate that spatially local temporal regularization yields more consistent rates of change of global structures resulting in better statistical power to detect significant changes over time and between populations. PMID:26301716
Adaptive region-growing with maximum curvature strategy for tumor segmentation in 18F-FDG PET
NASA Astrophysics Data System (ADS)
Tan, Shan; Li, Laquan; Choi, Wookjin; Kang, Min Kyu; D'Souza, Warren D.; Lu, Wei
2017-07-01
Accurate tumor segmentation in PET is crucial in many oncology applications. We developed an adaptive region-growing (ARG) algorithm with a maximum curvature strategy (ARG_MC) for tumor segmentation in PET. The ARG_MC repeatedly applied a confidence connected region-growing algorithm with increasing relaxing factor f. The optimal relaxing factor (ORF) was then determined at the transition point on the f-volume curve, where the volume just grew from the tumor into the surrounding normal tissues. The ARG_MC along with five widely used algorithms were tested on a phantom with 6 spheres at different signal to background ratios and on two clinical datasets including 20 patients with esophageal cancer and 11 patients with non-Hodgkin lymphoma (NHL). The ARG_MC did not require any phantom calibration or any a priori knowledge of the tumor or PET scanner. The identified ORF varied with tumor types (mean ORF = 9.61, 3.78 and 2.55 respectively for the phantom, esophageal cancer, and NHL datasets), and varied from one tumor to another. For the phantom, the ARG_MC ranked second in segmentation accuracy with an average Dice similarity index (DSI) of 0.86, only slightly worse than Daisne's adaptive thresholding method (DSI = 0.87), which required phantom calibration. For both the esophageal cancer dataset and the NHL dataset, the ARG_MC had the highest accuracy with an average DSI of 0.87 and 0.84, respectively. The ARG_MC was robust to parameter settings and region of interest selection, and it did not depend on scanners, imaging protocols, or tumor types. Furthermore, the ARG_MC made no assumption about the tumor size or tumor uptake distribution, making it suitable for segmenting tumors with heterogeneous FDG uptake. In conclusion, the ARG_MC was accurate, robust, and easy to use, making it a promising tool for PET tumor segmentation in the clinic.
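The ORF selection step (finding the transition point where the f-volume curve bends most sharply) can be illustrated with a discrete maximum-curvature search. This sketch stubs out the region growing with a synthetic f-volume curve and approximates curvature by the second difference; it is an illustration of the selection idea, not the authors' implementation:

```python
def optimal_relaxing_factor(fs, volumes):
    """Pick the relaxing factor at the sharpest upward bend of the
    f-volume curve, approximated by the largest discrete second
    difference; this is where the region leaks into background."""
    best_i, best_curv = 1, float("-inf")
    for i in range(1, len(volumes) - 1):
        curv = volumes[i + 1] - 2 * volumes[i] + volumes[i - 1]
        if curv > best_curv:
            best_curv, best_i = curv, i
    return fs[best_i]

# Synthetic curve: volume grows slowly while confined to the tumor,
# then jumps once the region grows into surrounding tissue (after f=4).
fs = [1, 2, 3, 4, 5, 6]
vols = [10, 12, 13, 14, 80, 150]
print(optimal_relaxing_factor(fs, vols))  # 4
```

In a real implementation the `vols` values would come from running the confidence connected region-growing at each f, e.g. with SimpleITK's `ConfidenceConnected` filter.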
Comparison of multi-arm VRX CT scanners through computer models
NASA Astrophysics Data System (ADS)
Rendon, David A.; DiBianca, Frank A.; Keyes, Gary S.
2007-03-01
Variable Resolution X-ray (VRX) CT scanners allow imaging of different-sized anatomy at the same level of detail using the same device. This is achieved by tilting the x-ray detectors so that the projected size of the detecting elements is varied, producing reconstructions of smaller fields of view with higher spatial resolution. The detector can be divided into two or more separate segments, called arms, which can be placed at different angles, allowing some flexibility in the scanner design. In particular, several arms can be set at different angles, creating a target region of considerably higher resolution that can be used to track the evolution of a previously diagnosed condition while keeping the patient completely inside the field of view (FOV). This work presents newly-developed computer models of single-slice VRX scanners that allow us to study and compare different configurations (various types of detectors, in any number of arms, arranged in different geometries) in terms of spatial and contrast resolution. In particular, we are interested in comparing the performance of various geometric configurations that would otherwise be considered equivalent (using the same equipment, imaging FOVs of the same sizes, and having a similar overall scanner size). For this, a VRX simulator was developed, along with mathematical phantoms for spatial resolution and contrast analysis. These tools were used to compare scanner configurations that can be reproduced with materials presently available in our lab.
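The resolution gain from tilting the detector follows from simple foreshortening; a sketch assuming the projected cell width scales with the sine of the angle between the detector face and the incoming rays (function name and the idealized geometry are illustrative):

```python
import math

def projected_cell_size(cell_size_mm, tilt_deg):
    """Effective (projected) detector cell width when the detector face
    is tilted to `tilt_deg` relative to the incoming rays: 90 degrees
    is face-on (no gain); shallower angles foreshorten the cells."""
    return cell_size_mm * math.sin(math.radians(tilt_deg))

print(projected_cell_size(1.0, 90))  # 1.0 mm, face-on
print(projected_cell_size(1.0, 30))  # ≈ 0.5 mm, i.e. roughly 2x finer sampling
```

Each arm gets its own tilt angle, so a multi-arm configuration mixes effective cell sizes across the field of view.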
Hypo-Fractionated Conformal Radiation Therapy to the Tumor Bed After Segmental Mastectomy
2004-07-01
An automatic vision-based malaria diagnosis system.
Vink, J P; Laubscher, M; Vlutters, R; Silamut, K; Maude, R J; Hasan, M U; DE Haan, G
2013-06-01
Malaria is a worldwide health problem with 225 million infections each year. A fast and easy-to-use method, with high performance is required to differentiate malaria from non-malarial fevers. Manual examination of blood smears is currently the gold standard, but it is time-consuming, labour-intensive, requires skilled microscopists and the sensitivity of the method depends heavily on the skills of the microscopist. We propose an easy-to-use, quantitative cartridge-scanner system for vision-based malaria diagnosis, focusing on low malaria parasite densities. We have used special finger-prick cartridges filled with acridine orange to obtain a thin blood film and a dedicated scanner to image the cartridge. Using supervised learning, we have built a Plasmodium falciparum detector. A two-step approach was used to first segment potentially interesting areas, which are then analysed in more detail. The performance of the detector was validated using 5,420 manually annotated parasite images from malaria parasite culture in medium, as well as using 40 cartridges of 11,780 images containing healthy blood. From finger prick to result, the prototype cartridge-scanner system gave a quantitative diagnosis in 16 min, of which only 1 min required manual interaction of basic operations. It does not require a wet lab or a skilled operator and provides parasite images for manual review and quality control. In healthy samples, the image analysis part of the system achieved an overall specificity of 99.999978% at the level of (infected) red blood cells, resulting in at most seven false positives per microlitre. Furthermore, the system showed a sensitivity of 75% at the cell level, enabling the detection of low parasite densities in a fast and easy-to-use manner. A field trial in Chittagong (Bangladesh) indicated that future work should primarily focus on improving the filling process of the cartridge and the focus control part of the scanner. 
© 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Optical Automatic Car Identification (OACI) : Volume 1. Advanced System Specification.
DOT National Transportation Integrated Search
1978-12-01
A performance specification is provided in this report for an Optical Automatic Car Identification (OACI) scanner system which features 6% improved readability over existing industry scanner systems. It also includes the analysis and rationale which ...
Auzias, G; Takerkart, S; Deruelle, C
2016-05-01
Pooling data acquired on different MR scanners is a commonly used practice to increase the statistical power of studies based on MRI-derived measurements. Such studies are very appealing since they should make it possible to detect more subtle effects related to pathologies. However, the influence of confounds introduced by scanner-related variations remains unclear. When studying brain morphometry descriptors, it is crucial to investigate whether scanner-induced errors can exceed the effect of the disease itself. More specifically, in the context of developmental pathologies such as autism spectrum disorders (ASD), it is essential to evaluate the influence of the scanner on age-related effects. In this paper, we studied a dataset composed of 159 anatomical MR images pooled from three different scanners, including 75 ASD patients and 84 healthy controls. We quantitatively assessed the effects of the age, pathology, and scanner factors on cortical thickness measurements. Our results indicate that scan pooling from different sites would be less fruitful in some cortical regions than in others. Although the effect of age is consistent across scanners, the interaction between the age and scanner factors is important and significant in some specific cortical areas.
Sunderland, John J; Christian, Paul E
2015-01-01
The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. 
Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing trial sites to use their preferred reconstruction methodologies. Predictably, time-of-flight-enabled scanners exhibited less size-based partial-volume bias than non-time-of-flight scanners. The CTN scanner validation experience over the past 5 y has generated a rich, well-curated phantom dataset from which PET/CT make-and-model and reconstruction-dependent quantitative behaviors were characterized for the purposes of understanding and estimating scanner-based variances in clinical trials. These results should make it possible to identify and recommend make-and-model-specific reconstruction strategies to minimize measurement variability in cancer clinical trials. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this “Atlas-T1w-DUTE” approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the “silver standard”; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT- and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally. PMID:24753982
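The Dice similarity coefficient used for validation above has a standard definition, DSC = 2|A∩B| / (|A| + |B|); a minimal sketch on binary masks (the toy arrays below are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping 2D "tissue" masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

A DSC of 1 indicates identical masks; 0 indicates no overlap.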
Coupled dictionary learning for joint MR image restoration and segmentation
NASA Astrophysics Data System (ADS)
Yang, Xuesong; Fan, Yong
2018-03-01
To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods could achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high quality counterparts for the image restoration, and image patches and their corresponding segmentation labels for the image segmentation. Since learning these dictionaries jointly in a unified framework may improve the image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. Particularly, three dictionaries, including a dictionary of low quality image patches, a dictionary of high quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries of image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results have demonstrated that our method achieved better image restoration and segmentation performance than state of the art dictionary learning and sparse representation based image restoration and image segmentation methods.
Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry
NASA Astrophysics Data System (ADS)
Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.
2017-02-01
The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors in order to improve the image quality and at the same time to reduce the dose to the patient because of the preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation independent of access to proprietary data and/or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computed tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, ratios CTDIw/CTDI100,a were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF), which is modified during calculations until the calculated MC CTDIw/CTDI100,a matches the ratio CTDIw/CTDI100,a determined by measurements or found in publications for that specific scanner. Once the scanner-specific value for an SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that specific scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a considering 11 different CT scanners and for CTDI100,c, CTDI100,p and their ratio considering 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max. 4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms, respectively. Comparison with results calculated using proprietary data for the PHILIPS Brilliance 64 scanner showed agreement on average within 2.5% (max. 5.8%) and with data measured for that scanner within 2.1% (max. 3.7%). Ratios of CTDI100,c/CTDI100,p for this study and corresponding data published by CTDosimetry (www.impactscan.org) agree on average within about 11% (max. 28.6%). Lateral dose profiles calculated with the proposed bowtie filter and with proprietary data agreed within 2% (max. 5.9%), and both calculated data agreed within 5.4% (max. 11.2%) with measured results. Application of the proposed bowtie filter and of the exactly modelled filter to human phantom Monte Carlo calculations shows agreement on average within less than 5% (max. 7.9%) for organ and tissue absorbed doses.
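The weighted CTDI on which the method is based combines the centre and mean peripheral 100 mm pencil-chamber measurements in a fixed 1/3 : 2/3 ratio; a minimal sketch with illustrative values (not from any scanner in the study):

```python
def ctdi_w(ctdi100_center, ctdi100_peripheral):
    """Weighted CTDI: 1/3 of the centre value plus 2/3 of the mean
    peripheral value, both measured with a 100 mm pencil chamber in
    the 16 cm (head) or 32 cm (body) PMMA phantom."""
    return ctdi100_center / 3.0 + 2.0 * ctdi100_peripheral / 3.0

# Illustrative values in mGy, not measurements from any real scanner:
print(ctdi_w(30.0, 60.0))  # 10 + 40 = 50.0 mGy
```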
LANDSAT-D flight segment operations manual, volume 2
NASA Technical Reports Server (NTRS)
Varhola, J.
1981-01-01
Functions, performance capabilities, modes of operation, constraints, redundancy, commands, and telemetry are described for the thematic mapper; the global positioning system; the direct access S-band; the multispectral scanner; the payload correction; the thermal control subsystem; the solar array retention, deployment, and jettison assembly; and the boom antenna retention, deployment, and jettison assembly for LANDSAT 4.
GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.
Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A
2017-03-01
We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute an array of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.
The Segmentation of Point Clouds with K-Means and ANN (Artificial Neural Network)
NASA Astrophysics Data System (ADS)
Kuçak, R. A.; Özdemir, E.; Erol, S.
2017-05-01
Segmentation of point clouds is now used in many Geomatics Engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is the process of dividing a point cloud into layers according to their distinguishing characteristics. The present paper discusses two segmentation algorithms for point clouds: K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network). Point clouds generated with the photogrammetric method and with a Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity, and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, while the photogrammetric point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
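As a rough illustration of the K-means half of the comparison, here is a minimal Lloyd's-algorithm sketch that clusters per-point feature vectors; the synthetic features standing in for normal/intensity/curvature are invented for the example and this is not the authors' implementation:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: cluster per-point feature vectors into k segments."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its members (keep it if the cluster is empty)
        new = centers.copy()
        for j in range(k):
            members = features[labels == j]
            if len(members):
                new[j] = members.mean(axis=0)
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic per-point features (normal-z, intensity, curvature): a flat, bright
# surface vs. a curved, dark one -- stand-ins, not real scan data.
rng = np.random.default_rng(1)
flat = rng.normal([1.0, 0.9, 0.0], 0.05, size=(100, 3))
curved = rng.normal([0.2, 0.1, 0.8], 0.05, size=(100, 3))
labels, _ = kmeans(np.vstack([flat, curved]), k=2)
```

With well-separated feature clusters like these, the two surface types end up in different segments regardless of which label index each receives.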
High resolution, MRI-based, segmented, computerized head phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zubal, I.G.; Harrell, C.R.; Smith, E.O.
1999-01-01
The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers an improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.
Lim, Won Hee; Park, Eun Woo; Chae, Hwa Sung; Kwon, Soon Man; Jung, Hoi-In; Baek, Seung-Hak
2017-06-01
The purpose of this study was to compare the results of two- (2D) and three-dimensional (3D) measurements of the alveolar molding effect in patients with unilateral cleft lip and palate. The sample consisted of 23 infants with unilateral cleft lip and palate treated with a nasoalveolar molding (NAM) appliance. Dental models were fabricated at the initial visit (T0; mean age, 23.5 days after birth) and after alveolar molding therapy (T1; mean duration, 83 days). For 3D measurement, virtual models were constructed using a laser scanner and 3D software. For 2D measurement, photographic images of the dental models at 1:1 scale were digitized with a scanner. After setting common reference points and lines for the 2D and 3D measurements, 7 linear and 5 angular variables were measured at the T0 and T1 stages, respectively. Wilcoxon signed rank test and Bland-Altman analysis were performed for statistical analysis. The alveolar molding effect of the maxilla following NAM treatment was inward bending of the anterior part of the greater segment, forward growth of the lesser segment, and a decrease in the cleft gap in the greater and lesser segments. Two angular variables showed a difference in statistical interpretation of the change by NAM treatment between the 2D and 3D measurements (ΔACG-BG-PG and ΔACL-BL-PL). However, Bland-Altman analysis did not exhibit a significant difference in the amounts of change in these variables between the 2 measurements. These results suggest that data from 2D measurement could be reliably used in conjunction with data from 3D measurement.
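The Bland-Altman analysis used above reduces to a bias (mean paired difference) and 95% limits of agreement; a minimal sketch with made-up paired angle values (not the study's measurements):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman agreement between two measurement methods:
    returns mean difference (bias) and the 95% limits of agreement."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired 2D vs. 3D angle measurements (degrees)
two_d   = np.array([141.2, 138.5, 140.1, 139.7, 142.3])
three_d = np.array([141.0, 138.9, 139.8, 140.0, 142.1])
bias, low, high = bland_altman(two_d, three_d)
```

If nearly all differences fall inside the limits and the bias is clinically negligible, the two methods can be used interchangeably.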
Scanners for analytic print measurement: the devil in the details
NASA Astrophysics Data System (ADS)
Zeise, Eric K.; Williams, Don; Burns, Peter D.; Kress, William C.
2007-01-01
Inexpensive and easy-to-use linear and area-array scanners have frequently substituted as colorimeters and densitometers for low-frequency (i.e., large area) hard copy image measurement. Increasingly, scanners are also being used for high spatial frequency, image microstructure measurements, which were previously reserved for high performance microdensitometers. In this paper we address characteristics of flatbed reflection scanners in the evaluation of print uniformity, geometric distortion, geometric repeatability and the influence of scanner MTF and noise on analytic measurements. Suggestions are made for the specification and evaluation of scanners to be used in print image quality standards that are being developed.
A software tool for automatic classification and segmentation of 2D/3D medical images
NASA Astrophysics Data System (ADS)
Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur
2013-02-01
Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of its metabolism (PET). However, evaluation of acquired images made by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of partial volume effect in PET images, acquired with PET/MR scanners. This article presents briefly a MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.
Accuracy of MSCT Coronary Angiography with 64 Row CT Scanner—Facing the Facts
Wehrschuetz, M.; Wehrschuetz, E.; Schuchlenz, H.; Schaffler, G.
2010-01-01
Improvements in multislice computed tomography (MSCT) angiography of the coronary vessels have enabled the minimally invasive detection of coronary artery stenoses, while quantitative coronary angiography (QCA) is the accepted reference standard for evaluation thereof. Sixteen-slice MSCT showed promising diagnostic accuracy in detecting haemodynamically significant coronary artery stenoses, and the subsequent introduction of 64-slice scanners promised excellent and fast results for coronary artery studies. This prompted us to evaluate the diagnostic accuracy, sensitivity, specificity, and the negative and positive predictive value of 64-slice MSCT in the detection of haemodynamically significant coronary artery stenoses. Thirty-seven consecutive subjects with suspected coronary artery disease were evaluated with MSCT angiography and the results compared with QCA. All vessels were considered for the assessment of significant coronary artery stenosis (diameter reduction ≥ 50%). Thirteen patients (35%) were identified as having significant coronary artery stenoses on QCA, with 6.3% (35/555) of segments affected. None of the coronary segments were excluded from analysis. Overall sensitivity for classifying stenoses with 64-slice MSCT was 69%, specificity was 92%, positive predictive value was 38% and negative predictive value was 98%. The interobserver variability for detection of significant lesions had a k-value of 0.43. Sixty-four-slice MSCT offers the diagnostic potential to detect coronary artery disease, to quantify haemodynamically significant coronary artery stenoses and to avoid unnecessary invasive coronary artery examinations. PMID:20567636
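The four reported accuracy measures come straight from the 2x2 table of MSCT calls against the QCA reference; a minimal sketch (the counts below are illustrative, chosen only to land in the reported ballpark, and are not the study's actual table):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic accuracy measures against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative segment-level counts, not the study's data:
m = diagnostic_metrics(tp=24, fp=40, tn=480, fn=11)
```

Note how a low disease prevalence (35 of 555 segments) drags the PPV down even when specificity is high, while the NPV stays near 1.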
Biedermann, Sarah; Fuss, Johannes; Zheng, Lei; Sartorius, Alexander; Falfán-Melgoza, Claudia; Demirakca, Traute; Gass, Peter; Ende, Gabriele; Weber-Fahr, Wolfgang
2012-07-16
Voluntary exercise has tremendous effects on adult hippocampal plasticity and metabolism and thus sculpts the hippocampal structure of mammals. High-field ¹H magnetic resonance (MR) investigations at 9.4 T of metabolic and structural changes can be performed non-invasively in the living rodent brain. Numerous molecular and cellular mechanisms mediating the effects of exercise on brain plasticity and behavior have been detected in vitro. However, in vivo attempts have been rare. In this work a method for voxel based morphometry (VBM) was developed with automatic tissue segmentation in mice using a 9.4 T animal scanner equipped with a ¹H cryogenic coil. The thus increased signal to noise ratio enabled the acquisition of high resolution T2-weighted images of the mouse brain in vivo and the creation of group specific tissue class maps for the segmentation and normalization with SPM. The method was used together with hippocampal single voxel ¹H MR spectroscopy to assess the structural and metabolic differences in the mouse brain due to voluntary wheel running. A specific increase of hippocampal volume with a concomitant decrease of hippocampal glutamate levels in voluntary running mice was observed. An inverse correlation of hippocampal gray matter volume and glutamate concentration indicates a possible implication of the glutamatergic system for hippocampal volume. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rachakonda, Prem; Muralikrishnan, Bala; Cournoyer, Luc; Cheok, Geraldine; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel
2017-10-01
The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers.
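One common way to determine a sphere target's centre from segmented TLS points is an algebraic least-squares fit; a minimal sketch on synthetic, noise-free points (the centre, 0.075 m radius, and partial-cap geometry are illustrative assumptions, not NIST's procedure):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve the linear system
    2x*a + 2y*b + 2z*c + d = x^2 + y^2 + z^2
    for the center (a, b, c) and radius sqrt(d + a^2 + b^2 + c^2)."""
    p = np.asarray(points, float)
    A = np.column_stack([2.0 * p, np.ones(len(p))])
    f = (p ** 2).sum(axis=1)
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    center = np.array([a, b, c])
    radius = np.sqrt(d + a * a + b * b + c * c)
    return center, radius

# Synthetic points on a partial spherical cap, as a scanner would see one side
# of a sphere target; assumed center (1, 2, 3) m and radius 0.075 m.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi / 2, 500)
phi = rng.uniform(0, np.pi, 500)
pts = np.column_stack([
    1 + 0.075 * np.sin(theta) * np.cos(phi),
    2 + 0.075 * np.sin(theta) * np.sin(phi),
    3 + 0.075 * np.cos(theta),
])
center, radius = fit_sphere(pts)
```

In practice, range noise and the limited cap seen by the scanner bias such fits, which is exactly the error source the paper studies.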
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information distortion (MDI). In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.
Lin, Yi-Cheng; Shih, Yao-Chia; Tseng, Wen-Yih I; Chu, Yu-Hsiu; Wu, Meng-Tien; Chen, Ta-Fu; Tang, Pei-Fang; Chiu, Ming-Jang
2014-05-01
Diffusion spectrum imaging (DSI) of MRI can detect neural fiber tract changes. We investigated the integrity of the cingulum bundle (CB) in patients with mild cognitive impairment (MCI) and early Alzheimer's disease (EAD) using DSI tractography and explored its relationship with cognitive functions. We recruited 8 patients with MCI, 9 with EAD and 15 healthy controls (HC). All subjects received a battery of neuropsychological tests to assess their executive, memory and language functions. We used a 3.0-tesla MRI scanner to obtain T1- and T2-weighted images for anatomy and used a pulsed gradient twice-refocused spin-echo diffusion echo-planar imaging sequence to acquire DSI. Patients with EAD performed significantly poorer than the HC on most tests of executive and memory functions. Significantly smaller general fractional anisotropy (GFA) values were found in the posterior and inferior segments of the left CB and in the anterior segment of the right CB of the EAD compared with those of the HC. Spearman's correlation on the patient groups showed that GFA values of the posterior segment of the left CB were significantly negatively associated with the time used to complete Color Trails Test Part II and positively correlated with performance of the logical memory and visual reproduction. GFA values of the inferior segment of the bilateral CB were positively associated with the performance of visual recognition. DSI tractography demonstrates significant preferential degeneration of the CB on the left side in patients with EAD. The location-specific degeneration is associated with corresponding declines in both executive and memory functions.
Vessel segmentation in 3D spectral OCT scans of the retina
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Garvin, Mona K.; van Ginneken, Bram; Sonka, Milan; Abràmoff, Michael D.
2008-03-01
The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at a high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular disease, including hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced by only using information from certain segmented layers. Finally, a supervised, pixel classification based vessel segmentation approach is applied to the projection image. We compared the influence of two methods for the projection on the performance of the vessel segmentation on 10 optic nerve head centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.
Estimating organ doses from tube current modulated CT examinations using a generalized linear model.
Bostani, Maryam; McMillan, Kyle; Lu, Peiyun; Kim, Grace Hyun J; Cody, Dianna; Arbique, Gary; Greenberg, S Bruce; DeMarco, John J; Cagnon, Chris H; McNitt-Gray, Michael F
2017-04-01
Currently available Computed Tomography dose metrics are mostly based on fixed tube current Monte Carlo (MC) simulations and/or physical measurements such as the size specific dose estimate (SSDE). In addition to not being able to account for Tube Current Modulation (TCM), these dose metrics do not represent actual patient dose. The purpose of this study was to generate and evaluate a dose estimation model based on the Generalized Linear Model (GLM), which extends the ability to estimate organ dose from tube current modulated examinations by incorporating regional descriptors of patient size, scanner output, and other scan-specific variables as needed. The collection of a total of 332 patient CT scans at four different institutions was approved by each institution's IRB and used to generate and test organ dose estimation models. The patient population consisted of pediatric and adult patients and included thoracic and abdomen/pelvis scans. The scans were performed on three different CT scanner systems. Manual segmentation of organs, depending on the examined anatomy, was performed on each patient's image series. In addition to the collected images, detailed TCM data were collected for all patients scanned on Siemens CT scanners, while for all GE and Toshiba patients, data representing z-axis-only TCM, extracted from the DICOM header of the images, were used for TCM simulations. A validated MC dosimetry package was used to perform detailed simulation of CT examinations on all 332 patient models to estimate dose to each segmented organ (lungs, breasts, liver, spleen, and kidneys), denoted as reference organ dose values. Approximately 60% of the data were used to train a dose estimation model, while the remaining 40% were used to evaluate performance.
Two different methodologies were explored using GLM to generate a dose estimation model: (a) using the conventional exponential relationship between normalized organ dose and size, with regional water equivalent diameter (WED) and regional CTDIvol as variables, and (b) using the same exponential relationship with the addition of categorical variables such as scanner model and organ to provide a more complete estimate of factors that may affect organ dose. Finally, estimates from the generated models were compared to those obtained from SSDE and ImPACT. The Generalized Linear Model yielded organ dose estimates that were significantly closer to the MC reference organ dose values than were organ doses estimated via SSDE or ImPACT. Moreover, the GLM estimates were better than those of SSDE or ImPACT irrespective of whether or not categorical variables were used in the model. While the improvement associated with a categorical variable was substantial in estimating breast dose, the improvement was minor for other organs. The GLM approach extends the current CT dose estimation methods by allowing the use of additional variables to more accurately estimate organ dose from TCM scans. Thus, this approach may be able to overcome the limitations of current CT dose metrics to provide more accurate estimates of patient dose, in particular, dose to organs with considerable variability across the population. © 2017 American Association of Physicists in Medicine.
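Methodology (a) above, the exponential relationship between normalized organ dose and regional water-equivalent diameter, can be sketched as a log-linear least-squares fit; the coefficients and ranges below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

# Model: organ_dose / CTDIvol = A * exp(-B * WED), i.e. linear after a log
# transform: ln(dose/CTDIvol) = ln(A) - B * WED.
rng = np.random.default_rng(0)
wed = rng.uniform(15, 40, 200)          # regional water-equivalent diameter, cm
ctdi_vol = rng.uniform(5, 20, 200)      # regional CTDIvol, mGy
true_A, true_B = 3.0, 0.045             # made-up "true" coefficients
organ_dose = ctdi_vol * true_A * np.exp(-true_B * wed)

# Ordinary least squares on the log-transformed response
y = np.log(organ_dose / ctdi_vol)
X = np.column_stack([np.ones_like(wed), wed])
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
A_hat, B_hat = np.exp(intercept), -slope
```

Methodology (b) would extend the design matrix `X` with dummy-coded columns for scanner model and organ, which is what a GLM framework makes straightforward.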
Kuhn, T; Gullett, J M; Nguyen, P; Boutzoukas, A E; Ford, A; Colon-Perez, L M; Triplett, W; Carney, P R; Mareci, T H; Price, C C; Bauer, R M
2016-06-01
This study examined the reliability of high angular resolution diffusion tensor imaging (HARDI) data collected on a single individual across several sessions using the same scanner. HARDI data were acquired for one healthy adult male at the same time of day on ten separate days across a one-month period. Environmental factors (e.g. temperature) were controlled across scanning sessions. Tract Based Spatial Statistics (TBSS) was used to assess session-to-session variability in measures of diffusion, fractional anisotropy (FA) and mean diffusivity (MD). To address reliability within specific structures of the medial temporal lobe (MTL; the focus of an ongoing investigation), probabilistic tractography segmented the entorhinal cortex (ERc) based on connections with the hippocampus (HC), perirhinal (PRc) and parahippocampal (PHc) cortices. Streamline tractography generated edge weight (EW) metrics for the aforementioned ERc connections and, as comparison regions, connections between left and right rostral and caudal anterior cingulate cortex (ACC). Coefficients of variation (CoV) were derived for the surface area and volumes of these ERc connectivity-defined regions (CDR) and for EW across all ten scans, expecting that scan-to-scan reliability would yield low CoVs. TBSS revealed no significant variation in FA or MD across scanning sessions. Probabilistic tractography successfully reproduced histologically-verified adjacent medial temporal lobe circuits. Tractography-derived metrics displayed larger ranges of scan-to-scan variability. Connections involving HC displayed greater variability than metrics of connection between other investigated regions. By confirming the test-retest reliability of HARDI data acquisition, support for the validity of significant results derived from diffusion data can be obtained.
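The coefficient of variation used above to summarize scan-to-scan reliability is simply the sample standard deviation over the mean; a minimal sketch with hypothetical repeat-scan volumes (not the study's measurements):

```python
import numpy as np

def coefficient_of_variation(x):
    """CoV = sample standard deviation / mean; often reported as a percentage."""
    x = np.asarray(x, float)
    return x.std(ddof=1) / x.mean()

# Hypothetical tract volumes (mm^3) from ten repeat scans of one subject
volumes = np.array([512, 498, 505, 521, 509, 495, 517, 503, 511, 506])
cov = coefficient_of_variation(volumes)
```

A CoV of a few percent across the ten sessions would indicate good test-retest reliability for that metric.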
NASA Technical Reports Server (NTRS)
Tilton, James C.; Ramapriyan, H. K.
1989-01-01
A case study is presented in which an image-segmentation-based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although it increases to 3.48 when only a cloud-free section of the image is considered. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.
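The lossless compression potential is the ratio of original to losslessly compressed size. The sketch below uses zlib as a stand-in codec purely to illustrate the metric; the paper's SCC encoder is not reproduced, and the data are synthetic.

```python
import zlib

def lossless_compression_potential(data: bytes) -> float:
    """LCP = original size / losslessly compressed size.
    zlib stands in here for the actual entropy coder."""
    return len(data) / len(zlib.compress(data, 9))

# A highly repetitive synthetic 'image' compresses far better
# than noisy data would, mirroring the cloud-free CZCS result
smooth = bytes([i % 8 for i in range(4096)])
lcp = lossless_compression_potential(smooth)
```

Smoother, more spatially coherent imagery yields a higher LCP, which is why the cloud-free CZCS section compresses better than the full scene.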
Morphological patterns of lip prints in Mangaloreans based on Suzuki and Tsuchihashi classification
Jeergal, Prabhakar A; Pandit, Siddharth; Desai, Dinkar; Surekha, R; Jeergal, Vasanti A
2016-01-01
Introduction: Cheiloscopy is the study of the furrows or grooves present on the red part, or vermilion border, of the human lips. The present study aims to classify the characteristics of lip prints and to identify the most common morphological pattern specific to the Mangalorean people of Southern India. For the first time, this study also assesses the association between gender and different lip segments within a population. Materials and Methods: A total of 200 residents of Mangalore (100 males and 100 females), aged 18 to 60 years, were included. Materials used to take the impression of the lips included red lipstick, A4-size white bond paper and cellophane tape. The prints obtained were scanned using a Canon image scanner and stored in a folder on a personal computer. The images were cropped and inverted in grayscale using Adobe Photoshop software. Each lip print was divided into eight segments and examined. Suzuki and Tsuchihashi's classification (1970) was used to classify the types of grooves, and the results were statistically analyzed. Six types of grooves were recorded in the Mangaloreans' lips. Statistical Analysis: The association between gender and different lip segments was tested using Chi-square analysis. Results: In males, groove Type I' was the most frequently recorded, followed by Type III, Type II, Type I, Type IV and Type V in descending order. In females, Type I' was the most frequently recorded, followed by Type II, Type III, Type IV, Type I and Type V in descending order. Conclusion: Males and females displayed statistically significant differences in lip print patterns for different lip sites: the lower medial lip, as well as the upper and lower lateral segments. Only the upper medial lip segment displayed no statistically significant difference in lip print pattern between males and females.
This shows that the distribution of lip prints is generally dissimilar for males and females, with varying predominance according to lip segment. PMID:27601831
Tsiflikas, Ilias; Drosch, Tanja; Brodoefel, Harald; Thomas, Christoph; Reimann, Anja; Till, Alexander; Nittka, Daniel; Kopp, Andreas F; Schroeder, Stephen; Heuschmid, Martin; Burgstahler, Christof
2010-08-06
Cardiac multi-detector computed tomography (MDCT) permits accurate visualization of high-grade coronary artery stenosis. However, in patients with heart rate irregularities, MDCT was found to have limitations. Thus, the aim of the present study was to evaluate the diagnostic accuracy of a new dual-source computed tomography (DSCT) scanner generation with 83 ms temporal resolution in patients without stable sinus rhythm. 44 patients (31 men, mean age 67.5 ± 9.2 years) without stable sinus rhythm and scheduled for invasive coronary angiography (ICA) because of suspected (n=17) or known coronary artery disease (CAD, n=27) were included in this study. All patients were examined with DSCT (Somatom Definition, Siemens). Besides assessment of total calcium score, all coronary segments were analyzed with regard to the presence of significant coronary artery lesions (>50%). The findings were compared to ICA in a blinded fashion. During CT examination, heart rhythm was as follows: 25 patients (57%) atrial fibrillation, 7 patients (16%) ventricular extrasystoles (two of them with atrial fibrillation), 4 patients (9%) supraventricular extrasystoles, 10 patients (23%) sinus arrhythmia (heart rate variability > 10 bpm). Mean heart rate was 69 ± 14 bpm, median 65 bpm. Mean Agatston score equivalent (ASE) was 762, ranging from 0 to 4949.7 ASE. Prevalence of CAD was 68% (30/44). 155 segments (27%) showed "step-ladder" artifacts and 28 segments (5%) could not be visualized by DSCT. Only 70 segments (12%) were completely imaged without any artifacts. Based on a coronary segment model, sensitivity was 73%, specificity 91%, positive predictive value 63%, and negative predictive value 94% for the detection of significant lesions (≥50% diameter stenosis). Overall accuracy was 88%.
In patients with heart rate irregularities, including patients with atrial fibrillation and a high prevalence of coronary artery disease, the diagnostic yield of dual-source computed tomography is still hampered due to a high number of segments with "step-ladder" artifacts. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
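The per-segment accuracy figures above follow from the standard confusion-matrix definitions; a minimal sketch with hypothetical counts (not the study's raw data):

```python
def diagnostic_accuracy(tp, fp, tn, fn):
    """Per-segment sensitivity, specificity, PPV, NPV and accuracy
    from true/false positive and negative counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical per-segment counts for a CT-versus-ICA comparison
m = diagnostic_accuracy(tp=73, fp=43, tn=434, fn=27)
```

Note how a high NPV can coexist with a modest PPV when disease-positive segments are a minority, which is typical of per-segment coronary analyses.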
Berndt, Bianca; Landry, Guillaume; Schwarz, Florian; Tessonnier, Thomas; Kamp, Florian; Dedes, George; Thieke, Christian; Würl, Matthias; Kurz, Christopher; Ganswindt, Ute; Verhaegen, Frank; Debus, Jürgen; Belka, Claus; Sommer, Wieland; Reiser, Maximilian; Bauer, Julia; Parodi, Katia
2017-03-21
The purpose of this work was to evaluate the ability of single and dual energy computed tomography (SECT, DECT) to estimate tissue composition and density for use in Monte Carlo (MC) simulations of irradiation-induced β+ activity distributions. This was done to assess the impact on positron emission tomography (PET) range verification in proton therapy. A DECT-based brain tissue segmentation method was developed for white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF). The elemental composition of reference tissues was assigned to the closest CT numbers in DECT space (DECT_dist). The method was also applied to SECT data (SECT_dist). In a validation experiment, the proton irradiation induced PET activity of three brain equivalent solutions (BES) was compared to simulations based on different tissue segmentations. Five patients scanned with a dual-source DECT scanner were analyzed to compare the different segmentation methods. A single magnetic resonance (MR) scan was used for comparison with an established segmentation toolkit. Additionally, one patient with SECT and post-treatment PET scans was investigated. For BES, DECT_dist and SECT_dist reduced differences to the reference simulation by up to 62% when compared to the conventional stoichiometric segmentation (SECT_Schneider). In comparison to MR brain segmentation, Dice similarity coefficients for WM, GM and CSF were 0.61, 0.67 and 0.66 for DECT_dist and 0.54, 0.41 and 0.66 for SECT_dist. MC simulations of PET treatment verification in patients showed important differences between DECT_dist/SECT_dist and SECT_Schneider for patients with large CSF areas within the treatment field, but not in WM and GM. Differences could be misinterpreted as PET-derived range shifts of up to 4 mm. DECT_dist and SECT_dist yielded comparable activity distributions, and comparison of SECT_dist to a measured patient PET scan showed improved agreement when compared to SECT_Schneider.
The agreement between predicted and measured PET activity distributions was improved by employing a brain-specific segmentation applicable to both DECT and SECT data.
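The Dice similarity coefficient used above to compare segmentations is twice the overlap divided by the summed mask sizes; a minimal sketch on toy binary masks (not the study's data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as
    flattened, equal-length sequences of 0/1 values."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 2.0 * inter / size if size else 1.0

# Toy masks: 3 overlapping voxels, 4 voxels in each mask
a = [1, 1, 1, 0, 0, 1, 0, 0]
b = [1, 1, 0, 0, 1, 1, 0, 0]
d = dice_coefficient(a, b)  # 2*3 / (4+4) = 0.75
```

A Dice of 1.0 means perfect overlap; the WM/GM values of 0.4-0.7 reported above therefore indicate only moderate agreement with the MR-based reference.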
Matheoud, R; Ferrando, O; Valzano, S; Lizio, D; Sacchetti, G; Ciarmiello, A; Foppiano, F; Brambilla, M
2015-07-01
Resolution modeling (RM) of PET systems has been introduced in iterative reconstruction algorithms for oncologic PET. RM recovers the loss of resolution and reduces the associated partial volume effect. While these methods have improved observer performance, particularly in the detection of small and faint lesions, their impact on quantification accuracy still requires thorough investigation. The aim of this study was to characterize the performance of RM algorithms under controlled conditions simulating a typical (18)F-FDG oncologic study, using an anthropomorphic phantom and selected physical figures of merit used for image quantification. Measurements were performed on Biograph HiREZ (B_HiREZ) and Discovery 710 (D_710) PET/CT scanners, and reconstructions were performed using the standard iterative reconstructions and the RM algorithm associated with each scanner: TrueX and SharpIR, respectively. RM determined a significant improvement in contrast recovery for small targets (≤17 mm diameter) only for the D_710 scanner. The maximum standardized uptake value (SUVmax) increased when RM was applied on both scanners. The SUVmax of small targets was on average lower with the B_HiREZ than with the D_710. SharpIR improved the accuracy of SUVmax determination, whilst TrueX showed an overestimation of SUVmax for sphere dimensions greater than 22 mm. The goodness of fit of adaptive threshold algorithms worsened significantly when RM algorithms were employed, for both scanners. Differences in general quantitative performance were observed for the PET scanners analyzed. Segmentation of PET images using adaptive threshold algorithms should not be undertaken in conjunction with RM reconstructions. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge
Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant
2014-01-01
Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics, which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 minutes and 3 seconds per case, respectively.
Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598
Segmented Gamma Scanner for Small Containers of Uranium Processing Waste- 12295
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, K.E.; Smith, S.K.; Gailey, S.
2012-07-01
The Segmented Gamma Scanner (SGS) is commonly utilized in the assay of 55-gallon drums containing radioactive waste. Successfully deployed calibration methods include measurement of vertical line source standards in representative matrices and mathematical efficiency calibrations. The SGS technique can also be utilized to assay smaller containers, such as those used for criticality safety in uranium processing facilities. For such an application, a Can SGS System is aptly suited for the identification and quantification of radionuclides present in fuel processing wastes. Additionally, since the significant presence of uranium lumping can confound even a simple 'pass/fail' measurement regimen, the high-resolution gamma spectroscopy allows for the use of lump-detection techniques. In this application a lump correction is not required, but the application of a differential peak approach is used to simply identify the presence of U-235 lumps. The Can SGS is similar to current drum SGSs, but differs in the methodology for vertical segmentation. In the current drum SGS, the drum is placed on a rotator at a fixed vertical position while the detector, collimator, and transmission source are moved vertically to effect vertical segmentation. For the Can SGS, segmentation is more efficiently done by raising and lowering the rotator platform upon which the small container is positioned. This also reduces the complexity of the system mechanism. The application of the Can SGS introduces new challenges to traditional calibration and verification approaches. In this paper, we revisit SGS calibration methodology in the context of smaller waste containers, and as applied to fuel processing wastes. Specifically, we discuss solutions to the challenges introduced by requiring source standards to fit within the confines of the small containers and the unavailability of high-enriched uranium source standards.
We also discuss the implementation of a previously used technique for identifying the presence of uranium lumping. The SGS technique is a well-accepted NDA technique applicable to containers of almost any size. It assumes a homogeneous matrix and activity distribution throughout the entire container; an assumption that is at odds with the detection of lumps within the assay item typical of uranium-processing waste. This fact, in addition to the difficulty of constructing small reference standards of uranium-bearing materials, required the methodology used for performing an efficiency curve calibration to be altered. The solution discussed in this paper is demonstrated to provide good results for both the segment activity and the full container activity when measuring heterogeneous source distributions. The application of this approach will need to be based on process knowledge of the assay items, as biases can be introduced if it is used with homogeneous, or nearly homogeneous, activity distributions. The bias will need to be quantified for each combination of container geometry and SGS scanning settings. One recommended approach for using the heterogeneous calibration discussed here is to assay each item using a homogeneous calibration initially. Review of the segment activities compared to the full container activity will signal the presence of a non-uniform activity distribution, as the segment activity will be grossly disproportionate to the full container activity. Upon seeing this result, the assay should either be reanalyzed or repeated using the heterogeneous calibration. (authors)
Mollet, Pieter; Keereman, Vincent; Bini, Jason; Izquierdo-Garcia, David; Fayad, Zahi A; Vandenberghe, Stefaan
2014-02-01
Quantitative PET imaging relies on accurate attenuation correction. Recently, there has been growing interest in combining state-of-the-art PET systems with MR imaging in a sequential or fully integrated setup. As CT becomes unavailable for these systems, an alternative approach to the CT-based reconstruction of attenuation coefficients (μ values) at 511 keV must be found. Deriving μ values directly from MR images is difficult because MR signals are related to the proton density and relaxation properties of tissue. Therefore, most research groups focus on segmentation or atlas registration techniques. Although studies have shown that these methods provide viable solutions in particular applications, some major drawbacks limit their use in whole-body PET/MR. Previously, we used an annulus-shaped PET transmission source inside the field of view of a PET scanner to measure attenuation coefficients at 511 keV. In this work, we describe the use of this method in studies of patients with the sequential time-of-flight (TOF) PET/MR scanner installed at the Icahn School of Medicine at Mount Sinai, New York, NY. Five human PET/MR and CT datasets were acquired. The transmission-based attenuation correction method was compared with conventional CT-based attenuation correction and the 3-segment, MR-based attenuation correction available on the TOF PET/MR imaging scanner. The transmission-based method overcame most problems related to the MR-based technique, such as truncation artifacts of the arms, segmentation artifacts in the lungs, and imaging of cortical bone. Additionally, the TOF capabilities of the PET detectors allowed the simultaneous acquisition of transmission and emission data. Compared with the MR-based approach, the transmission-based method provided average improvements in PET quantification of 6.4%, 2.4%, and 18.7% in volumes of interest inside the lung, soft tissue, and bone tissue, respectively. 
In conclusion, a transmission-based technique with an annulus-shaped transmission source will be more accurate than a conventional MR-based technique for measuring attenuation coefficients at 511 keV in future whole-body PET/MR studies.
Normative morphometric data for cerebral cortical areas over the lifetime of the adult human brain.
Potvin, Olivier; Dieumegarde, Louis; Duchesne, Simon
2017-08-01
Proper normative data for anatomical measurements of cortical regions, which would allow quantification of brain abnormalities, are lacking. We developed norms for regional cortical surface areas, thicknesses, and volumes based on cross-sectional MRI scans from 2713 healthy individuals aged 18 to 94 years, using 23 samples provided by 21 independent research groups. The segmentation was conducted using FreeSurfer, a widely used and freely available automated segmentation software. Models predicting regional cortical estimates of each hemisphere were produced using age, sex, estimated total intracranial volume (eTIV), scanner manufacturer, magnetic field strength, and interactions as predictors. The explained variance for the left/right cortex was 76%/76% for surface area, 43%/42% for thickness, and 80%/80% for volume. The mean explained variance for all regions was 41% for surface areas, 27% for thicknesses, and 46% for volumes. Age, sex and eTIV predicted most of the explained variance for surface areas and volumes, while age was the main predictor for thicknesses. Scanner characteristics generally predicted a limited amount of variance, but this effect was stronger for thicknesses than for surface areas and volumes. For new individuals, estimates of their expected surface area, thickness and volume based on their characteristics and the scanner characteristics can be obtained using the derived formulas, as well as Z score effect sizes denoting the extent of the deviation from the normative sample. Models predicting normative values were validated in independent samples of healthy adults, showing satisfactory validation R². Deviations from the normative sample were measured in individuals with mild Alzheimer's disease and schizophrenia, and expected patterns of deviations were observed. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
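The Z score effect sizes mentioned above measure an individual's deviation from the normative prediction in units of the model's residual standard deviation. A minimal sketch with hypothetical values; the published regression formulas themselves are not reproduced here.

```python
def normative_z(observed, predicted, residual_sd):
    """Z score effect size: deviation of an observed measurement
    from the normative model's prediction, scaled by the model's
    residual standard deviation."""
    return (observed - predicted) / residual_sd

# Hypothetical example: a regional cortical volume of 2.45 cm3
# against a predicted 2.80 cm3 with residual SD 0.25 cm3
z = normative_z(observed=2.45, predicted=2.80, residual_sd=0.25)  # -1.4
```

A strongly negative Z for an atrophy-sensitive region is the kind of deviation the abstract reports observing in mild Alzheimer's disease.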
NASA Astrophysics Data System (ADS)
Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan
2017-03-01
Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and in the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, and the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5% of the true values.
Automated posterior cranial fossa volumetry by MRI: applications to Chiari malformation type I.
Bagci, A M; Lee, S H; Nagornaya, N; Green, B A; Alperin, N
2013-09-01
Quantification of PCF volume and the degree of PCF crowdedness were found beneficial for differential diagnosis of tonsillar herniation and prediction of surgical outcome in CMI. However, lack of automated methods limits the clinical use of PCF volumetry. An atlas-based method for automated PCF segmentation tailored for CMI is presented. The method performance is assessed in terms of accuracy and spatial overlap with manual segmentation. The degree of association between PCF volumes and the lengths of previously proposed linear landmarks is reported. T1-weighted volumetric MR imaging data with 1-mm isotropic resolution obtained with the use of a 3T scanner from 14 patients with CMI and 3 healthy subjects were used for the study. Manually delineated PCF from 9 patients was used to establish a CMI-specific reference for an atlas-based automated PCF parcellation approach. Agreement between manual and automated segmentation of 5 different CMI datasets was verified by means of the t test. Measurement reproducibility was established through the use of 2 repeated scans from 3 healthy subjects. Degree of linear association between PCF volume and 6 linear landmarks was determined by means of Pearson correlation. PCF volumes measured by use of the automated method and with manual delineation were similar, 196.2 ± 8.7 mL versus 196.9 ± 11.0 mL, respectively. The mean relative difference of -0.3 ± 1.9% was not statistically significant. Low measurement variability, with a mean absolute percentage value of 0.6 ± 0.2%, was achieved. None of the PCF linear landmarks were significantly associated with PCF volume. PCF and tissue content volumes can be reliably measured in patients with CMI by use of an atlas-based automated segmentation method.
McClymont, Darryl; Mehnert, Andrew; Trakic, Adnan; Kennedy, Dominic; Crozier, Stuart
2014-04-01
To present and evaluate a fully automatic method for segmentation (i.e., detection and delineation) of suspicious tissue in breast MRI. The method, based on mean-shift clustering and graph-cuts on a region adjacency graph, was developed and its parameters tuned using multimodal (T1, T2, DCE-MRI) clinical breast MRI data from 35 subjects (training data). It was then tested using two data sets. Test set 1 comprises data for 85 subjects (93 lesions) acquired using the same protocol and scanner system used to acquire the training data. Test set 2 comprises data for eight subjects (nine lesions) acquired using a similar protocol but a different vendor's scanner system. Each lesion was manually delineated in three-dimensions by an experienced breast radiographer to establish segmentation ground truth. The regions of interest identified by the method were compared with the ground truth and the detection and delineation accuracies quantitatively evaluated. One hundred percent of the lesions were detected with a mean of 4.5 ± 1.2 false positives per subject. This false-positive rate is nearly 50% better than previously reported for a fully automatic breast lesion detection system. The median Dice coefficient for Test set 1 was 0.76 (interquartile range, 0.17), and 0.75 (interquartile range, 0.16) for Test set 2. The results demonstrate the efficacy and accuracy of the proposed method as well as its potential for direct application across different MRI systems. It is (to the authors' knowledge) the first fully automatic method for breast lesion detection and delineation in breast MRI.
Expedient range enhanced 3-D robot colour vision
NASA Astrophysics Data System (ADS)
Jarvis, R. A.
1983-01-01
Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.
Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans
2010-01-01
The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.
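The thresholding and seed-growing steps mentioned above can be sketched as a simple 4-connected region growing on a toy intensity grid. This is an illustrative reimplementation, not the authors' algorithm, and the image and threshold values are hypothetical.

```python
from collections import deque

def seed_grow(image, seed, lo, hi):
    """4-connected region growing: starting from a seed pixel,
    collect all connected pixels whose intensity lies in [lo, hi]."""
    rows, cols = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not (lo <= image[r][c] <= hi):
            continue
        region.add((r, c))
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region

# Toy 'CT slice': a bright, bone-like blob in a dark background
img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0]]
blob = seed_grow(img, seed=(1, 1), lo=5, hi=10)
```

In practice each anatomical structure (bone, lung, heart) would use its own intensity window and seed, with further signal-detection steps refining the result.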
Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru
2016-10-11
An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of volumes measured using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions of interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image - mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated over the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that the repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
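The percentage-change definition above can be applied directly; the volumes below are hypothetical same-day gray-matter measurements, not the study's data:

```python
def percentage_changes(volumes):
    """Percentage change of each repeated measurement relative to
    the subject's mean, per the abstract's definition:
    100 * (measured - mean) / mean."""
    mean = sum(volumes) / len(volumes)
    return [100.0 * (v - mean) / mean for v in volumes]

# Hypothetical gray-matter volumes (ml) from two same-day scans
changes = percentage_changes([652.0, 648.0])
```

With two scans the changes are equal and opposite, and sub-percent magnitudes like these correspond to the repeatability levels reported above.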
PET/CT scanners: a hardware approach to image fusion.
Townsend, David W; Beyer, Thomas; Blodgett, Todd M
2003-07-01
New technology that combines positron emission tomography with x-ray computed tomography (PET/CT) is available from all major vendors of PET imaging equipment: CTI, Siemens, GE, and Philips. Although not all vendors have made the same design choices as those described in this review, all have in common that their high-performance design places a commercial CT scanner in tandem with a commercial PET scanner. The level of physical integration is actually less than that of the original prototype design, in which the CT and PET components were mounted on the same rotating support. There will undoubtedly be a demand for PET/CT technology with a greater level of integration, and at a reduced cost. This may be achieved through the design of a scanner specifically for combined anatomical and functional imaging, rather than a design combining separate CT and PET scanners, as in the current approaches. By avoiding the duplication of data acquisition and image reconstruction functions, for example, a more integrated design should also allow cost savings over current commercial PET/CT scanners. The goal is then to design and build a device specifically for imaging the function and anatomy of cancer in the most optimal and effective way, without conceptualizing it as combined PET and CT. The development of devices specifically for imaging a particular disease (e.g., cancer) differs from the conventional approach of, for example, an all-purpose anatomical imaging device such as a CT scanner. This new concept targets a disease management approach rather than the usual division into the medical specialties of radiology (anatomical imaging) and nuclear medicine (functional imaging). Copyright 2003 Elsevier Inc. All rights reserved.
Performance Evaluation of a PEM Scanner Using the NEMA NU 4—2008 Small Animal PET Standards
NASA Astrophysics Data System (ADS)
Luo, Weidong; Anashkin, Edward; Matthews, Christopher G.
2010-02-01
The recently published NEMA NU 4-2008 Standards have been specially designed for evaluating the performance of small animal PET scanners used in preclinical applications. In this paper, we report on the NU 4 performance of a clinical positron emission mammography (PEM) system. Since there are no PEM-specific performance test protocols available, and the NU 2 protocol (intended for whole-body PET scanners) cannot be applied without modification due to the compact design of the PEM scanner, we decided to evaluate the NU 4 Standards as an alternative. We obtained the following results: trans-axial spatial resolution was 1.8 mm FWHM for the high-resolution reconstruction mode and 2.4 mm FWHM for the standard-resolution reconstruction mode, with no significant variation within the field of view. The total system sensitivity was 0.16 cps/Bq. In image quality testing, the uniformity was found to be 3.9% STD in the standard-resolution mode and 5.6% in the high-resolution mode when measured with a 34 mm paddle separation. The NEMA NU 4-2008 Standards were found to be a practicable tool to evaluate the performance of the PEM scanner after some modifications to address the specifics of its detector configuration. Furthermore, the PEM scanner's in-plane spatial resolution was comparable to other small animal PET scanners, with good image quality.
Villanueva Campos, A M; Tardáguila de la Fuente, G; Utrera Pérez, E; Jurado Basildo, C; Mera Fernández, D; Martínez Rodríguez, C
To analyze whether there are significant differences in the objective quantitative parameters obtained in the postprocessing of dual-energy CT enterography studies between bowel segments with radiologic signs of Crohn's disease and radiologically normal segments. This retrospective study analyzed 33 patients (16 men and 17 women; mean age 54 years) with known Crohn's disease who underwent CT enterography on a dual-energy scanner with oral sorbitol and intravenous contrast material in the portal phase. Images obtained with dual energy were postprocessed to obtain color maps (iodine maps). For each patient, regions of interest were traced on these color maps and the density of iodine (mg/ml) and the fat fraction (%) were calculated for the wall of a pathologic bowel segment with radiologic signs of Crohn's disease and for the wall of a healthy bowel segment; the differences in these parameters between the two segments were analyzed. The density of iodine was lower in the radiologically normal segments than in the pathologic segments [1.8 ± 0.4 mg/ml vs. 3.7 ± 0.9 mg/ml; p < 0.05]. The fat fraction was higher in the radiologically normal segments than in the pathologic segments [32.42 ± 6.5% vs. 22.23 ± 9.4%; p < 0.05]. There are significant differences in the iodine density and fat fraction between bowel segments with radiologic signs of Crohn's disease and radiologically normal segments. Copyright © 2018 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.
Izquierdo-Garcia, David; Hansen, Adam E; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T; Chonde, Daniel B; Catana, Ciprian
2014-11-01
We present an approach for head MR-based attenuation correction (AC) based on the Statistical Parametric Mapping 8 (SPM8) software, which combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (μ maps) from MR data in integrated PET/MR scanners. Coregistered anatomic MR and CT images of 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray matter, white matter, cerebrospinal fluid, bone, soft tissue, and air), which were then nonrigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomic MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients to be used for AC of PET data. The method was validated on 16 new subjects with brain tumors (n = 12) or mild cognitive impairment (n = 4) who underwent CT and PET/MR scans. The μ maps and corresponding reconstructed PET images were compared with those obtained using the gold standard CT-based approach and the Dixon-based method available on the Biograph mMR scanner. Relative change (RC) images were generated in each case, and voxel- and region-of-interest-based analyses were performed. The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain linear attenuation coefficients (RC, 1.38% ± 4.52%) compared with the gold standard. Similar results (RC, 1.86% ± 4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and region-of-interest-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87% ± 5.0% and 2.74% ± 2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0% ± 10.25% and 9.38% ± 4.97%, respectively). 
Areas closer to the skull showed the largest improvement. We have presented an SPM8-based approach for deriving the head μ map from MR data to be used for PET AC in integrated PET/MR scanners. Its implementation is straightforward and requires only the morphologic data acquired with a single MR sequence. The method is accurate and robust, combining the strengths of both segmentation- and atlas-based approaches while minimizing their drawbacks. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
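The relative-change (RC) comparison used in the validation above can be sketched as a voxelwise percentage difference against the CT-based gold standard; the attenuation values below are illustrative, not from the study:

```python
# RC (%) per voxel: 100 * (test - reference) / reference, computed between an
# MR-derived attenuation map and the CT-based gold standard.
def relative_change(test_map, reference_map):
    return [100.0 * (t - r) / r for t, r in zip(test_map, reference_map)]

mu_ct = [0.096, 0.151, 0.100]   # reference linear attenuation coefficients (1/cm)
mu_mr = [0.098, 0.148, 0.100]   # MR-derived values for the same voxels
rc = relative_change(mu_mr, mu_ct)
mean_rc = sum(rc) / len(rc)
```

The same formula applied to reconstructed PET images gives the quantification errors reported in the abstract.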
NASA Astrophysics Data System (ADS)
Tomczak, Kamil; Jakubowski, Jacek; Fiołek, Przemysław
2017-06-01
Crack width measurement is an important element of research on the progress of self-healing cement composites. Due to the nature of this research, the method of measuring the width of cracks and their changes over time must meet specific requirements. The article presents a novel method of measuring crack width based on images from a scanner with an optical resolution of 6400 dpi, subject to initial image processing in the ImageJ development environment and further processing and analysis of results. After registering a series of images of the cracks at different times using the SIFT (Scale-Invariant Feature Transform) method, a dense network of line segments is created in all images, intersecting the cracks perpendicular to their local axes. Along these line segments, brightness profiles are extracted, which are the basis for determination of crack width. The distribution and rotation of the lines of intersection in a regular layout, automation of transformations, management of images and brightness profiles, and data analysis to determine the width of cracks and their changes over time are performed automatically by custom code in the ImageJ and VBA environments. The article describes the method, tests of its properties, and sources of measurement uncertainty. It also presents an example application of the method in research on autogenous self-healing of concrete, specifically the reduction of a sample crack width and its full closure within 28 days of the self-healing process.
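The width-from-brightness-profile step can be illustrated as follows. This is a simplified sketch, not the authors' code: the crack shows up as a dark valley in a 1-D profile, and its width is taken here as the span of samples below a brightness threshold, converted to millimetres via the 6400 dpi optical resolution:

```python
# One sample at 6400 dpi covers 25.4/6400 mm (~0.004 mm).
MM_PER_PIXEL = 25.4 / 6400

def crack_width_mm(profile, threshold):
    """Span of contiguous samples darker than `threshold`, converted to mm."""
    below = [i for i, v in enumerate(profile) if v < threshold]
    if not below:
        return 0.0
    return (below[-1] - below[0] + 1) * MM_PER_PIXEL

# Synthetic brightness profile crossing a crack (values are invented).
profile = [200, 198, 120, 60, 55, 70, 150, 199, 201]
width = crack_width_mm(profile, threshold=130)
```

A real implementation would interpolate sub-pixel crossings and average over many profiles; the thresholding above only conveys the principle.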
Segmentation of remotely sensed data using parallel region growing
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Cox, S. C.
1983-01-01
The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
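A minimal, hypothetical sketch of the merge test described above: two spatially adjacent regions merge when their means are similar relative to their pooled variance. The exact criterion and threshold are not reproduced from the paper; this only illustrates a mean/variance-based similarity test:

```python
def region_stats(pixels):
    """Size, mean, and (population) variance of a region's pixel values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return n, mean, var

def should_merge(region_a, region_b, t=1.0):
    """Merge if the mean difference is within t pooled standard deviations."""
    _, ma, va = region_stats(region_a)
    _, mb, vb = region_stats(region_b)
    pooled_sd = ((va + vb) / 2.0) ** 0.5
    return abs(ma - mb) <= t * max(pooled_sd, 1e-6)

a = [10, 11, 9, 10]     # adjacent regions of similar brightness merge
b = [10, 12, 11, 10]
c = [40, 41, 39, 42]    # a much brighter region does not
```

Iterating this test over adjacent region pairs until no merge succeeds yields the segmentation; the parallel variants in the paper evaluate many such pairs simultaneously.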
NASA Astrophysics Data System (ADS)
Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.
2017-10-01
A system was developed for automatic estimations of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system that consists of a low precision inertial measurement unit supported by image matching with data from a stereo-camera. The initial estimation of the sensor trajectory was then calibrated by adjustments of the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections to individual tree stems. The stem diameter estimates of all stem sections associated to the same tree stem were then combined for estimation of stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots, and manually measured trees were automatically linked to trees detected in laser data. The DBH could be estimated with an RMSE of 19 mm (6%) and a bias of 8 mm (3%). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates also with a low precision positioning system.
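The circle-fit step can be illustrated with an algebraic (Kasa-style) least-squares fit to a partial arc of laser returns from a stem section. This is a hypothetical sketch, not the authors' implementation, and the point coordinates are synthetic:

```python
import math

def fit_circle_diameter(points):
    """Least-squares fit of x^2 + y^2 + D*x + E*y + F = 0; returns the diameter."""
    # Build normal equations A^T A u = A^T b for u = (D, E, F).
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    d, e, f = (m[i][3] / m[i][i] for i in range(3))
    radius = (d * d / 4 + e * e / 4 - f) ** 0.5
    return 2 * radius

# Points sampled on a partial arc of a 0.30 m diameter stem centred at (1, 2),
# as a terrestrial scanner might see it from one side.
pts = [(1.0 + 0.15 * math.cos(a), 2.0 + 0.15 * math.sin(a))
       for a in [i * 0.1 for i in range(20)]]
```

Repeating such fits over individual scanner rotations and averaging per stem is what yields the robust DBH estimates described above.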
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data is generated by a vision camera and a laser scanner. The problem of different resolutions for these sensory data was solved by reduced image resolution, fusion of different data, and use of a fuzzy image segmentation technique.
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Samei, Ehsan
2012-03-01
The purpose of this work was twofold: (a) to estimate patient- and cohort-specific radiation dose and cancer risk index for abdominopelvic computed tomography (CT) scans; (b) to evaluate the effects of patient anatomical characteristics (size, age, and gender) and CT scanner model on dose and risk conversion coefficients. The study included 100 patient models (42 pediatric models, 58 adult models) and multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). A previously validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which DLP-normalized effective dose (k factor) and DLP-normalized risk index (q factor) values were derived. The k factor showed exponential decrease with increasing patient size. For a given gender, the q factor showed exponential decrease with both increasing patient size and patient age. The discrepancies in k and q factors across scanners were on average 8% and 15%, respectively. This study demonstrates the feasibility of estimating patient-specific organ dose and cohort-specific effective dose and risk index in abdominopelvic CT requiring only the knowledge of patient size, gender, and age.
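The reported exponential dependence of the k factor on patient size, k = a * exp(-b * size), can be fitted by linear least squares in log space. The coefficients and sizes below are invented for illustration, not values from the study:

```python
import math

def fit_exponential(sizes, k_values):
    """Fit k = a*exp(-b*size) via least squares on ln(k) = ln(a) - b*size."""
    n = len(sizes)
    sx = sum(sizes)
    sy = sum(math.log(k) for k in k_values)
    sxx = sum(x * x for x in sizes)
    sxy = sum(x * math.log(k) for x, k in zip(sizes, k_values))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -slope  # a, b

# Synthetic (hypothetical) k factors for a range of abdominal diameters in cm.
sizes = [15.0, 20.0, 25.0, 30.0, 35.0]
ks = [0.030 * math.exp(-0.04 * d) for d in sizes]
a, b = fit_exponential(sizes, ks)
```

On noiseless synthetic data the fit recovers the generating coefficients exactly; with measured k factors it gives the size-dependent conversion curve.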
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreibmann, E; Shu, H; Cordova, J
Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhancing (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of the tumor region is identified by searching for regions of FLAIR abnormalities that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients that had sMRI datasets acquired pre-, mid- and post-treatment, providing a broad range of enhancement patterns. Compared to classical imaging, where heterogeneity in tumor appearance and shape across patients posed a greater challenge to the algorithm, regions of abnormal activity were easily detected in the sMRI metabolite maps when combining the detail available in the standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared to the standard CE MRI alone.
Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.
NS001MS - Landsat-D thematic mapper band aircraft scanner
NASA Technical Reports Server (NTRS)
Richard, R. R.; Merkel, R. F.; Meeks, G. R.
1978-01-01
The thematic mapper is a multispectral scanner which will be launched aboard Landsat-D in the early 1980s. Compared with previous Landsat scanners, this instrument will have an improved spatial resolution (30 m) and new spectral bands. Designated NS001MS, the scanner is designed to duplicate the thematic mapper spectral bands plus two additional bands (1.0 to 1.3 microns and 2.08 to 2.35 microns) in an aircraft scanner for evaluation and investigation prior to design and launch of the final thematic mapper. Applicable specifications used in defining the thematic mapper were retained in the NS001MS design, primarily with respect to spectral bandwidths, noise equivalent reflectance, and noise equivalent difference temperature. The technical design and operational characteristics of the multispectral scanner (with thematic mapper bands) are discussed.
Automatic segmentation of the glenohumeral cartilages from magnetic resonance images.
Neubert, A; Yang, Z; Engstrom, C; Xia, Y; Strudwick, M W; Chandra, S S; Fripp, J; Crozier, S
2016-10-01
Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging has not reached the level achieved for the weight-bearing knee and hip joint cartilages, despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone-cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. 
Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head and glenoid fossa, respectively. Mean DSC scores of 0.74 and 0.72 were obtained for the humeral and glenoid cartilage volumes, respectively. The manual interobserver reliability evaluated by DSC was 0.80 ± 0.03 and 0.76 ± 0.04 for the two cartilages, implying that the automated results were within an acceptable 10% difference. The MASD between the automatic and the corresponding manual cartilage segmentations was less than 0.4 mm (previous studies reported mean cartilage thickness of 1.3 mm). This work shows the feasibility of volumetric segmentation and separation of the glenohumeral cartilages from MR images. To their knowledge, this is the first fully automated algorithm for volumetric segmentation of the individual glenohumeral cartilages from MR images. The approach was validated against manual segmentations from experienced analysts. In future work, the approach will be validated on imaging datasets acquired with various MR contrasts in patients.
McCollough, Cynthia H; Ulzheimer, Stefan; Halliburton, Sandra S; Shanneik, Kaiss; White, Richard D; Kalender, Willi A
2007-05-01
To develop a consensus standard for quantification of coronary artery calcium (CAC). A standard for CAC quantification was developed by a multi-institutional, multimanufacturer international consortium of cardiac radiologists, medical physicists, and industry representatives. This report specifically describes the standardization of scan acquisition and reconstruction parameters, the use of patient size-specific tube current values to achieve a prescribed image noise, and the use of the calcium mass score to eliminate scanner- and patient size-based variations. An anthropomorphic phantom containing calibration inserts and additional phantom rings were used to simulate small, medium-size, and large patients. The three phantoms were scanned by using the recommended protocols for various computed tomography (CT) systems to determine the calibration factors that relate measured CT numbers to calcium hydroxyapatite density and to determine the tube current values that yield comparable noise values. Calculation of the calcium mass score was standardized, and the variance in Agatston, volume, and mass scores was compared among CT systems. Use of the recommended scanning parameters resulted in similar noise for small, medium-size, and large phantoms with all multi-detector row CT scanners. Volume scores had greater interscanner variance than did Agatston and calcium mass scores. Use of a fixed calcium hydroxyapatite density threshold (100 mg/cm³), as compared with use of a fixed CT number threshold (130 HU), reduced interscanner variability in Agatston and calcium mass scores. With use of a density segmentation threshold, the calcium mass score had the smallest variance as a function of patient size. Standardized quantification of CAC yielded comparable image noise, spatial resolution, and mass scores among different patient sizes and different CT systems and facilitated reduced radiation dose for small and medium-size patients.
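The mass-score idea above can be sketched as follows. This is a schematic, not the consortium's reference code: the calibration factor, HU values, and voxel volume are invented, and the HU-to-density mapping is simplified to a single multiplicative factor:

```python
def calcium_mass_mg(voxel_hu, voxel_volume_ml, calib_mg_per_ml_hu,
                    density_threshold_mg_ml=100.0):
    """Calcium mass of a lesion: integrate hydroxyapatite density over volume.

    Voxels are included by a density threshold (default 100 mg/cm^3) rather
    than a fixed 130 HU cutoff, mirroring the standard described above.
    """
    # Convert each voxel's CT number to equivalent hydroxyapatite density
    # using a scanner-specific calibration factor (simplified: no intercept).
    densities = [calib_mg_per_ml_hu * hu for hu in voxel_hu]
    kept = [d for d in densities if d >= density_threshold_mg_ml]
    return sum(d * voxel_volume_ml for d in kept)

# Four voxels of 0.001 ml each; calibration factor 0.8 mg/ml per HU (made up).
mass = calcium_mass_mg([150, 200, 90, 300], voxel_volume_ml=0.001,
                       calib_mg_per_ml_hu=0.8)
```

Because the threshold is applied in density units, the same lesion scores consistently across scanners with different HU calibrations, which is the point of the standardization.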
A Scalable Framework For Segmenting Magnetic Resonance Images
Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar
2009-01-01
A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
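The incremental variants described above build on the core fuzzy c-means updates, shown here as a compact, hypothetical sketch on 1-D data (the paper's scalable modifications, which process successive data subsets, are not reproduced):

```python
def fcm_memberships(data, centers, m=2.0):
    """u[i][k]: membership of point k in cluster i (rows sum to 1 per point)."""
    u = []
    for c in centers:
        row = []
        for x in data:
            d = abs(x - c) + 1e-12
            denom = sum((d / (abs(x - c2) + 1e-12)) ** (2.0 / (m - 1))
                        for c2 in centers)
            row.append(1.0 / denom)
        u.append(row)
    return u

def fcm_centers(data, u, m=2.0):
    """Each center is the membership-weighted mean of the data."""
    return [sum((w ** m) * x for w, x in zip(row, data)) /
            sum(w ** m for w in row) for row in u]

# Two well-separated intensity clusters (synthetic values).
data = [0.0, 0.1, 0.9, 1.0]
centers = [0.05, 0.95]
for _ in range(10):
    u = fcm_memberships(data, centers)
    centers = fcm_centers(data, u)
```

The incremental versions apply these same two updates to one chunk of voxels at a time, which is what lets them scale to full-resolution brain volumes.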
Huang, Hsiao-Hui; Huang, Chun-Yu; Chen, Chiao-Ning; Wang, Yun-Wen; Huang, Teng-Yi
2018-01-01
Native T1 value is emerging as a reliable indicator of abnormal heart conditions related to myocardial fibrosis. Investigators have extensively used the standardized myocardial segmentation of the American Heart Association (AHA) to measure regional T1 values of the left ventricular (LV) walls. In this paper, we present a fully automatic system to analyze modified Look-Locker inversion recovery images and to report regional T1 values of AHA segments. Ten healthy individuals participated in the T1 mapping study with a 3.0 T scanner after providing informed consent. First, we obtained masks of an LV blood-pool region and LV walls by using an image synthesis method and a layer-growing method. Subsequently, the LV walls were divided into AHA segments by identifying the boundaries of the septal regions and by using a radial projection method. The layer-growing method significantly enhanced the accuracy of the derived myocardium mask. We compared the T1 values that were obtained using manual region of interest selections and those obtained using the automatic system. The average T1 difference of the calculated segments was 4.6 ± 1.5%. This study demonstrated a practical and robust method of obtaining native T1 values of AHA segments in LV walls.
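The radial-projection idea can be illustrated by assigning each myocardial voxel to a segment by its angle around the LV blood-pool centre. The segment boundaries and names here are schematic, not the paper's implementation (which anchors the division at the septal boundaries):

```python
import math

def aha_segment(x, y, cx, cy, n_segments=6, start_deg=0.0):
    """Index of the angular sector containing voxel (x, y) around centre (cx, cy)."""
    angle = math.degrees(math.atan2(y - cy, x - cx)) % 360.0
    return int(((angle - start_deg) % 360.0) // (360.0 / n_segments))

# A voxel at 0 degrees falls in segment 0, one at 90 degrees in segment 1.
seg_right = aha_segment(1.0, 0.0, 0.0, 0.0)
seg_top = aha_segment(0.0, 1.0, 0.0, 0.0)
```

Averaging the T1 values of all voxels sharing a segment index then gives the regional T1 values reported per AHA segment.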
Automatic segmentation of brain MRIs and mapping neuroanatomy across the human lifespan
NASA Astrophysics Data System (ADS)
Keihaninejad, Shiva; Heckemann, Rolf A.; Gousias, Ioannis S.; Rueckert, Daniel; Aljabar, Paul; Hajnal, Joseph V.; Hammers, Alexander
2009-02-01
A robust model for the automatic segmentation of human brain images into anatomically defined regions across the human lifespan would be highly desirable, but such structural segmentations of brain MRI are challenging due to age-related changes. We have developed a new method, based on established algorithms for automatic segmentation of young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into 83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) from 20 to 90 years, with equal numbers of men and women; data came from two different scanners (1.5T, 3T), using the IXI database. Each of the adult atlases was registered to each target MR image. By using additional information from segmentation into tissue classes (GM, WM and CSF) to initialise the warping based on label consistency similarity before feeding this into the previous normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy and ventricular enlargement with age. The final segmentation was obtained by combination of the 30 propagated atlases using decision fusion. Kernel smoothing was used for modelling the structural volume changes with aging. Example linear correlation coefficients with age were, for lateral ventricular volume, r(male) = 0.76, r(female) = 0.58 and, for hippocampal volume, r(male) = -0.6, r(female) = -0.4 (all p < 0.01).
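The decision-fusion step can be sketched as a per-voxel vote among the propagated atlas labels; the label values and the simple majority rule below are illustrative, not the paper's exact fusion scheme:

```python
from collections import Counter

def fuse_labels(atlas_labels):
    """Per-voxel majority vote over propagated atlases.

    atlas_labels: list of per-atlas label maps, each an equal-length list.
    """
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        votes = Counter(a[v] for a in atlas_labels)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three atlases, four voxels (labels: 0 = background, 17 = a structure label).
atlases = [[0, 17, 17, 0],
           [0, 17, 0, 0],
           [17, 17, 17, 0]]
fused = fuse_labels(atlases)
```

Voting across 30 independently registered atlases averages out individual registration errors, which is why fusion outperforms any single propagated atlas.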
Methods for CT automatic exposure control protocol translation between scanner platforms.
McKenney, Sarah E; Seibert, J Anthony; Lamba, Ramit; Boone, John M
2014-03-01
An imaging facility with a diverse fleet of CT scanners faces considerable challenges when propagating CT protocols with consistent image quality and patient dose across scanner makes and models. Although some protocol parameters can comfortably remain constant among scanners (eg, tube voltage, gantry rotation time), the automatic exposure control (AEC) parameter, which selects the overall mA level during tube current modulation, is difficult to match among scanners, especially from different CT manufacturers. Objective methods for converting tube current modulation protocols among CT scanners were developed. Three CT scanners were investigated, a GE LightSpeed 16 scanner, a GE VCT scanner, and a Siemens Definition AS+ scanner. Translation of the AEC parameters such as noise index and quality reference mAs across CT scanners was specifically investigated. A variable-diameter poly(methyl methacrylate) phantom was imaged on the 3 scanners using a range of AEC parameters for each scanner. The phantom consisted of 5 cylindrical sections with diameters of 13, 16, 20, 25, and 32 cm. The protocol translation scheme was based on matching either the volumetric CT dose index or image noise (in Hounsfield units) between two different CT scanners. A series of analytic fit functions, corresponding to different patient sizes (phantom diameters), were developed from the measured CT data. These functions relate the AEC metric of the reference scanner, the GE LightSpeed 16 in this case, to the AEC metric of a secondary scanner. When translating protocols between different models of CT scanners (from the GE LightSpeed 16 reference scanner to the GE VCT system), the translation functions were linear. However, a power-law function was necessary to convert the AEC functions of the GE LightSpeed 16 reference scanner to the Siemens Definition AS+ secondary scanner, because of differences in the AEC functionality designed by these two companies. 
Protocol translation on the basis of quantitative metrics (volumetric CT dose index or measured image noise) is feasible. Protocol translation has a dependency on patient size, especially between the GE and Siemens systems. Translation schemes that preserve dose levels may not produce identical image quality. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
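The abstract reports linear translation functions between GE scanners and a power-law function between the GE and Siemens scanners. A minimal sketch of how such AEC translation functions could be fitted from paired measurements, using numpy; the data values and function names are illustrative, not the study's measurements:

```python
import numpy as np

def fit_linear(ref, sec):
    """Least-squares linear translation: sec ~ a*ref + b."""
    a, b = np.polyfit(ref, sec, 1)
    return a, b

def fit_power_law(ref, sec):
    """Power-law translation sec ~ c*ref**p, fitted in log-log space."""
    p, log_c = np.polyfit(np.log(ref), np.log(sec), 1)
    return np.exp(log_c), p

# Illustrative paired AEC metrics (not the study's measured values):
# reference-scanner metric vs. secondary-scanner metric at matched CTDIvol.
ref = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
sec_same_vendor = 2.0 * ref + 1.0        # linear behaviour (GE -> GE)
sec_other_vendor = 0.5 * ref ** 1.8      # power-law behaviour (GE -> Siemens)

a, b = fit_linear(ref, sec_same_vendor)
c, p = fit_power_law(ref, sec_other_vendor)
```

In practice one such fit would be made per phantom diameter, since the abstract notes the translation depends on patient size.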
NASA Astrophysics Data System (ADS)
Morsdorf, F.; Meier, E.; Koetz, B.; Nüesch, D.; Itten, K.; Allgöwer, B.
2003-04-01
The potential of airborne laser scanning for mapping forest stands has been intensively evaluated in the past few years. Algorithms deriving structural forest parameters in a stand-wise manner from laser data have been successfully implemented by a number of researchers. However, with very high point density (>20 points/m^2) laser data we pursue the approach of deriving these parameters on a single-tree basis. We explore the potential of delineating single trees from laser scanner raw data (x,y,z triples) and validate this approach with a dataset of more than 2000 georeferenced trees, including tree height and crown diameter, gathered on a long-term forest monitoring site by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL). The accuracy of the laser scanner is evaluated through 6 reference targets, each 3x3 m^2 in size and horizontally planar, validating both the horizontal and vertical accuracy of the laser scanner by matching of triangular irregular networks (TINs). Single trees are segmented by a clustering analysis in all three coordinate dimensions, and their geometric properties can then be derived directly from the tree cluster.
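Once a cluster of returns has been attributed to one tree, the geometric properties mentioned above follow directly from the point coordinates. A small sketch, assuming the cluster is given as an N x 3 numpy array and taking crown diameter as the largest pairwise horizontal distance (one possible convention, not necessarily the authors'):

```python
import numpy as np

def tree_metrics(cluster_points, ground_z=0.0):
    """Tree height and crown diameter from one segmented cluster (N x 3).

    Height is the highest return above ground; crown diameter is taken as
    the largest pairwise horizontal distance within the cluster.
    """
    height = cluster_points[:, 2].max() - ground_z
    xy = cluster_points[:, :2]
    pairwise = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    return height, pairwise.max()

# toy cluster: a 12 m tree with a 4 m wide crown
cluster = np.array([
    [0.0, 0.0, 12.0],   # top return
    [-2.0, 0.0, 8.0],   # crown edge
    [2.0, 0.0, 8.0],    # opposite crown edge
    [0.0, 1.5, 9.0],
])
height, crown_diameter = tree_metrics(cluster)
```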
Segmentation of mouse dynamic PET images using a multiphase level set method
NASA Astrophysics Data System (ADS)
Cheng-Liao, Jinxiu; Qi, Jinyi
2010-11-01
Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time-activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and activity difference among organs of interest. We used a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partition of regions with high contrast. We validated the proposed method using computer simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.
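The weighted absolute-difference data term described above can be illustrated in isolation: a voxel's time-activity curve (TAC) is compared against each region's mean TAC, with per-frame weights. A minimal sketch under assumed toy TACs and uniform weights (the level set machinery itself is omitted):

```python
import numpy as np

def data_cost(tac, region_mean_tac, frame_weights):
    """Weighted absolute-difference data term for one voxel and one region."""
    return np.sum(frame_weights * np.abs(tac - region_mean_tac))

def assign_region(tac, region_mean_tacs, frame_weights):
    """Pick the region whose mean TAC best matches the voxel's TAC."""
    costs = [data_cost(tac, m, frame_weights) for m in region_mean_tacs]
    return int(np.argmin(costs))

# two organs with equal integrated activity but different temporal shape
weights = np.array([1.0, 1.0, 1.0, 1.0])   # illustrative frame weights
organ_a = np.array([9.0, 5.0, 3.0, 1.0])   # fast washout
organ_b = np.array([1.0, 3.0, 5.0, 9.0])   # slow uptake
voxel_tac = np.array([8.0, 5.0, 3.0, 2.0])

label = assign_region(voxel_tac, [organ_a, organ_b], weights)
```

This is exactly the situation the abstract targets: the two organ TACs integrate to the same static activity, yet the temporal information separates them.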
Sharing skills: using augmented reality for human-robot collaboration
NASA Astrophysics Data System (ADS)
Giesler, Bjorn; Steinhaus, Peter; Walther, Marcus; Dillmann, Ruediger
2004-05-01
Both stationary 'industrial' and autonomous mobile robots nowadays pervade many workplaces, but human-friendly interaction with them is still very much an experimental subject. One of the reasons for this is that computer and robotic systems are very bad at performing certain tasks well and robustly. A prime example is classification of sensor readings: Which part of a 3D depth image is the cup, which the saucer, which the table? These are tasks that humans excel at. To alleviate this problem, we propose a team approach, wherein the robot records sensor data and uses an Augmented-Reality (AR) system to present the data to the user directly in the 3D environment. The user can then perform classification decisions directly on the data by pointing, gestures and speech commands. After the classification has been performed by the user, the robot takes the classified data and matches it to its environment model. As a demonstration of this approach, we present an initial system for creating objects on-the-fly in the environment model. A rotating laser scanner is used to capture a 3D snapshot of the environment. This snapshot is presented to the user as an overlay over his view of the scene. The user classifies unknown objects by pointing at them. The system segments the snapshot according to the user's indications and presents the results of segmentation back to the user, who can then inspect, correct and enhance them interactively. After a satisfying result has been reached, the laser scanner can take more snapshots from other angles and use the previous segmentation hints to construct a 3D model of the object.
On the fallacy of quantitative segmentation for T1-weighted MRI
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; Harrigan, Robert L.; Newton, Allen T.; Rane, Swati; Pallavaram, Srivatsan; D'Haese, Pierre F.; Dawant, Benoit M.; Claassen, Daniel O.; Landman, Bennett A.
2016-03-01
T1-weighted magnetic resonance imaging (MRI) generates contrasts with primary sensitivity to local T1 properties (with lesser T2 and PD contributions). The observed signal intensity is determined by these local properties and the sequence parameters of the acquisition. In common practice, a range of acceptable parameters is used to ensure "similar" contrast across scanners used for any particular study (e.g., the ADNI standard MPRAGE). However, different studies may use different ranges of parameters and report the derived data as simply "T1-weighted". Physics and imaging authors pay strong heed to the specifics of the imaging sequences, but image processing authors have historically been more lax. Herein, we consider three T1-weighted sequences acquired with the same underlying protocol (MPRAGE) and vendor (Philips), but with "normal study-to-study variation" in parameters. We show that the gray matter/white matter/cerebrospinal fluid contrast is subtly but systematically different between these images and yields systematically different measurements of brain volume. The problem derives from the visually apparent boundary shifts, which would also be seen by a human rater. We present and evaluate two solutions to produce consistent segmentation results across imaging protocols. First, we propose to acquire multiple sequences on a subset of the data and use the multi-modal imaging as atlases to segment target images acquired with any of the available sequences. Second (if additional imaging is not available), we propose to synthesize atlases of the target imaging sequence and use the synthesized atlases in place of atlas imaging data. Both approaches significantly improve consistency of target labeling.
3D intrathoracic region definition and its application to PET-CT analysis
NASA Astrophysics Data System (ADS)
Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W.; Higgins, William E.
2014-03-01
Recently developed integrated PET-CT scanners give co-registered multimodal data sets that offer complementary three-dimensional (3D) digital images of the chest. PET (positron emission tomography) imaging gives highly specific functional information of suspect cancer sites, while CT (X-ray computed tomography) gives associated anatomical detail. Because the 3D CT and PET scans generally span the body from the eyes to the knees, accurate definition of the intrathoracic region is vital for focusing attention to the central-chest region. In this way, diagnostically important regions of interest (ROIs), such as central-chest lymph nodes and cancer nodules, can be more efficiently isolated. We propose a method for automatic segmentation of the intrathoracic region from a given co-registered 3D PET-CT study. Using the 3D CT scan as input, the method begins by finding an initial intrathoracic region boundary for a given 2D CT section. Next, active contour analysis, driven by a cost function depending on local image gradient, gradient-direction, and contour shape features, iteratively estimates the contours spanning the intrathoracic region on neighboring 2D CT sections. This process continues until the complete region is defined. We next present an interactive system that employs the segmentation method for focused 3D PET-CT chest image analysis. A validation study over a series of PET-CT studies reveals that the segmentation method gives a Dice index accuracy greater than 98%. In addition, further results demonstrate the utility of the method for focused 3D PET-CT chest image analysis, ROI definition, and visualization.
MR Scanner Systems Should Be Adequately Characterized in Diffusion-MRI of the Breast
Giannelli, Marco; Sghedoni, Roberto; Iacconi, Chiara; Iori, Mauro; Traino, Antonio Claudio; Guerrisi, Maria; Mascalchi, Mario; Toschi, Nicola; Diciotti, Stefano
2014-01-01
Breast imaging represents a relatively recent and promising field of application of quantitative diffusion-MRI techniques. In view of the importance of guaranteeing and assessing its reliability in clinical as well as research settings, the aim of this study was to specifically characterize how the main MR scanner system-related factors affect quantitative measurements in diffusion-MRI of the breast. In particular, phantom acquisitions were performed on three 1.5 T MR scanner systems by different manufacturers, all equipped with a dedicated multi-channel breast coil as well as acquisition sequences for diffusion-MRI of the breast. We assessed the accuracy, inter-scan and inter-scanner reproducibility of the mean apparent diffusion coefficient measured along the main orthogonal directions (
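The apparent diffusion coefficient (ADC) assessed in this study follows from the mono-exponential diffusion model S(b) = S0 * exp(-b * ADC). A minimal sketch of the two-point ADC estimate, with a round-trip check against a water-like diffusivity (the signal values are synthetic, not phantom measurements from the study):

```python
import numpy as np

def adc(s0, sb, b):
    """Apparent diffusion coefficient (mm^2/s) from signals at b=0 and b>0,
    assuming the mono-exponential model S(b) = S0 * exp(-b * ADC)."""
    return np.log(s0 / sb) / b

# round-trip check with a water-like ADC of 2.0e-3 mm^2/s at b = 1000 s/mm^2
s0 = 1000.0
b = 1000.0
sb = s0 * np.exp(-b * 2.0e-3)
estimated = adc(s0, sb, b)
```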
Reinstein, Dan Z.; Archer, Timothy J.; Silverman, Ronald H.; Coleman, D. Jackson
2008-01-01
Purpose To determine the accuracy, repeatability, and reproducibility of measurement of lateral dimensions using the Artemis (Ultralink LLC) very high-frequency (VHF) digital ultrasound (US) arc scanner. Setting London Vision Clinic, London, United Kingdom. Methods A test object was measured first with a micrometer and then with the Artemis arc scanner. Five sets of 10 consecutive B-scans of the test object were performed with the scanner. The test object was removed from the system between each scan set. One expert observer and one newly trained observer separately measured the lateral dimension of the test object. Two-factor analysis of variance was performed. The accuracy was calculated as the average bias of the scan set averages. The repeatability and reproducibility coefficients were calculated. The coefficient of variation (CV) was calculated for repeatability and reproducibility. Results The test object was measured to be 10.80 mm wide. The mean lateral dimension bias was 0.00 mm. The repeatability coefficient was 0.114 mm. The reproducibility coefficient was 0.026 mm. The repeatability CV was 0.38%, and the reproducibility CV was 0.09%. There was no statistically significant variation between observers (P = .0965). There was a statistically significant variation between scan sets (P = .0036) attributed to minor vertical changes in the alignment of the test object between consecutive scan sets. Conclusion The Artemis VHF digital US arc scanner obtained accurate, repeatable, and reproducible measurements of lateral dimensions of the size commonly found in the anterior segment. PMID:17081860
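The repeatability and reproducibility statistics reported above can be sketched from repeated scan-set data. A minimal illustration, assuming the common 2.77 * SD convention (1.96 * sqrt(2)) for the coefficients; the paper's exact two-factor ANOVA model is not reproduced, and the measurement values are synthetic:

```python
import numpy as np

def precision_coefficients(scan_sets):
    """Repeatability/reproducibility coefficients and CVs from repeated scans.

    scan_sets: n_sets x n_scans measurements of the same object.  Uses the
    common 2.77 * SD convention; the study's two-factor ANOVA is not
    reproduced here.
    """
    within_sd = np.sqrt(np.mean(np.var(scan_sets, axis=1, ddof=1)))
    between_sd = np.std(scan_sets.mean(axis=1), ddof=1)
    grand_mean = scan_sets.mean()
    return {
        "repeatability": 2.77 * within_sd,
        "reproducibility": 2.77 * between_sd,
        "repeatability_cv_pct": 100.0 * within_sd / grand_mean,
        "reproducibility_cv_pct": 100.0 * between_sd / grand_mean,
    }

# toy data: 3 scan sets of 4 measurements of a ~10.8 mm object
sets = np.array([
    [10.80, 10.82, 10.78, 10.80],
    [10.84, 10.86, 10.82, 10.84],
    [10.76, 10.78, 10.74, 10.76],
])
stats = precision_coefficients(sets)
```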
What is Scanner and NonScanner?
Atmospheric Science Data Center
2014-12-08
... instruments specifically designed by a team of electronic, thermal, and mechanical experts, built and integrated with the ERBS and NOAA ... of three co-planar detectors (longwave, shortwave and total energy), all of which scan from one limb of the Earth to the other, across the ...
Hybrid registration of PET/CT in thoracic region with pre-filtering PET sinogram
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Marhaban, M. H.; Nordin, A. J.; Hashim, S.
2015-11-01
The integration of physiological (PET) and anatomical (CT) images in cancer delineation requires an accurate spatial registration technique. Although a hybrid PET/CT scanner is used to co-register these images, significant misregistrations exist due to patient and respiratory/cardiac motions. This paper proposes a hybrid feature-intensity based registration technique for hybrid PET/CT scanners. First, the simulated PET sinogram was filtered with a 3D hybrid mean-median filter before reconstructing the image. Features were then derived from the structures (lung, heart and tumor) segmented from both images. The registration was performed based on a modified multi-modality demon registration with a multiresolution scheme. Apart from visually observed improvements, the proposed registration technique increased the normalized mutual information (NMI) index between the PET/CT images after registration. All nine tested datasets showed greater improvement in the mutual information (MI) index than the free-form deformation (FFD) registration technique, with the highest MI increase being 25%.
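The NMI index used above to score registration quality can be computed from the joint intensity histogram of the two images. A minimal sketch of one common definition, NMI = (H(A) + H(B)) / H(A,B); the bin count and test images are illustrative:

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information (H(A) + H(B)) / H(A,B) of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
self_nmi = nmi(img, img)                 # perfectly aligned image with itself
shuffled = rng.permutation(img.ravel()).reshape(img.shape)
misreg_nmi = nmi(img, shuffled)          # unrelated intensities score lower
```

A successful registration moves the NMI of the image pair toward the self-alignment value.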
Autonomous surgical robotics using 3-D ultrasound guidance: feasibility study.
Whitman, John; Fronheiser, Matthew P; Ivancevich, Nikolas M; Smith, Stephen W
2007-10-01
The goal of this study was to test the feasibility of using a real-time 3D (RT3D) ultrasound scanner with a transthoracic matrix array transducer probe to guide an autonomous surgical robot. Employing a fiducial alignment mark on the transducer to orient the robot's frame of reference and using simple thresholding algorithms to segment the 3D images, we tested the accuracy of using the scanner to automatically direct a robot arm that touched two needle tips together within a water tank. RMS measurement error was 3.8% or 1.58 mm for an average path length of 41 mm. Using these same techniques, the autonomous robot also performed simulated needle biopsies of a cyst-like lesion in a tissue phantom. This feasibility study shows the potential for 3D ultrasound guidance of an autonomous surgical robot for simple interventional tasks, including lesion biopsy and foreign body removal.
Brain morphometry in blind and sighted subjects.
Maller, Jerome J; Thomson, Richard H; Ng, Amanda; Mann, Collette; Eager, Michael; Ackland, Helen; Fitzgerald, Paul B; Egan, Gary; Rosenfeld, Jeffrey V
2016-11-01
Previous neuroimaging studies have demonstrated structural brain alterations in blind subjects, but most have focused on primary open angle glaucoma or retinopathy of prematurity, used low-field scanners, a limited number of receive channels, or have presented uncorrected results. We recruited 10 blind and 10 age and sex-matched controls to undergo high-resolution MRI using a 3T scanner and a 32-channel receive coil. We evaluated whole-brain morphological differences between the groups as well as manual segmentation of regional hippocampal volumes. There were no hippocampal volume differences between the groups. Whole-brain morphometry showed white matter volume differences between blind and sighted groups including localised larger regions in the visual cortex (occipital gyral volume and thickness) among those with blindness early in life compared to those with blindness later in life. Hence, in our patients, blindness resulted in brain volumetric differences that depend upon duration of blindness. Copyright © 2016 Elsevier Ltd. All rights reserved.
Valero, Enrique; Adan, Antonio; Cerrada, Carlos
2012-01-01
This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach, in whose field scarce publications exist. The general strategy consists of carrying out a selective and sequential segmentation from the cloud of points by means of different algorithms which depend on the information that the RFID tags provide. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results. PMID:22778609
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
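The noise and calibration measures described above reduce, per segmented region, to an intensity mean (calibration) and standard deviation (noise). A minimal sketch on a synthetic scan, with region names and HU values chosen for illustration only:

```python
import numpy as np

def qa_measures(scan_hu, region_masks):
    """Calibration (mean HU) and noise (HU standard deviation) per region."""
    return {name: (float(scan_hu[mask].mean()), float(scan_hu[mask].std()))
            for name, mask in region_masks.items()}

# synthetic scan: air around -1000 HU, aortic blood around +40 HU
rng = np.random.default_rng(1)
scan = np.zeros((32, 32))
air = np.zeros_like(scan, dtype=bool)
air[:16] = True
blood = ~air
scan[air] = -1000.0 + rng.normal(0.0, 5.0, size=air.sum())
scan[blood] = 40.0 + rng.normal(0.0, 5.0, size=blood.sum())

profile = qa_measures(scan, {"external_air": air, "aorta_blood": blood})
```

Comparing such profiles across scans is what lets the method flag calibration drift or excess noise for a given scanner and protocol.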
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-09-21
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
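The continuous AC map construction described above can be sketched in a few lines: fixed LACs for air and soft tissue, and bone μ values derived from the linear ZTE-to-HU model. The slope/intercept defaults and the HU-to-μ scaling below are illustrative placeholders, not the study's fitted values:

```python
import numpy as np

def zte_to_mu_map(zte_norm, bone_mask, air_mask,
                  slope=-2000.0, intercept=2000.0,
                  mu_soft=0.0975, mu_air=0.0):
    """Continuous 511 keV attenuation map (cm^-1) from a normalized ZTE image.

    Soft tissue and air get fixed LACs; bone HU come from the linear
    ZTE-to-HU model and are then scaled to mu.  Slope, intercept and the
    HU-to-mu scaling here are illustrative, not the paper's fitted values.
    """
    mu = np.full(zte_norm.shape, mu_soft)
    mu[air_mask] = mu_air
    hu_bone = slope * zte_norm[bone_mask] + intercept
    mu[bone_mask] = mu_soft * (1.0 + hu_bone / 1000.0)
    return mu

zte = np.array([[0.9, 0.5], [0.2, 0.7]])       # normalized ZTE intensities
bone = np.array([[False, True], [True, False]])
air = np.array([[False, False], [False, True]])
mu_map = zte_to_mu_map(zte, bone, air)
```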
Bourcier, Romain; Détraz, Lili; Serfaty, Jean Michel; Delasalle, Beatrice Guyomarch; Mirza, Mahmood; Derraz, Imad; Toulgoat, Frédérique; Naggara, Olivier; Toquet, Claire; Desal, Hubert
2017-11-01
The susceptibility vessel sign (SVS) on magnetic resonance imaging (MRI) is related to thrombus location, composition, and size in acute stroke. No previous study has determined its inter-MRI scanner variability. We aimed to compare the diagnostic accuracy in-vitro of four different MRI scanners for the characterization of histologic thrombus composition. Thirty-five manufactured thrombi analogs of different composition that were histologically categorized as fibrin-dominant, mixed, or red blood cell (RBC)-dominant were scanned on four different MRI units with T2* sequence. Nine radiologists, blinded to thrombus composition and MRI scanner model, classified twice, in a 2-week interval, the SVS of each thrombus as absent, questionable, or present. We calculated the weighted kappa with 95% confidence interval (CI), sensitivity, specificity and accuracy of the SVS on each MRI scanner to detect RBC-dominant thrombi. The SVS was present in 42%, absent in 33%, and questionable in 25% of thrombi. The interscanner agreement was moderate to good, ranging from .45 (CI: .37-.52) to .67 (CI: .61-.74). The correlation between the SVS and the thrombus composition was moderate (κ: .50 [CI: .44-.55]) to good (κ: .76 [CI: .72-.80]). Sensitivity, specificity, and accuracy to identify RBC-dominant clots were significantly different between MRI scanners (P < .001). The diagnostic accuracy of SVS to determine thrombus composition varies significantly among MRI scanners. Normalization of T2* sequences between scanners may be needed to better predict thrombus composition in multicenter studies. Copyright © 2017 by the American Society of Neuroimaging.
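The weighted kappa reported above scores agreement on an ordered scale (absent/questionable/present), penalizing far disagreements more than near ones. A minimal sketch of the linearly weighted variant on toy ratings; whether the study used linear or quadratic weights is not stated in the abstract:

```python
import numpy as np

def linear_weighted_kappa(r1, r2, n_categories):
    """Linearly weighted kappa between two ratings over 0..n_categories-1
    (e.g. SVS absent=0, questionable=1, present=2)."""
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    n = conf.sum()
    idx = np.arange(n_categories)
    weights = np.abs(np.subtract.outer(idx, idx)) / (n_categories - 1)
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / n
    return 1.0 - (weights * conf).sum() / (weights * expected).sum()

perfect = linear_weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2], 3)
partial = linear_weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 1, 0, 2, 2], 3)
```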
Fast, Automated, Scalable Generation of Textured 3D Models of Indoor Environments
2014-12-18
expensive travel and on-site visits. Different applications require models of different complexities, both with and without furniture geometry. The...environment and to localize the system in the environment over time. The datasets shown in this paper were generated by a backpack-mounted system that uses 2D...voxel is found to intersect the line segment from a scanner to a corresponding scan point. If a laser passes through a voxel, that voxel is considered
Visible and infrared imaging radiometers for ocean observations
NASA Technical Reports Server (NTRS)
Barnes, W. L.
1977-01-01
The current status of visible and infrared sensors designed for the remote monitoring of the oceans is reviewed. Emphasis is placed on multichannel scanning radiometers that are either operational or under development. Present design practices and parameter constraints are discussed. Airborne sensor systems examined include the ocean color scanner and the ocean temperature scanner. The coastal zone color scanner and advanced very high resolution radiometer are reviewed with emphasis on design specifications. Recent technological advances and their impact on sensor design are examined.
Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data
NASA Astrophysics Data System (ADS)
Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.
2017-09-01
The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques: a) voxel-based segmentation and b) mean shift segmentation were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both methods had an average detection rate of 85%, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94% and takes less than 20 seconds to process 50,000 points with an average point density of 16 points/cm^2. Meanwhile, the mean shift approach showed comparatively better counting accuracy of 95% for early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.
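The voxel-based segmentation step above starts by binning the point cloud into a regular 3D grid. A minimal sketch of that binning, assuming the cloud is an N x 3 numpy array; the subsequent grouping of occupied voxels into ear candidates is omitted:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map each 3D point to a voxel cell; returns cell -> list of point indices."""
    keys = np.floor(points / voxel_size).astype(int)
    cells = {}
    for i, key in enumerate(map(tuple, keys)):
        cells.setdefault(key, []).append(i)
    return cells

pts = np.array([
    [0.01, 0.01, 0.50],   # falls in the same cell as the next point
    [0.02, 0.02, 0.51],
    [0.33, 0.33, 0.50],   # a different cell
])
cells = voxelize(pts, voxel_size=0.05)
```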
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
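The coplanar/collinear classification above rests on the eigenvalue spectrum of a local covariance matrix: one dominant eigenvalue indicates a linear feature, two indicate a planar one. A minimal sketch using plain PCA (the paper's contribution is a robust PCA variant that resists outliers, which is not reproduced here); the tolerance values are illustrative:

```python
import numpy as np

def classify_points(points, linear_tol=0.01, planar_tol=0.01):
    """Label a neighbourhood 'linear', 'planar' or 'volumetric' from the
    eigenvalues of its covariance matrix (plain PCA for illustration)."""
    centred = points - points.mean(axis=0)
    evals = np.sort(np.linalg.eigvalsh(np.cov(centred.T)))[::-1]  # l1>=l2>=l3
    if evals[1] / evals[0] < linear_tol:
        return "linear"
    if evals[2] / evals[0] < planar_tol:
        return "planar"
    return "volumetric"

rng = np.random.default_rng(2)
t = rng.random(200)
# a beam-like (linear) and a slab-like (planar) synthetic neighbourhood
beam = np.c_[t, 0.001 * rng.standard_normal(200), 0.001 * rng.standard_normal(200)]
slab = np.c_[rng.random(200), rng.random(200), 0.001 * rng.standard_normal(200)]
beam_label = classify_points(beam)
slab_label = classify_points(slab)
```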
Systematic monitoring and evaluation of M7 scanner performance and data quality
NASA Technical Reports Server (NTRS)
Stewart, S.; Christenson, D.; Larsen, L.
1974-01-01
An investigation was conducted to provide the information required to maintain data quality of the Michigan M7 Multispectral scanner by systematic checks on specific system performance characteristics. Data processing techniques which use calibration data gathered routinely every mission have been developed to assess current data quality. Significant changes from past data quality are thus identified and attempts made to discover their causes. Procedures for systematic monitoring of scanner data quality are discussed. In the solar reflective region, calculations of Noise Equivalent Change in Radiance on a per-mission basis are compared to theoretical tape-recorder limits to provide an estimate of overall scanner performance. M7 signal/noise characteristics are examined.
Canine hippocampal formation composited into three-dimensional structure using MPRAGE.
Jung, Mi-Ae; Nahm, Sang-Soep; Lee, Min-Su; Lee, In-Hye; Lee, Ah-Ra; Jang, Dong-Pyo; Kim, Young-Bo; Cho, Zang-Hee; Eom, Ki-Dong
2010-07-01
This study was performed to anatomically illustrate the living canine hippocampal formation in three dimensions (3D), and to evaluate its relationship to surrounding brain structures. Three normal beagle dogs were scanned on an MR scanner with an inversion recovery segmented 3D gradient echo sequence (known as MP-RAGE: Magnetization Prepared Rapid Gradient Echo). The MRI data were manually segmented and reconstructed into a 3D model using the 3D Slicer software tool. From the 3D model, the spatial relationships between the hippocampal formation and surrounding structures were evaluated. With the increased spatial resolution and contrast of the MPRAGE, the canine hippocampal formation was easily depicted. The reconstructed 3D image allows easy understanding of the hippocampal contour and demonstrates the structural relationship of the hippocampal formation to surrounding structures in vivo.
Bias atlases for segmentation-based PET attenuation correction using PET-CT and MR.
Ouyang, Jinsong; Chun, Se Young; Petibon, Yoann; Bonab, Ali A; Alpert, Nathaniel; Fakhri, Georges El
2013-10-01
The aim of this study was to obtain voxel-wise PET accuracy and precision when using tissue segmentation for attenuation correction. We applied multiple thresholds to the CTs of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired. The MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs for all the patients and transformed the corresponding bias images accordingly. We then obtained mean and standard deviation bias atlases using all the registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft tissues, and bones, respectively. Accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we have found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys. This implies that three-class segmentation can be sufficient to achieve small variation of bias when imaging these three organs. Finally, we have found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs.
NASA Technical Reports Server (NTRS)
Hasell, P. G., Jr.; Peterson, L. M.; Thomson, F. J.; Work, E. A.; Kriegler, F. J.
1977-01-01
The development of an experimental airborne multispectral scanner to provide both active (laser illuminated) and passive (solar illuminated) data from a commonly registered surface scene is discussed. The system was constructed according to specifications derived in an initial program design study. The system was installed in an aircraft and test flown to produce illustrative active and passive multispectral imagery. However, data were not collected or analyzed for any specific application.
Cross-Cultural Differences in the Neural Correlates of Specific and General Recognition
Paige, Laura E.; Ksander, John C.; Johndro, Hunter A.; Gutchess, Angela H.
2017-01-01
Research suggests that culture influences how people perceive the world, which extends to memory specificity, or how much perceptual detail is remembered. The present study investigated cross-cultural differences (Americans vs. East Asians) at the time of encoding in the neural correlates of specific vs. general memory formation. Participants encoded photos of everyday items in the scanner and 48 hours later completed a surprise recognition test. The recognition test consisted of same (i.e., previously seen in scanner), similar (i.e., same name, different features), or new photos (i.e., items not previously seen in scanner). For Americans compared to East Asians, we predicted greater activation in the hippocampus and right fusiform for specific memory at recognition, as these regions were implicated previously in encoding perceptual details. Results revealed that East Asians activated the left fusiform and left hippocampus more than Americans for specific vs. general memory. Follow-up analyses ruled out alternative explanations of retrieval difficulty and familiarity for this pattern of cross-cultural differences at encoding. Results overall suggest that culture should be considered as another individual difference that affects memory specificity and modulates neural regions underlying these processes. PMID:28256199
Selecting a CT scanner for cardiac imaging: the heart of the matter.
Lewis, Maria A; Pascoal, Ana; Keevil, Stephen F; Lewis, Cornelius A
2016-09-01
Coronary angiography to assess the presence and degree of arterial stenosis is an examination now routinely performed on CT scanners. Although developments in CT technology over recent years have made great strides in improving the diagnostic accuracy of this technique, patients with certain characteristics can still be "difficult to image". The various groups will benefit from different technological enhancements depending on the type of challenge they present. Good temporal and spatial resolution, wide longitudinal (z-axis) detector coverage and high X-ray output are the key requirements of a successful CT coronary angiography (CTCA) scan. The requirement for optimal patient dose is a given. The different scanner models recommended for CTCA all excel in different aspects. The specification data presented here for these scanners and the explanation of the impact of the different features should help in making a more informed decision when selecting a scanner for CTCA.
In vivo ultrasound imaging of the bone cortex
NASA Astrophysics Data System (ADS)
Renaud, Guillaume; Kruizinga, Pieter; Cassereau, Didier; Laugier, Pascal
2018-06-01
Current clinical ultrasound scanners cannot be used to image the interior morphology of bones because they do not address the complicated physics required for exact image reconstruction. Here, we show that if the physics is properly addressed, the bone cortex can be imaged using a conventional transducer array and a programmable ultrasound scanner. We provide in vivo proof for this technique by scanning the radius and tibia of two healthy volunteers and comparing the thickness of the radius with high-resolution peripheral x-ray computed tomography. Our method assumes a medium composed of different homogeneous layers, each with its own elastic anisotropy and ultrasonic wave-speed values. The applicable values for these layers are found by optimizing image sharpness and intensity over a range of relevant values. In the image reconstruction algorithm we take wave refraction between the layers into account using a ray-tracing technique. The estimated values of the ultrasonic wave speed and anisotropy in cortical bone are in agreement with ex vivo studies reported in the literature. These parameters are of interest since they have been proposed as biomarkers for cortical bone quality. In this paper we discuss the physics involved in ultrasound imaging of bone and provide an algorithm to successfully image the first segment of cortical bone.
Zeitoun, Rania; Hussein, Manar
2017-11-01
To establish a practical approach to interpreting MDCT findings in post-operative spine cases, and to counter the mistaken belief that CT fails in the presence of instrumentation because of related artefacts. We performed a retrospective observational analysis of premier, early and late MDCT scans in 68 post-operative spine patients, with emphasis on instrument-related complications and osseous fusion status. We used a grading system to assess osseous fusion in 35 patients and further analysed the findings in failure of fusion, grade (D). We observed a variety of instrument-related complications (most commonly screws medially penetrating the pedicle) and assessed osseous fusion status in late scans. We graded 11 interbody and 14 posterolateral levels as osseous fusion failure, showing additional instrument-related complications, end-plate erosive changes, adjacent-segment spondylosis and malalignment. Modern MDCT scanners provide high-quality images and are strongly recommended for assessment of the instrumentation and the status of osseous fusion. In post-operative imaging of the spine, it is essential to know what you are looking for, in relation to the date of surgery. Advances in knowledge: Modern MDCT scanners allow assessment of instrument position and integrity and of osseous fusion status in the post-operative spine. We propose a helpful algorithm to simplify interpretation of post-operative spine imaging.
Validation of Body Volume Acquisition by Using Elliptical Zone Method.
Chiu, C-Y; Pease, D L; Fawkner, S; Sanders, R H
2016-12-01
The elliptical zone method (E-Zone) can be used to obtain reliable body volume data, including total body volume and segmental volumes, with inexpensive and portable equipment. The purpose of this research was to assess the accuracy of body volume data obtained from E-Zone by comparing them with those acquired from the 3D photonic scanning method (3DPS). 17 male participants with diverse somatotypes were recruited. Each participant was scanned twice on the same day by a 3D whole-body scanner and photographed twice for the E-Zone analysis. The body volume data acquired from 3DPS were regarded as the reference against which the accuracy of E-Zone was assessed. The relative technical error of measurement (TEM) of total body volume estimations was around 3% for E-Zone. E-Zone can estimate the segmental volumes of the upper torso, lower torso, thigh, shank, upper arm and lower arm accurately (relative TEM<10%), but the accuracy for small segments, including the neck, hand and foot, was poor. In summary, E-Zone provides a reliable, inexpensive, portable, and simple method to obtain reasonable estimates of total body volume and to indicate segmental volume distribution. © Georg Thieme Verlag KG Stuttgart · New York.
Consistent interactive segmentation of pulmonary ground glass nodules identified in CT studies
NASA Astrophysics Data System (ADS)
Zhang, Li; Fang, Ming; Naidich, David P.; Novak, Carol L.
2004-05-01
Ground glass nodules (GGNs) have proved especially problematic in lung cancer diagnosis, as despite frequently being malignant they characteristically have extremely slow rates of growth. This problem is further magnified by the small size of many of these lesions now being routinely detected following the introduction of multislice CT scanners capable of acquiring contiguous high resolution 1 to 1.25 mm sections throughout the thorax in a single breathhold period. Although segmentation of solid nodules can be used clinically to determine volume doubling times quantitatively, reliable methods for segmentation of pure ground glass nodules have yet to be introduced. Our purpose is to evaluate a newly developed computer-based segmentation method for rapid and reproducible measurements of pure ground glass nodules. 23 pure or mixed ground glass nodules were identified in a total of 8 patients by a radiologist and subsequently segmented by our computer-based method using Markov random field and shape analysis. The computer-based segmentation was initialized by a click point. Methodological consistency was assessed using the overlap ratio between 3 segmentations initialized by 3 different click points for each nodule. The 95% confidence interval on the mean of the overlap ratios proved to be [0.984, 0.998]. The computer-based method failed on two nodules that were difficult to segment even manually either due to especially low contrast or markedly irregular margins. While achieving consistent manual segmentation of ground glass nodules has proven problematic most often due to indistinct boundaries and interobserver variability, our proposed method introduces a powerful new tool for obtaining reproducible quantitative measurements of these lesions. It is our intention to further document the value of this approach with a still larger set of ground glass nodules.
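The consistency assessment above compares segmentations initialized from different click points by their overlap ratio. A minimal sketch, assuming the common intersection-over-union definition (the abstract does not state the exact formula used):

```python
import numpy as np

def overlap_ratio(seg_a, seg_b):
    """Overlap ratio of two binary segmentation masks: |A ∩ B| / |A ∪ B|."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union
```

For each nodule, the overlap ratios of the three click-point segmentations would then be pooled across nodules before computing the 95% confidence interval on the mean.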
Multiple Echo Diffusion Tensor Acquisition Technique (MEDITATE) on a 3T clinical scanner
Baete, Steven H.; Cho, Gene; Sigmund, Eric E.
2013-01-01
This paper describes the concepts and implementation of an MRI method, Multiple Echo Diffusion Tensor Acquisition Technique (MEDITATE), which is capable of acquiring apparent diffusion tensor maps in two scans on a 3T clinical scanner. In each MEDITATE scan, a set of RF pulses generates multiple echoes whose amplitudes are diffusion-weighted in both magnitude and direction by a pattern of diffusion gradients. As a result, two scans acquired with different diffusion weighting strengths suffice for accurate estimation of diffusion tensor imaging (DTI) parameters. The MEDITATE variation presented here expands previous MEDITATE approaches to adapt to the clinical scanner platform, such as exploiting longitudinal magnetization storage to reduce T2-weighting. Fully segmented multi-shot Cartesian encoding is used for image encoding. MEDITATE was tested on an isotropic diffusion phantom (agar gel), an anisotropic diffusion phantom (asparagus), and in vivo skeletal muscle in healthy volunteers with cardiac gating. Comparisons of accuracy were performed with standard twice-refocused spin echo (TRSE) DTI in each case, and good quantitative agreement was found between diffusion eigenvalues, mean diffusivity, and fractional anisotropy derived from TRSE-DTI and from the MEDITATE sequence. Orientation patterns were correctly reproduced in both isotropic and anisotropic phantoms, and approximately so for in vivo imaging. This illustrates that the MEDITATE method of compressed diffusion encoding is feasible on the clinical scanner platform. With future development and employment of appropriate view-sharing image encoding, this technique may be used in clinical applications requiring time-sensitive acquisition of DTI parameters, such as dynamic DTI in muscle. PMID:23828606
Color accuracy and reproducibility in whole slide imaging scanners
Shrestha, Prarthana; Hulsken, Bas
2014-01-01
We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match the actual slide color and the inter-scanner variation is minimal. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner-reproduced colors and the reference colorimetric values of the phantom patches, is improved on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent in the visual color quality of the tissue scans. PMID:26158041
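The matrix-based calibration step can be illustrated with a least-squares fit of a 3×3 matrix mapping scanner RGB to reference tristimulus values. This is a hedged sketch with synthetic patch data, not Philips' actual UFS pipeline; function names and the number of patches are illustrative:

```python
import numpy as np

def fit_color_matrix(scanner_rgb, reference_xyz):
    """Least-squares 3x3 matrix M such that reference ≈ M @ rgb per patch.

    scanner_rgb, reference_xyz: (N, 3) arrays of corresponding patch values
    (e.g. measured from an IT8 target)."""
    X, _, _, _ = np.linalg.lstsq(scanner_rgb, reference_xyz, rcond=None)
    return X.T

def apply_color_matrix(M, rgb):
    """Apply the fitted correction matrix to (N, 3) scanner RGB values."""
    return np.asarray(rgb) @ M.T
```

A full ICC workflow would additionally convert the corrected values to sRGB with an absolute colorimetric rendering intent; only the matrix fit is shown here.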
Calibration procedure for a laser triangulation scanner with uncertainty evaluation
NASA Astrophysics Data System (ADS)
Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio
2016-11-01
Most of the low-cost 3D scanning devices currently available on the market are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper details a thorough methodology for calibrating a 3D scanner and assessing its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and is applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by applying the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.
Transportation or CT scanners: a theory and method of health resources allocation.
Greenwald, H P; Woodward, J M; Berg, D H
1979-01-01
Cost containment and access to appropriate care are the two most frequently discussed issues in contemporary health policy. Conceiving of the health services available in specific regions as "packages" of diverse items, the authors of this article consider the economic trade-offs among the various resources needed for appropriate care. In the discussion that follows, we examine the trade-offs between two divergent offerings of the health care system: high technology medicine and support services. Specifically, we examine several strategies designed to achieve an optimal mix of investments in CT scanners and transportation resources in the South Chicago region. Using linear programming as a method for examining these options, the authors found that 1) the proper location of CT scanners is as important for cost containment as their optimal number, and 2) excess capacity in the utilization of a single resource--CT scanners--need not imply inefficiency in the overall delivery of the service. These findings help demonstrate the importance of viewing health care as a package of interrelated services, both for achieving cost containment and for providing access to appropriate care. PMID:391772
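The linear-programming trade-off analysis described above can be sketched as a small resource-mix problem. All costs and coverage coefficients below are hypothetical illustrations, not the paper's South Chicago data:

```python
from scipy.optimize import linprog

# Hypothetical toy model: choose scanner count x0 and transport-unit count x1
# to minimize cost while meeting a regional access target.
cost = [500.0, 50.0]  # cost per scanner, per transport unit (arbitrary units)
# Coverage: each scanner contributes 20 "access units", each transport unit 5;
# require at least 100 in total, written as -20*x0 - 5*x1 <= -100 for linprog.
A_ub = [[-20.0, -5.0]]
b_ub = [-100.0]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# With these made-up numbers, transport is the cheaper source of access
# (50/5 = 10 per unit vs. 500/20 = 25), so the optimum buys transport only.
```

In a realistic model the constraint set would also encode scanner locations and travel times, which is what drives the paper's finding that location matters as much as count.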
NASA Astrophysics Data System (ADS)
Wang, Lei; Strehlow, Jan; Rühaak, Jan; Weiler, Florian; Diez, Yago; Gubern-Merida, Albert; Diekmann, Susanne; Laue, Hendrik; Hahn, Horst K.
2015-03-01
In breast cancer screening for high-risk women, follow-up magnetic resonance images (MRI) are acquired with a time interval ranging from several months up to a few years. Prior MRI studies may provide additional clinical value when examining the current one and thus have the potential to increase sensitivity and specificity of screening. To build a spatial correlation between suspicious findings in both current and prior studies, a reliable alignment method between follow-up studies is desirable. However, long time interval, different scanners and imaging protocols, and varying breast compression can result in a large deformation, which challenges the registration process. In this work, we present a fast and robust spatial alignment framework, which combines automated breast segmentation and current-prior registration techniques in a multi-level fashion. First, fully automatic breast segmentation is applied to extract the breast masks that are used to obtain an initial affine transform. Then, a non-rigid registration algorithm using normalized gradient fields as similarity measure together with curvature regularization is applied. A total of 29 subjects and 58 breast MR images were collected for performance assessment. To evaluate the global registration accuracy, the volume overlap and boundary surface distance metrics are calculated, resulting in an average Dice Similarity Coefficient (DSC) of 0.96 and root mean square distance (RMSD) of 1.64 mm. In addition, to measure local registration accuracy, for each subject a radiologist annotated 10 pairs of markers in the current and prior studies representing corresponding anatomical locations. The average distance error of marker pairs dropped from 67.37 mm to 10.86 mm after applying registration.
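The volume-overlap metric reported above, the Dice Similarity Coefficient, can be computed as in this minimal sketch (the standard definition, not the authors' code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Here the masks would be the registered breast segmentations of the current and prior studies; a DSC of 0.96 indicates near-complete overlap.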
Vaz de Souza, Daniel; Schirru, Elia; Mannocci, Francesco; Foschi, Federico; Patel, Shanon
2017-01-01
The aim of this study was to compare the diagnostic efficacy of 2 cone-beam computed tomographic (CBCT) units with parallax periapical (PA) radiographs for the detection and classification of simulated external cervical resorption (ECR) lesions. Simulated ECR lesions were created on 13 mandibular teeth from 3 human dry mandibles. PA and CBCT scans were taken using 2 different units, Kodak CS9300 (Carestream Health Inc, Rochester, NY) and Morita 3D Accuitomo 80 (J Morita, Kyoto, Japan), before and after the creation of the ECR lesions. The lesions were then classified according to Heithersay's classification and their position on the root surface. Sensitivity, specificity, positive predictive values, negative predictive values, and receiver operating characteristic curves, as well as the reproducibility of each technique, were determined for diagnostic accuracy. The area under the receiver operating characteristic curve for diagnostic accuracy for PA radiography and the Kodak and Morita CBCT scanners was 0.872, 0.99, and 0.994, respectively. The sensitivity and specificity of both CBCT scanners were significantly better than PA radiography (P < .001). There was no statistical difference between the sensitivity and specificity of the 2 scanners. The percentage of correct diagnoses according to tooth type was 87.4% for the Kodak scanner, 88.3% for the Morita scanner, and 48.5% for PA radiography. The ECR lesions were correctly identified according to the tooth surface in 87.8% of Kodak, 89.1% of Morita, and 49.4% of PA cases. The ECR lesions were correctly classified according to the Heithersay classification in 70.5% of Kodak, 69.2% of Morita, and 39.7% of PA cases. This study revealed that both CBCT scanners tested were equally accurate in diagnosing ECR and significantly better than PA radiography. CBCT scans were more likely to correctly categorize ECR according to the Heithersay classification compared with parallax PA radiographs.
Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tseytlin, Mark; Stolin, Alexander V.; Guggilapu, Priyaankadevi; Bobko, Andrey A.; Khramtsov, Valery V.; Tseytlin, Oxana; Raylman, Raymond R.
2018-05-01
The advent of hybrid scanners, combining complementary modalities, has revolutionized the application of advanced imaging technology to clinical practice and biomedical research. In this project, we investigated the melding of two complementary, functional imaging methods: positron emission tomography (PET) and electron paramagnetic resonance imaging (EPRI). PET radiotracers can provide important information about cellular parameters, such as glucose metabolism, while EPR probes can assess the tissue microenvironment, measuring oxygenation and pH, for example. Therefore, a combined PET/EPRI scanner promises to provide new insights not attainable with current imagers by simultaneous acquisition of multiple components of tissue microenvironments. To explore the simultaneous acquisition of PET and EPR images, a prototype system was created by combining two existing scanners. Specifically, a silicon photomultiplier (SiPM)-based PET scanner ring designed as a portable scanner was combined with an EPRI scanner designed for the imaging of small animals. The ability of the system to obtain simultaneous images was assessed with a small phantom consisting of four cylinders containing both a PET tracer and an EPR spin probe. The resulting images demonstrated the ability to obtain contemporaneous PET and EPR images without cross-modality interference. Given the promising results from this initial investigation, the next step in this project is the construction of the next-generation pre-clinical PET/EPRI scanner for multi-parametric assessment of physiologically important parameters of tissue microenvironments.
NASA Astrophysics Data System (ADS)
Leng, Shuai; Zhou, Wei; Yu, Zhicong; Halaweish, Ahmed; Krauss, Bernhard; Schmidt, Bernhard; Yu, Lifeng; Kappler, Steffen; McCollough, Cynthia
2017-09-01
Photon-counting computed tomography (PCCT) uses a photon counting detector to count individual photons and allocate them to specific energy bins by comparing photon energy to preset thresholds. This enables simultaneous multi-energy CT with a single source and detector. Phantom studies were performed to assess the spectral performance of a research PCCT scanner by assessing the accuracy of derived image sets. Specifically, we assessed the accuracy of iodine quantification in iodine map images and of CT numbers in virtual monoenergetic images (VMI). Vials containing iodine at five known concentrations were scanned on the PCCT scanner after being placed in phantoms representing the attenuation of different patient sizes. For comparison, the same vials and phantoms were also scanned on 2nd and 3rd generation dual-source, dual-energy scanners. After material decomposition, iodine maps were generated, from which iodine concentration was measured for each vial and phantom size and compared with the known concentration. Additionally, VMIs were generated and CT number accuracy was compared to the reference standard, which was calculated based on known iodine concentration and attenuation coefficients at each keV obtained from the U.S. National Institute of Standards and Technology (NIST). Results showed accurate iodine quantification (root mean square error of 0.5 mgI/cc) and accurate VMI CT numbers (percentage error of 8.9%) using the PCCT scanner. The overall performance of the PCCT scanner, in terms of iodine quantification and VMI CT number accuracy, was comparable to that of EID-based dual-source, dual-energy scanners.
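The iodine-quantification error metric above, root mean square error against known concentrations, is a straightforward computation; the concentrations below are made-up illustrative values, not the study's measurements:

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error between measured and reference values."""
    m = np.asarray(measured, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((m - r) ** 2)))

# Hypothetical vial concentrations (mgI/cc): known vs. measured from iodine maps.
known = [2.0, 5.0, 10.0, 15.0, 20.0]
measured = [2.3, 4.8, 10.4, 14.6, 20.5]
iodine_rmse = rmse(measured, known)
```

The same function applied to VMI CT numbers against the NIST-derived reference values would yield the CT number accuracy figure.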
A new methodological approach for PET implementation in radiotherapy treatment planning.
Bellan, Elena; Ferretti, Alice; Capirci, Carlo; Grassetto, Gaia; Gava, Marcello; Chondrogiannis, Sotirios; Virdis, Graziella; Marzola, Maria Cristina; Massaro, Arianna; Rubello, Domenico; Nibale, Otello
2012-05-01
In this paper, a new methodological approach to using PET information in radiotherapy treatment planning is discussed. Computed tomography (CT) represents the primary modality for planning personalized radiation treatment, because it provides the basic electron density map for correct dose calculation. If PET scanning is also performed, it is typically coregistered with the CT study. This operation can be executed automatically by a hybrid PET/CT scanner or, if the PET and CT imaging sets have been acquired on different equipment, by a dedicated module of the radiotherapy treatment planning system. Both approaches have disadvantages: in the first case, the bore of a PET/CT system generally used in clinical practice often does not allow the use of certain bulky devices for patient immobilization in radiotherapy, whereas in the second case the result can be affected by limitations in window/level visualization of two different image modalities, and the displayed PET volumes can appear unrelated to the actual uptake in the patient. To overcome these problems, a specific procedure has been studied at our centre and tested in 30 patients, achieving good precision in target contouring. The process consists of segmentation of the biological target volume on a dedicated PET/CT console and its export to a dedicated radiotherapy system, where an image registration between the CT images acquired by the PET/CT scanner and a large-bore CT is performed. The planning target volume is contoured only on the large-bore CT and is used for virtual simulation, to identify permanent skin markers on the patient.
Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images
Veta, Mitko; van Diest, Paul J.; Kornegoor, Robert; Huisman, André; Viergever, Max A.; Pluim, Josien P. W.
2013-01-01
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. PMID:23922958
Cartwheel projections of segmented pulmonary vasculature for the detection of pulmonary embolism
NASA Astrophysics Data System (ADS)
Kiraly, Atilla P.; Naidich, David P.; Novak, Carol L.
2005-04-01
Pulmonary embolism (PE) detection via contrast-enhanced computed tomography (CT) images is an increasingly important topic of research. Accurate identification of PE is of critical importance in determining the need for further treatment. However, current multi-slice CT scanners provide datasets typically containing 600 or more images per patient, making it desirable to have a visualization method to help radiologists focus directly on potential candidates that might otherwise have been overlooked. This is especially important when assessing the ability of CT to identify smaller, sub-segmental emboli. We propose a cartwheel projection approach to PE visualization that computes slab projections of the original data aided by vessel segmentation. Previous research on slab visualization for PE has utilized the entire volumetric dataset, requiring thin slabs and necessitating the use of maximum intensity projection (MIP). Our use of segmentation within the projection computation allows the use of thicker slabs than previous methods, as well as the ability to employ visualization variations that are only possible with segmentation. Following automatic segmentation of the pulmonary vessels, slabs may be rotated around the X-, Y- or Z-axis. These slabs are rendered by preferentially using voxels within the lung vessels. This effectively eliminates distracting information not relevant to diagnosis, lessening the chance of overlooking a subtle embolus and minimizing the time spent evaluating false positives. The ability to employ thicker slabs means fewer images need to be evaluated, yielding a more efficient workflow.
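The segmentation-restricted slab projection described above can be sketched in NumPy: take one axial slab, ignore every voxel outside the vessel mask, then project. The function name and toy volume below are illustrative, not from the paper:

```python
import numpy as np

def vessel_slab_mip(volume: np.ndarray, vessel_mask: np.ndarray,
                    z0: int, thickness: int, background: float = 0.0) -> np.ndarray:
    """Maximum intensity projection of one axial slab, restricted to voxels
    inside the vessel segmentation; non-vessel voxels are suppressed."""
    slab = volume[z0:z0 + thickness].astype(float)
    mask = vessel_mask[z0:z0 + thickness].astype(bool)
    masked = np.where(mask, slab, -np.inf)            # exclude non-vessel voxels
    mip = masked.max(axis=0)
    return np.where(np.isinf(mip), background, mip)   # columns with no vessel -> background

# toy volume: uniform soft tissue plus one contrast-filled vessel voxel
vol = np.full((4, 3, 3), 100.0)
vol[1, 1, 1] = 300.0
mask = np.zeros_like(vol, dtype=bool)
mask[1, 1, 1] = True
print(vessel_slab_mip(vol, mask, z0=0, thickness=4))
```

Because everything outside the mask is suppressed rather than blended in, the slab can be much thicker than a conventional MIP before surrounding tissue obscures the vessels.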
Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José
2016-02-01
We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
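The leader-follower clustering named in this abstract assigns each time-activity curve (TAC) to the nearest existing cluster when it is close enough, and otherwise starts a new cluster. A hedged sketch; the distance threshold and toy curves are assumptions, not the study's settings:

```python
import numpy as np

def leader_follower(tacs: np.ndarray, threshold: float) -> np.ndarray:
    """Cluster time-activity curves (rows of `tacs`): a curve joins the nearest
    cluster if its Euclidean distance to that cluster's running mean is below
    `threshold`; otherwise it becomes the leader of a new cluster."""
    leaders = []   # running mean TAC of each cluster
    counts = []    # number of members per cluster
    labels = np.empty(len(tacs), dtype=int)
    for i, tac in enumerate(tacs):
        if leaders:
            d = [np.linalg.norm(tac - m) for m in leaders]
            k = int(np.argmin(d))
            if d[k] < threshold:
                counts[k] += 1
                leaders[k] += (tac - leaders[k]) / counts[k]  # update running mean
                labels[i] = k
                continue
        leaders.append(tac.astype(float).copy())
        counts.append(1)
        labels[i] = len(leaders) - 1
    return labels

# toy data: two distinct kinetic patterns (washout-like vs. accumulating)
fast = np.array([10.0, 5.0, 2.0, 1.0])
slow = np.array([1.0, 3.0, 6.0, 8.0])
tacs = np.vstack([fast, fast + 0.2, slow, slow - 0.1])
print(leader_follower(tacs, threshold=2.0))  # [0 0 1 1]
```

Pixels grouped this way yield regional TACs (input function, tumor) without manual delineation, which is what the study validated against.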
Hara, Takanori; Urikura, Atsushi; Ichikawa, Katsuhiro; Hoshino, Takashi; Nishimaru, Eiji; Niwa, Shinji
2016-04-01
To analyse the temporal resolution (TR) of modern computed tomography (CT) scanners using the impulse method, and to assess the actual maximum TR of the respective helical acquisition modes. For the helical acquisition modes of a 128-slice dual-source CT (DSCT) scanner and a 320-row area-detector CT (ADCT) scanner, we measured the TRs of various acquisition combinations of pitch factor (P) and gantry rotation time (R). The TR of the helical acquisition modes of the 128-slice DSCT scanner continuously improved with a shorter gantry rotation time and a greater pitch factor. However, for the 320-row ADCT scanner, the TR with a pitch factor of <1.0 was almost equal to the gantry rotation time, whereas with a pitch factor of >1.0 it was approximately one half of the gantry rotation time. The maximum TR values of the single- and dual-source helical acquisition modes of the 128-slice DSCT scanner were 0.138 (R/P = 0.285/1.5) and 0.074 s (R/P = 0.285/3.2), and the maximum TR values of the 64 × 0.5- and 160 × 0.5-mm detector configurations of the helical acquisition modes of the 320-row ADCT scanner were 0.120 (R/P = 0.275/1.375) and 0.195 s (R/P = 0.3/0.6), respectively. Because the TR of a CT scanner is not accurately depicted in the specifications of the individual scanner, appropriate acquisition conditions should be determined based on actual TR measurements. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Airborne multispectral data collection
NASA Technical Reports Server (NTRS)
Hasell, P. G., Jr.
1974-01-01
Multispectral mapping accomplishments using the M7 airborne scanner are summarized. The M7 system is described and overall results of specific data collection flight operations since June 1971 are reviewed. A major advantage of the M7 system is that all spectral bands of the scanner are in common spatial registration, whereas in the M5 they were not.
Radiation dose and cancer risk estimates in helical CT for pulmonary tuberculosis infections
NASA Astrophysics Data System (ADS)
Adeleye, Bamise; Chetty, Naven
2017-12-01
The preference for computed tomography (CT) for the clinical assessment of pulmonary tuberculosis (PTB) infections has increased the concern about the potential risk of cancer in exposed patients. In this study, we investigated the correlation between cancer risk and radiation doses from different CT scanners, assuming an equivalent scan protocol. Radiation doses from three 16-slice units were estimated using the CT-Expo dosimetry software version 2.4 and a standard CT scan protocol for patients with suspected PTB infections. The lifetime risk of cancer for each scanner was determined using the methodology outlined in the BEIR VII report. Organ doses were significantly different (P < 0.05) between the scanners. The calculated effective dose for scanner H2 is 34% and 37% higher than for scanners H3 and H1, respectively. A high and statistically significant correlation between radiation dose and estimated lifetime cancer risk was observed for both male (r2 = 0.943, P < 0.05) and female patients (r2 = 0.989, P < 0.05). The risk variation between the scanners was slightly higher than 2% for all ages but was much smaller at specific ages for male and female patients (0.2% and 0.7%, respectively). These variations indicate that the use of scanner-specific optimization protocols is imperative.
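The BEIR VII methodology rests on a linear no-threshold assumption, so a lifetime attributable risk estimate is, at its core, a sum of organ doses weighted by organ-, age-, and sex-specific risk coefficients. A purely hypothetical illustration of that scaling; the coefficients below are placeholders, NOT values from the BEIR VII tables:

```python
# Placeholder organ risk coefficients (excess cases per person per Gy).
# These are invented for illustration and must not be used for dosimetry.
PLACEHOLDER_RISK_PER_GY = {"lung": 1.0e-2, "breast": 0.9e-2, "thyroid": 0.2e-2}

def lifetime_attributable_risk(organ_doses_gy: dict) -> float:
    """Linear (LNT) scaling: sum of organ dose x organ risk coefficient."""
    return sum(PLACEHOLDER_RISK_PER_GY[organ] * dose
               for organ, dose in organ_doses_gy.items())

# hypothetical organ doses (Gy) from one chest CT scan
risk = lifetime_attributable_risk({"lung": 0.012, "breast": 0.010, "thyroid": 0.004})
print(f"excess lifetime risk per person: {risk:.2e}")
```

In the actual BEIR VII framework the coefficients additionally depend on age at exposure, attained age, and sex, which is why the study reports separate male and female risk curves.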
Validity of Automated Choroidal Segmentation in SS-OCT and SD-OCT.
Zhang, Li; Buitendijk, Gabriëlle H S; Lee, Kyungmoo; Sonka, Milan; Springelkamp, Henriët; Hofman, Albert; Vingerling, Johannes R; Mullins, Robert F; Klaver, Caroline C W; Abràmoff, Michael D
2015-05-01
To evaluate the validity of a novel fully automated three-dimensional (3D) method capable of segmenting the choroid from two different optical coherence tomography scanners: swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT). One hundred eight subjects were imaged using SS-OCT and SD-OCT. A 3D method was used to segment the choroid and quantify the choroidal thickness along each A-scan. The segmented choroidal posterior boundary was evaluated by comparison to manual segmentation. Differences were assessed to test the agreement between segmentation results of the same subject. Choroidal thickness was defined as the Euclidean distance between Bruch's membrane and the choroidal posterior boundary, and reproducibility was analyzed using automatically and manually determined choroidal thicknesses. For SS-OCT, the average choroidal thickness of the entire 6 × 6-mm macular region was 219.5 μm (95% confidence interval [CI], 204.9-234.2 μm), and for SD-OCT it was 209.5 μm (95% CI, 197.9-221.0 μm). The agreement between automated and manual segmentations was high: Average relative difference was less than 5 μm, and average absolute difference was less than 15 μm. Reproducibility of choroidal thickness between repeated SS-OCT scans was high (coefficient of variation [CV] of 3.3%, intraclass correlation coefficient [ICC] of 0.98), and differences between SS-OCT and SD-OCT results were small (CV of 11.0%, ICC of 0.73). We have developed a fully automated 3D method for segmenting the choroid and quantifying choroidal thickness along each A-scan. The method yielded high validity. Our method can be used reliably to study local choroidal changes and may improve the diagnosis and management of patients with ocular diseases in which the choroid is affected.
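The reproducibility statistics quoted above (CV and ICC) can be sketched in NumPy. This uses a within-subject CV and a one-way random-effects ICC(1,1) formulation on toy thickness data; the exact ICC variant and data are assumptions, not the study's:

```python
import numpy as np

def within_subject_cv(x: np.ndarray) -> float:
    """RMS of each subject's (SD / mean) over an n_subjects x k_repeats array."""
    per_subject = x.std(axis=1, ddof=1) / x.mean(axis=1)
    return float(np.sqrt(np.mean(per_subject ** 2)))

def icc_oneway(x: np.ndarray) -> float:
    """One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k-1) * MSW)."""
    n, k = x.shape
    subj_means = x.mean(axis=1)
    grand = x.mean()
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)       # between-subject MS
    msw = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject MS
    return float((msb - msw) / (msb + (k - 1) * msw))

# toy repeated choroidal-thickness measurements (um): 4 subjects x 2 scans
x = np.array([[220.0, 224.0], [195.0, 192.0], [260.0, 255.0], [210.0, 214.0]])
print(within_subject_cv(x), icc_oneway(x))
```

A small CV with an ICC near 1, as in this toy example, is the pattern the study reports for repeated SS-OCT scans.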
Frisoni, Giovanni B; Jack, Clifford R; Bocchetta, Martina; Bauer, Corinna; Frederiksen, Kristian S; Liu, Yawu; Preboske, Gregory; Swihart, Tim; Blair, Melanie; Cavedo, Enrica; Grothe, Michel J; Lanfredi, Mariangela; Martinez, Oliver; Nishikawa, Masami; Portegies, Marileen; Stoub, Travis; Ward, Chadwich; Apostolova, Liana G; Ganzola, Rossana; Wolf, Dominik; Barkhof, Frederik; Bartzokis, George; DeCarli, Charles; Csernansky, John G; deToledo-Morrell, Leyla; Geerlings, Mirjam I; Kaye, Jeffrey; Killiany, Ronald J; Lehéricy, Stephane; Matsuda, Hiroshi; O'Brien, John; Silbert, Lisa C; Scheltens, Philip; Soininen, Hilkka; Teipel, Stefan; Waldemar, Gunhild; Fellgiebel, Andreas; Barnes, Josephine; Firbank, Michael; Gerritsen, Lotte; Henneman, Wouter; Malykhin, Nikolai; Pruessner, Jens C; Wang, Lei; Watson, Craig; Wolf, Henrike; deLeon, Mony; Pantel, Johannes; Ferrari, Clarissa; Bosco, Paolo; Pasqualetti, Patrizio; Duchesne, Simon; Duvernoy, Henri; Boccardi, Marina
2015-02-01
An international Delphi panel has defined a harmonized protocol (HarP) for the manual segmentation of the hippocampus on MR. The aim of this study is to study the concurrent validity of the HarP toward local protocols, and its major sources of variance. Fourteen tracers segmented 10 Alzheimer's Disease Neuroimaging Initiative (ADNI) cases scanned at 1.5 T and 3T following local protocols, qualified for segmentation based on the HarP through a standard web-platform and resegmented following the HarP. The five most accurate tracers followed the HarP to segment 15 ADNI cases acquired at three time points on both 1.5 T and 3T. The agreement among tracers was relatively low with the local protocols (absolute left/right ICC 0.44/0.43) and much higher with the HarP (absolute left/right ICC 0.88/0.89). On the larger set of 15 cases, the HarP agreement within (left/right ICC range: 0.94/0.95 to 0.99/0.99) and among tracers (left/right ICC range: 0.89/0.90) was very high. The volume variance due to different tracers was 0.9% of the total, comparing favorably to variance due to scanner manufacturer (1.2), atrophy rates (3.5), hemispheric asymmetry (3.7), field strength (4.4), and significantly smaller than the variance due to atrophy (33.5%, P < .001), and physiological variability (49.2%, P < .001). The HarP has high measurement stability compared with local segmentation protocols, and good reproducibility within and among human tracers. Hippocampi segmented with the HarP can be used as a reference for the qualification of human tracers and automated segmentation algorithms. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James
2016-01-01
Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues, resulting in Dice Coefficients of 0.0-0.2. We developed a low-resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error, providing an atlas that can be used to guide further research using ovine brains as a model; it is hosted online for public access.
Multispectral Scanner for Monitoring Plants
NASA Technical Reports Server (NTRS)
Gat, Nahum
2004-01-01
A multispectral scanner has been adapted to capture spectral images of living plants under various types of illumination for purposes of monitoring the health of, or monitoring the transfer of genes into, the plants. In a health-monitoring application, the plants are illuminated with full-spectrum visible and near infrared light and the scanner is used to acquire a reflected-light spectral signature known to be indicative of the health of the plants. In a gene-transfer-monitoring application, the plants are illuminated with blue or ultraviolet light and the scanner is used to capture fluorescence images from a green fluorescent protein (GFP) that is expressed as a result of the gene transfer. The choice of wavelength of the illumination and the wavelength of the fluorescence to be monitored depends on the specific GFP.
Feasibility of Clinician-Facilitated Three-Dimensional Printing of Synthetic Cranioplasty Flaps.
Panesar, Sandip S; Belo, Joao Tiago A; D'Souza, Rhett N
2018-05-01
Integration of three-dimensional (3D) printing and stereolithography into clinical practice is in its nascence, and concepts may be esoteric to the practicing neurosurgeon. Currently, creation of 3D printed implants involves recruitment of offsite third parties. We explored a range of 3D scanning and stereolithographic techniques to create patient-specific synthetic implants using an onsite, clinician-facilitated approach. We simulated bilateral craniectomies in a single cadaveric specimen. We devised 3 methods of creating stereolithographically viable virtual models from removed bone. First, we used preoperative and postoperative computed tomography scanner-derived bony window models from which the flap was extracted. Second, we used an entry-level 3D light scanner to scan and render models of the individual bone pieces. Third, we used an arm-mounted 3D laser scanner to create virtual models using a real-time approach. Flaps were printed only from the computed tomography scanner and laser scanner models, in an ultraviolet-cured polymer. The light scanner did not produce suitable virtual models for printing. The computed tomography scanner-derived models required extensive postfabrication modification to fit the existing defects. The laser scanner models assumed good fit within the defects without any modification. The methods presented varying levels of complexity in acquisition and model rendering. Each technique required hardware at price points varying from $0 to approximately $100,000. The laser scanner models produced the best quality parts, which had near-perfect fit with the original defects. Potential neurosurgical applications of this technology are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
Planning guidelines for computerized transaxial tomography (CT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-11-23
Guidelines to assist local communities in review and decisionmaking related to computerized tomography (CT) 'head' and 'whole body' scanner needs and placement are presented. Although medical benefits for head scanning are well established, the proper role of whole body scanning in relation to other diagnostic procedures has not been determined. It is recommended that a 20 percent weighted consideration could be given to a potential CT scanner applicant's present capabilities in diagnostic 'body' work. The following guidelines for CT are recommended for use in assessing work qualifications of potential CT scanner applicants: (1) The facility must have an active neurosurgical service, with a geographically full-time board-certified neurosurgeon and at least 50 intracranial procedures performed annually. (2) The facility must have an active neurological service, with a geographically full-time board-certified neurologist. (3) The facility must have on staff a qualified neuroradiologist. It is recommended that the CT scanner utilization level be a minimum of 3,000 examinations per year per unit of new equipment. The applicant must submit financial data and must be committed to providing care to all patients, independent of ability to pay. The applicant must submit letters from area hospitals agreeing to utilize the scanner services. Additional criteria are given for body scanning work and for the number of scanners in a specific area. Detailed information is presented about scanner development and use in southeastern Pennsylvania and neighboring planning areas, and the cost of scanner operations is compared with revenues. The CT scanner committee membership is included.
NASA Astrophysics Data System (ADS)
Raylman, Raymond R.; Stolin, Alexander V.; Sompalli, Prashanth; Randall, Nicole Bunda; Martone, Peter F.; Clinthorne, Neal H.
2015-10-01
Staging of head and neck cancer (HNC) is often hindered by the limited resolution of standard whole body PET scanners, which can make it challenging to detect small areas of metastatic disease in regional lymph nodes and accurately delineate tumor boundaries. In this investigation, the performance of a proposed high resolution PET/CT scanner designed specifically for imaging of the head and neck region was explored. The goal is to create a dedicated PET/CT system that will enhance the staging and treatment of HNCs. Its performance was assessed by simulating the scanning of a three-dimensional Rose-Burger contrast phantom. To extend the results from the simulation studies, an existing scanner with a similar geometry to the dedicated system and a whole body, clinical PET/CT scanner were used to image a Rose-Burger contrast phantom and a phantom simulating the neck of an HNC patient (out-of-field-of-view sources of activity were not included). Images of the contrast-detail phantom acquired with the Breast-PET/CT scanner and the simulated head and neck scanner both produced larger object contrasts than the images created by the clinical scanner. Images of a neck phantom acquired with the Breast-PET/CT scanner permitted the identification of all of the simulated metastases, while it was not possible to identify any of the simulated metastases with the clinical scanner. The initial results from this study demonstrate the potential benefits of high-resolution PET systems for improving the diagnosis and treatment of HNC.
Tung, Matthew K; Cameron, James D; Casan, Joshua M; Crossett, Marcus; Troupis, John M; Meredith, Ian T; Seneviratne, Sujith K
2013-01-01
Minimization of radiation exposure remains an important subject that occurs in parallel with advances in scanner technology. We report our experience of evolving radiation dose and its determinants after the introduction of 320-multidetector row cardiac CT within a single tertiary cardiology referral service. Four cohorts of consecutive patients (total 525 scans), who underwent cardiac CT at defined time points as early as 2008, are described. These include a cohort just after scanner installation, after 2 upgrades of the operating system, and after introduction of an adaptive iterative image reconstruction algorithm. The proportions of nondiagnostic coronary artery segments and studies with nondiagnostic segments were compared between cohorts. Significant reductions were observed in median radiation doses in all cohorts compared with the initial cohort (P < .001). Median dose-length product fell from 944 mGy · cm (interquartile range [IQR], 567.3-1426.5 mGy · cm) to 156 mGy · cm (IQR, 99.2-265.0 mGy · cm). Although the proportion of prospectively triggered scans has increased, reductions in radiation dose have occurred independently of distribution of scan formats. In multiple regression that combined all groups, determinants of dose-length product were tube output, the number of cardiac cycles scanned, tube voltage, scan length, scan format, body mass index, phase width, and heart rate (adjusted R(2) = 0.85, P < .001). The proportion of nondiagnostic coronary artery segments was slightly increased in group 4 (2.9%; P < .01). While maintaining diagnostic quality in 320-multidetector row cardiac CT, the radiation dose has decreased substantially because of a combination of dose-reduction protocols and technical improvements. Continued minimization of radiation dose will increase the potential for cardiac CT to expand as a cardiac imaging modality. Copyright © 2013 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
A comparison of the accuracy of intraoral scanners using an intraoral environment simulator.
Park, Hye-Nan; Lim, Young-Jun; Yi, Won-Jin; Han, Jung-Suk; Lee, Seung-Pyo
2018-02-01
The aim of this study was to design an intraoral environment simulator and to assess the accuracy of two intraoral scanners using the simulator. A box-shaped intraoral environment simulator was designed to simulate two specific intraoral environments. The cast was scanned 10 times by Identica Blue (MEDIT, Seoul, South Korea), TRIOS (3Shape, Copenhagen, Denmark), and CS3500 (Carestream Dental, Georgia, USA) scanners in the two simulated groups. The distances between the left and right canines (D3), first molars (D6), second molars (D7), and the left canine and left second molar (D37) were measured. The distance data were analyzed by the Kruskal-Wallis test. The differences in intraoral environments were not statistically significant (P > .05). Between intraoral scanners, statistically significant differences (P < .05) were revealed by the Kruskal-Wallis test with regard to D3 and D6. No difference due to the intraoral environment was revealed. The simulator will contribute to the higher accuracy of intraoral scanners in the future.
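A Kruskal-Wallis comparison of repeated distance measurements from several scanners, as used above, is a one-liner with SciPy. The measurement values below are illustrative, not the study's data:

```python
from scipy import stats

# Hypothetical repeated D3 (intercanine distance, mm) measurements
# from three scanners; invented for illustration.
scanner_a = [34.91, 34.95, 34.88, 34.93, 34.90]
scanner_b = [35.10, 35.14, 35.08, 35.12, 35.11]
scanner_c = [34.99, 35.02, 34.97, 35.01, 35.00]

# Non-parametric test on the three independent samples
h, p = stats.kruskal(scanner_a, scanner_b, scanner_c)
print(f"H = {h:.2f}, p = {p:.4f}")
if p < 0.05:
    print("distances differ significantly between scanners")
```

Because the test is rank-based, it does not assume the distance errors are normally distributed, which suits small repeated-scan samples like these.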
Yasaka, Koichiro; Akai, Hiroyuki; Mackin, Dennis; Court, Laurence; Moros, Eduardo; Ohtomo, Kuni; Kiryu, Shigeru
2017-05-01
Quantitative computed tomography (CT) texture analyses of images with and without filtration are gaining attention as a means to capture the heterogeneity of tumors. The aim of this study was to investigate how quantitative texture parameters computed with image filtering vary among different CT scanners, using a phantom developed for radiomics studies. A phantom consisting of 10 different cartridges with various textures was scanned under 6 different scanning protocols using four CT scanners from four different vendors. CT texture analyses were performed for both unfiltered images and filtered images (using a Laplacian of Gaussian spatial band-pass filter) featuring fine, medium, and coarse textures. Forty-five regions of interest were placed for each cartridge (x) in a specific scan image set (y), and the average of the texture values (T(x,y)) was calculated. The interquartile range (IQR) of T(x,y) among the 6 scans was calculated for a specific cartridge (IQR(x)), while the IQR of T(x,y) among the 10 cartridges was calculated for a specific scan (IQR(y)), and the median IQR(y) of the 6 scans was taken as the control IQR (IQRc). The median of the quotient IQR(x)/IQRc among the 10 cartridges was defined as the variability index (VI). The VI was relatively small for the mean in unfiltered images (0.011) and for standard deviation (0.020-0.044) and entropy (0.040-0.044) in filtered images. Skewness and kurtosis in filtered images featuring medium and coarse textures were relatively variable across different CT scanners, with VIs of 0.638-0.692 and 0.430-0.437, respectively. Thus, some quantitative CT texture parameters are robust across different scanners while others are variable, and the behavior of these parameters should be taken into consideration.
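The variability index defined in this abstract is directly computable from a cartridge-by-scan matrix of mean texture values. A NumPy sketch on synthetic data (the array shape matches the study's 10 cartridges x 6 scans; the values themselves are made up):

```python
import numpy as np

def variability_index(T: np.ndarray) -> float:
    """VI as defined in the abstract. T has shape (n_cartridges, n_scans);
    T[x, y] is the mean texture value of cartridge x under scan protocol y."""
    def iqr(a, axis):
        q75, q25 = np.percentile(a, [75, 25], axis=axis)
        return q75 - q25
    iqr_x = iqr(T, axis=1)              # spread of each cartridge across scans
    iqr_c = np.median(iqr(T, axis=0))   # typical spread across cartridges (control IQR)
    return float(np.median(iqr_x / iqr_c))

# synthetic example: distinct cartridge textures plus small scanner-to-scanner noise
rng = np.random.default_rng(0)
texture = np.linspace(1.0, 10.0, 10)[:, None] + rng.normal(0, 0.05, (10, 6))
print(variability_index(texture))  # small VI -> parameter is robust across scanners
```

A VI near zero means the scanner-to-scanner spread is tiny relative to the genuine cartridge-to-cartridge spread, which is the sense in which the mean, standard deviation, and entropy were "robust" here.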
Mis-segmentation in voxel-based morphometry due to a signal intensity change in the putamen.
Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Aoki, Shigeki; Gomi, Tsutomu; Takeda, Tohoru
2017-12-01
The aim of this study was to demonstrate an association between changes in the signal intensity of the putamen on three-dimensional T1-weighted magnetic resonance images (3D-T1WI) and mis-segmentation, using the voxel-based morphometry (VBM) 8 toolbox. The sagittal 3D-T1WIs of 22 healthy volunteers were obtained for VBM analysis using a 1.5-T MR scanner. We prepared five levels of 3D-T1WI signal intensity (baseline, same level, background level, low level, and high level) in regions of interest containing the putamen. Groups of smoothed, spatially normalized tissue images were compared to the baseline group using a paired t test. The baseline was compared to the other four levels. In all comparisons, significant volume changes were observed around and outside the area that included the signal intensity change. The present study demonstrated an association between a change in the signal intensity of the putamen on 3D-T1WI and changed volume in segmented tissue images.
NASA Astrophysics Data System (ADS)
You, Wonsang; Andescavage, Nickie; Zun, Zungho; Limperopoulos, Catherine
2017-03-01
Intravoxel incoherent motion (IVIM) magnetic resonance imaging is an emerging non-invasive technique that has been recently applied to quantify in vivo global placental perfusion. We propose a robust semi-automated method for segmenting the placenta into fetal and maternal compartments from IVIM data, using a multi-label image segmentation algorithm called GrowCut. Placental IVIM data were acquired on a 1.5T scanner from 16 healthy pregnant women between 21 and 37 gestational weeks. The voxel-wise perfusion fraction was then estimated after non-rigid image registration. The seed regions of the fetal and maternal compartments were determined using structural T2-weighted reference images, and improved progressively through an iterative process of the GrowCut algorithm to accurately encompass fetal and maternal compartments. We demonstrated that the placental perfusion fraction decreased in both fetal (-0.010/week) and maternal compartments (-0.013/week) while their relative difference (f_fetal - f_maternal) gradually increased with advancing gestational age (+0.003/week, p=0.065). Our preliminary results show that the proposed method was effective in distinguishing placental compartments using IVIM.
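The voxel-wise perfusion fraction f in IVIM is conventionally estimated from the biexponential signal model S(b) = S0 * (f * exp(-b * D*) + (1 - f) * exp(-b * D)). A hedged SciPy sketch on noiseless synthetic data; the b-values, tissue parameters, and fitting bounds are assumptions, not the study's protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    """Biexponential IVIM signal model (perfusion fraction f, pseudo-diffusion
    coefficient D*, tissue diffusion coefficient D)."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

# synthetic voxel: f = 0.30, D* = 0.05 mm^2/s, D = 0.0015 mm^2/s (illustrative)
b_values = np.array([0, 10, 30, 50, 100, 200, 400, 800], dtype=float)
signal = ivim(b_values, 1000.0, 0.30, 0.05, 0.0015)

p0 = [signal[0], 0.2, 0.02, 0.001]                  # initial guess
bounds = ([0, 0, 0.003, 0], [np.inf, 1, 1, 0.003])  # enforce D* > D
popt, _ = curve_fit(ivim, b_values, signal, p0=p0, bounds=bounds)
s0_fit, f_fit, dstar_fit, d_fit = popt
print(f_fit)  # recovered perfusion fraction, close to 0.30
```

Repeating this fit per voxel gives the perfusion-fraction maps whose fetal and maternal averages the study tracks across gestation.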
Generation, recognition, and consistent fusion of partial boundary representations from range images
NASA Astrophysics Data System (ADS)
Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang
1994-10-01
This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted to a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D Cartesian coordinates. Boundary representations (Brep's) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Brep's are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2.5D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Brep's corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.
NASA Astrophysics Data System (ADS)
Xu, Ye; van Beek, Edwin J.; McLennan, Geoffrey; Guo, Junfeng; Sonka, Milan; Hoffman, Eric
2006-03-01
In this study we used our texture characterization software (3-D AMFM) to characterize interstitial lung diseases (including emphysema) from MDCT-generated volumetric data using three-dimensional texture features. We sought to test whether scanner and reconstruction filter (kernel) type affect the classification of lung diseases using the 3-D AMFM. We collected MDCT images in three subject groups: emphysema (n = 9), interstitial pulmonary fibrosis (IPF) (n = 10), and normal non-smokers (n = 9). In each group, images were acquired either on a Siemens Sensation 16 or 64-slice scanner (B50f or B30 reconstruction kernel) or on a Philips 4-slice scanner (B reconstruction kernel). A total of 1516 volumes of interest (VOIs; 21 × 21 pixels in plane) were marked by two chest imaging experts using the Iowa Pulmonary Analysis Software Suite (PASS). We calculated 24 volumetric features. Bayesian methods were used for classification. Images from different scanners/kernels were combined in all possible combinations to test how robust the tissue classification was to differences in image characteristics. We used 10-fold cross-validation to evaluate the results. Sensitivity, specificity, and accuracy were calculated. One-way analysis of variance (ANOVA) was used to compare the classification results between the various combinations of scanner and reconstruction kernel types. This study yielded sensitivities of 94%, 91%, 97%, and 93% for emphysema, ground-glass, honeycombing, and normal non-smoker patterns, respectively, using a mixture of all three subject groups. The specificities for these characterizations were 97%, 99%, 99%, and 98%, respectively. The F test of the ANOVA showed no significant difference (at the p = 0.05 level) between the different combinations of data with respect to scanner and convolution kernel type.
Since different MDCT scanner and reconstruction kernel types did not significantly affect the classification results, this study suggests that the 3-D AMFM can be applied generally across these acquisition settings.
Trattner, Sigal; Halliburton, Sandra; Thompson, Carla M; Xu, Yanping; Chelliah, Anjali; Jambawalikar, Sachin R; Peng, Boyu; Peters, M Robert; Jacobs, Jill E; Ghesani, Munir; Jang, James J; Al-Khalidi, Hussein; Einstein, Andrew J
2018-01-01
This study sought to determine updated conversion factors (k-factors) that would enable accurate estimation of radiation effective dose (ED) for coronary computed tomography angiography (CTA) and calcium scoring performed on 12 contemporary scanner models with current clinical cardiac protocols, and to compare these factors to the standard chest k-factor of 0.014 mSv·mGy⁻¹·cm⁻¹. Accurate estimation of ED from cardiac CT scans is essential to meaningfully compare the benefits and risks of different cardiac imaging strategies and to optimize test and protocol selection. Presently, ED from cardiac CT is generally estimated by multiplying a scanner-reported parameter, the dose-length product, by a k-factor that was determined for noncardiac chest CT using single-slice scanners and a superseded definition of ED. Metal-oxide-semiconductor field-effect transistor radiation detectors were positioned in organs of anthropomorphic phantoms, which were scanned using all cardiac protocols, 120 clinical protocols in total, on 12 CT scanners representing the spectrum of scanners from 5 manufacturers (GE, Hitachi, Philips, Siemens, Toshiba). Organ doses were determined for each protocol, and ED was calculated as defined in International Commission on Radiological Protection Publication 103. Effective doses and scanner-reported dose-length products were used to determine k-factors for each scanner model and protocol. k-Factors averaged 0.026 mSv·mGy⁻¹·cm⁻¹ (95% confidence interval: 0.0258 to 0.0266) and ranged between 0.020 and 0.035 mSv·mGy⁻¹·cm⁻¹. The standard chest k-factor underestimates ED by an average of 46%, ranging from 30% to 60% depending on scanner, mode, and tube potential. Factors were higher for prospective axial versus retrospective helical scan modes, for calcium scoring versus coronary CTA, and for higher (100 to 120 kV) versus lower (80 kV) tube potential, and varied among scanner models (range of average k-factors: 0.0229 to 0.0277 mSv·mGy⁻¹·cm⁻¹).
Cardiac k-factors for all scanners and protocols are considerably higher than the k-factor currently used to estimate ED of cardiac CT studies, suggesting that radiation doses from cardiac CT have been significantly and systematically underestimated. Using cardiac-specific factors can more accurately inform the benefit-risk calculus of cardiac-imaging strategies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
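The k-factor method itself is a one-line calculation, ED = DLP × k. Using the abstract's figures (the 0.014 mSv·mGy⁻¹·cm⁻¹ chest factor versus the 0.026 average cardiac factor) with an illustrative, made-up DLP reproduces the reported ~46% underestimation:

```python
def effective_dose(dlp_mgy_cm, k_factor):
    """ED (mSv) = DLP (mGy.cm) x k-factor (mSv per mGy.cm)."""
    return dlp_mgy_cm * k_factor

dlp = 900.0                               # illustrative coronary CTA DLP
ed_chest   = effective_dose(dlp, 0.014)   # standard chest k-factor
ed_cardiac = effective_dose(dlp, 0.026)   # average cardiac k-factor reported

# relative underestimation when the chest factor is used instead
underestimate = 1.0 - ed_chest / ed_cardiac
```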
Procedure M - A framework for stratified area estimation in multispectral scanner data processing
NASA Technical Reports Server (NTRS)
Kauth, R. J.; Cicone, R. C.; Malila, W. A.
1980-01-01
This paper describes Procedure M, a systematic approach to processing multispectral scanner data for classification and acreage estimation. A general discussion of the rationale and development of the procedure is given in the context of large-area agricultural applications. Specific examples are given in the form of test results on acreage estimation of spring small grains.
MFP scanner motion characterization using self-printed target
NASA Astrophysics Data System (ADS)
Kim, Minwoong; Bauer, Peter; Wagner, Jerry K.; Allebach, Jan P.
2015-01-01
Multifunctional printers (MFPs) combine the functions of a printer, scanner, and copier. Our goal is to help customers easily diagnose scanner and print quality issues with their products by developing an automated diagnostic system embedded in the product. We specifically focus on the characterization of scanner motion, which may be defective due to irregular movements of the scan-head. The novel design of our test page and our two-stage diagnostic algorithm are described in this paper. The most challenging issue is to evaluate the scanner performance properly when both the printer and scanner units contribute to the motion errors. In the first stage, called the uncorrected-print-error stage, aperiodic and periodic motion behaviors are characterized in both the spatial and frequency domains. Since it is not clear how much of the error is contributed by each unit, the scanned input is statistically analyzed in the second stage, called the corrected-print-error stage. Finally, the diagnostic algorithms output the estimated scan error and print error separately, as RMS values of the displacement of the scan and print lines, respectively, from their nominal positions in the scanner or printer motion direction. We validate our test page design and approach against ground truth obtained from a high-precision, chrome-on-glass reticle manufactured using semiconductor chip fabrication technologies.
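The reported scan and print errors are RMS displacements of lines from their nominal positions; that summary statistic can be sketched as follows (function name and values are illustrative):

```python
import math

def rms_displacement(measured, nominal):
    """RMS displacement of scan/print lines from their nominal positions.

    measured, nominal: line positions along the motion direction (same units).
    """
    n = len(measured)
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured, nominal)) / n)

# illustrative: four lines, each displaced 0.1 units from its target
rms = rms_displacement([10.1, 19.9, 30.1, 39.9], [10.0, 20.0, 30.0, 40.0])
```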
NASA Astrophysics Data System (ADS)
Jansen, Jan T. M.; Shrimpton, Paul C.
2016-07-01
The ImPACT (imaging performance assessment of CT scanners) CT patient dosimetry calculator is still used world-wide to estimate organ and effective doses (E) for computed tomography (CT) examinations, although the tool is based on Monte Carlo calculations reflecting practice in the early 1990s. Subsequent developments in CT scanners, definitions of E, anthropomorphic phantoms, computers and radiation transport codes have all fuelled an urgent need for updated organ dose conversion factors for contemporary CT. A new system for such simulations has been developed and satisfactorily tested. Benchmark comparisons of normalised organ doses presently derived for three old scanners (General Electric 9800, Philips Tomoscan LX and Siemens Somatom DRH) are within 5% of published values. Moreover, calculated normalised values of CT Dose Index for these scanners are in reasonable agreement (within measurement and computational uncertainties of ±6% and ±1%, respectively) with reported standard measurements. Organ dose coefficients calculated for a contemporary CT scanner (Siemens Somatom Sensation 16) deviate by up to around 30% from the surrogate values presently assumed (through a scanner-matching process) when using the ImPACT CT Dosimetry tool for newer scanners. Also, illustrative estimates of E for some typical examinations and a range of anthropomorphic phantoms demonstrate the significant differences (by some tens of percent) that can arise when changing from the previously adopted stylised mathematical phantom to the voxel phantoms presently recommended by the International Commission on Radiological Protection (ICRP), and when following the 2007 ICRP recommendations (updated from 1990) concerning tissue weighting factors. Further simulations with the validated dosimetry system will provide updated series of dose coefficients for a wide range of contemporary scanners.
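The effective dose E referenced throughout is the ICRP-defined tissue-weighted sum E = Σ_T w_T H_T. A sketch with an illustrative subset of tissues and made-up organ doses (the weights shown are the ICRP Publication 103 values for those tissues; a full calculation also handles the averaged remainder tissues):

```python
def effective_dose(organ_doses_msv, weights):
    """E = sum over tissues T of w_T * H_T (ICRP Publication 103 definition).

    organ_doses_msv : equivalent dose H_T per tissue (mSv)
    weights         : tissue weighting factors w_T (sum to 1 over all tissues)
    """
    return sum(weights[t] * organ_doses_msv[t] for t in weights)

# illustrative subset of ICRP 103 weights; organ doses are made up
w = {"lung": 0.12, "breast": 0.12, "stomach": 0.12, "thyroid": 0.04, "liver": 0.04}
h = {"lung": 20.0, "breast": 18.0, "stomach": 5.0, "thyroid": 3.0, "liver": 6.0}
e_partial = effective_dose(h, w)  # contribution of these tissues only
```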
Tls Field Data Based Intensity Correction for Forest Environments
NASA Astrophysics Data System (ADS)
Heinzel, J.; Huber, M. O.
2016-06-01
Terrestrial laser scanning (TLS) is increasingly used for forestry applications. Besides the three-dimensional point coordinates, the 'intensity' of the reflected signal plays an important role in forestry and vegetation studies. The signal intensity is valuable because the laser wavelength of most scanners lies in the near infrared (NIR), which is highly indicative of various vegetation characteristics. However, the intensity as recorded by most terrestrial scanners is distorted by both external and scanner-specific factors. Since details about system-internal alteration of the signal are often unknown to the user, model-driven approaches are impractical. On the other hand, existing data-driven calibration procedures require laborious acquisition of separate reference datasets or of areas with homogeneous reflection characteristics from the field data. To fill this gap, the present study introduces an approach to correct unwanted intensity variations directly from the point cloud of the field data. The focus is on the variation over range and on sensor-specific distortions. Instead of an absolute calibration of the values, a relative correction within the dataset is sufficient for most forestry applications. Finally, a method similar to time-series detrending is presented, with the only precondition that forest objects and materials be roughly equally distributed over range. Our test data cover 50 terrestrial scans captured with a FARO Focus 3D S120 scanner using a laser wavelength of 905 nm. Practical tests demonstrate that our correction method removes range- and scanner-based alterations of the intensity.
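A detrending-style relative correction of the kind described, under the stated assumption that materials are roughly evenly distributed over range, can be sketched by dividing each return by the median intensity of its range bin (the bin count and the 1/r test signal are illustrative, not the paper's exact procedure):

```python
import numpy as np

def detrend_intensity(rng, intensity, n_bins=20):
    """Relative range correction of TLS intensities, detrending-style.

    If forest objects/materials are roughly evenly distributed over range,
    the per-bin median captures the range-dependent distortion; dividing it
    out leaves relative intensities comparable across ranges.
    """
    rng = np.asarray(rng, float)
    intensity = np.asarray(intensity, float)
    edges = np.linspace(rng.min(), rng.max(), n_bins + 1)
    idx = np.clip(np.digitize(rng, edges) - 1, 0, n_bins - 1)
    trend = np.array([np.median(intensity[idx == b]) for b in range(n_bins)])
    return intensity / trend[idx]

# synthetic scan: constant reflectance distorted by a 1/r falloff
rng = np.linspace(10.0, 50.0, 2000)
intensity = 100.0 / rng
corrected = detrend_intensity(rng, intensity)
```

After correction the 5:1 spread of the raw synthetic intensities collapses to within-bin residuals around 1.0.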
Earth Radiation Budget Experiment (ERBE) scanner instrument anomaly investigation
NASA Technical Reports Server (NTRS)
Watson, N. D.; Miller, J. B.; Taylor, L. V.; Lovell, J. B.; Cox, J. W.; Fedors, J. C.; Kopia, L. P.; Holloway, R. M.; Bradley, O. H.
1985-01-01
The results of an ad hoc committee investigation of in-orbit operational anomalies noted on two identical Earth Radiation Budget Experiment (ERBE) scanner instruments on two different spacecraft buses are presented. The anomalies are attributed to the bearings and their lubrication scheme. A detailed discussion of the pertinent instrument operations, the approach of the investigation team, and the current status of the instruments now in Earth orbit is included. The team considered operational changes for these instruments, rework possibilities for the one instrument still awaiting launch, and preferable lubrication approaches for specific space operational requirements similar to those of the ERBE scanner bearings.
Souza, Roberto; Lucena, Oeslle; Garrafa, Julia; Gobbi, David; Saluzzi, Marina; Appenzeller, Simone; Rittner, Letícia; Frayne, Richard; Lotufo, Roberto
2018-04-15
This paper presents an open, multi-vendor, multi-field-strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips and General Electric) at both 1.5 T and 3 T. CC-359 comprises 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age- and gender-balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes, generated using supervised classification. Manual segmentation results for twelve randomly selected subjects, performed by an expert, are also provided. The CC-359 dataset allows investigation of 1) the influence of both vendor and magnetic field strength on quantitative analysis of brain MR; 2) parameter optimization for automatic segmentation methods; and potentially 3) machine learning classifiers with big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared the results of a supervised classifier with those of eight publicly available skull-stripping methods and one publicly available consensus algorithm. A linear mixed effects model analysis indicated that vendor (p < 0.001) and magnetic field strength (p < 0.001) have statistically significant impacts on skull-stripping results. Copyright © 2017 Elsevier Inc. All rights reserved.
Training labels for hippocampal segmentation based on the EADC-ADNI harmonized hippocampal protocol.
Boccardi, Marina; Bocchetta, Martina; Morency, Félix C; Collins, D Louis; Nishikawa, Masami; Ganzola, Rossana; Grothe, Michel J; Wolf, Dominik; Redolfi, Alberto; Pievani, Michela; Antelmi, Luigi; Fellgiebel, Andreas; Matsuda, Hiroshi; Teipel, Stefan; Duchesne, Simon; Jack, Clifford R; Frisoni, Giovanni B
2015-02-01
The European Alzheimer's Disease Consortium and Alzheimer's Disease Neuroimaging Initiative (ADNI) Harmonized Protocol (HarP) is a Delphi definition of manual hippocampal segmentation from magnetic resonance imaging (MRI) that can be used as the standard of truth to train new tracers, and to validate automated segmentation algorithms. Training requires large and representative data sets of segmented hippocampi. This work aims to produce a set of HarP labels for the proper training and certification of tracers and algorithms. Sixty-eight 1.5 T and 67 3 T volumetric structural ADNI scans from different subjects, balanced by age, medial temporal atrophy, and scanner manufacturer, were segmented by five qualified HarP tracers whose absolute interrater intraclass correlation coefficients were 0.953 and 0.975 (left and right). Labels were validated as HarP compliant through centralized quality check and correction. Hippocampal volumes (mm³) were as follows: controls: left = 3060 (standard deviation [SD], 502), right = 3120 (SD, 897); mild cognitive impairment (MCI): left = 2596 (SD, 447), right = 2686 (SD, 473); and Alzheimer's disease (AD): left = 2301 (SD, 492), right = 2445 (SD, 525). Volumes significantly correlated with atrophy severity on Scheltens' scale (Spearman's ρ ≤ -0.468, P ≤ .0005). Cerebrospinal fluid spaces (mm³) were as follows: controls: left = 23 (32), right = 25 (25); MCI: left = 15 (13), right = 22 (16); and AD: left = 11 (13), right = 20 (25). Five subjects (3.7%) presented with unusual anatomy. This work provides reference hippocampal labels for the training and certification of automated segmentation algorithms. The publicly released labels will allow the widespread implementation of the standard segmentation protocol. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Estimating Radiation Dose Metrics for Patients Undergoing Tube Current Modulation CT Scans
NASA Astrophysics Data System (ADS)
McMillan, Kyle Lorin
Computed tomography (CT) has long been a powerful tool in the diagnosis of disease, identification of tumors and guidance of interventional procedures. With CT examinations comes the concern of radiation exposure and the associated risks. In order to properly understand those risks on a patient-specific level, organ dose must be quantified for each CT scan. Some of the most widely used organ dose estimates are derived from fixed tube current (FTC) scans of a standard-sized idealized patient model. However, in current clinical practice, patient size varies from neonates weighing just a few kg to morbidly obese patients weighing over 200 kg, and nearly all CT exams are performed with tube current modulation (TCM), a scanning technique that adjusts scanner output according to changes in patient attenuation. Methods to account for TCM in CT organ dose estimates have been previously demonstrated, but these methods are limited in scope and/or restricted to idealized TCM profiles that are not based on physical observations and are not scanner specific (e.g., they do not account for tube limits, scanner-specific effects, etc.). The goal of this work was to develop methods to estimate organ doses to patients undergoing CT scans that take into account both patient size and the effects of TCM. This work started with the development and validation of methods to estimate scanner-specific TCM schemes for any voxelized patient model. An approach was developed to generate estimated TCM schemes that match actual TCM schemes that would have been acquired on the scanner for any patient model. Using this approach, TCM schemes were then generated for a variety of body CT protocols for a set of reference voxelized phantoms for which TCM information does not currently exist. These are whole-body patient models representing a variety of sizes, ages and genders, with all radiosensitive organs identified.
TCM schemes for these models facilitated Monte Carlo-based estimates of fully-, partially- and indirectly-irradiated organ dose from TCM CT exams. By accounting for the effects of patient size in the organ dose estimates, a comprehensive set of patient-specific dose estimates from TCM CT exams was developed. These patient-specific organ dose estimates from TCM CT exams will provide a more complete understanding of the dose impact and risks associated with modern body CT scanning protocols.
Investigation of hyper-NA scanner emulation for photomask CDU performance
NASA Astrophysics Data System (ADS)
Poortinga, Eric; Scheruebl, Thomas; Conley, Will; Sundermann, Frank
2007-02-01
As the semiconductor industry moves toward immersion lithography using numerical apertures above 1.0, the quality of the photomask becomes even more crucial. Photomask specifications are driven by the critical dimension (CD) metrology within the wafer fab. Knowledge of the CD values at resist level provides a reliable mechanism for the prediction of device performance. Ultimately, tolerances of device electrical properties drive the wafer linewidth specifications of the lithography group. Staying within this budget is influenced mainly by the scanner settings, resist process, and photomask quality. Tightening of photomask specifications is one mechanism for meeting the wafer CD targets. The challenge lies in determining how photomask-level metrology results influence wafer-level imaging performance. Can it be inferred that photomask-level CD performance is the direct contributor to wafer-level CD performance? With respect to phase shift masks, criteria such as phase and transmission control are generally tightened with each technology node. Are there other photomask-relevant influences that affect wafer CD performance? A comprehensive study is presented supporting the use of scanner-emulation-based photomask CD metrology to predict wafer-level within-chip CD uniformity (CDU). Using scanner emulation with the photomask can provide more accurate wafer-level prediction because it inherently includes all contributors to image formation related to the 3D topography, such as the physical CD, phase, transmission, sidewall angle, surface roughness, and other material properties. Emulated images from different photomask types were captured to provide CD values across chip. Emulated scanner image measurements were completed using an AIMS™ 45-193i with its hyper-NA, through-pellicle data acquisition capability, including the Global CDU Map™ software option for AIMS™ tools.
The through-pellicle data acquisition capability is an essential prerequisite for capturing final CDU data (after final clean and pellicle mounting) before the photomask ships, or for re-qualification at the wafer fab. Data was also collected on these photomasks using a conventional CD-SEM metrology system with the pellicles removed. A comparison was then made to wafer prints, demonstrating the benefit of using scanner-emulation-based photomask CD metrology.
Vigneault, Davis M; Xie, Weidi; Ho, Carolyn Y; Bluemke, David A; Noble, J Alison
2018-05-22
Pixelwise segmentation of the left ventricular (LV) myocardium and the four cardiac chambers in 2-D steady state free precession (SSFP) cine sequences is an essential preprocessing step for a wide range of analyses. Variability in contrast, appearance, orientation, and placement of the heart between patients, clinical views, scanners, and protocols makes fully automatic semantic segmentation a notoriously difficult problem. Here, we present Ω-Net (Omega-Net): a novel convolutional neural network (CNN) architecture for simultaneous localization, transformation into a canonical orientation, and semantic segmentation. First, an initial segmentation is performed on the input image; second, the features learned during this initial segmentation are used to predict the parameters needed to transform the input image into a canonical orientation; and third, a final segmentation is performed on the transformed image. In this work, Ω-Nets of varying depths were trained to detect five foreground classes in any of three clinical views (short axis, SA; four-chamber, 4C; two-chamber, 2C), without prior knowledge of the view being segmented. This constitutes a substantially more challenging problem compared with prior work. The architecture was trained using three-fold cross-validation on a cohort of patients with hypertrophic cardiomyopathy (HCM, N=42) and healthy control subjects (N=21). Network performance, as measured by weighted foreground intersection-over-union (IoU), was substantially improved for the best-performing Ω-Net compared with U-Net segmentation without localization or orientation (0.858 vs 0.834). In addition, to be comparable with other works, Ω-Net was retrained from scratch using five-fold cross-validation on the publicly available 2017 MICCAI Automated Cardiac Diagnosis Challenge (ACDC) dataset. The Ω-Net outperformed the state-of-the-art method in segmentation of the LV and right ventricular (RV) blood pools, and performed slightly worse in segmentation of the LV myocardium.
We conclude that this architecture represents a substantive advancement over prior approaches, with implications for biomedical image segmentation more generally. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Lavoie, Lindsey K.
The technology of computed tomography (CT) imaging has soared over the last decade with the use of multi-detector CT (MDCT) scanners that are capable of performing studies in a matter of seconds. While the diagnostic information obtained from MDCT imaging is extremely valuable, it is important to ensure that the radiation doses resulting from these studies are at acceptably safe levels. This research project focused on the measurement of organ doses resulting from modern MDCT scanners. A commercially available dosimetry system was used to measure organ doses. Small dosimeters made of optically stimulated luminescent (OSL) material were analyzed with a portable OSL reader. Detailed verification of this system was performed. Characteristics studied include energy, scatter, and angular responses; dose linearity; the ability to erase the exposed dose; and the ability to reuse dosimeters multiple times. The results of this verification process were positive. While correction factors needed to be applied to the dose reported by the OSL reader, these factors were small and expected. Physical, tomographic pediatric and adult phantoms were used to measure organ doses. These phantoms were developed from CT images and are composed of tissue-equivalent materials. Because the adult phantom is composed of numerous segments, dosimeters were placed in the phantom at several organ locations, and doses to select organs were measured using three clinical protocols: pediatric craniosynostosis, adult brain perfusion and adult cardiac CT angiography (CTA). A wide-beam, 320-slice, volumetric CT scanner and a 64-slice MDCT scanner were used for organ dose measurements. Doses ranged from 1 to 26 mGy for the pediatric protocol, 1 to 1241 mGy for the brain perfusion protocol, and 2 to 100 mGy for the cardiac protocol. In most cases, the doses measured on the 64-slice scanner were higher than those on the 320-slice scanner.
A methodology to measure organ doses with OSL dosimeters received from CT imaging has been presented. These measurements are especially important in keeping with the ALARA (as low as reasonably achievable) principle. While diagnostic information from CT imaging is valuable and necessary, the dose to patients is always a consideration. This methodology aids in this important task. (Full text of this dissertation may be available via the University of Florida Libraries web site. Please check http://www.uflib.ufl.edu/etd.html)
A comparison of the accuracy of intraoral scanners using an intraoral environment simulator
Park, Hye-Nan; Lim, Young-Jun; Yi, Won-Jin
2018-01-01
PURPOSE The aim of this study was to design an intraoral environment simulator and to assess the accuracy of two intraoral scanners using the simulator. MATERIALS AND METHODS A box-shaped intraoral environment simulator was designed to simulate two specific intraoral environments. The cast was scanned 10 times by Identica Blue (MEDIT, Seoul, South Korea), TRIOS (3Shape, Copenhagen, Denmark), and CS3500 (Carestream Dental, Georgia, USA) scanners in the two simulated groups. The distances between the left and right canines (D3), first molars (D6), second molars (D7), and the left canine and left second molar (D37) were measured. The distance data were analyzed by the Kruskal-Wallis test. RESULTS The differences between intraoral environments were not statistically significant (P>.05). Between intraoral scanners, statistically significant differences (P<.05) were revealed by the Kruskal-Wallis test with regard to D3 and D6. CONCLUSION No difference due to the intraoral environment was found. The simulator should contribute to improving the accuracy of intraoral scanners in the future. PMID:29503715
Risks of exposure to ionizing and millimeter-wave radiation from airport whole-body scanners.
Moulder, John E
2012-06-01
Considerable public concern has been expressed around the world about the radiation risks posed by the backscatter (ionizing radiation) and millimeter-wave (nonionizing radiation) whole-body scanners that have been deployed at many airports. The backscatter and millimeter-wave scanners currently deployed in the U.S. almost certainly pose negligible radiation risks if used as intended, but their safety is difficult, if not impossible, to prove using publicly accessible data. The scanners are widely disliked and often feared, a problem made worse by what appears to be a veil of secrecy covering their specifications and dosimetry. Therefore, for these and future similar technologies to gain wide acceptance, more openness is needed, as is independent review and regulation. Publicly accessible, and preferably peer-reviewed, evidence is needed that the deployed units (not just the prototypes) meet widely accepted safety standards. It is also critical that risk-perception issues be handled more competently.
Nodular melanoma serendipitously detected by airport full body scanners.
Mayer, Jonathan E; Adams, Brian B
2015-01-01
Nodular melanoma is the most dangerous form of melanoma and often evades early detection. We present a frequently traveling businessman whose nodular melanoma was detected by airport full body scanners. For about 20 flights over 2 months, the airport full body scanners singled out an area on his left lower leg for a pat-down. Dermatologic examination discovered a nodular melanoma in this area, and after surgical excision, the man traveled without incident. This case raises the possibility of using full body imaging in the detection of melanomas, especially of the nodular subtype. In its current form, full body scanning would most likely not be sensitive or specific enough to become a recommended screening tool. Nonetheless, for travelers with areas repeatedly singled out by the machines without a known justification, airport scanners could serve as incidental free screening for suspicious nodular lesions that should prompt dermatologist referral. © 2014 S. Karger AG, Basel.
GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain
NASA Astrophysics Data System (ADS)
Huang, Lan; Du, Youfu; Chen, Gongyang
2015-03-01
Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, it lacks the domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
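CRF-based segmenters of this kind typically label each character with a BMES tag (Begin/Middle/End/Single) and then decode the tag sequence into words; the decoding step, which both stages of such a pipeline would share, can be sketched as (the function name and Chinese example are illustrative, not taken from GeoSegmenter):

```python
def bmes_to_words(chars, tags):
    """Recover a word segmentation from per-character BMES tags.

    B = word begin, M = middle, E = end, S = single-character word.
    """
    words, cur = [], ""
    for ch, tag in zip(chars, tags):
        if tag == "S":
            if cur:                 # tolerate a malformed tag sequence
                words.append(cur)
                cur = ""
            words.append(ch)
        elif tag == "B":
            if cur:
                words.append(cur)
            cur = ch
        elif tag == "M":
            cur += ch
        else:                       # "E": close the current word
            words.append(cur + ch)
            cur = ""
    if cur:
        words.append(cur)
    return words

# "地质学家" (geologist) + "研究" (research): one 4-char word, one 2-char word
words = bmes_to_words("地质学家研究", ["B", "M", "M", "E", "B", "E"])
```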
Stolin, Alexander V; Martone, Peter F; Jaliparthi, Gangadhar; Raylman, Raymond R
2017-01-01
Positron emission tomography (PET) scanners designed for imaging of small animals have transformed translational research by reducing the necessity to invasively monitor physiology and disease progression. Virtually all of these scanners are based on the use of pixelated detector modules arranged in rings. This design, while generally successful, has some limitations. Specifically, use of discrete detector modules to construct PET scanners reduces detection sensitivity and can introduce artifacts in reconstructed images, requiring the use of correction methods. To address these challenges, and facilitate measurement of photon depth-of-interaction in the detector, we investigated a small animal PET scanner (called AnnPET) based on a monolithic annulus of scintillator. The scanner was created by placing 12 flat facets around the outer surface of the scintillator to accommodate placement of silicon photomultiplier arrays. Its performance characteristics were explored using Monte Carlo simulations and sections of the NEMA NU4-2008 protocol. Results from this study revealed that AnnPET's reconstructed spatial resolution is predicted to be [Formula: see text] full width at half maximum in the radial, tangential, and axial directions. Peak detection sensitivity is predicted to be 10.1%. Images of simulated phantoms (mini-hot rod and mouse whole body) yielded promising results, indicating the potential of this system for enhancing PET imaging of small animals.
3D space positioning and image feature extraction for workpiece
NASA Astrophysics Data System (ADS)
Ye, Bing; Hu, Yi
2008-03-01
An optical system for measuring 3D parameters of a specific area of a workpiece is presented and discussed in this paper. A number of CCD image sensors are employed to construct the 3D coordinate system for the measured area. The CCD image sensor monitoring the target is used to lock onto the measured workpiece when it enters the field of view. The other sensors, placed symmetrically about the line-laser beam scanner, measure the appearance of the workpiece and its characteristic parameters. The paper establishes a target image segmentation and image feature extraction algorithm to lock onto the target; based on the geometric similarity of object characteristics, rapid locking of the target can be realized. As the line laser beam scans the tested workpiece, a number of images are extracted at equal time intervals and the overlapping images are processed to complete image reconstruction and obtain the 3D image information. From the 3D coordinate reconstruction model, the 3D characteristic parameters of the tested workpiece are obtained. Experimental results are provided in the paper.
Multiparametric Analysis of the Tumor Microenvironment: Hypoxia Markers and Beyond.
Mayer, Arnulf; Vaupel, Peter
2017-01-01
We have established a novel in situ protein analysis pipeline, which is built upon highly sensitive, multichannel immunofluorescent staining of paraffin sections of human and xenografted tumor tissue. Specimens are digitized using slide scanners equipped with suitable light sources and fluorescence filter combinations. Resulting digital images are subsequently subjected to quantitative image analysis using a primarily object-based approach, which comprises segmentation of single cells or higher-order structures (e.g., blood vessels), cell shape approximation, measurement of signal intensities in individual fluorescent channels and correlation of these data with positional information for each object. Our approach could be particularly useful for the study of the hypoxic tumor microenvironment as it can be utilized to systematically explore the influence of spatial factors on cell phenotypes, e.g., the distance of a given cell type from the nearest blood vessel on the cellular expression of hypoxia-associated biomarkers and other proteins reflecting their specific state of activation or function. In this report, we outline the basic methodology and provide an outlook on possible use cases.
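The spatial analysis described above correlates each segmented cell with its distance to the nearest blood vessel. A minimal sketch of that positional step, assuming (hypothetically) that cells are reduced to centroid coordinates and vessels to a binary mask, could look like:

```python
import numpy as np

def distance_to_nearest_vessel(cell_centroids, vessel_mask, pixel_size=1.0):
    """For each (row, col) cell centroid, return the Euclidean distance
    to the nearest nonzero pixel of a 2D binary vessel mask."""
    vy, vx = np.nonzero(vessel_mask)                    # vessel pixel coordinates
    vessels = np.stack([vy, vx], axis=1).astype(float)
    dists = []
    for cy, cx in cell_centroids:
        d = np.sqrt(((vessels - (cy, cx)) ** 2).sum(axis=1)).min()
        dists.append(d * pixel_size)                    # convert pixels to physical units
    return np.array(dists)
```

In a real pipeline the centroids would come from the cell segmentation step and `pixel_size` from the slide scanner metadata; both are placeholders here.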
Nimbus 7 Coastal Zone Color Scanner (CZCS). Level 1 data product users' guide
NASA Technical Reports Server (NTRS)
Williams, S. P.; Szajna, E. F.; Hovis, W. A.
1985-01-01
The coastal zone color scanner (CZCS) is a scanning multispectral radiometer designed specifically for the remote sensing of Ocean Color parameters from an Earth orbiting space platform. A technical manual which is intended for users of NIMBUS 7 CZCS Level 1 data products is presented. It contains information needed by investigators and data processing personnel to operate on the data using digital computers and related equipment.
Comparison of Epson scanner quality for radiochromic film evaluation
Alnawaf, Hani; Yu, Peter K.N.
2012-01-01
Epson Desktop scanners have been quoted as devices which match the characteristics required for the evaluation of radiation dose exposure by radiochromic films. Specifically, models such as the 10000XL have been used successfully for image analysis and are recommended by ISP for dosimetry purposes. This note investigates and compares the scanner characteristics of three Epson desktop scanner models including the Epson 10000XL, V700, and V330. Both of the latter are substantially cheaper models capable of A4 scanning. As the price variation between the V330 and the 10000XL is 20‐fold (based on Australian recommended retail price), cost savings by using the cheaper scanners may be warranted based on results. By a direct comparison of scanner uniformity and reproducibility we can evaluate the accuracy of these scanners for radiochromic film dosimetry. Results have shown that all three scanners can produce adequate scanner uniformity and reproducibility, with the inexpensive V330 producing a standard deviation variation across its landscape direction of 0.7% and 1.2% in the portrait direction (reflection mode). This is compared to the V700 in reflection mode of 0.25% and 0.5% for landscape and portrait directions, respectively, and 0.5% and 0.8% for the 10000XL. In transmission mode, the V700 is comparable in reproducibility to the 10000XL for portrait and landscape mode, whilst the V330 is only capable of scanning in the landscape direction and produces a standard deviation in this direction of 1.0% compared to 0.6% (V700) and 0.25% (10000XL). Results have shown that the V700 and 10000XL are comparable scanners in quality and accuracy with the 10000XL obviously capable of imaging over an A3 area as opposed to an A4 area for the V700. The V330 scanner produced slightly lower accuracy and quality with uncertainties approximately twice as much as the other scanners. 
However, the results show that the V330 is still an adequate scanner and could be used for radiation dosimetry purposes. As such, if budgetary requirements are limited, the V700 scanner would be the recommended option at a price eight times cheaper than the 10000XL; however, the V330 produces adequate results at a price which is 2.5 times cheaper again. This may be a consideration for smaller institutions or individuals working with radiochromic film dosimetry. PACS number: 87.55.Qr; 87.56.Fc PMID:22955661
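The uniformity figures above (e.g. 0.7% landscape, 1.2% portrait for the V330) are relative standard deviations of a flat-field scan along each scan direction. A hedged sketch of such a metric, assuming a simple profile-based evaluation rather than the authors' exact protocol:

```python
import numpy as np

def uniformity_percent(scan):
    """Return (landscape %, portrait %) relative standard deviation
    of the mean pixel profiles of a 2D flat-field scan array."""
    profile_l = scan.mean(axis=0)                  # profile along the landscape axis
    profile_p = scan.mean(axis=1)                  # profile along the portrait axis
    pct = lambda p: 100.0 * p.std() / p.mean()     # SD as a percentage of the mean
    return pct(profile_l), pct(profile_p)
```

A perfectly uniform scan gives (0.0, 0.0); larger values indicate lamp or sensor nonuniformity along that direction.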
Giannelli, Marco; Diciotti, Stefano; Tessa, Carlo; Mascalchi, Mario
2010-01-01
Although typical acquisition parameters (TR, TE, matrix, slice thickness, etc.) are generally employed in EPI-fMRI analyses, various readout bandwidth (BW) values are used depending on the gradient characteristics of the MR scanner. Echo spacing (ES) is another fundamental parameter of EPI-fMRI acquisition sequences, but the employed ES value is not usually reported in fMRI studies. In the present work, the authors investigated the effect of ES and BW on the basic performance of EPI-fMRI sequences in terms of temporal stability and overall image quality of time series acquisitions. EPI-fMRI acquisitions of the same water phantom were performed using two clinical MR scanner systems (scanners A and B) with different gradient characteristics and functional designs of radiofrequency coils. For both scanners, the employed ES values ranged from 0.75 to 1.33 ms. The BW values used ranged from 125.0 to 250.0 kHz/64 pixels and from 78.1 to 185.2 kHz/64 pixels for scanners A and B, respectively. The temporal stability of the EPI-fMRI sequence was assessed by measuring the signal-to-fluctuation noise ratio (SFNR) and signal drift (DR), while the overall image quality was assessed by evaluating the signal-to-noise ratio (SNR(ts)) and nonuniformity (NU(ts)) of the time series acquisition. For both scanners, no significant effect of ES and BW on signal drift was revealed. The SFNR, NU(ts) and SNR(ts) values of scanner A did not significantly vary with ES. On the other hand, the SFNR, NU(ts), and SNR(ts) values of scanner B significantly varied with ES: SFNR (5.8%) and SNR(ts) (5.9%) increased with increasing ES. SFNR (25% scanner A, 32% scanner B) and SNR(ts) (26.2% scanner A, 30.1% scanner B) values of both scanners significantly decreased with increasing BW. NU(ts) values of scanners A and B were less than 3% for all BW and ES values. 
Nonetheless, scanner A was characterized by a significant upward trend (3% variation) of time series nonuniformity with increasing BW, while NU(ts) of scanner B significantly increased (19% variation) with increasing ES. Temporal stability (SFNR and DR) and overall image quality (NU(ts) and SNR(ts)) of EPI-fMRI time series can significantly vary with echo spacing and readout bandwidth. The specific pattern of variation may depend on the performance of each MR scanner system in terms of gradient characteristics, EPI sequence calibrations (eddy currents, shimming, etc.), and the functional design of the radiofrequency coil. Our results indicate that the use of a low BW improves not only the signal-to-noise ratio of EPI-fMRI time series but also their temporal stability. The use of minimum ES values is not entirely advantageous when the MR scanner system is characterized by low-performance gradients and suboptimal EPI sequence calibration. Since differences in the basic performance of MR scanner systems are a potential source of variability for fMRI activation, phantom measurements of SFNR, DR, NU(ts), and SNR(ts) can be executed before subject acquisitions to monitor the stability of MR scanner performance in clinical group comparison and longitudinal studies.
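The stability metrics named above can be sketched under their usual definitions (assumed here, since the abstract does not give formulas): SFNR as the mean signal over the standard deviation of the detrended time course, and drift as the fitted linear change expressed as a percentage of the mean.

```python
import numpy as np

def sfnr_and_drift(ts):
    """ts: 1D signal time course from a phantom ROI.
    Returns (SFNR, drift in % of the mean over the run)."""
    t = np.arange(ts.size)
    slope, intercept = np.polyfit(t, ts, 1)        # linear trend fit
    resid = ts - (slope * t + intercept)           # fluctuation noise after detrending
    sfnr = ts.mean() / resid.std()
    drift = 100.0 * slope * ts.size / ts.mean()    # % signal change over the acquisition
    return sfnr, drift
```

A real quality-assurance analysis would apply this voxel-wise or over an ROI of each EPI time series; this sketch just shows the arithmetic.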
Lindig, Tobias; Kotikalapudi, Raviteja; Schweikardt, Daniel; Martin, Pascal; Bender, Friedemann; Klose, Uwe; Ernemann, Ulrike; Focke, Niels K; Bender, Benjamin
2018-04-15
Voxel-based morphometry is still mainly based on T1-weighted MRI scans. Misclassification of vessels and dura mater as gray matter has been previously reported. Goal of the present work was to evaluate the effect of multimodal segmentation methods available in SPM12, and their influence on identification of age related atrophy and lesion detection in epilepsy patients. 3D T1-, T2- and FLAIR-images of 77 healthy adults (mean age 35.8 years, 19-66 years, 45 females), 7 patients with malformation of cortical development (MCD) (mean age 28.1 years,19-40 years, 3 females), and 5 patients with left hippocampal sclerosis (LHS) (mean age 49.0 years, 25-67 years, 3 females) from a 3T scanner were evaluated. Segmentation based on T1-only, T1+T2, T1+FLAIR, T2+FLAIR, and T1+T2+FLAIR were compared in the healthy subjects. Clinical VBM results based on the different segmentation approaches for MCD and for LHS were compared. T1-only segmentation overestimated total intracranial volume by about 80ml compared to the other segmentation methods. This was due to misclassification of dura mater and vessels as GM and CSF. Significant differences were found for several anatomical regions: the occipital lobe, the basal ganglia/thalamus, the pre- and postcentral gyrus, the cerebellum, and the brainstem. None of the segmentation methods yielded completely satisfying results for the basal ganglia/thalamus and the brainstem. The best correlation with age could be found for the multimodal T1+T2+FLAIR segmentation. Highest T-scores for identification of LHS were found for T1+T2 segmentation, while highest T-scores for MCD were dependent on lesion and anatomical location. Multimodal segmentation is superior to T1-only segmentation and reduces the misclassification of dura mater and vessels as GM and CSF. Depending on the anatomical region and the pathology of interest (atrophy, lesion detection, etc.), different combinations of T1, T2 and FLAIR yield optimal results. 
WE-G-18C-05: Characterization of Cross-Vendor, Cross-Field Strength MR Image Intensity Variations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, E; Prah, D
2014-06-15
Purpose: Variations in MR image intensity and image intensity nonuniformity (IINU) can challenge the accuracy of intensity-based image segmentation and registration algorithms commonly applied in radiotherapy. The goal of this work was to characterize MR image intensity variations across scanner vendors and field strengths commonly used in radiotherapy. Methods: ACR-MRI phantom images were acquired at 1.5T and 3.0T on GE (450w and 750, 23.1), Siemens (Espree and Verio, VB17B), and Philips (Ingenia, 4.1.3) scanners using commercial spin-echo sequences with matched parameters (TE/TR: 20/500 ms, rBW: 62.5 kHz, TH/skip: 5/5 mm). Two radiofrequency (RF) coil combinations were used for each scanner: body coil alone, and combined body and phased-array head coils. Vendor-specific B1- corrections (PURE/Pre-Scan Normalize/CLEAR) were applied in all head coil cases. Images were transferred offline, corrected for IINU using the MNI N3 algorithm, and normalized. Coefficients of variation (CV=σ/μ) and peak image uniformity (PIU = 1−(Smax−Smin)/(Smax+Smin)) estimates were calculated for one homogeneous phantom slice. Kruskal-Wallis and Wilcoxon matched-pairs tests compared mean MR signal intensities and differences between original and N3 image CV and PIU. Results: Wide variations in both MR image intensity and IINU were observed across scanner vendors, field strengths, and RF coil configurations. Applying the MNI N3 correction for IINU resulted in significant improvements in both CV and PIU (p=0.0115, p=0.0235). However, wide variations in overall image intensity persisted, requiring image normalization to improve consistency across vendors, field strengths, and RF coils. These results indicate that B1- correction routines alone may be insufficient in compensating for IINU and image scaling, warranting additional corrections prior to use of MR images in radiotherapy. 
Conclusions: MR image intensities and IINU vary as a function of scanner vendor, field strength, and RF coil configuration. A two-step strategy consisting of MNI N3 correction followed by normalization was required to improve MR image consistency. Funding provided by Advancing a Healthier Wisconsin.
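The two uniformity metrics defined in the abstract can be written out directly, since their formulas are given there: CV = σ/μ and PIU = 1 − (Smax − Smin)/(Smax + Smin).

```python
import numpy as np

def cv_and_piu(roi):
    """Coefficient of variation and peak image uniformity for a
    homogeneous-phantom ROI given as a numpy array."""
    cv = roi.std() / roi.mean()                     # CV = sigma / mu
    smax, smin = roi.max(), roi.min()
    piu = 1.0 - (smax - smin) / (smax + smin)       # PIU from the signal extremes
    return cv, piu
```

A perfectly uniform ROI gives CV = 0 and PIU = 1; both degrade as intensity nonuniformity grows.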
Bengtsson, Henrik; Jönsson, Göran; Vallon-Christersson, Johan
2004-11-12
Non-linearities in observed log-ratios of gene expressions, also known as intensity dependent log-ratios, can often be accounted for by global biases in the two channels being compared. Any step in a microarray process may introduce such offsets, and in this article we study the biases introduced by the microarray scanner and the image analysis software. By scanning the same spotted oligonucleotide microarray at different photomultiplier tube (PMT) gains, we have identified a channel-specific bias present in two-channel microarray data. For the scanners analyzed it was in the range of 15-25 (out of 65,535). The observed bias was very stable between subsequent scans of the same array although the PMT gain was greatly adjusted. This indicates that the bias does not originate from a step preceding the scanner detector parts. The bias varies slightly between arrays. When comparing estimates based on data from the same array, but from different scanners, we have found that different scanners introduce different amounts of bias. So do various image analysis methods. We propose a scanning protocol and a constrained affine model that allows us to identify and estimate the bias in each channel. Backward transformation removes the bias and brings the channels to the same scale. The result is that systematic effects such as intensity dependent log-ratios are removed, but also that signal densities become much more similar. The average scan, which has a larger dynamic range and greater signal-to-noise ratio than individual scans, can then be obtained. The study shows that microarray scanners may introduce a significant bias in each channel. Such biases must be calibrated for; otherwise systematic effects such as intensity dependent log-ratios will be observed. The proposed scanning protocol and calibration method are simple to use and useful for evaluating scanner biases or for obtaining calibrated measurements with extended dynamic range and better precision. 
The cross-platform R package aroma, which implements all described methods, is available for free from http://www.maths.lth.se/bioinformatics/.
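The affine idea above can be illustrated with a hedged sketch (the authors' constrained model and the aroma implementation are more general than this). If each scan obeys y_i = e + g_i·t for a shared offset e and gain g_i, then scan 2 is an affine function of scan 1: y2 = a + b·y1 with b = g2/g1 and a = e·(1 − b), so the bias can be recovered as e = a/(1 − b).

```python
import numpy as np

def estimate_bias(y1, y2):
    """Estimate the shared channel offset from two scans of the same
    array acquired at different PMT gains, assuming y_i = e + g_i * t."""
    b, a = np.polyfit(y1, y2, 1)   # slope b = g2/g1, intercept a = e*(1 - b)
    return a / (1.0 - b)

# Synthetic check with a known offset e = 20 and gains 1.0 and 2.0:
t = np.linspace(0.0, 1000.0, 200)   # hypothetical true signals
y1 = 20.0 + 1.0 * t
y2 = 20.0 + 2.0 * t
```

On this noise-free example `estimate_bias(y1, y2)` recovers the offset of 20; with real data a robust fit over many spots would replace the plain least-squares line.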
Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervisions to identify target anatomical structures. The final multi-body models represent valuable virtual platforms to clinical diagnostic and treatment planning. PMID:23385416
DLA based compressed sensing for high resolution MR microscopy of neuronal tissue
NASA Astrophysics Data System (ADS)
Nguyen, Khieu-Van; Li, Jing-Rebecca; Radecki, Guillaume; Ciobanu, Luisa
2015-10-01
In this work we present the implementation of compressed sensing (CS) on a high field preclinical scanner (17.2 T) using an undersampling trajectory based on the diffusion limited aggregation (DLA) random growth model. When applied to a library of images this approach performs better than the traditional undersampling based on the polynomial probability density function. In addition, we show that the method is applicable to imaging live neuronal tissues, allowing significantly shorter acquisition times while maintaining the image quality necessary for identifying the majority of neurons via an automatic cell segmentation algorithm.
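The DLA growth model named above can be sketched in miniature (this is a generic diffusion-limited aggregation simulation, not the paper's k-space trajectory construction): random walkers launched near a seeded cluster stick when they touch it, producing the branching pattern that the undersampling mask is based on.

```python
import numpy as np

def dla_mask(size=65, n_particles=150, seed=0):
    """Grow a diffusion-limited aggregate on a 2D grid and return it
    as a boolean mask."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                      # seed the cluster at the centre
    rmax = 0.0                             # current cluster radius
    steps = ((-1, 0), (1, 0), (0, -1), (0, 1))
    stuck = 0
    while stuck < n_particles:
        # launch a walker on a ring just outside the current cluster
        ang = rng.uniform(0.0, 2.0 * np.pi)
        r0 = min(rmax + 3.0, c - 1.0)
        y = c + int(round(r0 * np.sin(ang)))
        x = c + int(round(r0 * np.cos(ang)))
        while True:
            dy, dx = steps[rng.integers(4)]
            y += dy
            x += dx
            rr = np.hypot(y - c, x - c)
            if rr > rmax + 8.0 or not (0 < y < size - 1 and 0 < x < size - 1):
                break                      # wandered too far; relaunch
            if grid[y, x]:
                break                      # stepped onto the cluster; drop walker
            # stick when any 4-neighbour already belongs to the cluster
            if grid[y-1, x] or grid[y+1, x] or grid[y, x-1] or grid[y, x+1]:
                grid[y, x] = True
                rmax = max(rmax, rr)
                stuck += 1
                break
    return grid
```

Each stuck particle adds exactly one grid point, so the mask density (and hence the undersampling factor in a CS application) is controlled by `n_particles`.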
Stratis, Andreas; Zhang, Guozhi; Lopez-Rendon, Xochitl; Politis, Constantinus; Hermans, Robert; Jacobs, Reinhilde; Bogaerts, Ria; Shaheen, Eman; Bosmans, Hilde
2017-09-01
To calculate organ doses and estimate the effective dose, for justification purposes, in patients undergoing orthognathic treatment planning and temporal bone imaging with dental cone beam CT (CBCT) and multidetector CT (MDCT) scanners. The radiation dose to the ICRP reference male voxel phantom was calculated for dedicated orthognathic treatment planning acquisitions via Monte Carlo (MC) simulations in two dental CBCT scanners, Promax 3D Max (Planmeca, FI) and NewTom VGi evo (QR s.r.l, IT), and in the Somatom Definition Flash (Siemens, DE) MDCT scanner. For temporal bone imaging, radiation doses were calculated via MC simulations for a CBCT protocol in NewTom 5G (QR s.r.l, IT) and with the use of a software tool (CT-Expo) for Somatom Force (Siemens, DE). All procedures had been optimized at the acceptance tests of the devices. For orthognathic protocols, dental CBCT scanners deliver lower doses than MDCT scanners. The estimated effective dose (ED) was 0.32 mSv for a normal-resolution operation mode in Promax 3D Max, 0.27 mSv in VGi evo, and 1.18 mSv in the Somatom Definition Flash. For temporal bone protocols, the Somatom Force resulted in an estimated ED of 0.28 mSv, while for NewTom 5G the ED was 0.31 and 0.22 mSv for monolateral and bilateral imaging, respectively. Two clinical exams that can be carried out with either a CBCT or an MDCT scanner were compared in terms of radiation dose. Dental CBCT scanners deliver lower doses for orthognathic patients, whereas for temporal bone procedures the doses were similar.
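Effective-dose figures like those above come from the standard bookkeeping ED = Σ_T w_T·H_T over equivalent organ doses. A minimal sketch, using a small illustrative subset of the ICRP 103 tissue weighting factors (a real calculation uses the complete set, including the remainder tissues):

```python
# Illustrative subset of ICRP Publication 103 tissue weighting factors.
ICRP_WEIGHTS = {
    "red_bone_marrow": 0.12,
    "lung": 0.12,
    "stomach": 0.12,
    "colon": 0.12,
    "thyroid": 0.04,
    "brain": 0.01,
    "salivary_glands": 0.01,
}

def effective_dose(organ_doses_mSv):
    """Weighted sum ED = sum_T w_T * H_T over the organs present in the
    input dict (organ name -> equivalent dose in mSv)."""
    return sum(ICRP_WEIGHTS[organ] * dose
               for organ, dose in organ_doses_mSv.items())
```

In a Monte Carlo dose study the `organ_doses_mSv` values would be the simulated equivalent doses per organ for a given protocol.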
Enabling vendor independent photoacoustic imaging systems with asynchronous laser source
NASA Astrophysics Data System (ADS)
Wu, Yixuan; Zhang, Haichong K.; Boctor, Emad M.
2018-02-01
Channel data acquisition, and synchronization between laser excitation and PA signal acquisition, are two fundamental hardware requirements for photoacoustic (PA) imaging. Unfortunately, most clinical ultrasound scanners provide neither. Therefore, less economical, specialized research platforms are generally used, which hinders a smooth clinical translation of PA imaging. In previous studies, we proposed an algorithm to achieve PA imaging using ultrasound post-beamformed (USPB) RF data instead of channel data. This work focuses on enabling clinical ultrasound scanners to implement PA imaging without requiring synchronization between the laser excitation and PA signal acquisition. Laser synchronization inherently consists of two aspects: frequency and phase information. We synchronize the laser and the ultrasound scanner without communication between them by investigating USPB images of a point-target phantom in two steps. First, frequency information is estimated by solving a nonlinear optimization problem, under the assumption that the segmented wave-front can only be beamformed into a single spot when synchronization is achieved. Second, after making the frequencies of the two systems identical, the phase delay is estimated by optimizing the image quality while varying the phase value. The proposed method is validated through simulation by manually adding both frequency and phase errors, then applying the proposed algorithm to correct the errors and reconstruct PA images. Compared with the ground truth, simulation results indicate that the remaining errors in frequency correction and phase correction are 0.28% and 2.34%, respectively, which affirms the potential of overcoming hardware barriers to PA imaging through a software solution.
NASA Technical Reports Server (NTRS)
1991-01-01
The Reusable Reentry Satellite (RRS) System is composed of the payload segment (PS), vehicle segment (VS), and mission support (MS) segment. This specification establishes the performance, design, development, and test requirements for the RRS Rodent Module (RM).
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2016-06-01
Laser scanners are widely used for the modelling of existing buildings, particularly in the creation of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop them on a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square metres to larger buildings of several thousand square metres. The datasets provide various space configurations and present numerous occluding objects, such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
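The plane-segmentation phase described above is commonly built on a RANSAC-style plane fit; the following is an illustrative sketch of that standard building block (the paper's actual algorithm and thresholds are not specified here).

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, seed=0):
    """Return (normal, d) for the plane n.x = d supported by the most
    inliers among an (N, 3) array of points."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)         # candidate plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                            # degenerate (collinear) sample
        n = n / norm
        d = n @ p1
        inliers = np.abs(points @ n - d) < tol  # distance-to-plane test
        if inliers.sum() > best_inliers:
            best_inliers, best_plane = inliers.sum(), (n, d)
    return best_plane
```

In an indoor-modelling pipeline this would be run repeatedly, removing each detected plane's inliers, to peel off walls, floors and ceilings from the room point cloud.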
Performance evaluation of a high-resolution brain PET scanner using four-layer MPPC DOI detectors.
Watanabe, Mitsuo; Saito, Akinori; Isobe, Takashi; Ote, Kibo; Yamada, Ryoko; Moriya, Takahiro; Omura, Tomohide
2017-08-18
A high-resolution positron emission tomography (PET) scanner, dedicated to brain studies, was developed and its performance was evaluated. A four-layer depth of interaction detector was designed containing five detector units axially lined up per layer board. Each of the detector units consists of a finely segmented (1.2 mm) LYSO scintillator array and an 8 × 8 array of multi-pixel photon counters. Each detector layer has independent front-end and signal processing circuits, and the four detector layers are assembled as a detector module. The new scanner was designed to form a detector ring of 430 mm diameter with 32 detector modules and 168 detector rings with a 1.2 mm pitch. The total crystal number is 655 360. The transaxial and axial field of views (FOVs) are 330 mm in diameter and 201.6 mm, respectively, which are sufficient to measure a whole human brain. The single-event data generated at each detector module were transferred to the data acquisition servers through optical fiber cables. The single-event data from all detector modules were merged and processed to create coincidence event data in on-the-fly software in the data acquisition servers. For image reconstruction, the high-resolution mode (HR-mode) used a 1.2 mm² crystal segment size and the high-speed mode (HS-mode) used a 4.8 mm² size by collecting 16 crystal segments of 1.2 mm each to reduce the computational cost. The performance of the brain PET scanner was evaluated. For the intrinsic spatial resolution of the detector module, coincidence response functions of the detector module pair, which faced each other at various angles, were measured by scanning a 0.25 mm diameter ²²Na point source. The intrinsic resolutions were obtained with 1.08 mm full width at half-maximum (FWHM) and 1.25 mm FWHM on average at 0 and 22.5 degrees in the first layer pair, respectively. The system spatial resolutions were less than 1.0 mm FWHM throughout the whole FOV, using a list-mode dynamic RAMLA (LM-DRAMA). 
The system sensitivity was 21.4 cps kBq⁻¹ as measured using an ¹⁸F line source aligned with the center of the transaxial FOV. High count rate capability was evaluated using a cylindrical phantom (20 cm diameter × 70 cm length), resulting in 249 kcps in true and 27.9 kcps at 11.9 kBq ml⁻¹ at the peak count in a noise equivalent count rate (NECR_2R). Single-event data acquisition and on-the-fly software coincidence detection performed well, exceeding 25 Mcps and 2.3 Mcps for single and coincidence count rates, respectively. Using phantom studies, we also demonstrated its imaging capabilities by means of a 3D Hoffman brain phantom and an ultra-micro hot-spot phantom. The images obtained were of acceptable quality for high-resolution determination. As clinical and pre-clinical studies, we imaged brains of a human and of small animals.
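The "NECR_2R" figure quoted above follows the usual noise-equivalent count rate definition with the full 2R randoms term (assumed here from the naming convention, since the abstract gives no formula): NECR = T²/(T + S + 2R) for true, scattered, and random coincidence rates.

```python
def necr_2r(trues, scatters, randoms):
    """Noise-equivalent count rate with the 2R randoms term.
    All rates in the same units (e.g. kcps)."""
    return trues ** 2 / (trues + scatters + 2.0 * randoms)
```

NECR equals the true rate only when scatter and randoms vanish; as they grow, NECR falls below T, which is why it peaks at a finite activity concentration.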
Performance evaluation of a high-resolution brain PET scanner using four-layer MPPC DOI detectors
NASA Astrophysics Data System (ADS)
Watanabe, Mitsuo; Saito, Akinori; Isobe, Takashi; Ote, Kibo; Yamada, Ryoko; Moriya, Takahiro; Omura, Tomohide
2017-09-01
A high-resolution positron emission tomography (PET) scanner, dedicated to brain studies, was developed and its performance was evaluated. A four-layer depth of interaction detector was designed containing five detector units axially lined up per layer board. Each of the detector units consists of a finely segmented (1.2 mm) LYSO scintillator array and an 8 × 8 array of multi-pixel photon counters. Each detector layer has independent front-end and signal processing circuits, and the four detector layers are assembled as a detector module. The new scanner was designed to form a detector ring of 430 mm diameter with 32 detector modules and 168 detector rings with a 1.2 mm pitch. The total crystal number is 655 360. The transaxial and axial field of views (FOVs) are 330 mm in diameter and 201.6 mm, respectively, which are sufficient to measure a whole human brain. The single-event data generated at each detector module were transferred to the data acquisition servers through optical fiber cables. The single-event data from all detector modules were merged and processed to create coincidence event data in on-the-fly software in the data acquisition servers. For image reconstruction, the high-resolution mode (HR-mode) used a 1.2 mm2 crystal segment size and the high-speed mode (HS-mode) used a 4.8 mm2 size by collecting 16 crystal segments of 1.2 mm each to reduce the computational cost. The performance of the brain PET scanner was evaluated. For the intrinsic spatial resolution of the detector module, coincidence response functions of the detector module pair, which faced each other at various angles, were measured by scanning a 0.25 mm diameter 22Na point source. The intrinsic resolutions were obtained with 1.08 mm full width at half-maximum (FWHM) and 1.25 mm FWHM on average at 0 and 22.5 degrees in the first layer pair, respectively. The system spatial resolutions were less than 1.0 mm FWHM throughout the whole FOV, using a list-mode dynamic RAMLA (LM-DRAMA). 
The system sensitivity was 21.4 cps kBq⁻¹, measured using an ¹⁸F line source aligned with the center of the transaxial FOV. High count-rate capability was evaluated using a cylindrical phantom (20 cm diameter × 70 cm length), yielding a peak true count rate of 249 kcps and a peak noise equivalent count rate (NECR_2R) of 27.9 kcps at 11.9 kBq ml⁻¹. Single-event data acquisition and on-the-fly software coincidence detection performed well, exceeding 25 Mcps and 2.3 Mcps for single and coincidence count rates, respectively. Using phantom studies, we also demonstrated its imaging capabilities by means of a 3D Hoffman brain phantom and an ultra-micro hot-spot phantom. The images obtained were of acceptable quality for high-resolution determination. As clinical and pre-clinical studies, we imaged brains of a human and of small animals.
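The NECR_2R figure of merit quoted above follows the standard definition NEC = T² / (T + S + 2R), where T, S and R are the true, scattered and random coincidence rates. A minimal sketch; the scatter and randoms rates passed in below are illustrative assumptions, not the scanner's measured values:

```python
def necr_2r(trues, scatters, randoms):
    """Noise equivalent count rate, 2R variant: NEC = T^2 / (T + S + 2R).

    All rates in counts per second; returns the NECR in the same units.
    """
    total = trues + scatters + 2.0 * randoms
    return trues ** 2 / total if total > 0 else 0.0

# Peak trues of 249 kcps from the abstract, with assumed S and R rates:
print(necr_2r(249e3, 180e3, 120e3))
```

With no scatter and no randoms the NECR reduces to the true count rate itself, which is a quick sanity check on the formula.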
Subcortical structure segmentation using probabilistic atlas priors
NASA Astrophysics Data System (ADS)
Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido
2007-03-01
The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as the lateral ventricles, putamen, caudate, hippocampus, pallidus and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, alongside a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity-inhomogeneity corrected, skull stripped and intensity calibrated to the atlas. Then the atlas image is registered to the subject image using an affine followed by a deformable registration matching the gray-level intensity. Finally, the registration transformation is applied to the probabilistic maps of each structure, which are then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for the basal ganglia. The intraclass correlation coefficient computed on these same datasets shows a good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variance of less than 2 percent over the whole dataset.
Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, though they still show appropriate Dice overlap coefficients.
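The Dice overlap metric used in the validation above has a compact definition, 2|A ∩ B| / (|A| + |B|) for two binary masks. A minimal sketch; the toy masks are made up for illustration:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap: 2|A ∩ B| / (|A| + |B|) for two boolean masks."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D "masks": 3 voxels overlap out of 4 in each, so Dice = 0.75
auto   = np.array([1, 1, 1, 1, 0, 0], bool)
manual = np.array([0, 1, 1, 1, 1, 0], bool)
print(dice_coefficient(auto, manual))  # → 0.75
```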
Strauss, Keith J
2014-10-01
The management of image quality and radiation dose during pediatric CT scanning is dependent on how well one manages the radiographic techniques as a function of the type of exam, type of CT scanner, and patient size. The CT scanner's display of expected CT dose index volume (CTDIvol) after the projection scan provides the operator with a powerful tool prior to the patient scan to identify and manage appropriate CT techniques, provided the department has established appropriate diagnostic reference levels (DRLs). This paper provides a step-by-step process that allows the development of DRLs as a function of type of exam, of actual patient size and of the individual radiation output of each CT scanner in a department. Abdomen, pelvis, thorax and head scans are addressed. Patient sizes from newborns to large adults are discussed. The method addresses every CT scanner regardless of vendor, model or vintage. We cover adjustments to techniques to manage the impact of iterative reconstruction and provide a method to handle all available voltages other than 120 kV. This level of management of CT techniques is necessary to properly monitor radiation dose and image quality during pediatric CT scans.
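The size-dependent DRL comparison described above can be sketched as a lookup: interpolate the reference level at the patient's effective diameter and compare it with the scanner's displayed CTDIvol before scanning. The table values, protocol and interpolation scheme below are illustrative assumptions, not published reference levels:

```python
import numpy as np

# Hypothetical DRL table: CTDIvol (mGy) vs effective patient diameter (cm)
# for an abdomen protocol; the numbers are placeholders, not published DRLs.
diameters_cm = np.array([10, 15, 20, 25, 30, 35])
drl_ctdivol  = np.array([1.5, 2.5, 4.0, 6.5, 10.0, 15.0])

def within_drl(displayed_ctdivol, patient_diameter_cm):
    """Compare the scanner's displayed CTDIvol against a size-interpolated DRL."""
    limit = np.interp(patient_diameter_cm, diameters_cm, drl_ctdivol)
    return displayed_ctdivol <= limit, limit

# Displayed 5.0 mGy for a child with a 22.5 cm effective diameter:
ok, limit = within_drl(5.0, 22.5)
print(ok, round(float(limit), 2))
```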
Stolin, Alexander V.; Martone, Peter F.; Jaliparthi, Gangadhar; Raylman, Raymond R.
2017-01-01
Positron emission tomography (PET) scanners designed for imaging of small animals have transformed translational research by reducing the necessity to invasively monitor physiology and disease progression. Virtually all of these scanners are based on the use of pixelated detector modules arranged in rings. This design, while generally successful, has some limitations. Specifically, use of discrete detector modules to construct PET scanners reduces detection sensitivity and can introduce artifacts in reconstructed images, requiring the use of correction methods. To address these challenges, and facilitate measurement of photon depth-of-interaction in the detector, we investigated a small animal PET scanner (called AnnPET) based on a monolithic annulus of scintillator. The scanner was created by placing 12 flat facets around the outer surface of the scintillator to accommodate placement of silicon photomultiplier arrays. Its performance characteristics were explored using Monte Carlo simulations and sections of the NEMA NU4-2008 protocol. Results from this study revealed that AnnPET’s reconstructed spatial resolution is predicted to be ∼1 mm full width at half maximum in the radial, tangential, and axial directions. Peak detection sensitivity is predicted to be 10.1%. Images of simulated phantoms (mini-hot rod and mouse whole body) yielded promising results, indicating the potential of this system for enhancing PET imaging of small animals. PMID:28097210
An RF dosimeter for independent SAR measurement in MRI scanners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Di; Bottomley, Paul A.; El-Sharkawy, AbdEl-Monem M.
2013-12-15
Purpose: The monitoring and management of radio frequency (RF) exposure is critical for ensuring magnetic resonance imaging (MRI) safety. Commercial MRI scanners can overestimate specific absorption rates (SAR) and improperly restrict clinical MRI scans or the application of new MRI sequences, while underestimation of SAR can lead to tissue heating and thermal injury. Accurate scanner-independent RF dosimetry is essential for measuring actual exposure when SAR is critical for ensuring regulatory compliance and MRI safety, for establishing RF exposure while evaluating interventional leads and devices, and for routine MRI quality assessment by medical physicists. However, at present there are no scanner-independent SAR dosimeters. Methods: An SAR dosimeter with an RF transducer comprises two orthogonal, rectangular copper loops and a spherical MRI phantom. The transducer is placed in the magnet bore and calibrated to approximate the resistive loading of the scanner's whole-body birdcage RF coil for human subjects in Philips, GE and Siemens 3 tesla (3T) MRI scanners. The transducer loop reactances are adjusted to minimize interference with the transmit RF field (B₁) at the MRI frequency. Power from the RF transducer is sampled with a high-dynamic-range power monitor and recorded on a computer. The deposited power is calibrated and tested on eight different MRI scanners. Whole-body absorbed power vs weight and body mass index (BMI) is measured directly on 26 subjects. Results: A single linear calibration curve sufficed for RF dosimetry at 127.8 MHz on three different Philips and three GE 3T MRI scanners. An RF dosimeter operating at 123.2 MHz on two Siemens 3T scanners required a separate transducer and a slightly different calibration curve. Measurement accuracy was ∼3%. With the torso landmarked at the xiphoid, human adult whole-body absorbed power varied approximately linearly with patient weight and BMI. This indicates that whole-body torso SAR is on average independent of the imaging subject, albeit with fluctuations. Conclusions: Our 3T RF dosimeter and transducers accurately measure RF exposure in body-equivalent loads and provide scanner-independent assessments of whole-body RF power deposition for establishing safety compliance, useful for MRI sequence and device testing.
Complementary equipment for controlling multiple laser beams on single scanner MPLSM systems
NASA Astrophysics Data System (ADS)
Helm, P. Johannes; Nase, Gabriele; Heggelund, Paul; Reppen, Trond
2010-02-01
Multi-photon laser scanning microscopy (MPLSM) now stands as one of the most powerful experimental tools in biology. Specifically, MPLSM-based in-vivo studies of structures and processes in the brains of small rodents, and imaging in brain slices, have led to considerable progress in the field of neuroscience. Equipment allowing independent control of two laser beams, one for imaging and one for photochemical manipulation, strongly enhances any MPLSM platform. Some industrial MPLSM producers have introduced double-scanner options in MPLSM systems. Here, we describe the upgrade of a single-scanner MPLSM system with equipment suitable for independently controlling the beams of two Titanium:Sapphire lasers. The upgrade is compatible with essentially any current MPLSM system, commercial or self-assembled. Making use of the pixel-clock, frame-active and line-active signals provided by the scanner electronics of the MPLSM, the user can, by means of an external unit, select individual pixels or rectangular ROIs within the field of view of an overview scan to be exposed, or not exposed, to the beam(s) of one or two lasers during subsequent scans. The switching of the laser beams during the subsequent scans is performed by electro-optical modulators (EOMs). While this system does not provide the flexibility of two-scanner modules, it strongly enhances the experimental possibilities of one-scanner systems, provided a second laser and two independent EOMs are available. Even multi-scanner systems can profit from this development, which can be used to independently control any number of laser beams.
Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.
Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F
2012-04-01
This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast to noise ratio (CNR) enhancement, or reformatting to standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual echo T2 and proton density (PD) images at term equivalent (38-42 weeks' gestational age). 15 hippocampi were manually traced four times on ten infant images by 2 independent raters on the original T2 image, as well as images processed by: a) combining T2 and PD images (T2-PD) to enhance CNR; then b) reformatting T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICC) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). Original T2 volumes' ICC was 0.87 for rater 1 and 0.84 for rater 2, whereas T2-PD images' ICC was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can improve CNR and hence reliability of hippocampal segmentation in neonate MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation for various brain structures or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
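The CNR gain from combining the T2 and PD echoes can be sketched numerically: adding two images in which the structure contrast is coherent but the noise is independent raises the contrast twice as fast as the noise, improving CNR by roughly √2. The image sizes, intensities and noise levels below are made-up stand-ins, not the study's acquisition parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
tissue_mask = np.zeros((64, 64), bool)
tissue_mask[20:40, 20:40] = True          # toy "hippocampus" region
bg_mask = ~tissue_mask

def cnr(image):
    """Contrast-to-noise ratio between the structure and its background."""
    return abs(image[tissue_mask].mean() - image[bg_mask].mean()) / image[bg_mask].std()

# Two echoes with independent noise but the same +30 structure contrast:
t2 = rng.normal(100, 10, (64, 64)) + 30 * tissue_mask
pd = rng.normal(120, 10, (64, 64)) + 30 * tissue_mask
combined = t2 + pd   # contrast adds coherently, noise only grows by sqrt(2)

print(round(cnr(t2), 2), round(cnr(combined), 2))
```

The combined image's CNR exceeds the single-echo CNR, mirroring the 17% improvement reported for the T2-PD images (the exact gain depends on how correlated the contrasts and noise are in practice).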
Learning-Based Object Identification and Segmentation Using Dual-Energy CT Images for Security.
Martin, Limor; Tuysuzoglu, Ahmet; Karl, W Clem; Ishwar, Prakash
2015-11-01
In recent years, baggage screening at airports has included the use of dual-energy X-ray computed tomography (DECT), an advanced technology for nondestructive evaluation. The main challenge remains to reliably find and identify threat objects in the bag from DECT data. This task is particularly hard due to the wide variety of objects, the high clutter, and the presence of metal, which causes streaks and shading in the scanner images. Image noise and artifacts are generally much more severe than in medical CT and can lead to splitting of objects and inaccurate object labeling. The conventional approach performs object segmentation and material identification in two decoupled processes. Dual-energy information is typically not used for the segmentation, and object localization is not explicitly used to stabilize the material parameter estimates. We propose a novel learning-based framework for joint segmentation and identification of objects directly from volumetric DECT images, which is robust to streaks, noise and variability due to clutter. We focus on segmenting and identifying a small set of objects of interest with characteristics that are learned from training images, and consider everything else as background. We include data weighting to mitigate metal artifacts and incorporate an object boundary field to reduce object splitting. The overall formulation is posed as a multilabel discrete optimization problem and solved using an efficient graph-cut algorithm. We test the method on real data and show its potential for producing accurate labels of the objects of interest without splits in the presence of metal and clutter.
NASA Astrophysics Data System (ADS)
Maramraju, Sri Harsha; Smith, S. David; Rescia, Sergio; Stoll, Sean; Budassi, Michael; Vaska, Paul; Woody, Craig; Schlyer, David
2012-10-01
We previously integrated a magnetic resonance-(MR-) compatible small-animal positron emission tomograph (PET) in a Bruker 9.4 T microMRI system to obtain simultaneous PET/MR images of a rat's brain and of a gated mouse heart. To minimize electromagnetic interactions in our MR-PET system, viz., the effect of radiofrequency (RF) pulses on the PET, we tested our modular front-end PET electronics with various shield configurations, including a solid aluminum shield and one of thin segmented layers of copper. We noted that the gradient-echo RF pulses did not affect PET data when the PET electronics were shielded with either the aluminum or the segmented copper shields. However, there were spurious counts in the PET data resulting from high-intensity fast spin-echo RF pulses. Compared to the unshielded condition, they were attenuated effectively by the aluminum shield (∼97%) and the segmented copper shield (∼90%). We noted a decline in the noise rates as a function of increasing PET energy-discriminator threshold. In addition, we observed a notable decrease in the signal-to-noise ratio in spin-echo MR images with the segmented copper shields in place; however, this did not substantially degrade the quality of the MR images we obtained. Our results demonstrate that by surrounding a compact PET scanner with thin layers of segmented copper shields and integrating it inside a 9.4 T MR system, we can mitigate the impact of the RF on PET, while acquiring good-quality MR images.
Implementation of relational data base management systems on micro-computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, C.L.
1982-01-01
This dissertation describes an implementation of a relational data base management system on a microcomputer. A specific floppy-disk-based hardware platform called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains sub-systems such as I/O, file management, virtual memory management, query system, B-tree management, scanner, command interpreter, expression compiler, garbage collection, linked list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) It is highly modularized. (2) The system is physically segmented into 16 logically independent, overlayable segments, in a way such that a minimal amount of memory is needed at execution time. (3) A virtual memory system is simulated that provides the system with seemingly unlimited memory space. (4) A language translator is applied to recognize user requests in the query language; its code generator produces compact code for the execution of UPDATE, DELETE, and QUERY commands. (5) A complete set of basic functions needed for on-line data base manipulations is provided through a friendly query interface. (6) Dependency on the environment (both software and hardware) is minimized as much as possible, so that it would be easy to port the system to other computers. (7) Each relation is simulated as a sequential file. It is intended to be a highly efficient, single-user system suited for use by small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfactory results have indeed been achieved.
NASA Astrophysics Data System (ADS)
Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer
2017-03-01
Computed tomography is the primary modality of choice for assessing the stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacities) over three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithm and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with radiodensities spanning nonsolid (-800 HU and -630 HU) and solid (-10 HU) values, sizes of 5 mm and 10 mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol typical of screening (1.7 mGy) and sub-screening (0.6 mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radiodensity was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not substantially affect the accuracy of measurements; however, measurement method affected repeatability, with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.
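The repeatability coefficients quoted above are commonly computed as RC = 1.96 · √2 · wSD, where wSD is the pooled within-subject standard deviation of repeated measurements. A hedged sketch of one common definition; the repeat volume measurements below are invented, not the study's data:

```python
import numpy as np

def repeatability_coefficient(replicates):
    """RC = 1.96 * sqrt(2) * within-subject SD.

    `replicates` is a (subjects, repeats) array of repeated measurements.
    """
    reps = np.asarray(replicates, float)
    wvar = reps.var(axis=1, ddof=1).mean()    # pooled within-subject variance
    return 1.96 * np.sqrt(2.0 * wvar)

# Hypothetical repeat volume measurements (mm^3) of three nodules:
vols = np.array([[52.0, 53.1, 51.6],
                 [98.5, 97.2, 99.0],
                 [21.3, 20.8, 21.1]])
rc = repeatability_coefficient(vols)
print(round(100 * rc / vols.mean(), 1), "% of mean volume")
```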
Object recognition and pose estimation of planar objects from range data
NASA Technical Reports Server (NTRS)
Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael
1994-01-01
The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. 
This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and noise.
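The planar-surface extraction step that the recognition algorithms above depend on can be sketched as a least-squares plane fit: the smallest singular value of the centered point cloud measures deviation from planarity, and the corresponding right singular vector gives the plane normal used for pose estimation. The synthetic range patch below is an illustrative assumption, not EVAHR sensor data:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points via SVD.

    Returns (unit_normal, centroid, residual): the best-fit plane is
    dot(n, x - centroid) = 0, and `residual` (the smallest singular
    value) measures deviation from planarity.
    """
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, sing, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return vt[-1], centroid, sing[-1]

# Synthetic range-image patch lying exactly on the plane z = 2x + 3y + 1
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
pts = np.column_stack([x.ravel(), y.ravel(), 2 * x.ravel() + 3 * y.ravel() + 1])
normal, centroid, residual = fit_plane(pts)
print(residual < 1e-9)   # → True: the patch is planar
```

Thresholding the residual is one simple way to decide whether an observed surface topology is planar or curved before attempting recognition.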
Lemon, W C; Levine, R B
1997-06-01
During the metamorphosis of Manduca sexta the larval nervous system is reorganized to allow the generation of behaviors that are specific to the pupal and adult stages. In some instances, metamorphic changes in neurons that persist from the larval stage are segment-specific and lead to expression of segment-specific behavior in later stages. At the larval-pupal transition, the larval abdominal bending behavior, which is distributed throughout the abdomen, changes to the pupal gin trap behavior, which is restricted to three abdominal segments. This study suggests that the neural circuit that underlies larval bending undergoes segment-specific modifications to produce the segmentally restricted gin trap behavior. We show, however, that non-gin trap segments go through a developmental change similar to that seen in gin trap segments. Pupal-specific motor patterns are produced by stimulation of sensory neurons in abdominal segments that do not have gin traps and cannot produce the gin trap behavior. In particular, sensory stimulation in non-gin trap pupal segments evokes a motor response that is faster than the larval response and that displays the triphasic contralateral-ipsilateral-contralateral activity pattern that is typical of the pupal gin trap behavior. Despite the alteration of reflex activity in all segments, developmental changes in sensory neuron morphology are restricted to those segments that form gin traps. In non-gin trap segments, persistent sensory neurons do not expand their terminal arbors, as do sensory neurons in gin trap segments, yet are capable of eliciting gin trap-like motor responses.
Robust keyword retrieval method for OCRed text
NASA Astrophysics Data System (ADS)
Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu
2011-01-01
Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, the insertion of noise characters and dropping of characters in the keyword retrieval enables robustness against character segmentation errors, and character substitution in the keyword of the recognition candidate for each character in OCR or any other character enables robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.
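The idea of tolerating both character segmentation and recognition errors can be sketched with a bounded edit-distance search, which allows a fixed budget of insertions, deletions and substitutions per keyword occurrence. This is a simplified stand-in for the authors' candidate-substitution method, and the example strings are invented:

```python
def fuzzy_find(keyword, text, max_edits=1):
    """Return start positions where `keyword` matches `text` within
    `max_edits` character insertions, deletions, or substitutions
    (a stand-in for OCR segmentation and recognition errors)."""
    hits = []
    for start in range(len(text)):
        # prev[i] = edit distance between keyword[:i] and the text consumed so far
        prev = list(range(len(keyword) + 1))
        for ch in text[start:start + len(keyword) + max_edits]:
            cur = [prev[0] + 1]
            for i, kch in enumerate(keyword):
                cur.append(min(prev[i] + (ch != kch),   # substitution / match
                               prev[i + 1] + 1,         # extra text character
                               cur[i] + 1))             # missing text character
            if cur[-1] <= max_edits:
                hits.append(start)
                break
            prev = cur
    return sorted(set(hits))

# "scamner" is one recognition error away from "scanner":
print(fuzzy_find("scanner", "the scamner output"))
```

Raising `max_edits` trades precision for recall, mirroring the recall/precision trade-off reported in the abstract.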
Andronikou, Savvas; Simpson, Ewan; Klemm, Maciej; Vedajallam, Schadie; Chacko, Anith; Thai, Ngoc Jade
2018-05-26
3D printing has been used in several medical applications. There are no reports, however, of 3D printing of the brain in children for demonstrating pathology to non-medical professionals such as lawyers. We printed 3D models of the paediatric brain from volumetric MRI in cases of severe and moderate hypoxic ischaemic injury as well as a normal age-matched control, as follows: MRI DICOM data was converted to NIfTI (Neuroimaging Informatics Technology Initiative) format; segmentation of the brain into CSF, grey, and white matter was performed; the segmented data was converted to STL format and printed on a commercially available 3D printer. The characteristic volume loss and surface features of hypoxic ischaemic injury are visible in these models, which could be of value in communicating the nature and severity of such an insult in a court setting, as the models can be handled and viewed from up close.
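The tissue-segmentation step in the pipeline above can be caricatured as an intensity-threshold classifier over the brain volume. The thresholds, intensities and toy volume below are made-up stand-ins; production pipelines use dedicated tissue-classification tools rather than fixed thresholds:

```python
import numpy as np

def segment_tissues(volume, csf_max=40, gm_max=100):
    """Label a T1-like volume into CSF / grey / white matter by intensity.

    0 = background, 1 = CSF, 2 = grey matter, 3 = white matter.
    """
    labels = np.zeros(volume.shape, dtype=np.uint8)
    labels[(volume > 0) & (volume <= csf_max)] = 1        # CSF
    labels[(volume > csf_max) & (volume <= gm_max)] = 2   # grey matter
    labels[volume > gm_max] = 3                           # white matter
    return labels

# Toy 8x8x8 "brain": a bright cube with a dimmer core
vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 120     # white-matter-like intensity
vol[3:5, 3:5, 3:5] = 80      # grey-matter-like core
labels = segment_tissues(vol)
print(np.bincount(labels.ravel()))   # voxel counts per tissue class
```

Each label map would then be meshed (e.g. by marching cubes) and exported to STL for printing.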
Spatial resolution limits for the isotropic-3D PET detector X’tal cube
NASA Astrophysics Data System (ADS)
Yoshida, Eiji; Tashima, Hideaki; Hirano, Yoshiyuki; Inadama, Naoko; Nishikido, Fumihiko; Murayama, Hideo; Yamaya, Taiga
2013-11-01
Positron emission tomography (PET) has become a popular imaging method in metabolism, neuroscience, and molecular imaging. For dedicated human brain and small animal PET scanners, high spatial resolution is needed to visualize small objects. To improve the spatial resolution, we are developing the X’tal cube, which is our new PET detector to achieve isotropic 3D positioning detectability. We have shown that the X’tal cube can achieve 1 mm³ uniform crystal identification performance with the Anger-type calculation even at the block edges. We plan to develop the X’tal cube with even smaller 3D grids for sub-millimeter crystal identification. In this work, we investigate spatial resolution of a PET scanner based on the X’tal cube using Monte Carlo simulations for predicting resolution performance in smaller 3D grids. For spatial resolution evaluation, a point source emitting 511 keV photons was simulated by GATE for all physical processes involved in emission and interaction of positrons. We simulated two types of animal PET scanners. The first PET scanner had a detector ring 14.6 cm in diameter composed of 18 detectors. The second PET scanner had a detector ring 7.8 cm in diameter composed of 12 detectors. After the GATE simulations, we converted the interacting 3D position information to digitalized positions for realistic segmented crystals. We simulated several X’tal cubes with cubic crystals from (0.5 mm)³ to (2 mm)³ in size. Also, for evaluating the effect of DOI resolution, we simulated several X’tal cubes with crystal thicknesses from 0.5 mm to 9 mm. We showed that sub-millimeter spatial resolution was possible using cubic crystals smaller than (1.0 mm)³ even with the assumed physical processes. Also, the weighted average spatial resolutions of both PET scanners with (0.5 mm)³ cubic crystals were 0.53 mm (14.6 cm ring diameter) and 0.48 mm (7.8 cm ring diameter).
For the 7.8 cm ring diameter, spatial resolution with 0.5 × 0.5 × 1.0 mm³ crystals was improved 39% relative to the (1 mm)³ cubic crystals. On the other hand, spatial resolution with (0.5 mm)³ cubic crystals was improved 47% relative to the (1 mm)³ cubic crystals. The X’tal cube promises better spatial resolution for the 3D crystal block with isotropic resolution.
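The spatial resolutions quoted above are FWHM values of reconstructed point-source profiles. Measuring FWHM from a sampled profile is straightforward with linear interpolation at the half-maximum crossings; the Gaussian profile below is an illustrative stand-in for a simulated point-spread function:

```python
import numpy as np

def fwhm(profile, pitch_mm):
    """Full width at half maximum of a sampled point-spread profile,
    interpolating linearly at the half-maximum crossings."""
    y = np.asarray(profile, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    # fractional crossing positions on each side of the peak
    l = left - (y[left] - half) / (y[left] - y[left - 1])
    r = right + (y[right] - half) / (y[right] - y[right + 1])
    return (r - l) * pitch_mm

# Gaussian with sigma = 0.4 mm sampled at a 0.1 mm pitch;
# FWHM should come out near 2.355 * sigma ≈ 0.94 mm.
x = np.arange(-3, 3, 0.1)
prof = np.exp(-x ** 2 / (2 * 0.4 ** 2))
print(round(fwhm(prof, 0.1), 3))
```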
Combining population and patient-specific characteristics for prostate segmentation on 3D CT images
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei
2016-03-01
Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and because patient-specific learning may not perform well for different patients, we combine population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to estimate the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
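The combination of population and patient-specific knowledge can be sketched as a similarity-weighted blend of per-pixel probabilities. This is a simplified stand-in for the paper's model-similarity term, and the weight and probability values below are made up:

```python
import numpy as np

def combine_probabilities(p_population, p_patient, similarity):
    """Blend population and patient-specific prostate probabilities.

    `similarity` in [0, 1] weights how far the population model
    applies to this patient; the result is a per-pixel probability.
    """
    return similarity * np.asarray(p_population, float) + \
           (1.0 - similarity) * np.asarray(p_patient, float)

p_pop = np.array([0.9, 0.2, 0.6])   # population model output per pixel
p_pat = np.array([0.7, 0.4, 0.8])   # patient-specific model output
print(combine_probabilities(p_pop, p_pat, 0.25))
```

Thresholding the blended probability (e.g. at 0.5) would give the binary prostate mask prior to surface smoothing.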
Shepard, Lauren; Sommer, Kelsey; Izzo, Richard; Podgorsak, Alexander; Wilson, Michael; Said, Zaid; Rybicki, Frank J; Mitsouras, Dimitrios; Rudin, Stephen; Angel, Erin; Ionita, Ciprian N
2017-02-11
Accurate patient-specific phantoms for device testing or endovascular treatment planning can be 3D printed. We expand the applicability of this approach to cardiovascular disease, in particular, for CT-geometry-derived benchtop measurements of fractional flow reserve (FFR), the reference standard for determining the significance of individual coronary artery atherosclerotic lesions. Coronary CT angiography (CTA) images during a single heartbeat were acquired with a 320 × 0.5 mm detector row scanner (Toshiba Aquilion ONE). These coronary CTA images were used to create 4 patient-specific cardiovascular models with various grades of stenosis: severe, >75% (n=1); moderate, 50-70% (n=1); and mild, <50% (n=2). DICOM volumetric images were segmented using a 3D workstation (Vitrea, Vital Images); the output was used to generate STL files (using AutoDesk Meshmixer), and further processed to create 3D printable geometries for flow experiments. Multi-material printed models (Stratasys Connex3) were connected to a programmable pulsatile pump, and the pressure was measured proximal and distal to the stenosis using pressure transducers. Compliance chambers were used before and after the model to modulate the pressure wave. A flow sensor was used to ensure flow rates within physiologically reported values. 3D-model-based FFR measurements correlated well with stenosis severity. FFR measurements for each stenosis grade were: 0.8 severe, 0.7 moderate and 0.88 mild. 3D printed models of patient-specific coronary arteries allow for accurate benchtop diagnosis of FFR. This approach can be used as a future diagnostic tool or for testing CT image-based FFR methods.
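Benchtop FFR reduces to the ratio of mean distal to mean proximal pressure across the stenosis. The pressure values in the sketch below are chosen to reproduce the reported FFR grades and are not the authors' raw transducer measurements:

```python
def fractional_flow_reserve(p_distal_mmhg, p_proximal_mmhg):
    """FFR ≈ mean distal pressure / mean proximal pressure (hyperemic flow)."""
    return p_distal_mmhg / p_proximal_mmhg

# Illustrative pressures (mmHg) consistent with the reported FFR grades:
for grade, (p_dist, p_prox) in {"severe":   (72.0, 90.0),
                                "moderate": (63.0, 90.0),
                                "mild":     (79.2, 90.0)}.items():
    print(grade, round(fractional_flow_reserve(p_dist, p_prox), 2))
```

An FFR below about 0.80 is conventionally taken as hemodynamically significant, which is why the transducer placement proximal and distal to the printed stenosis is the critical measurement.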
Schlaffke, Lara; Rehmann, Robert; Froeling, Martijn; Kley, Rudolf; Tegenthoff, Martin; Vorgerd, Matthias; Schmidt-Wilcke, Tobias
2017-10-01
To investigate to what extent inter- and intramuscular variations in diffusion parameters of human calf muscles can be explained by age, gender, muscle location, and body mass index (BMI) in a specific age group (20-35 years). Whole calf muscles of 18 healthy volunteers were evaluated. Magnetic resonance imaging (MRI) was performed using a 3T scanner and a 16-channel Torso XL coil. Diffusion-weighted images were acquired to perform fiber tractography and diffusion tensor imaging (DTI) analysis for each muscle of both legs. Fiber tractography was used to separate seven lower-leg muscles. Associations between DTI parameters and confounds were evaluated. All muscles were additionally separated into seven identical segments along the z-axis to evaluate intramuscular differences in diffusion parameters. Fractional anisotropy (FA) and mean diffusivity (MD) were obtained for each muscle with low standard deviations (SDs) (SD_FA: 0.01-0.02; SD_MD: 0.07-0.14 × 10⁻³). We found significant differences in FA values of the tibialis anterior (AT) and extensor digitorum longus (EDL) muscles between men and women for whole-muscle FA (two-sample t-tests; AT: P = 0.0014; EDL: P = 0.0004). We showed significant intramuscular differences in diffusion parameters between adjacent segments in most calf muscles (P < 0.001). Whereas muscle insertions showed higher SDs (0.03-0.06) than muscle bellies (0.01-0.03), no relationships between FA or MD and age or BMI were found. Inter- and intramuscular variations in diffusion parameters of the calf were shown, which are not related to age or BMI in this age group. Differences between muscle belly and insertion should be considered when interpreting datasets that do not include whole muscles. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:1137-1148. © 2017 International Society for Magnetic Resonance in Medicine.
Computer-aided diagnosis of liver tumors on computed tomography images.
Chang, Chin-Chen; Chen, Hong-Hao; Chang, Yeun-Chung; Yang, Ming-Yang; Lo, Chung-Ming; Ko, Wei-Chun; Lee, Yee-Fan; Liu, Kao-Lang; Chang, Ruey-Feng
2017-07-01
Liver cancer is the tenth most common cancer in the USA, and its incidence has been increasing for several decades. Early detection, diagnosis, and treatment of the disease are very important. Computed tomography (CT) is one of the most common and robust imaging techniques for the detection of liver cancer. CT scanners can provide multiple-phase sequential scans of the whole liver. In this study, we proposed a computer-aided diagnosis (CAD) system to diagnose liver cancer using features of tumors obtained from multiphase CT images. A total of 71 histologically proven liver tumors, including 49 benign and 22 malignant lesions, were evaluated with the proposed CAD system to assess its performance. Tumors were identified by the user and then segmented using a region-growing algorithm. After tumor segmentation, three kinds of features were obtained for each tumor: texture, shape, and kinetic curve. Texture was quantified using three-dimensional (3-D) texture data of the tumor based on the grey-level co-occurrence matrix (GLCM). Compactness, margin, and an elliptic model were used to describe the 3-D shape of the tumor. The kinetic curve was established from each phase of the tumor and represented as variations in density between phases. Backward elimination was used to select the best combination of features, and binary logistic regression analysis was used to classify the tumors with leave-one-out cross-validation. The accuracy and sensitivity for texture were 71.82% and 68.18%, respectively, which were better than those for shape and kinetic curve at matched specificity. Combining all of the features achieved the highest accuracy (58/71, 81.69%), sensitivity (18/22, 81.82%), and specificity (40/49, 81.63%). The Az value for combining all features was 0.8713. Combining texture, shape, and kinetic-curve features may be able to differentiate benign from malignant tumors in the liver using our proposed CAD system. Copyright © 2017 Elsevier B.V. All rights reserved.
Design of CT reconstruction kernel specifically for clinical lung imaging
NASA Astrophysics Data System (ADS)
Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.
2005-04-01
In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel, consisting of the combination of a low-pass and a high-pass kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. The Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
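The abstract does not give the exact blending rule for the "Hybrid" kernel; a common way to combine a low-pass and a high-pass reconstruction kernel is a weighted sum of their frequency responses, sketched here under that assumption:

```python
import numpy as np

def hybrid_kernel(h_smooth: np.ndarray, h_sharp: np.ndarray,
                  weight: float = 0.5) -> np.ndarray:
    """Blend two reconstruction-kernel frequency responses into a hybrid.

    `weight` is the fraction taken from the sharp (high-pass) kernel.
    The actual blending used by the authors is not specified in the
    abstract, so a simple linear combination is assumed here.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return weight * h_sharp + (1.0 - weight) * h_smooth
```

A linear blend guarantees the stated behavior that the hybrid's response falls between the two parent kernels at every spatial frequency.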
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. 
Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
Coastal Zone Color Scanner studies
NASA Technical Reports Server (NTRS)
Elrod, J.
1988-01-01
Activities over the past year have included cooperative work with a summer faculty fellow using Coastal Zone Color Scanner (CZCS) imagery to study the effects of gradients in trophic resources on coral reefs in the Caribbean. Other research included characterization of ocean radiances specific to an acid-waste plume. Other activities included involvement in the quality control of imagery produced in the processing of the global CZCS data set, the collection of various other global data sets, and the subsequent data comparison and analysis.
NOSS flight segment concept study
NASA Technical Reports Server (NTRS)
1979-01-01
An 11 ft wide by 26.5 ft long flat structure weighing approximately 14,469 pounds evolved during a low-level, in-house conceptual design study for a National Oceanic Satellite System spacecraft that would stow directly in the Space Shuttle. Following STS launch to a 300 km orbit at the mission inclination, transfer will be effected to an 800 km Sun-synchronous circular orbit. The instrument complement includes 2 altimeters, 1 scatterometer, 1 large-antenna multichannel microwave radiometer, and a coastal zone scanner. The spacecraft, its instruments, and interfaces with STS and TDRSS are described. The mission timeline, potential problem areas, system drivers, and recommended study areas are discussed. Drawings and system block diagrams are included.
DLA-based compressed sensing for high-resolution MR microscopy of neuronal tissue.
Nguyen, Khieu-Van; Li, Jing-Rebecca; Radecki, Guillaume; Ciobanu, Luisa
2015-10-01
In this work we present the implementation of compressed sensing (CS) on a high field preclinical scanner (17.2 T) using an undersampling trajectory based on the diffusion limited aggregation (DLA) random growth model. When applied to a library of images this approach performs better than the traditional undersampling based on the polynomial probability density function. In addition, we show that the method is applicable to imaging live neuronal tissues, allowing significantly shorter acquisition times while maintaining the image quality necessary for identifying the majority of neurons via an automatic cell segmentation algorithm. Copyright © 2015 Elsevier Inc. All rights reserved.
ARIES NDA Robot operators' manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheer, N.L.; Nelson, D.C.
1998-05-01
The ARIES NDA Robot is an automation device for servicing the material movements for a suite of non-destructive assay (NDA) instruments. This suite of instruments includes a calorimeter, a gamma isotopic system, a segmented gamma scanner (SGS), and a neutron coincidence counter (NCC). Objects moved by the robot include sample cans, standard cans, and instrument plugs. The robot computer has an RS-232 connection with the NDA Host computer, which coordinates robot movements and instrument measurements. The instruments are expected to perform measurements under the direction of the Host without operator intervention. This user's manual describes system startup, use of the main menu, manual operation, and error recovery.
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Ender, Andreas; Mehl, Albert
2014-01-01
Reference scanners are used in dental medicine to verify many procedures. The main interest is verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is a lack of accuracy when scanning large objects such as full dental arches, or a limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus-variation scanning technique, was evaluated with regard to its local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full-arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels, combined with high local and general accuracy, can be used to assess changes ranging from single teeth or restorations up to full arches. PMID:24836007
MFP scanner diagnostics using a self-printed target to measure the modulation transfer function
NASA Astrophysics Data System (ADS)
Wang, Weibao; Bauer, Peter; Wagner, Jerry; Allebach, Jan P.
2014-01-01
In the current market, reduction of warranty costs is an important avenue for improving profitability by manufacturers of printer products. Our goal is to develop an autonomous capability for diagnosing printer- and scanner-caused defects in mid-range laser multifunction printers (MFPs), so as to reduce warranty costs. If the scanner unit of the MFP is not performing according to specification, this issue needs to be diagnosed. If there is a print quality issue, it can be diagnosed by printing a special test page that is resident in the firmware of the MFP unit, and then scanning it. However, the reliability of this process will be compromised if the scanner unit is defective. Thus, for both scanner and printer image quality issues, it is important to be able to properly evaluate the scanner performance. In this paper, we consider evaluation of the scanner performance by measuring its modulation transfer function (MTF). The MTF is a fundamental tool for assessing the performance of imaging systems. Several ways have been proposed to measure the MTF, all of which require a special target, for example a slanted-edge target. It is unacceptably expensive to ship every MFP with such a standard target, and to expect that the customer can keep track of it. To reduce this cost, we develop a new approach based on a self-printed slanted-edge target, and propose algorithms to improve the results obtained with it. Finally, we present experimental results for MTF measurement using self-printed targets and compare them to the results obtained with standard targets.
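The slanted-edge MTF measurement the paper builds on follows a standard pipeline: estimate an edge-spread function (ESF) across the edge, differentiate it to a line-spread function (LSF), and take the magnitude of the Fourier transform. A simplified 1-D sketch; the oversampling and edge-angle estimation of the full ISO 12233 method are omitted:

```python
import numpy as np

def mtf_from_esf(esf: np.ndarray) -> np.ndarray:
    """Estimate the MTF from a 1-D edge-spread function (ESF).

    Differentiating the ESF gives the line-spread function (LSF); the
    magnitude of its Fourier transform, normalized to 1 at zero
    frequency, is the MTF.
    """
    lsf = np.gradient(esf)            # LSF = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)  # taper to suppress noise at the ends
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]               # normalize so that MTF(0) = 1
```

With a self-printed target, the measured response is the cascade of the printer's and scanner's MTFs, which is why the paper's algorithms must compensate for the printed edge's own imperfections.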
Driving imaging and overlay performance to the limits with advanced lithography optimization
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Finders, Jo; van der Laan, Hans; Hinnen, Paul; Kubis, Michael; Beems, Marcel
2012-03-01
Immersion lithography is being extended to 22 nm and even below. Next to generic scanner system improvements, application-specific solutions are needed to meet the requirements for CD control and overlay. Starting from the performance budgets, this paper discusses how to improve (in a volume manufacturing environment) CDU towards 1 nm and overlay towards 3 nm. The improvements are based on deploying the actuator capabilities of the immersion scanner. The latest generation of immersion scanners has extended correction capabilities for overlay and imaging, offering freeform adjustments of the lens, illuminator, and wafer grid. To determine the needed adjustments, recipe generation per user application is based on a combination of wafer metrology data and computational lithography methods. For overlay, focus, and CD metrology we use an angle-resolved optical scatterometer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christianson, O; Winslow, J; Samei, E
2014-06-15
Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models, and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9-33% and 15-35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27-45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicate substantial potential for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image quality across CT vendors.
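SSDE as used above is conventionally computed by scaling CTDIvol with a size-dependent conversion factor. The exponential fit below uses coefficients believed to match the AAPM Report 204 tables for the 32-cm body phantom, but treat them as an assumption of this sketch:

```python
import math

# Assumed AAPM Report 204 exponential fit, 32-cm body phantom
A, B = 3.704369, 0.03671937

def ssde(ctdi_vol_mgy: float, effective_diameter_cm: float) -> float:
    """Size-specific dose estimate: CTDIvol scaled by a size-dependent
    conversion factor f(d) = A * exp(-B * d). Smaller patients receive a
    larger factor, reflecting higher absorbed dose for the same output."""
    f = A * math.exp(-B * effective_diameter_cm)
    return ctdi_vol_mgy * f
```

The effective diameter would come from the localizer measurements described in the abstract; the monotonic decrease of f(d) with size is what makes SSDE a fairer cross-patient dose metric than raw CTDIvol.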
Koivula, Lauri; Kapanen, Mika; Seppälä, Tiina; Collan, Juhani; Dowling, Jason A; Greer, Peter B; Gustafsson, Christian; Gunnlaugsson, Adalsteinn; Olsson, Lars E; Wee, Leonard; Korhonen, Juha
2017-12-01
Recent studies have shown that it is possible to conduct the entire radiotherapy treatment planning (RTP) workflow using only MR images. This study aims to develop a generalized intensity-based method to generate synthetic CT (sCT) images from standard T2-weighted (T2w) MR images of the pelvis. This study developed a generalized dual-model HU conversion method to convert standard T2w MR image intensity values to synthetic HU values, separately inside and outside an atlas-segmented bone volume contour. The method was developed and evaluated with 20 and 35 prostate cancer patients, respectively. MR images with scanning sequences in clinical use were acquired with four different MR scanners from three vendors. For the generated sCT images of the 35 prostate patients, the mean (and maximal) HU differences in soft and bony tissue volumes were 16 ± 6 HU (34 HU) and -46 ± 56 HU (181 HU), respectively, against the true CT images. The average PTV mean-dose difference in sCTs compared to true CTs was -0.6 ± 0.4% (-1.3%). The study provides a generalized method for sCT creation from standard T2w images of the pelvis. The method produced clinically acceptable dose calculation results for all the included scanners and MR sequences. Copyright © 2017 Elsevier B.V. All rights reserved.
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-04-11
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of point clouds and panoramic images based on the sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the line connecting the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relevant panoramic images via a collinearity function and the position and orientation relationships amongst the different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (on average 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning, and vehicle speed, are discussed.
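The point-to-pixel mapping described above can be illustrated for an equirectangular panorama, assuming the 3-D point is already expressed in the camera frame (i.e., the calibrated sensor-constellation offsets have been applied beforehand); the paper's full collinearity function also involves those calibration terms:

```python
import math

def point_to_panorama_pixel(x, y, z, width, height):
    """Map a 3-D point (camera frame) to a pixel in an equirectangular
    panoramic image: azimuth maps to the horizontal axis, polar angle
    from the zenith maps to the vertical axis. Simplified sketch, not
    the authors' calibrated model."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)        # -pi .. pi around the vertical axis
    polar = math.acos(z / r)          # 0 .. pi from the zenith
    u = (azimuth + math.pi) / (2 * math.pi) * width
    v = polar / math.pi * height
    return u, v
```

A point straight ahead on the horizontal plane, for example, lands at the center of the panorama, which is the behavior the block-wise correspondence search exploits to limit the pixel search range.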
NASA Astrophysics Data System (ADS)
Efstathopoulos, E. P.; Kelekis, N. L.; Pantos, I.; Brountzos, E.; Argentos, S.; Grebáč, J.; Ziaka, D.; Katritsis, D. G.; Seimenis, I.
2009-09-01
Computed tomography (CT) coronary angiography has been widely used since the introduction of 64-slice scanners and dual-source CT technology, but high radiation doses have been reported. Prospective ECG-gating using a 'step-and-shoot' axial scanning protocol has been shown to reduce radiation exposure effectively while maintaining diagnostic accuracy. 256-slice scanners with 80 mm detector coverage have recently been introduced into practice, but their impact on radiation exposure has not been adequately studied. The aim of this study was to assess radiation doses associated with CT coronary angiography using a 256-slice CT scanner. Radiation doses were estimated for 25 patients scanned with either prospective or retrospective ECG-gating. Image quality was assessed objectively in terms of mean CT attenuation at selected regions of interest on axial coronary images, and subjectively by coronary segment quality scoring. It was found that radiation doses associated with prospective ECG-gating were significantly lower than with retrospective ECG-gating (3.2 ± 0.6 mSv versus 13.4 ± 2.7 mSv). Consequently, the radiogenic fatal cancer risk for the patient is much lower with prospective gating (0.0176% versus 0.0737%). No statistically significant differences in image quality were observed between the two scanning protocols for either objective or subjective quality assessments. Therefore, prospective ECG-gating using a 'step-and-shoot' protocol that covers the cardiac anatomy in two axial acquisitions effectively reduces radiation doses in 256-slice CT coronary angiography without compromising image quality.
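The risk figures quoted above are consistent with multiplying the effective dose by a nominal risk coefficient of 5.5%/Sv (the ICRP 103 nominal cancer risk value; the abstract does not state which coefficient the authors used, so take this as an assumption that happens to reproduce their numbers):

```python
RISK_PER_MSV = 5.5e-5  # assumed nominal cancer risk coefficient, 5.5%/Sv

def fatal_cancer_risk_percent(effective_dose_msv: float) -> float:
    """Nominal radiogenic cancer risk (%) from an effective dose in mSv,
    under the linear no-threshold assumption."""
    return effective_dose_msv * RISK_PER_MSV * 100.0
```

Applied to the two protocols, 3.2 mSv gives 0.0176% and 13.4 mSv gives 0.0737%, matching the abstract.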
Souto Bayarri, M; Masip Capdevila, L; Remuiñan Pereira, C; Suárez-Cuenca, J J; Martínez Monzonís, A; Couto Pérez, M I; Carreira Villamor, J M
2015-01-01
To compare methods of right ventricle segmentation in the short-axis and 4-chamber planes in cardiac magnetic resonance imaging, and to correlate the findings with those of the tricuspid annular plane systolic excursion (TAPSE) method in echocardiography. We used a 1.5T MRI scanner to study 26 patients with diverse cardiovascular diseases. In all MRI studies, we obtained cine-mode images from the base to the apex in both the short-axis and 4-chamber planes using steady-state free precession sequences and 6-mm-thick slices. In all patients, we quantified the end-diastolic volume, end-systolic volume, and ejection fraction of the right ventricle. On the same day as the cardiac magnetic resonance imaging study, 14 patients also underwent echocardiography with TAPSE calculation of right ventricular function. No statistically significant differences were found in the volumes and function of the right ventricle calculated using the 2 segmentation methods. The correlation between the volume estimations by the two segmentation methods was excellent (r=0.95); the correlation for the ejection fraction was slightly lower (r=0.8). The correlation between the cardiac magnetic resonance imaging estimate of right ventricular ejection fraction and TAPSE was very low (r=0.2, P<.01). Both ventricular segmentation methods quantify right ventricular function adequately. The correlation with the echocardiographic method is low. Copyright © 2012 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Large-Scale Propagation of Ultrasound in a 3-D Breast Model Based on High-Resolution MRI Data
Tillett, Jason C.; Metlay, Leon A.; Waag, Robert C.
2010-01-01
A 40 × 35 × 25-mm³ specimen of human breast consisting mostly of fat and connective tissue was imaged using a 3-T magnetic resonance scanner. The resolutions in the image plane and in the orthogonal direction were 130 μm and 150 μm, respectively. Initial processing to prepare the data for segmentation consisted of contrast inversion, interpolation, and noise reduction. Noise reduction used a multilevel bidirectional median filter to preserve edges. The volume of data was segmented into regions of fat and connective tissue by using a combination of local and global thresholding. Local thresholding was performed to preserve fine detail, while global thresholding was performed to minimize the interclass variance between voxels classified as background and voxels classified as object. After smoothing the data to avoid aliasing artifacts, the segmented data volume was visualized using isosurfaces. The isosurfaces were enhanced using transparency, lighting, shading, reflectance, and animation. Computations of pulse propagation through the model illustrate its utility for the study of ultrasound aberration. The results show the feasibility of using the described combination of methods to demonstrate tissue morphology in a form that provides insight about the way ultrasound beams are aberrated in three dimensions by tissue. PMID:20172794
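A global threshold chosen from the class variances of background versus object voxels, as described above, is classically computed with Otsu's method, which maximizes the between-class variance (equivalently minimizing the within-class variance); whether the authors used exactly this algorithm is not stated, so this is a generic sketch:

```python
import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Global (Otsu) threshold: pick the level that maximizes the
    between-class variance of background vs. object voxels."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = edges[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0  # class means
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[i]
    return best_t
```

In the paper's pipeline, such a global threshold would be combined with local thresholding so that fine connective-tissue detail is not lost to a single volume-wide cut.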
NASA Astrophysics Data System (ADS)
Lynch, John A.; Zaim, Souhil; Zhao, Jenny; Peterfy, Charles G.; Genant, Harry K.
2001-07-01
In osteoarthritis, articular cartilage loses integrity and becomes thinned. This usually occurs at sites which bear weight during normal use. Measurement of such loss from MRI scans, requires precise and reproducible techniques, which can overcome the difficulties of patient repositioning within the scanner. In this study, we combine a previously described technique for segmentation of cartilage from MRI of the knee, with a technique for 3D image registration that matches localized regions of interest at followup and baseline. Two patients, who had recently undergone meniscal surgery, and developed lesions during the 12 month followup period were examined. Image registration matched regions of interest (ROI) between baseline and followup, and changes within the cartilage lesions were estimate to be about a 16% reduction in cartilage volume within each ROI. This was more than 5 times the reproducibility of the measurement, but only represented a change of between 1 and 2% in total femoral cartilage volume. Changes in total cartilage volume may be insensitive for quantifying changes in cartilage morphology. A combined used of automated image segmentation, with 3D image registration could be a useful tool for the precise and sensitive measurement of localized changes in cartilage from MRI of the knee.
Prazeres, Carlos Eduardo Elias Dos; Magalhães, Tiago Augusto; de Castro Carneiro, Adriano Camargo; Cury, Roberto Caldeira; de Melo Moreira, Valéria; Bello, Juliana Hiromi Silva Matsumoto; Rochitte, Carlos Eduardo
The aim of this study was to compare the image quality and radiation dose of coronary computed tomography (CT) angiography performed with a dual-source CT scanner using 2 different protocols in patients with atrial fibrillation (AF). Forty-seven patients with AF underwent 2 different acquisition protocols: double high-pitch (DHP) spiral acquisition and retrospective spiral acquisition. The image quality was ranked according to a qualitative score by 2 experts: 1, no evident motion; 2, minimal motion not influencing coronary artery luminal evaluation; and 3, motion with impaired luminal evaluation. A third expert resolved any disagreement. A total of 732 segments were evaluated. The DHP group (24 patients, 374 segments) showed more segments classified as score 1 than the retrospective spiral acquisition group (71.3% vs 37.4%). Image quality evaluation agreement was high between observers (κ = 0.8). There was significantly lower radiation exposure in the DHP group (3.65 [1.29] vs 23.57 [10.32] mSv). In this original direct comparison, a DHP spiral protocol for coronary CT angiography acquisition in patients with atrial fibrillation resulted in lower radiation exposure and superior image quality compared with conventional retrospective spiral acquisition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segmentmore » CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into amaximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. 
Conclusions: The authors have proposed a new CBCT segmentation method based on patch-based sparse representation and convex optimization, which achieved accurate segmentation results on the 15-patient CBCT dataset.
SU-G-206-07: Dual-Energy CT Inter- and Intra-Scanner Variability Within One Make and Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobsen, M; Wood, C; Cody, D
Purpose: It can be logistically quite difficult to scan patients on the same exact device for their repeat visits in multi-scanner facilities. The reliability between dual-energy CT scanners’ quantitative results is not known, nor is their individual repeatability. Therefore, we evaluated inter- and intra-scanner variability with respect to several key clinical quantitative metrics specific to dual-energy CT. Methods: Eleven identical GE HD-750 CT scanners in a busy clinical environment were used to perform dual-energy (DE) CT scans of a large elliptical quality control (QC) phantom (Gammex, Inc.; Middleton, WI) which contains many standard insert materials. The DE-QC phantom was scanned bi-weekly during 2016; 3 to 4 scans were obtained from each scanner (a total of 35 data sets were used for analysis). Iodine accuracy for the 2 mg/ml, 5 mg/ml and 15 mg/ml rods (from the Iodine(Water) image set) and soft tissue HU (40 HU based on NIST constants) from the 50 keV data set were used to assess inter- and intra-scanner variability (standard deviation). Results: Intra-scanner variability average for 2 mg/ml Iodine was 0.10 mg/ml (range 0.05–0.15 mg/ml), for 5 mg/ml Iodine was 0.12 mg/ml (range 0.07–0.16 mg/ml), for 15 mg/ml Iodine was 0.25 mg/ml (range 0.16–0.37 mg/ml), and for the soft tissue inserts was 2.1 HU (range 1.8–2.6 HU). Inter-scanner variability average for 2 mg/ml Iodine was 0.16 mg/ml (range 0.11–0.19 mg/ml), for 5 mg/ml Iodine was 0.18 mg/ml (range 0.11–0.22 mg/ml), for 15 mg/ml Iodine was 0.35 mg/ml (range 0.23–0.44 mg/ml), and for the soft tissue inserts was 3.8 HU (range 3.1–4.5 HU). Conclusion: Intra-scanner variability for the iodine and soft tissue inserts averaged 3.1% and 5.2% respectively, and inter-scanner variability for these regions analyzed averaged 5.0% and 9.5%, respectively. Future work will include determination of smallest measurable change and acceptable limits for DE-CT scanner variability over longer time intervals.
This research has been supported by funds from Dr. William Murphy, Jr., the John S. Dunn, Sr. Distinguished Chair in Diagnostic Imaging at MD Anderson Cancer Center.
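The intra- and inter-scanner variabilities reported above are standard deviations computed within and across scanners. One plausible reading of that computation, sketched on hypothetical iodine readings rather than the study's data:

```python
import numpy as np

# hypothetical repeated 2 mg/ml iodine readings (mg/ml): 3 scanners x 4 scans
measurements = np.array([
    [2.05, 1.95, 2.10, 2.00],
    [1.90, 1.85, 1.95, 1.88],
    [2.20, 2.15, 2.25, 2.18],
])

# intra-scanner variability: std of repeat scans within each scanner, averaged
intra = measurements.std(axis=1, ddof=1).mean()

# inter-scanner variability: std of the per-scanner mean readings
inter = measurements.mean(axis=1).std(ddof=1)
```

In this toy data the scanners differ more from one another than each differs from itself, so `inter` exceeds `intra`, the same qualitative pattern as the abstract's 3.1%/5.0% and 5.2%/9.5% figures.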
An analysis of haze effects on LANDSAT multispectral scanner data
NASA Technical Reports Server (NTRS)
Johnson, W. R.; Sestak, M. L. (Principal Investigator)
1981-01-01
Early season changes in optical depth change brightness, primarily along the soil line; and during crop development, changes in optical depth change both greenness and brightness. Thus, the existence of haze in the imagery could cause an unsuspecting analyst to interpret the spectral appearance as indicating an episodal event when, in fact, haze was present. The techniques for converting LANDSAT-3 data to simulate LANDSAT-2 data are in error; the yellowness and nonesuch computations are primarily affected. Yellowness appears well correlated with optical depth. Experimental evidence with variable background and variable optical depth is needed, however. The variance of picture elements within a spring wheat field is related to its equivalent in optical depth changes caused by haze. This establishes the sensitivity of channel 1 (greenness) pixels to changes in haze levels. The between-field picture element means and variances were determined for the spring wheat fields. This shows the variability of channel data on two specific dates, emphasizing that crop development can be influenced by many factors. The atmospheric correction program ATCOR reduces segment data from LANDSAT acquisitions to a common haze level and improves the results of analysis.
Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2017-10-01
The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors used physical nodules 5, 7 and 10 mm in diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C.
Good agreement of the virtual nodules generated from the nodule-like object functions A and B of the phantom spheres was found, suggesting the validity of the nodule-like object functions. The virtual nodules generated from the nodule-like object function A of the phantom spheres were similar to the real images obtained with Scanner C; the root mean square errors (RMSEs) between them were 10.8, 11.1, and 12.5 Hounsfield units (HU) for 5-, 7-, and 10-mm-diameter spheres, respectively. The equivalent results (RMSEs) using the nodule-like object function B were 15.9, 16.8, and 16.5 HU, respectively. These RMSEs were small considering the high contrast between the sphere density and background density (approximately 674 HU). The virtual nodules generated from the nodule-like object functions of the five laboratory-made nodules were similar to the real images obtained with Scanner C; the RMSEs between them ranged from 6.2 to 8.6 HU in five cases. The nodule-like object functions calculated from real nodule images would be effective to generate realistic virtual nodules. The proposed method would be feasible for generating virtual nodules that have the characteristics of the spatial resolution of the CT system used in each institution, allowing for site-specific nodule generation. © 2017 American Association of Physicists in Medicine.
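The core operations described above, blurring an object function with a target scanner's spatial resolution and scoring agreement by RMSE, can be sketched as follows. A Gaussian kernel stands in for the measured PSF/SSP, and the object is a toy uniform disk at the abstract's ~674 HU contrast; none of this reproduces the authors' measured data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rmse(a, b):
    """Root mean square error between two images, in HU."""
    return np.sqrt(np.mean((a - b) ** 2))

# toy object function: a uniform disk at ~674 HU contrast over background
x = np.arange(64) - 32
obj = np.where(x[:, None] ** 2 + x[None, :] ** 2 <= 10 ** 2, 674.0, 0.0)

# "virtual nodule": the object function convolved with the target scanner's
# spatial resolution (a Gaussian here stands in for the measured PSF/SSP)
virtual = gaussian_filter(obj, sigma=2.0)
```

Because the kernel is non-negative and normalized, blurring can only soften the 674 HU edge, which is why RMSEs of 10–17 HU against that contrast count as small.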
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herfst, Rodolf; Dekker, Bert; Witvoet, Gert
One of the major limitations in the speed of the atomic force microscope (AFM) is the bandwidth of the mechanical scanning stage, especially in the vertical (z) direction. According to the design principles of “light and stiff” and “static determinacy,” the bandwidth of the mechanical scanner is limited by the first eigenfrequency of the AFM head in the case of tip scanning and by the sample stage in the case of sample scanning. Due to stringent requirements of the system, simply pushing the first eigenfrequency to an ever higher value has reached its limitation. We have developed a miniaturized, high speed AFM scanner in which the dynamics of the z-scanning stage are made insensitive to the surrounding dynamics by suspending it at specific dynamically determined points. This resulted in a mechanical bandwidth as high as that of the z-actuator (50 kHz) while remaining insensitive to the dynamics of its base and surroundings. The scanner allows a practical z scan range of 2.1 μm. We have demonstrated the applicability of the scanner to the high speed scanning of nanostructures.
Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts.
Zhou, Zhuhuang; Wu, Weiwei; Wu, Shuicai; Tsui, Po-Hsiang; Lin, Chung-Chih; Zhang, Ling; Wang, Tianfu
2014-10-01
Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we proposed a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required was to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested on 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive (TP) rate of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on an Intel Core 2.66 GHz CPU and 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation. © The Author(s) 2014.
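The TP, FP, and SI rates quoted above can be computed from a pair of binary masks. A sketch using the definitions common in BUS segmentation studies (TP and FP normalized by the reference tumor area, SI as the Jaccard index), which may differ in detail from the authors' exact formulas:

```python
import numpy as np

def bus_metrics(auto_mask, manual_mask):
    """TP rate, FP rate and similarity between binary segmentation masks."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    inter = np.logical_and(a, m).sum()
    union = np.logical_or(a, m).sum()
    tp = inter / m.sum()              # fraction of the true tumor recovered
    fp = (a.sum() - inter) / m.sum()  # spurious area, relative to true tumor
    si = inter / union                # Jaccard similarity index
    return tp, fp, si
```

Under these definitions a perfect segmentation gives TP = SI = 1 and FP = 0, so the reported 91.7% / 11.9% / 85.6% indicate close but not exact agreement with the manual outlines.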
Nankivil, Derek; Waterman, Gar; LaRocca, Francesco; Keller, Brenton; Kuo, Anthony N.; Izatt, Joseph A.
2015-01-01
We describe the first handheld, swept source optical coherence tomography (SSOCT) system capable of imaging both the anterior and posterior segments of the eye in rapid succession. A single 2D microelectromechanical systems (MEMS) scanner was utilized for both imaging modes, and the optical paths for each imaging mode were optimized for their respective application using a combination of commercial and custom optics. The system has a working distance of 26.1 mm and a measured axial resolution of 8 μm (in air). In posterior segment mode, the design has a lateral resolution of 9 μm, 7.4 mm imaging depth range (in air), 4.9 mm 6dB fall-off range (in air), and peak sensitivity of 103 dB over a 22° field of view (FOV). In anterior segment mode, the design has a lateral resolution of 24 μm, imaging depth range of 7.4 mm (in air), 6dB fall-off range of 4.5 mm (in air), depth-of-focus of 3.6 mm, and a peak sensitivity of 99 dB over a 17.5 mm FOV. In addition, the probe includes a wide-field iris imaging system to simplify alignment. A fold mirror assembly actuated by a bi-stable rotary solenoid was used to switch between anterior and posterior segment imaging modes, and a miniature motorized translation stage was used to adjust the objective lens position to correct for patient refraction between −12.6 and + 9.9 D. The entire probe weighs less than 630 g with a form factor of 20.3 x 9.5 x 8.8 cm. Healthy volunteers were imaged to illustrate imaging performance. PMID:26601014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciller, Carlos, E-mail: carlos.cillerruiz@unil.ch; Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern; Centre d’Imagerie BioMédicale, University of Lausanne, Lausanne
Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
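The Dice similarity coefficient (DSC) used for evaluation above is a standard overlap measure between an automatic and a manual mask, twice the intersection divided by the total mask area:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

DSC values near 95%, as reported for the sclera/cornea and vitreous humor, indicate near-complete overlap; the lower lens score reflects that small structures are penalized more heavily by the same absolute boundary error.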
Ciller, Carlos; De Zanet, Sandro I; Rüegsegger, Michael B; Pica, Alessia; Sznitman, Raphael; Thiran, Jean-Philippe; Maeder, Philippe; Munier, Francis L; Kowal, Jens H; Cuadra, Meritxell Bach
2015-07-15
Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor. Copyright © 2015 Elsevier Inc. All rights reserved.
SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
2014-06-01
Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres with different volumes (0.5 ml, 1 ml, 6 ml, 12 ml, 16 ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, both of which contained the whole sphere and no other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods, respectively. The segmentation results were evaluated by the dice similarity index (DSI), classification error (CE) and volume error (VE). The robustness of different methods to ROI variation was quantified using the interrun variation and a generalized Cohen's kappa. Results: With the change of ROI, the segmentation results of all tested methods changed to some extent. Compared with the advanced methods, thresholding methods were less affected by the ROI change. In addition, most of the thresholding methods produced more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and segmented the PET images more accurately. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086.
Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
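The thresholding methods compared above typically segment an ROI at a fixed fraction of its maximum uptake, which makes them depend on the ROI only through its maximum. A minimal sketch of one such baseline (the 42% fraction is a common choice in the PET literature, not necessarily one of the eight methods tested):

```python
import numpy as np

def threshold_segment(roi, fraction=0.42):
    """Binary segmentation at a fixed fraction of the ROI maximum intensity."""
    return roi >= fraction * roi.max()

# toy 2x2 "ROI" with a hot spot of 10; the threshold is 0.42 * 10 = 4.2
roi = np.array([[0.0, 1.0],
                [5.0, 10.0]])
mask = threshold_segment(roi)
```

As long as the ROI contains the hottest voxel of the sphere, enlarging it leaves `roi.max()` and hence the mask over the sphere unchanged, which is consistent with the robustness to ROI variation reported above.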
Preliminary Analysis of Effect of Random Segment Errors on Coronagraph Performance
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip
2015-01-01
"Are we alone in the Universe?" is probably the most compelling science question of our generation. Answering it requires a large-aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to suppress the host star's light by a factor of 10(exp 10) with 10(exp -11) stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope, and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope-level specifications traceable to science requirements and flow those specifications down to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered how this specification relates to architecture, i.e., monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3- or 4-ring segmented aperture is more sensitive to segment rigid-body motion than an aperture with fewer or more segments.
A combined learning algorithm for prostate segmentation on 3D CT images.
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei
2017-11-01
Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation processing. Because of inter-patient variations, patient-specific information is particularly useful to improve the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information as marked by the user interaction into the segmentation processing. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge to compute the likelihood of a pixel belonging to the prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99%, compared to the manual segmentation.
By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.
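The combination step described above, weighting patient-specific against population likelihoods and then thresholding, might look like the following sketch. The weight, the random likelihood maps, and the mean-based cut are all illustrative stand-ins; the paper's similarity measure and adaptive threshold are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
pop_like = rng.random((8, 8))  # hypothetical population-model likelihoods
pat_like = rng.random((8, 8))  # hypothetical patient-specific likelihoods

# weight the two models (0.6 is an illustrative similarity-derived weight)
w = 0.6
combined = w * pat_like + (1 - w) * pop_like

# stand-in for the paper's adaptive threshold: cut at the mean likelihood
prostate_mask = combined >= combined.mean()
```

The design intent mirrors the abstract: the population term regularizes where patient data is thin, while the patient term adapts the likelihood to the individual anatomy.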
Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans.
Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth
2006-10-01
This letter describes a data acquisition setup for recording and processing running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and on obtaining a good signal-to-noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described, including a novel approach using a pulse-sequence-specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given.
Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans (L)
Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth
2007-01-01
This letter describes a data acquisition setup for recording and processing running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and on obtaining a good signal-to-noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described, including a novel approach using a pulse-sequence-specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given. PMID:17069275
Lesion insertion in the projection domain: Methods and initial results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Baiyu; Leng, Shuai; Yu, Lifeng
2015-12-15
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion.
Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically acquired for the ACR phantom in terms of Hounsfield unit and high-contrast resolution. For the validation of the lesion realism, lesions of various types were successfully inserted, including well circumscribed and invasive lesions, homogeneous and heterogeneous lesions, high-contrast and low-contrast lesions, isolated and vessel-attached lesions, and small and large lesions. The two experienced radiologists who reviewed the original and inserted lesions could not identify the lesions that were inserted. The same lesion, when inserted into the projection domain and reconstructed with different parameters, demonstrated a parameter-dependent appearance. Conclusions: A framework has been developed for projection-domain insertion of lesions into commercial CT images, which can be potentially expanded to all geometries of CT scanners. Compared to conventional image-domain methods, the authors’ method reflected the impact of scan and reconstruction parameters on lesion appearance. Compared to prior projection-domain methods, the authors’ method has the potential to achieve higher anatomical complexity by employing clinical patient projections and real patient lesions.
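Projection-domain insertion works because CT projections are line integrals, which are linear in the imaged object: the projection of patient-plus-lesion equals the sum of the separate projections. A toy single-view parallel-beam projector illustrates this principle (the real method uses the scanner's exact geometry and measured beam-hardening energy, none of which is modeled here):

```python
import numpy as np

def project(image):
    """Toy single-view parallel-beam projector: line integrals along columns."""
    return image.sum(axis=0)

patient = np.full((16, 16), 50.0)  # hypothetical uniform patient slice
lesion = np.zeros((16, 16))
lesion[6:10, 6:10] = 30.0          # hypothetical inserted lesion

# insertion in the projection domain: add the lesion's projections to the
# patient's projections, relying on the linearity of the line integral
hybrid_proj = project(patient) + project(lesion)
```

Because the hybrid data exist as raw projections, any downstream reconstruction kernel or dose setting acts on the lesion exactly as it would on real anatomy, which is the advantage over image-domain insertion noted above.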
Lesion insertion in the projection domain: Methods and initial results
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia
2015-01-01
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion.
Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically acquired for the ACR phantom in terms of Hounsfield unit and high-contrast resolution. For the validation of the lesion realism, lesions of various types were successfully inserted, including well circumscribed and invasive lesions, homogeneous and heterogeneous lesions, high-contrast and low-contrast lesions, isolated and vessel-attached lesions, and small and large lesions. The two experienced radiologists who reviewed the original and inserted lesions could not identify the lesions that were inserted. The same lesion, when inserted into the projection domain and reconstructed with different parameters, demonstrated a parameter-dependent appearance. Conclusions: A framework has been developed for projection-domain insertion of lesions into commercial CT images, which can be potentially expanded to all geometries of CT scanners. Compared to conventional image-domain methods, the authors’ method reflected the impact of scan and reconstruction parameters on lesion appearance. Compared to prior projection-domain methods, the authors’ method has the potential to achieve higher anatomical complexity by employing clinical patient projections and real patient lesions. PMID:26632058
Study of the true performance limits of the Astrometric Multiplexing Area Scanner (AMAS)
NASA Technical Reports Server (NTRS)
Frederick, L. W.; Mcalister, H. A.
1975-01-01
The Astrometric Multiplexing Area Scanner (AMAS) is an instrument designed to perform photoelectric long focus astrometry of small fields. Modulation of a telescope focal plane with a rotating Ronchi ruling produces a frequency modulated signal from which relative positions and magnitudes can be extracted. Evaluation of instrumental precision, accuracy, and resolution characteristics with respect to a variety of instrumental and cosmic parameters indicates 1.5-micron precision and accuracy for single stars under specific conditions. This value degrades as the number of field stars increases, particularly for fainter stars.
Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D
2008-04-01
This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than the general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, which yields more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics are used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics, collected online from the previous segmentation results, gradually take on a larger role. Thus, the patient-specific shape statistics are updated each time a new segmentation result is obtained, and they are further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
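The abstract says only that the patient-specific statistics "gradually take on a larger role" as more segmentations accumulate. One plausible weighting schedule can be sketched as follows; the function names and the constant `k` are illustrative assumptions, not taken from the paper.

```python
def blend_weight(n_prior, k=3.0):
    """Weight given to patient-specific shape statistics after n_prior
    segmentations of the same patient; grows smoothly from 0 toward 1."""
    return n_prior / (n_prior + k)


def blended_mean_shape(pop_mean, patient_mean, n_prior, k=3.0):
    """Convex combination of population and patient-specific mean shapes
    (each given as a flat list of landmark coordinates)."""
    w = blend_weight(n_prior, k)
    return [(1.0 - w) * p + w * q for p, q in zip(pop_mean, patient_mean)]
```

With no prior segmentations the blend reduces to the population mean, matching the paper's description of the initial time point.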
Laser scanning endoscope for diagnostic medicine
NASA Astrophysics Data System (ADS)
Ouimette, Donald R.; Nudelman, Sol; Spackman, Thomas; Zaccheo, Scott
1990-07-01
A new type of endoscope is being developed which utilizes an optical raster scanning system for imaging through an endoscope. The optical raster scanner utilizes a high speed, multifaceted, rotating polygon mirror system for horizontal deflection, and a slower speed galvanometer driven mirror as the vertical deflection system. When used in combination, the optical raster scanner traces out a raster similar to an electron beam raster used in television systems. This flying spot of light can then be detected by various types of photosensitive detectors to generate a video image of the surface or scene being illuminated by the scanning beam. The optical raster scanner has been coupled to an endoscope. The raster is projected down the endoscope, thereby illuminating the object to be imaged at the distal end of the endoscope. Elemental photodetectors are placed at the distal or proximal end of the endoscope to detect the reflected illumination from the flying spot of light. This time sequenced signal is captured by an image processor for display and processing. This technique offers the possibility of very small diameter endoscopes since illumination channel requirements are eliminated. Using various lasers, very specific spectral selectivity can be achieved to optimize contrast of specific lesions of interest. Using several laser lines, or a white light source, with detectors of specific spectral response, multiple spectrally selected images can be acquired simultaneously. Co-linear therapy delivery while imaging is also possible.
NASA Astrophysics Data System (ADS)
Koenrades, Maaike A.; Struijs, Ella M.; Klein, Almar; Kuipers, Henny; Geelkerken, Robert H.; Slump, Cornelis H.
2017-03-01
The application of endovascular aortic aneurysm repair has expanded over the last decade. However, the long-term performance of stent grafts, in particular durable fixation and sealing to the aortic wall, remains the main concern of this treatment. The sealing and fixation are challenged at every heartbeat due to downward and radial pulsatile forces. Yet knowledge on cardiac-induced dynamics of implanted stent grafts is sparse, as it is not measured in routine clinical follow-up. Such knowledge is particularly relevant to perform fatigue tests, to predict failure in the individual patient and to improve stent graft designs. Using a physical dynamic stent graft model in an anthropomorphic phantom, we have evaluated the performance of our previously proposed segmentation and registration algorithm to detect periodic motion of stent grafts on ECG-gated (3D+t) CT data. Abdominal aortic motion profiles were simulated in two series of Gaussian-based patterns with different amplitudes and frequencies. Experiments were performed on a 64-slice CT scanner with a helical scan protocol and retrospective gating. Motion patterns as estimated by our algorithm were compared to motion patterns obtained from optical camera recordings of the physical stent graft model in motion. Absolute errors of the patterns' amplitude were smaller than 0.28 mm. Even the motion pattern with an amplitude of 0.23 mm was measured, although the amplitude of motion was overestimated by the algorithm by 43%. We conclude that the algorithm performs well for measurement of stent graft motion in the mm and sub-mm range. Ultimately, this is expected to aid in patient-specific risk assessment and improved stent graft design.
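The reported numbers are easy to sanity-check: a 43% overestimate of a 0.23 mm amplitude corresponds to a measured amplitude of roughly 0.33 mm. A hedged sketch of a Gaussian-based periodic motion pattern and the amplitude-error metric; the pulse parameters below are illustrative, not the paper's.

```python
import math


def gaussian_motion(t, amplitude, period=1.0, sigma=0.1):
    """One Gaussian displacement pulse per simulated cardiac cycle;
    peak displacement equals `amplitude` at mid-cycle."""
    phase = (t % period) - period / 2.0
    return amplitude * math.exp(-phase ** 2 / (2.0 * sigma ** 2))


def amplitude_error_pct(measured_mm, true_mm):
    """Signed percentage error of a measured motion amplitude."""
    return 100.0 * (measured_mm - true_mm) / true_mm
```

For example, `amplitude_error_pct(0.329, 0.23)` is about +43%, consistent with the overestimate reported for the smallest pattern.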
Wrede, Karsten H.; Johst, Sören; Dammann, Philipp; Özkan, Neriman; Mönninghoff, Christoph; Kraemer, Markus; Maderwald, Stefan; Ladd, Mark E.; Sure, Ulrich; Umutlu, Lale; Schlamann, Marc
2014-01-01
Purpose Conventional saturation pulses cannot be used for 7 Tesla ultra-high-resolution time-of-flight magnetic resonance angiography (TOF MRA) due to specific absorption rate (SAR) limitations. We overcome these limitations by utilizing low flip angle, variable rate selective excitation (VERSE) algorithm saturation pulses. Material and Methods Twenty-five neurosurgical patients (male n = 8, female n = 17; average age 49.64 years; range 26–70 years) with different intracranial vascular pathologies were enrolled in this trial. All patients were examined with a 7 Tesla (Magnetom 7 T, Siemens) whole body scanner system utilizing a dedicated 32-channel head coil. For venous saturation pulses a 35° flip angle was applied. Two neuroradiologists evaluated the delineation of arterial vessels in the Circle of Willis, delineation of vascular pathologies, presence of artifacts, vessel-tissue contrast and overall image quality of TOF MRA scans in consensus on a five-point scale. Normalized signal intensities in the confluence of venous sinuses, M1 segment of left middle cerebral artery and adjacent gray matter were measured and vessel-tissue contrasts were calculated. Results Ratings for the majority of patients ranged between good and excellent for most of the evaluated features. Venous saturation was sufficient for all cases with minor artifacts in arteriovenous malformations and arteriovenous fistulas. Quantitative signal intensity measurements showed high vessel-tissue contrast for confluence of venous sinuses, M1 segment of left middle cerebral artery and adjacent gray matter. Conclusion The use of novel low flip angle VERSE algorithm pulses for saturation of venous vessels can overcome SAR limitations in 7 Tesla ultra-high-resolution TOF MRA. Our protocol is suitable for clinical application with excellent image quality for delineation of various intracranial vascular pathologies. PMID:25232868
NASA Astrophysics Data System (ADS)
Latulippe, Maxime; Felfoul, Ouajdi; Dupont, Pierre E.; Martel, Sylvain
2016-02-01
The magnetic navigation of drugs in the vascular network promises to increase the efficacy and reduce the secondary toxicity of cancer treatments by targeting tumors directly. Recently, dipole field navigation (DFN) was proposed as the first method achieving both high field and high navigation gradient strengths for whole-body interventions in deep tissues. This is achieved by introducing large ferromagnetic cores around the patient inside a magnetic resonance imaging (MRI) scanner. However, doing so distorts the static field inside the scanner, which prevents imaging during the intervention. This limitation constrains DFN to open-loop navigation, thus posing a risk of harmful toxicity in case of a navigation failure. Here, we are interested in periodically assessing drug targeting efficiency using MRI even in the presence of a core. We demonstrate, using a clinical scanner, that it is in fact possible to acquire, in specific regions around a core, images of sufficient quality to perform this task. We show that the core can be moved inside the scanner to a position minimizing the distortion effect in the region of interest for imaging. Moving the core can be done automatically using the gradient coils of the scanner, which then also enables the core to be repositioned to perform navigation to additional targets. The feasibility and potential of the approach are validated in an in vitro experiment demonstrating navigation and assessment at two targets.
MR Imaging-Guided Attenuation Correction of PET Data in PET/MR Imaging.
Izquierdo-Garcia, David; Catana, Ciprian
2016-04-01
Attenuation correction (AC) is one of the most important challenges in the recently introduced combined PET/magnetic resonance (MR) scanners. PET/MR AC (MR-AC) approaches aim to develop methods that allow accurate estimation of the linear attenuation coefficients of the tissues and other components located in the PET field of view. MR-AC methods can be divided into 3 categories: segmentation, atlas, and PET based. This review provides a comprehensive list of the state-of-the-art MR-AC approaches and their pros and cons. The main sources of artifacts are presented. Finally, this review discusses the current status of MR-AC approaches for clinical applications. Copyright © 2016 Elsevier Inc. All rights reserved.
Optimal Shape of a Gamma-ray Collimator: single vs double knife edge
NASA Astrophysics Data System (ADS)
Metz, Albert; Hogenbirk, Alfred
2017-09-01
Gamma-ray collimators in nuclear waste scanners are used for selecting a narrow vertical segment in activity measurements of waste vessels. The system that is used by NRG uses tapered slit collimators of both the single and double knife edge type. The properties of these collimators were investigated by means of Monte Carlo simulations. We found that single knife edge collimators are highly preferable for a conservative estimate of the activity of the waste vessels. These collimators show much less dependence on the angle of incidence of the radiation than double knife edge collimators. This conclusion also applies to cylindrical collimators of the single knife edge type, which are generally used in medical imaging spectroscopy.
Thermal surveillance of active volcanoes
NASA Technical Reports Server (NTRS)
Friedman, J. D. (Principal Investigator)
1973-01-01
The author has identified the following significant results. There are three significant scientific results of the discovery of 48 pinpoint anomalies on the upper flanks of Mt. Rainier: (1) Many of these points may actually be the location of fumarolic vapor emission or warm ground considerably below the summit crater. (2) Discovery of these small anomalies required specific V/H scanner settings for precise elevation on Mt. Rainier's flank, to avoid smearing the anomalies to the point of nonrecognition. Several past missions flown to map the thermal anomalies of the summit area did not detect the flank anomalies. (3) This illustrates the value of the aerial IR scanner as a geophysical tool suited to specific problem-oriented missions, in contrast to its more general value in a regional or reconnaissance anomaly-mapping role.
NASA Technical Reports Server (NTRS)
Enison, R. L.
1971-01-01
A computer program called Character String Scanner (CSS), is presented. It is designed to search a data set for any specified group of characters and then to flag this group. The output of the CSS program is a listing of the data set being searched with the specified group of characters being flagged by asterisks. Therefore, one may readily identify specific keywords, groups of keywords or specified lines of code internal to a computer program, in a program output, or in any other specific data set. Possible applications of this program include the automatic scan of an output data set for pertinent keyword data, the editing of a program to change the appearance of a certain word or group of words, and the conversion of a set of code to a different set of code.
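The flagging behavior described (matched character groups marked by asterisks in the output listing) is easy to illustrate. A minimal modern reimplementation for illustration only; the original CSS program and its exact listing format are not reproduced here.

```python
def scan_listing(lines, target):
    """Return a listing in which each line containing `target` is followed
    by a marker line of asterisks under the first matched span, mimicking
    the flagged-listing output described for CSS."""
    out = []
    for line in lines:
        out.append(line)
        col = line.find(target)
        if col >= 0:
            out.append(" " * col + "*" * len(target))
    return out
```

Running it over program text flags every line containing the keyword, so specific identifiers or lines of code can be located at a glance.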
Kazakauskaite, Egle; Husmann, Lars; Stehli, Julia; Fuchs, Tobias; Fiechter, Michael; Klaeser, Bernd; Ghadri, Jelena R; Gebhard, Catherine; Gaemperli, Oliver; Kaufmann, Philipp A
2013-02-01
A new generation of high definition computed tomography (HDCT) 64-slice devices, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction), offers substantially higher resolution compared to standard definition CT (SDCT) scanners. Because higher resolution confers higher noise, we have compared image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT, and were compared to 93 patients who had previously undergone CCTA with SDCT, matched for heart rate (HR), HR variability and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluative; 4, non-evaluative). Effective radiation dose was calculated from DLP multiplied by a conversion factor (0.014 mSv/mGy × cm). The mean image quality score from HDCT versus SDCT was comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not significantly differ between HDCT (1.7 ± 0.6 mSv, range 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv, range 0.8-5.5 mSv; P = n.s.). HDCT scanners allow low-dose 64-slice CCTA scanning with higher resolution than SDCT while maintaining image quality and an equally low radiation dose. Whether this will translate into higher accuracy of HDCT for CAD detection remains to be evaluated.
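The dose calculation stated in the abstract is a single multiplication: effective dose = DLP × 0.014 mSv/(mGy·cm). A one-line helper; the example DLP value below is illustrative, back-computed from the reported mean HDCT dose rather than taken from the paper.

```python
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    """Effective dose in mSv from dose-length product (mGy*cm) using the
    chest conversion coefficient k = 0.014 mSv/(mGy*cm) from the abstract."""
    return dlp_mgy_cm * k
```

A DLP of roughly 121 mGy·cm corresponds to the reported 1.7 mSv mean HDCT dose.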
Attenuation correction for the large non-human primate brain imaging using microPET.
Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R
2010-04-21
Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a (57)Co transmission point source with a 4% energy window. The optimal energy window for a (68)Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for (57)Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [(18)F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass (57)Co (4% energy window) or (68)Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.
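For context, the attenuation correction that these transmission scans provide amounts to estimating, for each line of response, ACF = exp(∫μ dl). A minimal numerical sketch; the water attenuation coefficient used in the example is an approximate textbook value at 511 keV, not a number from the paper.

```python
import math


def attenuation_correction_factor(mu_per_cm, step_cm):
    """ACF for one line of response: exp of the line integral of the linear
    attenuation coefficient, approximated by a Riemann sum over samples."""
    return math.exp(sum(mu_per_cm) * step_cm)


# Example: ~20 cm of water-equivalent tissue at 511 keV (mu ~= 0.096 /cm,
# an assumed approximate value), sampled every 1 mm.
acf = attenuation_correction_factor([0.096] * 200, 0.1)
```

Even a modest 20 cm water path attenuates annihilation photons by a factor of nearly seven, which is why accurate measured correction matters for a large primate torso.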
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-01-01
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
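The per-point correspondence step (3D point → panoramic pixel) can be sketched for an ideal equirectangular panorama. The lever-arm and orientation corrections from the sensor constellation are omitted, and the camera-frame axis convention below is an assumption, not the paper's calibration model.

```python
import math


def point_to_panorama(x, y, z, width, height):
    """Map a point already expressed in the panoramic camera frame
    (x forward, z up assumed) to pixel coordinates (u, v) of an
    equirectangular panorama of the given size."""
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(y, x)                      # azimuth, -pi..pi
    lat = math.asin(z / r)                      # elevation, -pi/2..pi/2
    u = (lon + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - lat) / math.pi * height
    return u, v
```

A point straight ahead of the camera lands at the panorama center, and points higher in elevation map toward the top rows.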
Sabarudin, Akmal; Sun, Zhonghua; Yusof, Ahmad Khairuddin Md
2013-09-30
This study was conducted to investigate and compare image quality and radiation dose between prospective ECG-triggered and retrospective ECG-gated coronary CT angiography (CCTA) with the use of single-source CT (SSCT) and dual-source CT (DSCT). A total of 209 patients with suspected coronary artery disease who underwent CCTA with SSCT (n=95) and DSCT (n=114) scanners using prospective ECG-triggered and retrospective ECG-gated protocols were recruited from two institutions. Image quality was assessed by two experienced observers, while quantitative assessment was performed by measuring the image noise, the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR). Effective dose was calculated using the latest published conversion coefficient factor. A total of 2087 out of 2880 coronary artery segments were assessable, with 98.0% classified as of sufficient and 2.0% as of insufficient image quality for clinical diagnosis. There was no significant difference in overall image quality between the prospective ECG-triggered and retrospective ECG-gated protocols, whether performed with DSCT or SSCT scanners. With the prospective ECG-triggered protocol, radiation dose did not differ significantly between DSCT (6.5 ± 2.9 mSv) and SSCT (6.2 ± 1.0 mSv) scanners (p=0.99). However, the effective dose was significantly lower with DSCT (18.2 ± 8.3 mSv) than with SSCT (28.3 ± 7.0 mSv) in the retrospective ECG-gated protocol. Prospective ECG-triggered CCTA reduces radiation dose significantly compared to retrospective ECG-gated CCTA, while maintaining good image quality. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
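The quantitative metrics named here are standard ROI statistics; a minimal sketch, with the ROI values in the test chosen for illustration rather than taken from the study.

```python
def snr(mean_vessel, noise_sd):
    """Signal-to-noise ratio: mean ROI attenuation divided by image noise
    (standard deviation in a homogeneous region)."""
    return mean_vessel / noise_sd


def cnr(mean_vessel, mean_background, noise_sd):
    """Contrast-to-noise ratio: vessel-to-background attenuation difference
    divided by image noise."""
    return (mean_vessel - mean_background) / noise_sd
```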
Subtle In-Scanner Motion Biases Automated Measurement of Brain Anatomy From In Vivo MRI
Alexander-Bloch, Aaron; Clasen, Liv; Stockman, Michael; Ronan, Lisa; Lalonde, Francois; Giedd, Jay; Raznahan, Armin
2016-01-01
While the potential for small amounts of motion in functional magnetic resonance imaging (fMRI) scans to bias the results of functional neuroimaging studies is well appreciated, the impact of in-scanner motion on morphological analysis of structural MRI is relatively under-studied. Even among “good quality” structural scans, there may be systematic effects of motion on measures of brain morphometry. In the present study, the subjects’ tendency to move during fMRI scans, acquired in the same scanning sessions as their structural scans, yielded a reliable, continuous estimate of in-scanner motion. Using this approach within a sample of 127 children, adolescents, and young adults, significant relationships were found between this measure and estimates of cortical gray matter volume and mean curvature, as well as trend-level relationships with cortical thickness. Specifically, cortical volume and thickness decreased with greater motion, and mean curvature increased. These effects of subtle motion were anatomically heterogeneous, were present across different automated imaging pipelines, showed convergent validity with effects of frank motion assessed in a separate sample of 274 scans, and could be demonstrated in both pediatric and adult populations. Thus, using different motion assays in two large non-overlapping sets of structural MRI scans, convergent evidence showed that in-scanner motion—even at levels which do not manifest in visible motion artifact—can lead to systematic and regionally specific biases in anatomical estimation. These findings have special relevance to structural neuroimaging in developmental and clinical datasets, and inform ongoing efforts to optimize neuroanatomical analysis of existing and future structural MRI datasets in non-sedated humans. PMID:27004471
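The fMRI-derived motion estimate is not specified in this abstract; one common summary that could serve such a role is framewise displacement in the style of Power et al., sketched below. The 50 mm head-radius convention for converting rotations to millimeters is part of that convention, not a detail taken from this paper.

```python
def framewise_displacement(p0, p1, head_radius_mm=50.0):
    """Framewise displacement between two consecutive volumes from six
    rigid-body realignment parameters: three translations (mm) followed by
    three rotations (radians). Rotations are converted to arc length on a
    sphere of the given radius before summing absolute differences."""
    d = [abs(b - a) for a, b in zip(p0, p1)]
    return sum(d[:3]) + head_radius_mm * sum(d[3:])
```

Averaging this quantity over a run gives a single continuous per-subject motion score of the kind used to probe motion-morphometry relationships.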
CT protocol management: simplifying the process by using a master protocol concept.
Szczykutowicz, Timothy P; Bour, Robert K; Rubert, Nicholas; Wendt, Gary; Pozniak, Myron; Ranallo, Frank N
2015-07-08
This article explains a method for creating CT protocols for a wide range of patient body sizes and clinical indications, using detailed tube current information from a small set of commonly used protocols. Analytical expressions were created relating CT technical acquisition parameters which can be used to create new CT protocols on a given scanner or customize protocols from one scanner to another. Plots of mA as a function of patient size for specific anatomical regions were generated and used to identify the tube output needs for patients as a function of size for a single master protocol. Tube output data were obtained from the DICOM header of clinical images from our PACS and patient size was measured from CT localizer radiographs under IRB approval. This master protocol was then used to create 11 additional master protocols. The 12 master protocols were further combined to create 39 single and multiphase clinical protocols. Radiologist acceptance rate of exams scanned using the clinical protocols was monitored for 12,857 patients to analyze the effectiveness of the presented protocol management methods using a two-tailed Fisher's exact test. A single routine adult abdominal protocol was used as the master protocol to create 11 additional master abdominal protocols of varying dose and beam energy. Situations in which the maximum tube current would have been exceeded are presented, and the trade-offs between increasing the effective tube output via 1) decreasing pitch, 2) increasing the scan time, or 3) increasing the kV are discussed. Out of 12 master protocols customized across three different scanners, only one had a statistically significant acceptance rate that differed from the scanner it was customized from. The difference, however, was only 1% and was judged to be negligible. All other master protocols differed in acceptance rate insignificantly between scanners. 
The methodology described in this paper allows a small set of master protocols to be adapted among different clinical indications on a single scanner and among different CT scanners.
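The three trade-offs listed (lowering pitch, lengthening scan time, raising kV) act through the effective tube output. The standard helical-CT relation can be sketched in one line; this is the generic definition, not code from the paper.

```python
def effective_mas(tube_current_ma, rotation_time_s, pitch):
    """Effective mAs for helical CT: mA x rotation time / pitch. When the
    scanner's mA ceiling is reached, halving the pitch (or doubling the
    rotation time) doubles the effective tube output instead."""
    return tube_current_ma * rotation_time_s / pitch
```

For example, a scanner capped at 400 mA with a 0.5 s rotation delivers 200 effective mAs at pitch 1.0 but 400 effective mAs at pitch 0.5.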
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raterman, G; Gauntt, D
2014-06-01
Purpose: To propose a method other than CTDI phantom measurements for routine CT dosimetry QA. This consists of taking a series of air exposure measurements and calculating a factor, based on DLP, for converting an exposure measurement to the protocol's associated head or body CTDI value. The data presented are the ratios of phantom DLP to air exposure for different scanners, as well as the error in the displayed CTDI. Methods: For each scanner, the CTDI is measured at all available tube voltages using both the head and body phantoms. Then, the exposure is measured using a pencil chamber in air at isocenter. A ratio of phantom DLP to exposure in air for a given protocol may be calculated and used for converting a simple air dose measurement to a head or body CTDI value. For our routine QA, the exposure in air for different collimations, mAs, and kVp is measured, and the displayed CTDI is recorded. The calculated ratio may therefore convert these exposures to CTDI values that may then be compared to the displayed CTDI for a large range of acquisition parameter combinations. Results: It was found that all scanners tend to have a ratio factor that increases slightly with kVp. Philips scanners appear to have less dependence on kVp, whereas GE scanners have a lower ratio at lower kVp. The use of air exposure times the DLP conversion yielded CTDI values that were less than 10% different from the displayed CTDI on several scanners. Conclusion: This method may be used as a primary method for CT dosimetry QA. Because of the ease of measurement, a dosimetry metric specific to that scanner may be calculated for a wide variety of CT protocols, which could also be used to monitor displayed CTDI value accuracy.
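Once the phantom-to-air ratio is calibrated for a scanner and protocol, the proposed QA check reduces to a multiplication plus a comparison against the displayed value. A hedged sketch; function names and the example numbers are illustrative, not from the abstract.

```python
def ctdi_from_air(air_dose_mgy, phantom_to_air_ratio):
    """Estimate a head or body CTDI from an in-air pencil-chamber
    measurement using a ratio calibrated once per scanner, kVp and
    phantom size for the protocol in question."""
    return air_dose_mgy * phantom_to_air_ratio


def display_error_pct(estimated, displayed):
    """Percent deviation of the estimated CTDI from the value displayed
    by the scanner console."""
    return 100.0 * (estimated - displayed) / displayed
```

An estimate within the abstract's quoted 10% of the displayed CTDI would pass this style of QA check.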
SU-G-206-11: The Effect of Table Height On CTDIvol and SSDE in CT Scanning: A Phantom Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, R; Silosky, M
2016-06-15
Purpose: Localizer projection radiographs acquired prior to CT scans are used to estimate patient size, affecting the function of Automatic Tube Current Modulation (ATCM) and calculation of the Size Specific Dose Estimate (SSDE). Due to geometric effects, the projected patient size varies with scanner table height and with the orientation of the localizer (AP versus PA). Consequently, variations in scanner table height may affect both CTDIvol and the calculated size-corrected dose index (SSDE). This study sought to characterize these effects. Methods: An anthropomorphic phantom was imaged using an AP localizer, followed by a diagnostic scan using ATCM and our institution’s routine abdomen protocol. This was repeated at various scanner table heights, recording the scanner-reported CTDIvol for each diagnostic scan. The width of the phantom was measured from the localizer and diagnostic images using in-house software. The measured phantom width and scanner-reported CTDIvol were used to calculate SSDE. This was repeated using PA localizers followed by diagnostic scans. Results: 1) The localizer-based phantom width varied by up to 54% of the nominal phantom width between minimum and maximum table heights. 2) Changing the table height caused a variation in scanner-reported CTDIvol of a factor greater than 4.6 when using a PA localizer and almost 2 when using an AP localizer. 3) SSDE, calculated from measured phantom size and scanner-reported CTDIvol, varied by a factor of more than 2.8 when using a PA localizer and almost 1.5 when using an AP localizer. Conclusion: Our study demonstrates that off-center patient positioning affects the efficacy of ATCM, more severely when localizers are acquired in the PA rather than AP projection. Further, patient positioning errors can cause a large variation in the calculated SSDE. This hinders interpretation of SSDE for individual patients and aggregate SSDE data when evaluating CT protocols and clinical practices.
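The SSDE referenced here is the AAPM Report 204 size correction applied to the 32 cm phantom CTDIvol. A sketch using that report's exponential fit of the size factor; the coefficients are quoted from the report from memory and should be verified before any real use.

```python
import math


def ssde_mgy(ctdivol_32cm_mgy, effective_diameter_cm):
    """Size-specific dose estimate: scanner-reported CTDIvol (32 cm body
    phantom) scaled by the AAPM Report 204 size conversion factor,
    f = 3.704369 * exp(-0.03671937 * effective diameter in cm)."""
    f = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
    return f * ctdivol_32cm_mgy
```

Because the factor depends exponentially on the size estimate, the localizer-magnification errors described above propagate directly into SSDE: an over-projected width shrinks the factor and understates dose.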
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supanich, M; Bevins, N
Purpose: This review of scanners from 4 major manufacturers examines the clinical impact of performing CT scans that extend into areas of the body that were not acquired in the CT localizer radiograph. Methods: Anthropomorphic chest and abdomen phantoms were positioned together on the tables of CT scanners from 4 different vendors. All of the scanners offered an Automatic Exposure Control (AEC) option with both lateral and axial tube current modulation. A localizer radiograph was taken covering the entire extent of both phantoms and then the scanner's Chest-Abdomen-Pelvis (CAP) study was performed with the clinical AEC settings employed and the scan and reconstruction range extending from the superior portion of the chest phantom through the inferior portion of the abdomen phantom. A new study was then initiated with a localizer radiograph extending the length of the chest phantom (not covering the abdomen phantom). The same CAP protocol and AEC settings were then used to scan and reconstruct the entire length of both phantoms. Scan parameters at specific locations in the abdomen phantom from both studies were investigated using the information contained in the DICOM metadata of the reconstructed images. Results: The AEC systems on all scanners utilized different tube current settings in the abdomen phantom for the scan completed without the full localizer radiograph. The AEC system behavior was also scanner dependent, with the default manual tube current, the maximum tube current and the tube current at the last known position observed as outcomes. Conclusion: The behavior of the AEC systems of CT scanners in regions not covered by the localizer radiograph is vendor dependent. To ensure optimal image quality and radiation exposure it is important to include the entire planned scan region in the localizer radiograph.
Zgong, Xin; Yu, Quan; Yu, Zhe-yuan; Wang, Guo-min; Qian, Yu-fen
2012-04-01
To establish a new method of presurgical alveolar molding using computer-aided design (CAD) in infants with complete unilateral cleft lip and palate (UCLP). Ten infants with complete UCLP were recruited. A maxillary impression was taken at the first examination after birth. The study model was scanned by a non-contact three-dimensional laser scanner, and a digital model was constructed and analyzed to simulate the alveolar molding procedure with reverse engineering software (RapidForm 2006). The digital geometrical data were exported to produce a scale model using rapid prototyping technology. The whole set of appliances was fabricated based on these solid models. The digital model could be viewed and measured from any direction by the software. By the end of the NAM treatment, before surgical lip repair, the cleft was narrowed and the malformed alveolar segments were aligned normally, significantly improving nasal symmetry and nostril shape. Presurgical NAM using CAD could simplify the treatment procedure and estimate the treatment objective, enabling precise control of the force and direction of alveolar segment movement.
Bauer, Jan Stefan; Noël, Peter Benjamin; Vollhardt, Christiane; Much, Daniela; Degirmenci, Saliha; Brunner, Stefanie; Rummeny, Ernst Josef; Hauner, Hans
2015-01-01
Purpose MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR-measurements of adipose tissue in young infants in vitro and in vivo. Material and Methods MR images of ten phantoms simulating subcutaneous fat of an infant’s torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a cartesian water-suppression turbo spin echo (wsTSE) sequence, and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board. All parents gave written informed consent. To obtain reproducibility in vivo, cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the cartesian sequence and 13 repetitions for the PROPELLER sequence. Results In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. Conclusion With both cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy. PMID:25706876
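The k-means clustering segmentation mentioned in this abstract can be illustrated on one-dimensional voxel intensities. A toy sketch, assuming synthetic intensity values (the two-cluster split stands in for fat versus lean tissue; none of the numbers come from the study):

```python
def kmeans_1d(values, k=2, iters=50):
    """Lloyd's algorithm on scalar intensities; returns sorted centroids."""
    centroids = [min(values), max(values)] if k == 2 else sorted(values)[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:   # converged
            break
        centroids = new
    return sorted(centroids)

def segment_fat(values):
    """Label a voxel fat (True) if it lies nearer the bright centroid."""
    lo, hi = kmeans_1d(values)
    thresh = (lo + hi) / 2.0       # midpoint acts as the threshold
    return [v > thresh for v in values]

# Synthetic slice: lean tissue around 100, fat around 200 (arbitrary units).
voxels = [95, 102, 98, 105, 198, 205, 201, 99, 196, 203]
labels = segment_fat(voxels)
fat_fraction = sum(labels) / len(labels)
```

As the abstract notes, the choice of segmentation procedure (clustering versus fixed thresholds) dominates accuracy; in this sketch the threshold adapts to the data instead of being hand-picked.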
Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J
2017-07-01
Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtual modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.
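The precision metric used above, the fraction of mesh points whose point-to-point distance variation stays within a tolerance, can be sketched directly. A minimal example, assuming two already-aligned, vertex-correspondent meshes (the coordinates are synthetic stand-ins for segmented pelvis models):

```python
def point_to_point_distances(mesh_a, mesh_b):
    """Euclidean distance between corresponding vertices of two meshes."""
    return [((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
            for (ax, ay, az), (bx, by, bz) in zip(mesh_a, mesh_b)]

def fraction_within(mesh_a, mesh_b, tol_mm):
    """Fraction of corresponding points whose deviation is <= tol_mm."""
    d = point_to_point_distances(mesh_a, mesh_b)
    return sum(1 for x in d if x <= tol_mm) / len(d)

# Two segmentations of the same bone, in mm, vertex-correspondent.
ref   = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
other = [(0.5, 0, 0), (10, 0.8, 0), (0, 10, 3.0), (0.1, 0, 10)]

frac_2mm = fraction_within(ref, other, 2.0)   # one joint-surface outlier
```

The study's ">97% of locations within 2 mm" figure is exactly this quantity computed over the full polygon mesh, repeated across imaging conditions.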
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Y; Scott, A; Allahverdian, J
2014-06-15
Purpose: It is possible to measure the patient surface dose non-invasively using radiolucent dosimeters. However, the patient-size-specific weighted dose remains unknown. We attempted to study the weighted-dose to surface-dose relationship as patient size varies in abdominal CT. Methods: Seven abdomen phantoms (CIRS TE series) simulating patients from an infant to a large adult were used. Size-specific doses were measured with a 100 mm CT chamber under axial scans using a Siemens Sensation 64 (mCT) and a GE 750 HD. The scanner settings were 120 kVp, 200 mAs with fully opened collimation. Additional kVps (80, 100, 140) were added depending on the phantom sizes. The ratio (r) of the weighted CT dose (Dw) to the surface dose (Ds) was related to the phantom size (L), defined as the diameter resulting in the equivalent cross-sectional area. Results: The Dw versus Ds ratio (r) was fitted to a linear relationship: r = 1.083 − 0.007L (R square = 0.995) and r = 1.064 − 0.007L (R square = 0.953), for the Siemens Sensation 64 and the GE 750 HD, respectively. The relationship appears to be independent of the scanner specifics. Conclusion: The weighted dose to surface dose ratio decreases linearly as patient size increases. The result is independent of the scanner specifics and can be used to obtain in vivo CT dosimetry in abdominal CT.
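The fitted relation above gives a direct way to infer the weighted dose from a non-invasive surface-dose reading. A minimal sketch using the Siemens coefficients reported in the abstract (r = 1.083 − 0.007·L, with L the equivalent diameter in cm); the example dose and sizes are illustrative:

```python
def weighted_to_surface_ratio(L_cm, a=1.083, b=0.007):
    """r = Dw / Ds, fitted as a linear function of equivalent diameter L.
    Default coefficients are the Siemens Sensation 64 fit from the abstract."""
    return a - b * L_cm

def weighted_dose_from_surface(surface_dose_mGy, L_cm):
    """Estimate the weighted CT dose Dw from a surface-dose measurement."""
    return weighted_to_surface_ratio(L_cm) * surface_dose_mGy

# The ratio falls with patient size: for large patients the surface dose
# increasingly over-represents the weighted dose.
r_infant = weighted_to_surface_ratio(10.0)   # small phantom, hypothetical L
r_adult  = weighted_to_surface_ratio(35.0)   # large phantom, hypothetical L
dw = weighted_dose_from_surface(20.0, 30.0)  # hypothetical Ds = 20 mGy
```

Swapping in the GE coefficients (a = 1.064) changes the estimate by only a few percent, which is the abstract's scanner-independence claim in numerical form.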
Multi-spectral optical scanners for commercial earth observation missions
NASA Astrophysics Data System (ADS)
Schröter, Karin; Engel, Wolfgang; Berndt, Klaus
2017-11-01
In recent years, a number of commercial Earth observation missions have been initiated with the aim to gather data in the visible and near-infrared wavelength range. Some of these missions aim at medium-resolution (5 to 10 m) multi-spectral imaging with the special background of daily revisiting. Typical applications aim at monitoring of farming areas for growth control and harvest prediction, irrigation control, or disaster monitoring such as hail damage in farming or flood survey. In order to arrive at profitable business plans for such missions, it is mandatory to establish the space segment, i.e. the spacecraft with their opto-electronic payloads, at minimum cost while guaranteeing maximum reliability for mission success. As multiple spacecraft are required for daily revisiting, the solutions are typically based on micro-satellites. This paper presents designs for multi-spectral opto-electronic scanners for this type of mission. These designs are driven by the minimum mass and power budgets of micro-satellites, and the need for minimum cost. As a consequence, it is mandatory to arrive at thermally robust, compact telescope designs. The paper gives a comparison between refractive, catadioptric, and TMA optics. For mirror designs, aluminium and Zerodur mirror technologies are briefly discussed. State-of-the-art focal plane designs are presented. The paper also addresses the choice of detector technologies such as CCDs and CMOS Active Pixel Sensors. The electronics of the multi-spectral scanners represent the main design driver regarding power consumption, reliability, and (most often) cost. It can be subdivided into the detector drive electronics, analog and digital data processing chains, the data mass memory unit, formatting and down-linking units, payload control electronics, and local power supply. The paper gives overviews and trade-offs between data compression strategies and electronics solutions, mass memory unit designs, and data formatting approaches.
Special emphasis will be put on space application aspects of these electronics solutions such as radiation total dose tolerance and single events robustness. Finally, software architecture and operational modes of commercial multi-spectral scanners are discussed. They are driven by operational requirements and mission constraints such as data takes per orbit, number of downlink ground stations, calibration needs, and mission schedule planning.
Goracci, Cecilia; Franchi, Lorenzo; Vichi, Alessandro; Ferrari, Marco
2016-08-01
The interest in intraoral scanners for digital impressions has been growing, and new devices are continuously introduced on the market. It is timely to verify whether the scanners proposed for full-arch digital impressions have been tested under clinical conditions for validity, repeatability, and reproducibility, as well as for time efficiency and patient acceptance. An electronic search of the literature was conducted through PubMed, Scopus, Cochrane Library, Web of Science, and Embase, entering the query terms 'digital impression', 'intraoral digital impression', 'intraoral scanning', 'intraoral scanner', 'intraoral digital scanner', combined by the Boolean operator 'OR'. No language or time limitation was applied. Only studies where digital full-arch impressions had been recorded intraorally were considered. In only eight studies had full-arch scans been performed intraorally. Only four studies reported data on validity, repeatability, and reproducibility of digital measurements, and their samples were limited to subjects in complete permanent dentition. Only two intraoral scanners, Lava COS and iTero, were tested. Scanning times were measured in six studies and varied largely. Patients' acceptance of intraoral scanning was evaluated in four studies, but it was not specifically assessed for children. The scientific evidence so far collected on intraoral scanning is neither exhaustive nor up-to-date. Data from full-arch scans performed in children should be collected. For a meaningful assessment of time efficiency, agreement should be reached on the procedural steps to be included in the computation of scanning time. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved.
Navigation concepts for MR image-guided interventions.
Moche, Michael; Trampel, Robert; Kahn, Thomas; Busse, Harald
2008-02-01
The ongoing development of powerful magnetic resonance imaging techniques also allows for advanced possibilities to guide and control minimally invasive interventions. Various navigation concepts have been described for practically all regions of the body. The specific advantages and limitations of these concepts largely depend on the magnet design of the MR scanner and the interventional environment. Open MR scanners involve minimal patient transfer, which improves the interventional workflow and reduces the need for coregistration, i.e., the mapping of spatial coordinates between imaging and intervention position. Most diagnostic scanners, in contrast, do not allow the physician to guide the instrument inside the magnet and, consequently, the patient needs to be moved out of the bore. Although adequate coregistration and navigation concepts for closed-bore scanners are technically more challenging, many developments are driven by the well-known capabilities of high-field systems and their better economic value. Advanced concepts such as multimodal overlays, augmented reality displays, and robotic assistance devices are still in their infancy but might propel the use of intraoperative navigation. The goal of this work is to give an update on MRI-based navigation and related techniques and to briefly discuss the clinical experience and limitations of some selected systems. © 2008 Wiley-Liss, Inc.
Laser Scanning on Road Pavements: A New Approach for Characterizing Surface Texture
Bitelli, Gabriele; Simone, Andrea; Girardi, Fabrizio; Lantieri, Claudio
2012-01-01
The surface layer of road pavement has a particular importance in relation to the satisfaction of the primary demands of locomotion, such as security and eco-compatibility. Among those pavement surface characteristics, the “texture” appears to be one of the most interesting with regard to the attainment of skid resistance. Specifications and regulations, providing a wide range of functional indicators, act as guidelines to satisfy the performance requirements. This paper describes an experiment on the use of laser scanner techniques on various types of asphalt for texture characterization. The use of high-precision laser scanners, such as the triangulation types, is proposed to expand the analysis of road pavement from the commonly and currently used two-dimensional method to a three-dimensional one, with the aim of extending the range of the most important parameters for these kinds of applications. Laser scanners can be used in an innovative way to obtain information on the areal surface layer through a single measurement, with data homogeneity and representativeness. The described experience highlights how the laser scanner is used for both laboratory experiments and tests in situ, with particular attention paid to factors that could potentially affect the survey. PMID:23012535
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
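The spectrum and source-distribution fitting described above rests on a Levenberg-Marquardt optimizer. As an illustration of the idea (not the authors' code), here is a minimal pure-Python LM loop fitting a two-parameter exponential y = a·e^(−bx), the same flavor of damped nonlinear least squares used to match measured depth-dose curves; the data are synthetic:

```python
import math

def lm_fit(xs, ys, a0=1.0, b0=0.1, iters=200):
    """Minimal Levenberg-Marquardt loop for y = a*exp(-b*x).
    Illustrative only; a real spectrum fit has many more parameters."""
    def residuals(a, b):
        return [a * math.exp(-b * x) - y for x, y in zip(xs, ys)]

    def cost(a, b):
        return sum(r * r for r in residuals(a, b))

    a, b, lam = a0, b0, 1e-3
    for _ in range(iters):
        r = residuals(a, b)
        # Jacobian rows: (dr/da, dr/db) at each sample point.
        J = [(math.exp(-b * x), -a * x * math.exp(-b * x)) for x in xs]
        g00 = sum(j0 * j0 for j0, _ in J)
        g01 = sum(j0 * j1 for j0, j1 in J)
        g11 = sum(j1 * j1 for _, j1 in J)
        t0 = sum(j0 * ri for (j0, _), ri in zip(J, r))
        t1 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        # Damped normal equations: (J^T J + lam*diag(J^T J)) step = -J^T r
        A00, A11 = g00 * (1 + lam), g11 * (1 + lam)
        det = A00 * A11 - g01 * g01
        if det == 0:
            break
        da = -(A11 * t0 - g01 * t1) / det
        db = -(A00 * t1 - g01 * t0) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, max(lam * 0.5, 1e-12)  # accept step
        else:
            lam *= 2.0                                         # reject, damp more
    return a, b

# Synthetic "depth dose" samples drawn from a known exponential.
xs = [0.5 * i for i in range(12)]
ys = [2.0 * math.exp(-0.3 * x) for x in xs]
a_est, b_est = lm_fit(xs, ys)
```

The accept/reject damping is what distinguishes LM from plain Gauss-Newton: far from the optimum it behaves like gradient descent, near it like Newton's method.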
The Registration and Segmentation of Heterogeneous Laser Scanning Data
NASA Astrophysics Data System (ADS)
Al-Durgham, Mohannad M.
Light Detection And Ranging (LiDAR) mapping has been emerging over the past few years as a mainstream tool for the dense acquisition of three-dimensional point data. Besides conventional mapping missions, LiDAR systems have proven to be very useful for a wide spectrum of applications such as forestry, structural deformation analysis, urban mapping, and reverse engineering. The wide application scope of LiDAR led to the development of many laser scanning technologies that are mountable on multiple platforms (i.e., airborne, mobile terrestrial, and tripod mounted); this caused variations in the characteristics and quality of the generated point clouds. As a result of the increased popularity and diversity of laser scanners, the heterogeneous LiDAR data post-processing (i.e., registration and segmentation) problems should be addressed adequately. Current LiDAR integration techniques do not take into account the varying nature of laser scans originating from various platforms. In this dissertation, the author proposes a methodology designed particularly for the registration and segmentation of heterogeneous LiDAR data. A data characterization and filtering step is proposed to populate the points' attributes and remove non-planar LiDAR points. Then, a modified version of the Iterative Closest Point (ICP) algorithm, denoted the Iterative Closest Projected Point (ICPP), is designed for the registration of heterogeneous scans to remove any misalignments between overlapping strips. Next, a region-growing-based heterogeneous segmentation algorithm is developed to ensure the proper extraction of planar segments from the point clouds. Validation experiments show that the proposed heterogeneous registration can successfully align airborne and terrestrial datasets despite the great differences in their point density and noise level.
In addition, similar tests have been conducted to examine the heterogeneous segmentation, and it is shown that one is able to identify common planar features in airborne and terrestrial data without resampling or manipulating the data in any way. The work presented in this dissertation provides a framework for the registration and segmentation of airborne and terrestrial laser scans, which has a positive impact on the completeness of the scanned feature. Therefore, the derived products from these point clouds have higher accuracy, as seen in the full manuscript.
Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.
2014-01-01
Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; age 85 ± 10 yrs., mean±SD). 3D-UTE MRI (TR=50 ms, TE=0.05 ms, 104 μm isotropic voxel) was performed using a 3-T MR scanner, and μCT (18 μm isotropic voxel) was performed. The MR datasets were spatially registered with the μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented volume of the bone was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19±0.15 mm, slightly higher than the spatial resolution of MRI. The average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7±6.5% and 6.6±6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were 0.892, 0.893 and 0.972, respectively. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999 and 0.997, respectively. Fibrocartilage thickness was 0.55±0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature and segmented volume, shows high correlation against μCT and between observers.
In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component. PMID:24092237
Lung fissure detection in CT images using global minimal paths
NASA Astrophysics Data System (ADS)
Appia, Vikram; Patil, Uday; Das, Bipul
2010-03-01
Pulmonary fissures separate human lungs into five distinct regions called lobes. Detection of fissures is essential for localization of the lobar distribution of lung diseases, surgical planning and follow-up. Treatment planning also requires calculation of the lobe volume, and this volume estimation mandates accurate segmentation of the fissures. The presence of other structures (like vessels) near the fissure, along with its high variability in position, shape, etc., makes lobe segmentation a challenging task. Incomplete fissures and the occurrence of disease add further complications to fissure detection. In this paper, we propose a semi-automated fissure segmentation algorithm using a minimal path approach on CT images. An energy function is defined such that the path integral over the fissure is the global minimum. Based on a few user-defined points on a single slice of the CT image, the proposed algorithm minimizes a 2D energy function on the sagittal slice computed using (a) intensity, (b) distance from the vasculature, (c) curvature in 2D, and (d) continuity in 3D. The fissure is the infimum energy path between a representative point on the fissure and the nearest lung boundary point in this energy domain. The algorithm has been tested on 10 CT volume datasets acquired from GE scanners at multiple clinical sites. The datasets span different pathological conditions and varying imaging artifacts.
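A globally minimal path through an energy domain, as described above, can be sketched with Dijkstra's algorithm on a small 2-D cost grid: the recovered path is the one whose summed energy between a seed point and a boundary point is globally minimal. The grid and costs below are synthetic, and a single scalar cost stands in for the paper's combined intensity/vessel/curvature energy:

```python
import heapq

def minimal_path(energy, start, goal):
    """Globally minimal-cost 4-connected path on a 2-D energy grid
    (Dijkstra); returns the path and its total energy."""
    rows, cols = len(energy), len(energy[0])
    dist = {start: energy[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue   # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + energy[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:   # walk predecessors back to the seed
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

# Low-energy "fissure" valley (cost 1) through high-cost tissue (cost 9).
grid = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path, total = minimal_path(grid, (1, 0), (1, 3))
```

Because Dijkstra is globally optimal, the path cannot be trapped by local intensity dips the way a greedy tracker can, which is the point of the minimal-path formulation.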
Direct estimation of human trabecular bone stiffness using cone beam computed tomography.
Klintström, Eva; Klintström, Benjamin; Pahr, Dieter; Brismar, Torkel B; Smedby, Örjan; Moreno, Rodrigo
2018-04-10
The aim of this study was to evaluate the possibility of estimating the biomechanical properties of trabecular bone through finite element simulations using dental cone beam computed tomography data. Fourteen human radius specimens were scanned in 3 cone beam computed tomography devices: 3-D Accuitomo 80 (J. Morita MFG., Kyoto, Japan), NewTom 5 G (QR Verona, Verona, Italy), and Verity (Planmed, Helsinki, Finland). The imaging data were segmented by using 2 different methods. Stiffness (Young modulus), shear moduli, and the size and shape of the stiffness tensor were studied. Corresponding evaluations using micro-CT were regarded as the reference standard. The 3-D Accuitomo 80 showed good performance in estimating stiffness and shear moduli but was sensitive to the choice of segmentation method. The NewTom 5 G and Verity yielded good correlations, but not as strong as those of the Accuitomo 80. The cone beam computed tomography devices overestimated both stiffness and shear compared with the micro-CT estimations. Finite element-based calculations of biomechanics from cone beam computed tomography data are feasible, with strong correlations for the Accuitomo 80 scanner combined with an appropriate segmentation method. Such measurements might be useful for predicting implant survival by in vivo estimation of bone properties. Copyright © 2018 Elsevier Inc. All rights reserved.
Patient-specific semi-supervised learning for postoperative brain tumor segmentation.
Meier, Raphael; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2014-01-01
In contrast to preoperative brain tumor segmentation, the problem of postoperative brain tumor segmentation has rarely been approached so far. We present a fully-automatic segmentation method using multimodal magnetic resonance image data and patient-specific semi-supervised learning. The idea behind our semi-supervised approach is to effectively fuse information from both pre- and postoperative image data of the same patient to improve segmentation of the postoperative image. We pose image segmentation as a classification problem and solve it by adopting a semi-supervised decision forest. The method is evaluated on a cohort of 10 high-grade glioma patients, with segmentation performance and computation time comparable or superior to a state-of-the-art brain tumor segmentation method. Moreover, our results confirm that the inclusion of preoperative MR images leads to better performance in postoperative brain tumor segmentation.
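The pre-/postoperative fusion described above is a semi-supervised idea: a small set of labeled samples helps classify a larger unlabeled set. A toy sketch of the general mechanism, using 1-nearest-neighbour self-training on 2-D feature points; the decision forest of the paper is replaced here by a much simpler learner, and all data are synthetic:

```python
def dist2(p, q):
    """Squared Euclidean distance between two 2-D feature points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def self_train(labeled, unlabeled):
    """Repeatedly absorb the unlabeled point nearest to any labeled one,
    giving it that neighbour's label (proximity acts as confidence)."""
    labeled = dict(labeled)          # point -> label
    pool = list(unlabeled)
    while pool:
        _, point, label = min((dist2(u, p), u, lab)
                              for u in pool
                              for p, lab in labeled.items())
        labeled[point] = label
        pool.remove(point)
    return labeled

# Two feature clusters: "tumor" around (0, 0), "healthy" around (10, 10).
seed = {(0, 0): "tumor", (10, 10): "healthy"}
unknown = [(1, 0), (0, 2), (9, 10), (10, 8), (5, 6)]
result = self_train(seed, unknown)
```

The ordering matters: confidently classified points are absorbed first and then help label the harder, in-between points, which is the same propagation effect the semi-supervised forest exploits across the pre- and postoperative images.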
Ultrafast web inspection with hybrid dispersion laser scanner.
Chen, Hongwei; Wang, Chao; Yazaki, Akio; Kim, Chanju; Goda, Keisuke; Jalali, Bahram
2013-06-10
We report an ultrafast web inspector that operates at a 1000 times higher scan rate than conventional methods. This system is based on a hybrid dispersion laser scanner that performs line scans at nearly 100 MHz. Specifically, we demonstrate web inspection with detectable resolution of 48.6 μm/pixel (scan direction) × 23 μm (web flow direction) within a width of view of 6 mm at a record high scan rate of 90.9 MHz. We demonstrate the identification and evaluation of particles on silicon wafers. This method holds great promise for speeding up quality control and hence reducing manufacturing costs.
Nonrigid registration-based coronary artery motion correction for cardiac computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhagalia, Roshni; Pack, Jed D.; Miller, James V.
2012-07-15
Purpose: X-ray computed tomography angiography (CTA) is the modality of choice to noninvasively monitor and diagnose heart disease, with coronary artery health and stenosis detection being of particular interest. Reliable, clinically relevant coronary artery imaging mandates high spatiotemporal resolution. However, advances in intrinsic scanner spatial resolution (CT scanners are available which combine nearly 900 detector columns with focal spot oversampling) can be tempered by motion blurring, particularly in patients with unstable heartbeats. As a result, numerous methods have recently been devised to improve coronary CTA imaging. Solutions involving hardware, multisector algorithms, or β-blockers are limited by cost, oversimplifying assumptions about cardiac motion, and populations showing contraindications to drugs, respectively. This work introduces an inexpensive algorithmic solution that retrospectively improves the temporal resolution of coronary CTA without significantly affecting spatial resolution. Methods: Given the goal of ruling out coronary stenosis, the method focuses on 'deblurring' the coronary arteries. The approach makes no assumptions about cardiac motion, can be used on exams acquired at high heart rates (even over 75 beats/min), and draws on a fast and accurate three-dimensional (3D) nonrigid bidirectional labeled point matching approach to estimate the trajectories of the coronary arteries during image acquisition. Motion compensation is achieved by employing a 3D warping of a series of partial reconstructions based on the estimated motion fields. Each of these partial reconstructions is created from data acquired over a short time interval. For brevity, the algorithm is termed 'Subphasic Warp and Add' (SWA) reconstruction.
Results: The performance of the new motion estimation-compensation approach was evaluated by a systematic observer study conducted using nine human cardiac CTA exams acquired over a range of average heart rates between 68 and 86 beats/min. Algorithm performance was baselined against exams reconstructed using standard filtered backprojection (FBP). The study was performed by three experienced reviewers using the American Heart Association's 15-segment model. All vessel segments were evaluated to quantify their suitability for clinical diagnosis before and after motion estimation-compensation using SWA. To the best of the authors' knowledge this is the first such observer study to show that an image processing-based software approach can improve the clinical diagnostic value of CTA for coronary artery evaluation. Conclusions: Results from the observer study show that the SWA method described here can dramatically reduce coronary artery motion and preserve real pathology, without affecting spatial resolution. In particular, the method successfully mitigated motion artifacts in 75% of all initially nondiagnostic coronary artery segments, and in over 45% of the cases this improvement was enough to make a previously nondiagnostic vessel segment clinically diagnostic.
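The "warp each short-time partial reconstruction by its estimated motion, then sum" idea behind SWA can be sketched in one dimension. In this toy version, integer shifts stand in for the paper's 3-D nonrigid warps, and the signals are synthetic; it only illustrates why coherent summation sharpens a moving feature:

```python
def shift(signal, k):
    """Integer shift with zero padding (a 1-D stand-in for 3-D warping)."""
    n = len(signal)
    out = [0.0] * n
    for i in range(n):
        j = i - k
        if 0 <= j < n:
            out[i] = signal[j]
    return out

def warp_and_add(partials, motions):
    """Warp each partial reconstruction back by its estimated motion,
    then average, so the moving feature adds coherently."""
    n = len(partials[0])
    acc = [0.0] * n
    for p, m in zip(partials, motions):
        w = shift(p, -m)             # undo the estimated displacement
        acc = [a + x for a, x in zip(acc, w)]
    return [a / len(partials) for a in acc]

# A "vessel" (impulse) imaged at positions 3, 4, 5 in three subphases.
partials = [[0.0] * 9 for _ in range(3)]
for p, pos in zip(partials, (3, 4, 5)):
    p[pos] = 1.0

blurred = [sum(p[i] for p in partials) / 3 for i in range(9)]  # naive sum
sharp = warp_and_add(partials, motions=[0, 1, 2])  # estimated drift
```

Summing without warping smears the impulse over three positions; warping first restores a single sharp peak, which is the temporal-resolution gain SWA targets.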
NASA Astrophysics Data System (ADS)
ter Kuile, Willem M.; van Veen, J. J.; Knoll, Bas
1995-02-01
Usual sampling methods and instruments for checking compliance with `threshold limit values' (TLV) of gaseous components do not provide much information on the mechanism that caused the measured workday average concentration. In the case of noncompliance this information is indispensable for the design of cost-effective measures. The infrared gas cloud (IGC) scanner visualizes the spatial distribution of specific gases at a workplace in a quantitative image with a calibrated gray-value scale. This helps to find the cause of an overexposure, and so permits effective abatement of high exposures in the working environment. This paper deals with the technical design of the IGC scanner; its use is illustrated by some real-world problems. The measuring principle and the technical operation of the IGC scanner are described. Special attention is given to the pros and cons of retro-reflector screens, the noise-reduction methods, and image presentation and interpretation. The latter is illustrated by the images produced by the measurements. Essentially, the IGC scanner can be used for selective open-path measurement of all gases with a concentration in the ppm range and sufficiently strong, distinct absorption lines in the infrared region between 2.5 and 14.0 micrometers. It could further be useful for testing the efficiency of ventilation systems and for the remote detection of gas leaks. We conclude that a powerful new technique has been added to the industrial hygiene facilities for controlling and improving the work environment.
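The quantitative gray-value calibration of such open-path measurements rests on Beer-Lambert absorption along the line of sight. A minimal sketch, in which the helper name, the absorptivity and all numbers are illustrative rather than instrument parameters:

```python
import math

# Beer-Lambert for open-path gas sensing: the transmittance measured over
# a path of length L gives an absorbance A = -log10(T), which maps to a
# path-averaged concentration c = A / (epsilon * L). Illustrative only.

def path_concentration_ppm(transmittance, absorptivity_per_ppm_m, path_m):
    """Path-averaged gas concentration (ppm) from measured transmittance."""
    absorbance = -math.log10(transmittance)
    return absorbance / (absorptivity_per_ppm_m * path_m)

# 10% absorption over a 20 m path to a retro-reflector screen:
c = path_concentration_ppm(transmittance=0.90,
                           absorptivity_per_ppm_m=1e-4, path_m=20.0)
```

Note the result is a path average: the IGC scanner's contribution is to resolve this spatially, pixel by pixel, rather than to report a single number.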
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations: first, a copy of a flint artefact is reproduced; the copy is then sliced; a picture is taken of each slice; finally, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step, obtaining an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g., shape, curvature, sharp edges), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enables accurate analysis.
Complete 360° circumferential SSOCT gonioscopy of the iridocorneal angle
NASA Astrophysics Data System (ADS)
McNabb, Ryan P.; Kuo, Anthony N.; Izatt, Joseph A.
2014-02-01
The ocular iridocorneal angle is generally an optically inaccessible area when viewed directly through the cornea, due to the high angle of incidence required and the large refractive index difference between air and cornea (n_air = 1.000 and n_cornea = 1.376), which results in total internal reflection. Gonioscopy allows viewing of the angle by removing the air-cornea interface through the use of a special contact lens on the eye. Gonioscopy is used clinically to visualize the angle directly, but only en face. Optical coherence tomography (OCT) has been used to image the angle and deeper structures via an external approach. Typically, this imaging is performed with a conventional anterior segment OCT scanning system; however, instead of imaging the apex of the cornea, either the scanner or the subject is tilted such that the corneoscleral limbus is orthogonal to the optical axis of the scanner, requiring multiple volumes to obtain complete circumferential coverage of the ocular angle. We developed a novel gonioscopic OCT (GOCT) system that images the entire ocular angle within a single volume via an "internal" approach, through the use of a custom radially symmetric gonioscopic contact lens. We present, to our knowledge, the first complete 360° circumferential volumes of the iridocorneal angle from a direct, internal approach.
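The total internal reflection argument follows directly from Snell's law with the quoted refractive indices; a short sketch (the helper name is ours):

```python
import math

# Why the iridocorneal angle is hidden without a goniolens: light leaving
# the cornea (n = 1.376) into air (n = 1.000) is totally internally
# reflected when its angle of incidence at the interface exceeds the
# critical angle theta_c = asin(n_rare / n_dense).

def critical_angle_deg(n_dense, n_rare):
    """Critical angle for total internal reflection, in degrees."""
    return math.degrees(math.asin(n_rare / n_dense))

theta_c = critical_angle_deg(1.376, 1.000)  # about 46.6 degrees
```

Rays from the angle recess reach the corneal surface more obliquely than this, hence the need for a contact lens that replaces the air-cornea interface.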
TOPPE: A framework for rapid prototyping of MR pulse sequences.
Nielsen, Jon-Fredrik; Noll, Douglas C
2018-06-01
To introduce a framework for rapid prototyping of MR pulse sequences. We propose a simple file format, called "TOPPE", for specifying all details of an MR imaging experiment, such as gradient and radiofrequency waveforms and the complete scan loop. In addition, we provide a TOPPE file "interpreter" for GE scanners, which is a binary executable that loads TOPPE files and executes the sequence on the scanner. We also provide MATLAB scripts for reading and writing TOPPE files and previewing the sequence prior to hardware execution. With this setup, the task of the pulse sequence programmer is reduced to creating TOPPE files, eliminating the need for hardware-specific programming. No sequence-specific compilation is necessary; the interpreter only needs to be compiled once (for every scanner software upgrade). We demonstrate TOPPE in three different applications: k-space mapping, non-Cartesian PRESTO whole-brain dynamic imaging, and myelin mapping in the brain using inhomogeneous magnetization transfer. We successfully implemented and executed the three example sequences. By simply changing the various TOPPE sequence files, a single binary executable (interpreter) was used to execute several different sequences. The TOPPE file format is a complete specification of an MR imaging experiment, based on arbitrary sequences of a (typically small) number of unique modules. Along with the GE interpreter, TOPPE comprises a modular and flexible platform for rapid prototyping of new pulse sequences. Magn Reson Med 79:3128-3134, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
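The "few unique modules plus a long scan loop" design can be sketched as follows. The dictionary layout and field names here are hypothetical illustrations, not the actual TOPPE file format:

```python
# A sequence as a small table of unique modules plus a scan loop that
# references them with per-repetition parameters. An interpreter then
# only needs these two structures to play out the whole experiment.

modules = {
    "excite":  {"rf_deg": 10.0, "dur_ms": 1.0},   # low-flip excitation
    "readout": {"rf_deg": 0.0,  "dur_ms": 4.0},   # gradient-echo readout
}

scanloop = []
for k in range(64):                                # 64 phase-encode lines
    scanloop.append(("excite",  {"phase_deg": 0.0}))
    scanloop.append(("readout", {"gy_amp": k / 63.0 - 0.5}))

def total_duration_ms(modules, scanloop):
    """An interpreter-style pass over the loop: sum module durations."""
    return sum(modules[name]["dur_ms"] for name, _ in scanloop)
```

Changing the sequence means editing `modules` and `scanloop` only; the "interpreter" code is untouched, which mirrors the paper's point that one compiled executable serves many sequences.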
Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.; Auffermann, William F.; Henry, Travis S.; Khosa, Faisal; Coy, Adam M.; Tridandapani, Srini
2015-01-01
Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (PAGG) and IVS (PIVS) deviation signal using the proposed methods was comparable to the quiescent phases calculated by the CT scanner (PCT). 
The one exception was the RCA, which improved for PAGG for 18 of the 20 subjects when compared to PCT (PCT = 2.48; PAGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:25652511
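A correlation-based phase-to-phase deviation signal of the kind described can be sketched in a few lines. This is a simplified illustration: real use operates on segmented 3D vessel regions rather than 1-D vectors, and the function names are ours:

```python
import math

# Deviation between consecutive cardiac phases is taken here as 1 minus
# their zero-mean normalized cross-correlation; the quiescent phase is
# where the deviation signal is smallest (least phase-to-phase change).

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da, db = [x - ma for x in a], [x - mb for x in b]
    denom = math.sqrt(sum(x * x for x in da) * sum(x * x for x in db))
    return sum(x * y for x, y in zip(da, db)) / denom

def quiescent_phase(phases):
    """Index of the phase with least deviation from its predecessor."""
    dev = [1.0 - ncc(phases[i - 1], phases[i]) for i in range(1, len(phases))]
    return 1 + dev.index(min(dev))

# Phase 2 is identical to phase 1 (no motion), so it is flagged quiescent:
phases = [[0, 1, 2, 3], [3, 2, 1, 0], [3, 2, 1, 0], [0, 1, 2, 3]]
q = quiescent_phase(phases)
```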
Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.
Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen
2008-02-01
A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers a valuable means of neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
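The probabilistic KNN step can be sketched in a few lines. This is a toy illustration with invented two-dimensional features (one intensity, one spatial coordinate), not the paper's feature set:

```python
# Probabilistic KNN: each voxel is a feature vector, the tissue
# probability is the fraction of the K nearest training voxels carrying
# that label, and thresholding the probability map yields the binary
# segmentation. All features, labels and numbers are illustrative.

def knn_probability(train, query, label, k=3):
    """P(label | query) = fraction of the k nearest neighbours with `label`."""
    ranked = sorted(train, key=lambda t: sum((a - b) ** 2
                                             for a, b in zip(t[0], query)))
    return sum(1 for feat, lab in ranked[:k] if lab == label) / k

train = [((0.10, 0.00), "WM"), ((0.20, 0.10), "WM"),
         ((0.80, 0.90), "CSF"), ((0.90, 1.00), "CSF"), ((0.85, 0.95), "CSF")]

p_wm = knn_probability(train, (0.15, 0.05), "WM")  # probabilistic map value
is_wm = p_wm >= 0.5                                # thresholded binary label
```

Because the output is a probability rather than a hard label, partial-volume voxels contribute fractionally to the volume estimate, which is why the probabilistic volumes tracked the gold standard more accurately than the binary ones.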
Heart imaging: the accuracy of the 64-MSCT in the detection of coronary artery disease.
Alessandri, N; Di Matteo, A; Rondoni, G; Petrassi, M; Tufani, F; Ferrari, R; Laghi, A
2009-01-01
At present, coronary angiography represents the gold standard technique for the diagnosis of coronary artery disease. Our aim is to compare conventional coronary angiography with coronary 64-multislice spiral computed tomography (64-MSCT), a new and non-invasive cardiac imaging technique. The latest generation of MSCT scanners shows better imaging quality, owing to greater spatial and temporal resolution. Four expert observers (two cardiologists and two radiologists) compared the angiographic data with the 64-MSCT findings for the detection and evaluation of coronary vessel stenoses. From the data obtained, the sensitivity, specificity and accuracy of coronary 64-MSCT were defined. We enrolled 75 patients (57 male, 18 female; mean age 61.83 +/- 10.38 years; range 30-80 years) with known or suspected coronary artery disease. The population was divided into 3 groups: Group A (Gr. A), 40 patients (mean age 60.7 +/- 12.5) with both non-significant and significant coronary artery disease; Group B (Gr. B), 25 patients (mean age 60.3 +/- 14.6) who had undergone percutaneous coronary intervention (PCI); and Group C (Gr. C), 10 patients (mean age 54.20 +/- 13.7) without any angiographic coronary stenoses. All patients underwent non-invasive exams, conventional coronary angiography and coronary 64-MSCT. The data were compared using per-group, per-patient and per-segment analyses. Moreover, the accuracy of 64-MSCT was defined for the detection of >75%, 50-75% and <50% coronary stenoses. Coronary angiography identified significant coronary artery disease in 75% of the patients in Gr. A and in 73% of the patients in Gr. B. No coronary stenoses were detected in Gr. C. On per-segment analysis, in Gr. A, 36% of the segments analysed showed a coronary stenosis (37% stenoses >75%, 32% stenoses 50-75% and 31% stenoses <50%). In Gr. B, 32% of the segments showed a coronary stenosis (33% stenoses >75%, 29% stenoses 50-75% and 38% stenoses <50%). In-stent disease was found in only 4 of the 29 coronary stents identified. In Gr. A, coronary 64-MSCT confirmed the angiographic results in 93% of cases (sensitivity 93%, specificity 100%, positive predictive value 100% and negative predictive value 83%), whereas in Gr. B this confirmation was obtained in only 64% of cases (sensitivity 64%, specificity 100%, positive predictive value 100% and negative predictive value 50%). In Gr. C, complete agreement between angiographic and CT data was observed (sensitivity, specificity, positive and negative predictive values all 100%). On per-segment analysis, the angiographic results were confirmed in 98% of cases in Gr. A (sensitivity 98%, specificity 94%, positive predictive value 90% and negative predictive value 94%) but in only 55% of cases in Gr. B (sensitivity 55%, specificity 90%, positive predictive value 71% and negative predictive value 81%). Moreover, only 1 of the 4 in-stent restenoses was detected (sensitivity 25%, specificity 100%, positive predictive value 100% and negative predictive value 77%). Coronary angiography detected a greater number of coronary stenoses than 64-MSCT. 64-MSCT demonstrated better accuracy for coronary vessels wider than 2 mm, whereas its accuracy is lower for smaller vessels (diameter < 2.5 mm) and for the identification of in-stent restenosis, because of the reduced image quality for these vessels and hence lower accuracy in stenosis detection. Nevertheless, 64-MSCT shows high accuracy and can be considered a complementary, but not a substitute, examination to coronary angiography.
Several technical limitations of 64-MSCT are responsible for its lower accuracy compared with conventional coronary angiography; solving these technical problems could yield a new non-invasive imaging technique for the study of coronary stents.
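All of the accuracy figures quoted above derive from a 2×2 confusion table. As a sketch, the counts below are illustrative choices that happen to reproduce the Gr. A per-patient percentages; they are not the study's raw data:

```python
# Standard diagnostic-test metrics from true/false positives and negatives.

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # detected disease / all disease
        "specificity": tn / (tn + fp),   # cleared healthy / all healthy
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts for 40 patients giving ~93% sensitivity, 100%
# specificity, 100% PPV and ~83% NPV, matching the Gr. A pattern:
m = diagnostic_metrics(tp=28, fp=0, tn=10, fn=2)
```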
Computed tomography imaging and angiography - principles.
Kamalian, Shervin; Lev, Michael H; Gupta, Rajiv
2016-01-01
The evaluation of patients with diverse neurologic disorders was forever changed in the summer of 1973, when the first commercial computed tomography (CT) scanners were introduced. Until then, the detection and characterization of intracranial or spinal lesions could only be inferred from limited-spatial-resolution radioisotope scans, or from the patterns of tissue and vascular displacement on invasive pneumoencephalography and direct carotid puncture catheter arteriography. Even the earliest-generation CT scanners - which required tens of minutes for the acquisition and reconstruction of low-resolution images (128×128 matrix) - could, based on density, noninvasively distinguish infarct, hemorrhage, and other mass lesions with unprecedented accuracy. Iodinated intravenous contrast added further sensitivity and specificity in regions of blood-brain barrier breakdown. The advent of rapid multidetector row CT scanning in the early 1990s created renewed enthusiasm for CT, with CT angiography largely replacing direct catheter angiography. More recently, iterative reconstruction postprocessing techniques have made possible high spatial resolution, reduced noise, very low radiation dose CT scanning. The speed, spatial resolution, contrast resolution, and low radiation dose capability of present-day scanners have also facilitated dual-energy imaging which, like magnetic resonance imaging, for the first time has allowed tissue-specific CT imaging characterization of intracranial pathology. © 2016 Elsevier B.V. All rights reserved.
Human brain MRI at 500 MHz, scientific perspectives and technological challenges
NASA Astrophysics Data System (ADS)
Le Bihan, Denis; Schild, Thierry
2017-03-01
The understanding of the human brain is one of the main scientific challenges of the 21st century. In the early 2000s the French Alternative Energies and Atomic Energy Commission launched a program to conceive and build a ‘human brain explorer’: the first human MRI scanner operating at 11.7 T. This scanner was envisioned as part of the ambitious French-German project Iseult, bringing together industrial and academic partners to push the limits of molecular neuroimaging, from mouse to man, using ultra-high-field MRI. In this article we summarize the main neuroscience and medical targets of the Iseult project: chiefly, to acquire images at the 100 μm scale, at which much remains to be discovered, within timescales compatible with human tolerance, and to develop new imaging biomarkers for specific neurological and psychiatric disorders. The system specifications and the technological challenges, in terms of magnet design, winding technology, cryogenics, quench protection and stability control, are provided, together with the solutions chosen to overcome them and build this outstanding instrument. The lines of research and development that will be necessary to fully exploit the potential of this and other UHF MRI scanners are also outlined.
Bore-sight calibration of the profile laser scanner using a large size exterior calibration field
NASA Astrophysics Data System (ADS)
Koska, Bronislav; Křemen, Tomáš; Štroner, Martin
2014-10-01
The bore-sight calibration procedure and results for a profile laser scanner, using a large exterior calibration field, are presented in this paper. The task is part of the Autonomous Mapping Airship (AMA) project, which aims to create a surveying system with properties suited to effective surveying of medium-sized areas (units to tens of square kilometers per day). As the project name suggests, an airship is used as the carrier. This vehicle has some specific properties, the most important being high carrying capacity (15 kg), long flight time (3 hours), high operating safety and special flight characteristics such as flight stability, in terms of vibrations, and the ability to fly at low speed. The high carrying capacity enables the use of high-quality sensors: a professional infrared (IR) camera (FLIR SC645), a high-end visible-spectrum (VIS) digital camera and optics, a tactical-grade INS/GPS sensor (iMAR iTracerRT-F200) and the profile laser scanner SICK LD-LRS1000. The calibration method is based on direct laboratory measurement of the coordinate offsets (lever-arm) and in-flight determination of the rotation offsets (bore-sights). The bore-sight determination is based on minimizing the squared distances of individual points from measured planar surfaces.
Nephron segment-specific gene expression using AAV vectors.
Asico, Laureano D; Cuevas, Santiago; Ma, Xiaobo; Jose, Pedro A; Armando, Ines; Konkalmatt, Prasad R
2018-02-26
AAV9 vector provides efficient gene transfer in all segments of the renal nephron, with minimum expression in non-renal cells, when administered retrogradely via the ureter. It is important to restrict the transgene expression to the desired cell type within the kidney, so that the physiological endpoints represent the function of the transgene expressed in that specific cell type within the kidney. We hypothesized that segment-specific gene expression within the kidney can be accomplished using the highly efficient AAV9 vectors carrying the promoters of genes that are expressed exclusively in the desired segment of the nephron, in combination with administration by retrograde infusion into the kidney via the ureter. We constructed AAV vectors carrying eGFP under the control of: kidney-specific cadherin (KSPC) gene promoter for expression in the entire nephron; Na+/glucose co-transporter (SGLT2) gene promoter for expression in the S1 and S2 segments of the proximal tubule; sodium-potassium-2-chloride co-transporter (NKCC2) gene promoter for expression in the thick ascending limb of Henle's loop (TALH); E-cadherin (ECAD) gene promoter for expression in the collecting duct (CD); and cytomegalovirus (CMV) early promoter, which provides expression in most mammalian cells, as control. We tested the specificity of the promoter constructs in vitro for cell type-specific expression in mouse kidney cells in primary culture, followed by retrograde infusion of the AAV vectors via the ureter in the mouse. Our data show that AAV9 vector, in combination with the segment-specific promoters administered by retrograde infusion via the ureter, provides renal nephron segment-specific gene expression. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Zhu, F; Kuhlmann, M K; Kaysen, G A; Sarkar, S; Kaitwatcharachai, C; Khilnani, R; Stevens, L; Leonard, E F; Wang, J; Heymsfield, S; Levin, N W
2006-02-01
Discrepancies in body fluid estimates between segmental bioimpedance spectroscopy (SBIS) and gold-standard methods may be due to the use of a uniform value of tissue resistivity to compute extracellular fluid volume (ECV) and intracellular fluid volume (ICV). Discrepancies may also arise from the exclusion of fluid volumes of hands, feet, neck, and head from measurements due to electrode positions. The aim of this study was to define the specific resistivity of various body segments and to use those values for computation of ECV and ICV along with a correction for unmeasured fluid volumes. Twenty-nine maintenance hemodialysis patients (16 men) underwent body composition analysis including whole body MRI, whole body potassium (40K) content, deuterium, and sodium bromide dilution, and segmental and wrist-to-ankle bioimpedance spectroscopy, all performed on the same day before a hemodialysis. Segment-specific resistivity was determined from segmental fat-free mass (FFM; by MRI), hydration status of FFM (by deuterium and sodium bromide), tissue resistance (by SBIS), and segment length. Segmental FFM was higher and extracellular hydration of FFM was lower in men compared with women. Segment-specific resistivity values for arm, trunk, and leg all differed from the uniform resistivity used in traditional SBIS algorithms. Estimates for whole body ECV, ICV, and total body water from SBIS using segmental instead of uniform resistivity values and after adjustment for unmeasured fluid volumes of the body did not differ significantly from gold-standard measures. The uniform tissue resistivity values used in traditional SBIS algorithms result in underestimation of ECV, ICV, and total body water. Use of segmental resistivity values combined with adjustment for body volumes that are neglected by traditional SBIS technique significantly improves estimations of body fluid volume in hemodialysis patients.
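The segmental computation rests on the standard cylinder model of bioimpedance, V = ρL²/R: a segment's fluid volume follows from its length L, its measured resistance R, and the segment-specific resistivity ρ. A minimal sketch with illustrative values only, not the study's measurements:

```python
# Cylinder model of segmental bioimpedance: a limb segment of length L
# and resistivity rho with measured resistance R has fluid volume
# V = rho * L^2 / R. Using a segment-specific rho instead of one uniform
# value is the paper's key correction. All numbers are illustrative.

def segment_volume_l(rho_ohm_cm, length_cm, resistance_ohm):
    """Fluid volume (litres) of a body segment modelled as a cylinder."""
    return rho_ohm_cm * length_cm ** 2 / resistance_ohm / 1000.0

# Hypothetical leg segment: rho = 350 ohm-cm, L = 80 cm, R = 250 ohm
v_leg = segment_volume_l(rho_ohm_cm=350.0, length_cm=80.0,
                         resistance_ohm=250.0)
```

Because V scales linearly with ρ, using a uniform resistivity that is too low for a given segment directly underestimates that segment's fluid volume, which is the bias the study reports.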
Pediatric chest and abdominopelvic CT: organ dose estimation based on 42 patient models.
Tian, Xiaoyu; Li, Xiang; Segars, W Paul; Paulson, Erik K; Frush, Donald P; Samei, Ehsan
2014-02-01
To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. The institutional review board approved this HIPAA-compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0-16 years; weight range, 2-80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDI(vol)). The relationships between CTDI(vol)-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. For organs within the image coverage, CTDI(vol)-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R(2) > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%-32%) mainly because of the effect of overranging. It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDI(vol). These CTDI(vol)-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles across patient populations within a practice. 
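The reported exponential relationship between the CTDIvol-normalized organ dose coefficient h and average patient diameter d, h(d) = a·exp(−b·d), can be fitted by log-linear least squares. A sketch on synthetic, noiseless data; the coefficient values are invented, not the paper's:

```python
import math

# Fit h(d) = a * exp(-b * d) by ordinary least squares on the log-
# transformed model ln h = ln a - b * d (a common approach; the paper's
# exact fitting procedure is not specified here).

def fit_exponential(d, h):
    """Return (a, b) minimizing squared error of ln h = ln a - b * d."""
    n, y = len(d), [math.log(v) for v in h]
    sx, sy = sum(d), sum(y)
    sxx = sum(x * x for x in d)
    sxy = sum(x * v for x, v in zip(d, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return math.exp((sy - slope * sx) / n), -slope

d = [10.0, 15.0, 20.0, 25.0]                # average patient diameter (cm)
h = [2.0 * math.exp(-0.04 * x) for x in d]  # noiseless synthetic data
a, b = fit_exponential(d, h)                # recovers a ~ 2.0, b ~ 0.04
```

With the fitted (a, b) per organ and scanner, a patient-specific organ dose estimate is then simply h(d) × CTDIvol for the examination, which is the clinical use the abstract describes.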
© RSNA, 2013.
Navigated MRI-guided liver biopsies in a closed-bore scanner: experience in 52 patients.
Moche, Michael; Heinig, Susann; Garnov, Nikita; Fuchs, Jochen; Petersen, Tim-Ole; Seider, Daniel; Brandmaier, Philipp; Kahn, Thomas; Busse, Harald
2016-08-01
To evaluate the clinical effectiveness and diagnostic efficiency of a navigation device for MR-guided biopsies of focal liver lesions in a closed-bore scanner. In 52 patients, 55 biopsies were performed. An add-on MR navigation system with optical instrument tracking was used for image guidance and biopsy device insertion outside the bore. Fast control imaging allowed visualization of the true needle position at any time. The biopsy workflow and procedure duration were recorded. Histological analysis and clinical course/outcome were used to calculate sensitivity, specificity and diagnostic accuracy. Fifty-four of 55 liver biopsies were performed successfully with the system. No major and four minor complications occurred. Mean tumour size was 23 ± 14 mm and the skin-to-target length ranged from 22 to 177 mm. In 39 cases, the access path was double oblique. Sensitivity, specificity and diagnostic accuracy were 88%, 100% and 92%, respectively. The mean procedure time was 51 ± 12 min, whereas the puncture itself lasted 16 ± 6 min. On average, four control scans were taken. Using this navigation device, biopsies of liver lesions that are poorly visible or difficult to access could be performed safely and reliably in a closed-bore MRI scanner. The system can be easily implemented in the clinical routine workflow. • Targeted liver biopsies could be reliably performed in a closed-bore MRI scanner. • The navigation system allows image guidance outside of the scanner bore. • Assisted MRI-guided biopsies are helpful for focal lesions with difficult access. • Successful integration of the method into the clinical workflow was shown. • Subsequent system installation in an existing MRI environment is feasible.
Grating-based real-time smart optics for biomedicine and communications
NASA Astrophysics Data System (ADS)
Yaqoob, Zahid
Novel photonic systems are proposed and experimentally validated using active as well as passive wavelength dispersive optical devices in unique fashions to solve important system level application problems in biomedicine and laser communications. Specifically for the first time are proposed, high dynamic range variable optical attenuators (VOAs) using bulk acousto-optics (AO). These AO-based architectures have excellent characteristics such as high laser damage threshold (e.g., 1 Watt CW laser power operations), large (e.g., >40 dB) dynamic range, and microsecond domain attenuation setting speed. The demonstrated architectures show potentials for compact, low static insertion loss, and low power VOA designs for wavelength division multiplexed (WDM) fiber-optic communication networks and high speed photonic signal processing for optical and radio frequency (RF) radar and electronic warfare (EW). Acoustic diffraction of light in isotropic media has been manipulated to design and demonstrate on a proof-of-principle basis, the first bulk AO-based optical coherence tomography (OCT) system for high-resolution sub-surface tissue diagnostics. As opposed to the current OCT systems that use mechanical means to generate optical delays, both free-space as well as fiber-optic AO-based OCT systems utilize unique electronically-controlled acousto-optically switched no-moving parts optical delay lines and therefore promise microsecond speed OCT data acquisition rates. The proposed OCT systems also feature high (e.g., >100 MHz) intermediate frequency for low 1/f noise heterodyne detection. For the first time, two agile laser beam steering schemes that are members of a new beam steering technology known as Multiplexed-Optical Scanner Technology (MOST) are theoretically investigated and experimentally demonstrated. 
The new scanner technologies are based on wavelength and space manipulations and possess remarkable features such as a no-moving parts fast (e.g., microseconds domain or less) beam switching speed option, large (e.g., several centimeters) scanner apertures for high-resolution scans, and large (e.g., >10°) angular scans in more than one dimension. These features make the scanners excellent candidates for high-end applications. Specifically discussed and experimentally analyzed for the first time are novel MOST-based systems for agile free-space lasercom links, internal and external cavity scanning biomedical probes, and high-speed optical data handling such as barcode scanners. In addition, a novel low sidelobe wavelength selection filter based on a single bulk crystal acousto-optic tunable filter device is theoretically analyzed and experimentally demonstrated, showing its versatility as a scanner control fiber-optic component for interfacing with the proposed wavelength based optical scanners. In conclusion, this thesis has shown how powerful photonic systems can be realized via novel architectures using active and passive wavelength sensitive optics, leading to advanced solutions for the biomedical and laser communications research communities.
Guerrisi, A; Marin, D; Laghi, A; Di Martino, M; Iafrate, F; Iannaccone, R; Catalano, C; Passariello, R
2010-08-01
The aim of this study was to assess the accuracy of translucency rendering (TR) in computed tomographic (CT) colonography without cathartic preparation using primary 3D reading. From 350 patients with 482 endoscopically verified polyps, 50 pathologically proven polyps and 50 pseudopolyps were retrospectively examined. For faecal tagging, all patients ingested 140 ml of orally administered iodinated contrast agent (diatrizoate meglumine and diatrizoate sodium) at meals 48 h prior to the CT colonography examination and 2 h prior to scanning. CT colonography was performed using a 64-section CT scanner. Colonoscopy with segmental unblinding was performed within 2 weeks after CT. Three independent radiologists retrospectively evaluated TR CT colonographic images using a dedicated software package (V3D-Colon System). To enable size-dependent statistical analysis, lesions were stratified into the following size categories: small (≤5 mm), intermediate (6-9 mm), and large (≥10 mm). Overall average TR sensitivity for polyp characterisation was 96.6%, and overall average specificity for pseudopolyp characterisation was 91.3%. Overall average diagnostic accuracy (area under the curve) of TR for characterising colonic lesions was 0.97. TR is an accurate tool that facilitates interpretation of images obtained with a primary 3D analysis, thus enabling easy differentiation of polyps from pseudopolyps.
Classifying magnetic resonance image modalities with convolutional neural networks
NASA Astrophysics Data System (ADS)
Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis
2018-02-01
Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs post-contrast T1, and (3) identify pre- vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
McCalpin, J.P.; Nishenko, S.P.
1996-01-01
The chronology of M>7 paleoearthquakes on the central five segments of the Wasatch fault zone (WFZ) is one of the best dated in the world and contains 16 earthquakes in the past 5600 years with an average repeat time of 350 years. Repeat times for individual segments vary by a factor of 2, and range from about 1200 to 2600 years. Four of the central five segments ruptured between ≈620 ± 30 and 1230 ± 60 calendar years B.P. The remaining segment (Brigham City segment) has not ruptured in the past 2120 ± 100 years. Comparison of the WFZ space-time diagram of paleoearthquakes with synthetic paleoseismic histories indicates that the observed temporal clusters and gaps have about an equal probability (depending on model assumptions) of reflecting random coincidence as opposed to intersegment contagion. Regional seismicity suggests that for exposure times of 50 and 100 years, the probability for an earthquake of M>7 anywhere within the Wasatch Front region, based on a Poisson model, is 0.16 and 0.30, respectively. A fault-specific WFZ model predicts 50 and 100 year probabilities for a M>7 earthquake on the WFZ itself, based on a Poisson model, as 0.13 and 0.25, respectively. In contrast, segment-specific earthquake probabilities that assume quasi-periodic recurrence behavior on the Weber, Provo, and Nephi segments are less (0.01-0.07 in 100 years) than the regional or fault-specific estimates (0.25-0.30 in 100 years), due to the short elapsed times compared to average recurrence intervals on those segments. The Brigham City and Salt Lake City segments, however, have time-dependent probabilities that approach or exceed the regional and fault-specific probabilities. For the Salt Lake City segment, these elevated probabilities are due to the elapsed time being approximately equal to the average late Holocene recurrence time. For the Brigham City segment, the elapsed time is significantly longer than the segment-specific late Holocene recurrence time.
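The Poisson exposure-time probabilities quoted above follow from P(at least one event in T years) = 1 − exp(−T/τ), where τ is the mean repeat time. A minimal sketch using only the 350-year fault-wide repeat time reported in the abstract, which reproduces the fault-specific 0.13 and 0.25 estimates:

```python
import math

def poisson_prob(mean_repeat_years, exposure_years):
    """P(at least one event in the exposure window) under a Poisson model."""
    rate = 1.0 / mean_repeat_years           # events per year
    return 1.0 - math.exp(-rate * exposure_years)

# Fault-wide WFZ estimate: 16 M>7 events in 5600 years -> 350-year repeat time.
for t in (50, 100):
    print(t, round(poisson_prob(350.0, t), 2))   # -> 0.13 and 0.25
```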
Ghose, Soumya; Greer, Peter B; Sun, Jidi; Pichler, Peter; Rivest-Henault, David; Mitra, Jhimli; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Dowling, Jason A
2017-10-27
In MR-only radiation therapy planning, generation of the tissue-specific HU map directly from the MRI would eliminate the need for CT image acquisition and may improve radiation therapy planning. The aim of this work is to generate and validate substitute CT (sCT) scans generated from standard T2 weighted MR pelvic scans in prostate radiation therapy dose planning. A Siemens Skyra 3T MRI scanner with laser bridge, flat couch and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole pelvis MRI (1.6 mm 3D isotropic T2w SPACE sequence) was acquired. Patients received a routine planning CT scan. Co-registered whole pelvis CT and T2w MRI pairs were used as training images. Advanced tissue-specific non-linear regression models to predict HU for the fat, muscle, bladder and air were created from co-registered CT-MRI image pairs. On a test case T2w MRI, the bones and bladder were automatically segmented using a novel statistical shape and appearance model, while other soft tissues were separated using an Expectation-Maximization based clustering model. The CT bone in the training database that was most 'similar' to the segmented bone was then transformed with deformable registration to create the sCT component of the test case T2w MRI bone tissue. Predictions for the bone, air and soft tissue from the separate regression models were successively combined to generate a whole pelvis sCT. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same IMRT dose plan was found to be 0.3% ± 0.9% (mean ± standard deviation) for 39 patients. The 3D Gamma pass rate was 99.8 ± 0.00 (2 mm/2%). The novel hybrid model is computationally efficient, generating an sCT in 20 min from standard T2w images for prostate cancer radiation therapy dose planning and DRR generation.
NASA Astrophysics Data System (ADS)
Ghose, Soumya; Greer, Peter B.; Sun, Jidi; Pichler, Peter; Rivest-Henault, David; Mitra, Jhimli; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Dowling, Jason A.
2017-11-01
In MR-only radiation therapy planning, generation of the tissue-specific HU map directly from the MRI would eliminate the need for CT image acquisition and may improve radiation therapy planning. The aim of this work is to generate and validate substitute CT (sCT) scans generated from standard T2 weighted MR pelvic scans in prostate radiation therapy dose planning. A Siemens Skyra 3T MRI scanner with laser bridge, flat couch and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole pelvis MRI (1.6 mm 3D isotropic T2w SPACE sequence) was acquired. Patients received a routine planning CT scan. Co-registered whole pelvis CT and T2w MRI pairs were used as training images. Advanced tissue-specific non-linear regression models to predict HU for the fat, muscle, bladder and air were created from co-registered CT-MRI image pairs. On a test case T2w MRI, the bones and bladder were automatically segmented using a novel statistical shape and appearance model, while other soft tissues were separated using an Expectation-Maximization based clustering model. The CT bone in the training database that was most 'similar' to the segmented bone was then transformed with deformable registration to create the sCT component of the test case T2w MRI bone tissue. Predictions for the bone, air and soft tissue from the separate regression models were successively combined to generate a whole pelvis sCT. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same IMRT dose plan was found to be 0.3% ± 0.9% (mean ± standard deviation) for 39 patients. The 3D Gamma pass rate was 99.8 ± 0.00 (2 mm/2%). The novel hybrid model is computationally efficient, generating an sCT in 20 min from standard T2w images for prostate cancer radiation therapy dose planning and DRR generation.
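A rough illustration of the tissue-specific non-linear HU regression step described above. These are not the study's actual models: the quadratic form, intensity range, and coefficients below are invented for the sketch, and in practice one such model would be fit per tissue class (fat, muscle, bladder, air):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic co-registered (MR intensity, CT HU) pairs for one tissue class.
# All numbers here are illustrative assumptions only.
mr = rng.uniform(100.0, 400.0, 200)                      # T2w intensities
hu = -120.0 + 0.3 * mr + 1e-4 * mr**2 + rng.normal(0.0, 5.0, mr.size)

# Tissue-specific non-linear (here: quadratic) regression from MR to HU.
predict_hu = np.poly1d(np.polyfit(mr, hu, deg=2))

# Predicted HU for a test-case MR intensity.
print(predict_hu(250.0))
```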
Abdullah, Kamarul A; McEntee, Mark F; Reed, Warren; Kench, Peter L
2018-04-30
An ideal organ-specific insert phantom should be able to simulate the anatomical features with appropriate appearances in the resultant computed tomography (CT) images. This study investigated a 3D printing technology to develop a novel and cost-effective cardiac insert phantom derived from volumetric CT image datasets of an anthropomorphic chest phantom. Cardiac insert volumes were segmented from CT image datasets derived from an anthropomorphic chest phantom, the Lungman N-01 (Kyoto Kagaku, Japan). These segmented datasets were converted to a virtual 3D isosurface of a heart-shaped shell, while two other removable inserts were included using a computer-aided design (CAD) software program. This newly designed cardiac insert phantom was later printed using a fused deposition modelling (FDM) process via a Creatbot DM Plus 3D printer. Then, several selected filling materials, such as contrast media, oil, water and jelly, were loaded into designated spaces in the 3D-printed phantom. The 3D-printed cardiac insert phantom was positioned within the anthropomorphic chest phantom and 30 repeated CT acquisitions were performed using a multi-detector scanner at 120 kVp tube potential. Attenuation (Hounsfield unit, HU) values were measured and compared to the image datasets of a real patient and a Catphan® 500 phantom. The output of the 3D-printed cardiac insert phantom was a solid acrylic plastic material, which was strong, lightweight and cost-effective. HU values of the filling materials were comparable to the image datasets of the real patient and the Catphan® 500 phantom. A novel and cost-effective cardiac insert phantom for an anthropomorphic chest phantom was developed using volumetric CT image datasets with a 3D printer. Hence, this suggests that the printing methodology could be applied to generate other phantoms for CT imaging studies. © 2018 The Authors.
Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
Impact of topographic mask models on scanner matching solutions
NASA Astrophysics Data System (ADS)
Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.
2014-03-01
Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for by numerically solving Maxwell's Equations. The simulators used to predict the image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute-time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly non-linear responses to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood.
To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used large sets of patterns with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present various examples of scanner image matching for patterns representative of the current generation of IC designs.
Identification of the two rotavirus genes determining neutralization specificities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Offit, P.A.; Blavat, G.
1986-01-01
Bovine rotavirus NCDV and simian rotavirus SA-11 represent two distinct rotavirus serotypes. A genetic approach was used to determine which viral gene segments segregated with serotype-specific viral neutralization. Sixteen reassortant rotaviruses were derived by coinfection of MA-104 cells in vitro with the SA-11 and NCDV strains. The parental origin of reassortant rotavirus double-stranded RNA segments was determined by gene segment mobility in polyacrylamide gels and by hybridization with radioactively labeled parental viral transcripts. The authors found that two rotavirus gene segments found previously to code for outer capsid proteins vp3 and vp7 cosegregated with virus neutralization specificities.
NASA Astrophysics Data System (ADS)
Reed, Judd E.; Rumberger, John A.; Buithieu, Jean; Behrenbeck, Thomas; Breen, Jerome F.; Sheedy, Patrick F., II
1995-05-01
Electron beam computed tomography is unparalleled in its ability to consistently produce high quality dynamic images of the human heart. Its use in quantification of left ventricular dynamics is well established in both clinical and research applications. However, the image analysis tools supplied with the scanners offer a limited number of analysis options. They are based on embedded computer systems which have not been significantly upgraded since the scanner was introduced over a decade ago, in spite of the explosive improvements in available computer power which have occurred during this period. To address these shortcomings, a workstation-based ventricular analysis system has been developed at our institution. This system, which has been in use for over five years, is based on current workstation technology and therefore has benefited from the periodic upgrades in processor performance available to these systems. The dynamic image segmentation component of this system is an interactively supervised, semi-automatic surface identification and tracking system. It characterizes the endocardial and epicardial surfaces of the left ventricle as two concentric 4D hyper-space polyhedrons. Each of these polyhedrons has nearly ten thousand vertices, which are deposited into a relational database. The right ventricle is also processed in a similar manner. This database is queried by other custom components which extract ventricular function parameters such as regional ejection fraction and wall stress. The interactive tool which supervises dynamic image segmentation has been enhanced with a temporal domain display. The operator interactively chooses the spatial location of the endpoints of a line segment while the corresponding space/time image is displayed. These images, with content resembling M-Mode echocardiography, benefit from electron beam computed tomography's high spatial and contrast resolution. The segmented surfaces are displayed along with the imagery.
These displays give the operator valuable feedback pertaining to the contiguity of the extracted surfaces. As with M-Mode echocardiography, the velocity of moving structures can be easily visualized and measured. However, many views inaccessible to standard transthoracic echocardiography are easily generated. These features have augmented the interpretability of cine electron beam computed tomography and have prompted the recent cloning of this system into an 'omni-directional M-Mode display' system for use in digital post-processing of echocardiographic parasternal short axis tomograms. This enhances the functional assessment in orthogonal views of the left ventricle, accounting for shape changes particularly in the asymmetric post-infarction ventricle. Conclusions: A new tool has been developed for analysis and visualization of cine electron beam computed tomography. It has been found to be very useful in verifying the consistency of myocardial surface definition with a semi-automated segmentation tool. By drawing on M-Mode echocardiography experience, electron beam tomography's interpretability has been enhanced. Use of this feature, in conjunction with the existing image processing tools, will enhance the presentations of data on regional systolic and diastolic functions to clinicians in a format that is familiar to most cardiologists. Additionally, this tool reinforces the advantages of electron beam tomography as a single imaging modality for the assessment of left and right ventricular size, shape, and regional functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, K; Bostani, M; Cagnon, C
Purpose: AAPM Task Group 204 described size-specific dose estimates (SSDE) for body scans. The purpose of this work is to use a similar approach to develop patient-specific, scanner-independent organ dose estimates for head CT exams using an attenuation-based size metric. Methods: For eight patient models from the GSF family of voxelized phantoms, dose to the brain and lens of the eye was estimated using Monte Carlo simulations of contiguous axial scans for 64-slice MDCT scanners from four major manufacturers. Organ doses were normalized by scanner-specific 16 cm CTDIvol values and averaged across all scanners to obtain scanner-independent CTDIvol-to-organ-dose conversion coefficients for each patient model. Head size was measured at the first slice superior to the eyes; patient perimeter and effective diameter (ED) were measured directly from the GSF data. Because the GSF models use organ identification codes instead of Hounsfield units, water equivalent diameter (WED) was estimated indirectly. Using the image data from 42 patients ranging from 2 weeks old to adult, the perimeter, ED and WED size metrics were obtained and correlations between each metric were established. Applying these correlations to the GSF perimeter and ED measurements, WED was calculated for each model. The relationship between the various patient size metrics and CTDIvol-to-organ-dose conversion coefficients was then described. Results: The analysis of patient images demonstrated the correlation between WED and ED across a wide range of patient sizes. When applied to the GSF patient models, an exponential relationship between CTDIvol-to-organ-dose conversion coefficients and the WED size metric was observed, with correlation coefficients of 0.93 and 0.77 for the brain and lens of the eye, respectively. Conclusion: Strong correlation exists between CTDIvol-normalized brain dose and WED. For the lens of the eye, a lower correlation is observed, primarily due to surface dose variations.
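The exponential relationship reported above has the form f(WED) = a·exp(−b·WED), which can be fit by log-linear least squares. A minimal sketch; the WED values and the coefficients a and b below are made up for illustration, not taken from the study:

```python
import numpy as np

# Illustrative CTDIvol-to-organ-dose conversion coefficients versus
# water-equivalent diameter (WED, cm), following f(WED) = a * exp(-b * WED).
wed = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])   # cm (assumed)
coeff = 2.0 * np.exp(-0.05 * wed)                      # noise-free toy data

# Recover a and b by fitting a line to log(coeff) vs WED.
slope, intercept = np.polyfit(wed, np.log(coeff), 1)
a, b = np.exp(intercept), -slope
print(a, b)
```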
Funding Support: Siemens-UCLA Radiology Master Research Agreement; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski.
Logic gate scanner focus control in high-volume manufacturing using scatterometry
NASA Astrophysics Data System (ADS)
Dare, Richard J.; Swain, Bryan; Laughery, Michael
2004-05-01
Tool matching and optimal process control are critical requirements for success in semiconductor manufacturing. It is imperative that a tool's operating conditions are understood and controlled in order to create a process that is repeatable and produces devices within specifications. Likewise, it is important where possible to match multiple systems using some methodology, so that regardless of which tool is used the process remains in control. Agere Systems is currently using Timbre Technologies' Optical Digital Profilometry (ODP) scatterometry for controlling Nikon scanner focus at the most critical lithography layer: the logic gate. By adjusting focus settings and verifying the resultant changes in resist profile shape using ODP, it becomes possible to actively control scanner focus to achieve a desired resist profile. Since many critical lithography processes are designed to produce slightly re-entrant resist profiles, this type of focus control is not possible via Critical Dimension Scanning Electron Microscopy (CDSEM), where re-entrant profiles cannot be accurately determined. Additionally, the high throughput and non-destructive nature of this measurement technique save both cycle time and wafer costs compared to cross-section SEM. By implementing an ODP process check daily and after any maintenance on a scanner, Agere successfully enabled focus drift control, i.e., making necessary focus or equipment changes in order to maintain a desired resist profile.
Brunner, C; Hoffmann, K; Thiele, T; Schedler, U; Jehle, H; Resch-Genger, U
2015-04-01
Commercial platforms consisting of ready-to-use microarrays printed with target-specific DNA probes, a microarray scanner, and software for data analysis are available for different applications in medical diagnostics and food analysis, detecting, e.g., viral and bacteriological DNA sequences. The transfer of these tools from basic research to routine analysis, their broad acceptance in regulated areas, and their use in medical practice require suitable calibration tools for regular control of instrument performance in addition to internal assay controls. Here, we present the development of a novel assay-adapted calibration slide for a commercialized DNA-based assay platform, consisting of precisely arranged fluorescent areas of various intensities obtained by incorporating different concentrations of a "green" dye and a "red" dye in a polymer matrix. These dyes are "Cy3" and "Cy5" analogues with improved photostability, chosen because their spectroscopic properties closely match those of common labels for the green and red channels of microarray scanners. This simple tool allows users to efficiently and regularly assess and control the performance of the microarray scanner provided with the biochip platform and to compare different scanners. It will eventually be used as a fluorescence intensity scale for referencing assay results and to enhance the overall comparability of diagnostic tests.
Overlay improvements using a real time machine learning algorithm
NASA Astrophysics Data System (ADS)
Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank
2014-04-01
While semiconductor manufacturing is moving towards the 14 nm node using immersion lithography, the overlay requirements are tightened to below 5 nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology over time. In this paper we will present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance over time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.
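A toy stand-in for the train-then-correct idea described above. The actual TWINSCAN/YieldStar algorithm is not public; here, ordinary least squares on synthetic sensor-overlay pairs illustrates learning a predictive correction from paired scanner and metrology data. The linear model, feature count, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-wafer scanner sensor features paired with post-exposure
# overlay metrology (nm). Everything here is illustrative.
n_wafers, n_features = 200, 5
X = rng.normal(size=(n_wafers, n_features))       # sensor/context data
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.3])     # nm overlay per unit signal
y = X @ true_w + rng.normal(0.0, 0.2, n_wafers)   # measured overlay (nm)

# Train on the paired data, then apply the learned correction inline.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ w
print(f"overlay rms before {y.std():.2f} nm, after correction {residual.std():.2f} nm")
```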
LANSCE-R wire-scanner analog front-end electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Michael E.
2011-01-01
A new AFE is being developed for the new LANSCE-R wire-scanner systems. The new AFE is implemented in a National Instruments Compact RIO (cRIO) module installed in a BiRa 4U BiRIO cRIO chassis specifically designed to accommodate the cRIO crate and all the wire-scanner interface, control and motor-drive electronics. A single AFE module provides an interface to both X and Y wire sensors using true DC-coupled transimpedance amplifiers, providing collection of the wire charge signals, real-time wire integrity verification using the normal data-acquisition system, and wire bias of 0 V to ±50 V. The AFE system is designed to accommodate comparatively long macropulses (>1 ms) with high PRF (>120 Hz) without the need to provide timing signals. The basic AFE bandwidth is flat from true DC to 50 kHz with a true first-order pole at 50 kHz. Numeric integration in the cRIO FPGA provides real-time pulse-to-pulse numeric integration of the AFE signal to compute the total charge collected in each macropulse. This method of charge collection eliminates the need to provide synchronization signals to the wire-scanner AFE while providing the capability to accurately record the charge from long macropulses at high PRF.
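The pulse-to-pulse charge computation described above amounts to numerically integrating the transimpedance-amplifier signal over each macropulse, Q = ∫ i(t) dt. A minimal sketch; the sample rate, pulse amplitude, and pulse shape are illustrative assumptions, not the AFE's actual parameters:

```python
import numpy as np

fs = 1.0e6                                   # 1 MHz sampling (assumed)
t = np.arange(0.0, 1.2e-3, 1.0 / fs)         # window covering a >1 ms macropulse
i_wire = np.where(t < 1.0e-3, 2.0e-6, 0.0)   # 2 uA flat-top, 1 ms long (assumed)

# Trapezoidal integration of the wire current, as the FPGA would do
# pulse to pulse, giving the total charge collected per macropulse.
q = np.sum(0.5 * (i_wire[1:] + i_wire[:-1])) / fs
print(q)   # ~2e-9 C for this assumed pulse
```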
Malkyarenko, Dariya I; Chenevert, Thomas L
2014-12-01
To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for an isotropic medium. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
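The corrected-b-value approach above can be sketched as follows: a spatial gradient-nonlinearity scalar c rescales the nominal b-value (b′ = c²·b), which removes the corresponding bias from the mono-exponential ADC fit. All numbers below are illustrative assumptions, not the paper's measured scalars:

```python
import numpy as np

b_nominal = 1000.0    # s/mm^2, nominal b-value
c = 1.10              # 10% gradient error at some off-center location (assumed)
adc_true = 1.1e-3     # mm^2/s, roughly ice-water at 0 degrees C

# The scanner actually delivers the nonlinearity-scaled b-value, so the
# measured diffusion-weighted signal reflects c**2 * b_nominal.
s0 = 1000.0
s = s0 * np.exp(-(c**2 * b_nominal) * adc_true)

adc_biased = np.log(s0 / s) / b_nominal              # fit with nominal b
adc_corrected = np.log(s0 / s) / (c**2 * b_nominal)  # fit with corrected b

# The biased estimate is high by a factor c**2 (21% here); the corrected
# estimate recovers the true ADC.
print(adc_biased / adc_true, adc_corrected / adc_true)
```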
NASA Technical Reports Server (NTRS)
Reginato, R. J.; Vedder, J. F.; Idso, S. B.; Jackson, R. D.; Blanchard, M. B.; Goettelman, R.
1977-01-01
For several days in March of 1975, reflected solar radiation measurements were obtained from smooth and rough surfaces of wet, drying, and continually dry Avondale loam at Phoenix, Arizona, with pyranometers located 50 cm above the ground surface and a multispectral scanner flown at a 300-m height. The simple summation of the different band radiances measured by the multispectral scanner proved equally as good as the pyranometer data for estimating surface soil water content if the multispectral scanner data were standardized with respect to the intensity of incoming solar radiation or the reflected radiance from a reference surface, such as the continually dry soil. Without this means of standardization, multispectral scanner data are most useful in a spectral band-ratioing context. Our results indicated that, for the bands used, no significant information on soil water content could be obtained by band ratioing. Thus, variability in soil water content should not significantly affect soil-type discrimination based on identification of type-specific spectral signatures. Therefore remote sensing, conducted in the 0.4- to 1.0-micron wavelength region of the solar spectrum, would seem to be much more suited to identifying crop and soil types than to estimating soil water content.
Image quality phantom and parameters for high spatial resolution small-animal SPECT
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Harteveld, Anita A.; Meeuwis, Antoi P. W.; Disselhorst, Jonathan A.; Beekman, Freek J.; Oyen, Wim J. G.; Boerman, Otto C.
2011-10-01
At present, generally accepted standards to characterize small-animal single photon emission computed tomography (SPECT) systems do not exist. Whereas for small-animal positron emission tomography (PET), the NEMA NU 4-2008 guidelines are available, such standards are still lacking for small-animal SPECT. More specifically, a dedicated image quality (IQ) phantom and corresponding IQ parameters are absent. The structures of the existing PET IQ phantom are too large to fully characterize the sub-millimeter spatial resolution of modern multi-pinhole SPECT scanners, and its diameter will not fit into all scanners when operating in high spatial resolution mode. We therefore designed and constructed an adapted IQ phantom with smaller internal structures and external diameter, and a facility to guarantee complete filling of the smallest rods. The associated IQ parameters were adapted from NEMA NU 4. An additional parameter, effective whole-body sensitivity, was defined since this was considered relevant in view of the variable size of the field of view and the use of multiple bed positions as encountered in modern small-animal SPECT scanners. The usefulness of the phantom was demonstrated for 99mTc in a USPECT-II scanner operated in whole-body scanning mode using a multi-pinhole mouse collimator with 0.6 mm pinhole diameter.
Use of Breast-Specific PET Scanners and Comparison with MR Imaging.
Narayanan, Deepa; Berg, Wendie A
2018-05-01
The goals of this article are to discuss the role of breast-specific PET imaging of women with breast cancer, compare the clinical performance of positron emission mammography (PEM) and MR imaging for current indications, and provide recommendations for when women should undergo PEM instead of breast MR imaging. Published by Elsevier Inc.
Clarkson, Wesley A; Restrepo, Carlos Santiago; Bauch, Terry D; Rubal, Bernard J
2009-01-01
This study examines the effects of intravenous infusion of adenosine and sublingual nitroglycerin on coronary angiograms obtained by current-generation multidetector computed tomography. We assessed coronary vasodilation at baseline and after intravenous adenosine (140 µg/kg/min) or sublingual nitroglycerin spray (800 µg) in 7 female swine (weight, 40.9 ± 1.4 kg) by using electrocardiogram-gated coronary angiography with a 64-detector scanner (rotation time, 400 ms; 120 kV; 400 mA) and intravenous contrast (300 mg/mL iohexol, 4.5 mL/s, 2 mL/kg). Cross-sectional areas of segments in the left anterior descending, circumflex, and right coronary arteries were evaluated in oblique orthogonal views. Images were acquired at an average heart rate of 73 ± 11 beats per minute. Changes in aortic pressure were not significant with nitroglycerin but decreased (approximately 10%) with adenosine. Of the 76 segments analyzed (baseline range, 2 to 39 mm²), 1 distal segment could not be assessed after adenosine. Segment cross-sectional area increased by 11.3% with nitroglycerin but decreased by 9.6% during adenosine infusion. The results of the present study are consistent with the practice of using sublingual nitroglycerin to enhance visualization of epicardial vessels and suggest that intravenous adenosine may hinder coronary artery visualization. This study is the first repeated-measures electrocardiogram-gated CT evaluation to use the same imaging technology to assess changes in coronary cross-sectional area before and after treatment with a vasodilator. The nitroglycerin-associated changes in our swine model were modest in comparison with previously reported human studies. PMID:20034433
Brain tumor segmentation in MRI by using the fuzzy connectedness method
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Udupa, Jayaram K.; Hackney, David; Moonis, Gul
2001-07-01
The aim of this paper is the precise and accurate quantification of brain tumor via MRI. This is very useful in evaluating disease progression, response to therapy, and the need for changes in treatment plans. We use multiple MRI protocols including FLAIR, T1, and T1 with Gd enhancement to gather information about different aspects of the tumor and its vicinity: edema, active regions, and scar left over due to surgical intervention. We have adapted the fuzzy connectedness framework to segment tumor and to measure its volume. The method requires only limited user interaction in routine clinical MRI. The first step in the process is to apply an intensity normalization method to the images so that the same body region has the same tissue meaning independent of the scanner and patient. Subsequently, a fuzzy connectedness algorithm is utilized to segment the different aspects of the tumor. The system has been tested for its precision, accuracy, and efficiency using 40 patient studies. The percent coefficient of variation (% CV) in volume due to operator subjectivity in specifying seeds for fuzzy connectedness segmentation is less than 1%. The mean operator and computer time taken per study is 3 minutes. The package is designed to run under operator supervision. Delineation has been found to agree with the operators' visual inspection most of the time except in some cases when the tumor is close to the boundary of the brain. In the latter case, the scalp is included in the delineation and an operator has to exclude this manually. The methodology is rapid, robust, consistent, yielding highly reproducible measurements, and is likely to become part of the routine evaluation of brain tumor patients in our health system.
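A minimal sketch of the max-min fuzzy connectedness idea follows. This is a simplified, single-seed variant of the general framework, not the authors' implementation: affinity here is just a Gaussian of the intensity difference between 4-neighbors, and the map is computed by Dijkstra-like propagation.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, sigma=20.0):
    """Max-min fuzzy connectedness map from a single seed (simplified).

    A path is only as strong as its weakest affinity link; each pixel
    receives the strength of its best path to the seed.
    """
    h, w = image.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_strength, (y, x) = heapq.heappop(heap)
        strength = -neg_strength
        if strength < conn[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                affinity = np.exp(-((image[y, x] - image[ny, nx]) ** 2)
                                  / (2 * sigma**2))
                cand = min(strength, affinity)
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn

# Toy image: a bright "tumor" block on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 100.0
conn_map = fuzzy_connectedness(img, seed=(3, 3))
segmentation = conn_map > 0.5
```

Thresholding the connectedness map yields the object: pixels inside the bright block connect to the seed through uniformly high affinities, while any path leaving the block must cross a weak boundary link.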
Santos Armentia, E; Tardáguila de la Fuente, G; Castellón Plaza, D; Delgado Sánchez-Gracián, C; Prada González, R; Fernández Fernández, L; Tardáguila Montero, F
2014-01-01
To study the differences in vascular image quality, bone subtraction, and dose of radiation of dual energy CT angiography of the supraaortic trunks using different tube voltages. We reviewed the CT angiograms of the supraaortic trunks in 46 patients acquired with a 128-slice dual source CT scanner using two voltage protocols (80/140 kV and 100/140 kV). The "head bone removal" tool was used for postprocessing. We divided the arteries into 15 segments. In each segment, we evaluated the image quality of the vessels and the effectiveness of bone removal in multiplanar reconstructions (MPR) and in maximum intensity projections (MIP) with each protocol, analyzing the trabecular and cortical bones separately. We also evaluated the dose of radiation received. Of the 46 patients, 13 were studied using 80/140 kV and 33 with 100/140 kV. There were no significant differences between the two groups in age or sex. Image quality in four segments was better in the group examined with 100/140 kV. Cortical bone removal in MPR and MIP and trabecular bone removal in MIP were also better in the group examined with 100/140 kV. The dose of radiation received was significantly higher in the group examined with 100/140 kV (1.16 mSv with 80/140 kV vs. 1.59 mSv with 100/140 kV). Using 100/140 kV increases the dose of radiation but improves the quality of the study of arterial segments and bone subtraction. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
CT protocol management: simplifying the process by using a master protocol concept
Bour, Robert K.; Rubert, Nicholas; Wendt, Gary; Pozniak, Myron; Ranallo, Frank N.
2015-01-01
This article explains a method for creating CT protocols for a wide range of patient body sizes and clinical indications, using detailed tube current information from a small set of commonly used protocols. Analytical expressions were created relating CT technical acquisition parameters which can be used to create new CT protocols on a given scanner or customize protocols from one scanner to another. Plots of mA as a function of patient size for specific anatomical regions were generated and used to identify the tube output needs for patients as a function of size for a single master protocol. Tube output data were obtained from the DICOM header of clinical images from our PACS and patient size was measured from CT localizer radiographs under IRB approval. This master protocol was then used to create 11 additional master protocols. The 12 master protocols were further combined to create 39 single and multiphase clinical protocols. Radiologist acceptance rate of exams scanned using the clinical protocols was monitored for 12,857 patients to analyze the effectiveness of the presented protocol management methods using a two‐tailed Fisher's exact test. A single routine adult abdominal protocol was used as the master protocol to create 11 additional master abdominal protocols of varying dose and beam energy. Situations in which the maximum tube current would have been exceeded are presented, and the trade‐offs between increasing the effective tube output via 1) decreasing pitch, 2) increasing the scan time, or 3) increasing the kV are discussed. Out of 12 master protocols customized across three different scanners, only one had a statistically significant acceptance rate that differed from the scanner it was customized from. The difference, however, was only 1% and was judged to be negligible. All other master protocols differed in acceptance rate insignificantly between scanners. 
The methodology described in this paper allows a small set of master protocols to be adapted among different clinical indications on a single scanner and among different CT scanners. PACS number: 87.57.Q PMID:26219005
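The tube-output bookkeeping behind this kind of protocol adaptation can be illustrated with the standard helical-CT relation between tube current, rotation time, and pitch; all numbers below are invented, not taken from the paper.

```python
def effective_mas(ma, rotation_time_s, pitch):
    """Effective mAs for helical CT: tube current (mA) x rotation time (s) / pitch."""
    return ma * rotation_time_s / pitch

def required_ma(target_eff_mas, rotation_time_s, pitch):
    """Tube current needed to reach a target effective mAs."""
    return target_eff_mas * pitch / rotation_time_s

# Hypothetical case: the size-based curve calls for 300 effective mAs,
# but the tube tops out at 500 mA.
target_eff = 300.0
rotation, pitch, ma_max = 0.5, 1.0, 500.0

ma_needed = required_ma(target_eff, rotation, pitch)        # 600 mA: exceeds limit

# Trade-off 1 from the text: decrease pitch so the same target needs less current
pitch_low = 0.75
ma_feasible = required_ma(target_eff, rotation, pitch_low)  # 450 mA: within limit
```

When the size-based curve demands more current than the tube can deliver, lowering the pitch or lengthening the rotation time restores the target effective mAs, at the cost of a longer scan; raising the kV is the third option discussed above.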
Evaluation and implementation of triple‐channel radiochromic film dosimetry in brachytherapy
Bradley, David; Nisbet, Andrew
2014-01-01
The measurement of dose distributions in clinical brachytherapy, for the purpose of quality control, commissioning or dosimetric audit, is challenging and requires development. Radiochromic film dosimetry with a commercial flatbed scanner may be suitable, but careful methodologies are required to control various sources of uncertainty. Triple‐channel dosimetry has recently been utilized in external beam radiotherapy to improve the accuracy of film dosimetry, but its use in brachytherapy, with characteristic high maximum doses, steep dose gradients, and small scales, has been less well researched. We investigate the use of advanced film dosimetry techniques for brachytherapy dosimetry, evaluating uncertainties and assessing the mitigation afforded by triple‐channel dosimetry. We present results on postirradiation film darkening, lateral scanner effect, film surface perturbation, film active layer thickness, film curling, and examples of the measurement of clinical brachytherapy dose distributions. The lateral scanner effect in brachytherapy film dosimetry can be very significant, up to 23% dose increase at 14 Gy, at ± 9 cm lateral from the scanner axis for simple single‐channel dosimetry. Triple‐channel dosimetry mitigates the effect, but still limits the useable width of a typical scanner to less than 8 cm at high dose levels to give dose uncertainty to within 1%. Triple‐channel dosimetry separates dose and dose‐independent signal components, and effectively removes disturbances caused by film thickness variation and surface perturbations in the examples considered in this work. The use of reference dose films scanned simultaneously with brachytherapy test films is recommended to account for scanner variations from calibration conditions. Postirradiation darkening, which is a continual logarithmic function with time, must be taken into account between the reference and test films. 
Finally, films must be flat when scanned to avoid the Callier‐like effects and to provide reliable dosimetric results. We have demonstrated that radiochromic film dosimetry with GAFCHROMIC EBT3 film and a commercial flatbed scanner is a viable method for brachytherapy dose distribution measurement, and uncertainties may be reduced with triple‐channel dosimetry and specific film scan and evaluation methodologies. PACS numbers: 87.55.Qr, 87.56.bg, 87.55.km PMID:25207417
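A toy version of how triple-channel dosimetry separates dose from a thickness-like disturbance might look like the following. The calibration coefficients and the simple multiplicative disturbance model are assumptions made for illustration only; they are not the paper's method or film data.

```python
import numpy as np

def net_od(raw, raw_unexposed):
    """Net optical density of a film scan relative to an unexposed film."""
    return np.log10(raw_unexposed / raw)

# Hypothetical per-channel calibration: dose (Gy) = a*netOD + b*netOD^2
CAL = {"R": (10.0, 25.0), "G": (14.0, 30.0), "B": (22.0, 40.0)}

def dose_single(channel, od):
    a, b = CAL[channel]
    return a * od + b * od**2

def dose_triple(ods, scales=np.linspace(0.9, 1.1, 201)):
    """Toy triple-channel dosimetry: assume an active-layer thickness change
    scales netOD equally in all channels; pick the scale that makes the three
    channel doses agree best, then return their mean and that scale."""
    best = None
    for s in scales:
        doses = [dose_single(ch, ods[ch] / s) for ch in ("R", "G", "B")]
        spread = max(doses) - min(doses)
        if best is None or spread < best[0]:
            best = (spread, float(np.mean(doses)), float(s))
    return best[1], best[2]

def od_for(ch, d):
    """Invert the quadratic calibration analytically (for simulation only)."""
    a, b = CAL[ch]
    return (-a + np.sqrt(a**2 + 4 * b * d)) / (2 * b)

# Simulate a 2 Gy exposure with a +5% thickness perturbation in all channels
true_dose = 2.0
ods = {ch: 1.05 * od_for(ch, true_dose) for ch in ("R", "G", "B")}
d_est, s_est = dose_triple(ods)
```

The principle is that a thickness change moves all three channel doses together while a true dose change moves them differently, so searching for the disturbance scale that brings the channels into agreement recovers both the dose and the perturbation.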
Pattern recognition of native plant communities: Manitou Colorado test site
NASA Technical Reports Server (NTRS)
Driscoll, R. S.
1972-01-01
Optimum channel selection among 12 channels of multispectral scanner imagery identified six as providing the best information about 11 vegetation classes and two nonvegetation classes at the Manitou Experimental Forest. Intensive preprocessing of the scanner signals was required to eliminate a serious scan angle effect. Final processing of the normalized data provided acceptable recognition results of generalized plant community types. Serious errors occurred with attempts to classify specific community types within upland grassland areas. The consideration of the convex mixtures concept (effects of amounts of live plant cover, exposed soil, and plant litter cover on apparent scene radiances) significantly improved the classification of some of the grassland classes.
Baeßler, Bettina; Schaarschmidt, Frank; Stehning, Christian; Schnackenburg, Bernhard; Maintz, David; Bunck, Alexander C
2015-11-01
Previous studies showed that myocardial T2 relaxation times measured by cardiac T2-mapping vary significantly depending on sequence and field strength. Therefore, a systematic comparison of different T2-mapping sequences and the establishment of dedicated T2 reference values are mandatory for diagnostic decision-making. Phantom experiments using gel probes with a range of different T1 and T2 times were performed on a clinical 1.5T and 3T scanner. In addition, 30 healthy volunteers were examined at 1.5 and 3T in immediate succession. In each examination, three different T2-mapping sequences were performed at three short-axis slices: Multi Echo Spin Echo (MESE), T2-prepared balanced SSFP (T2prep), and Gradient Spin Echo with and without fat saturation (GraSEFS/GraSE). Segmented T2 maps were generated according to the AHA 16-segment model and statistical analysis was performed. Significant intra-individual differences between mean T2 times were observed for all sequences. In general, T2prep resulted in the lowest and GraSE in the highest T2 times. A significant variation with field strength was observed for mean T2 in phantom as well as in vivo, with higher T2 values at 1.5T compared to 3T, regardless of the sequence used. Segmental T2 values for each sequence at 1.5 and 3T are presented. Despite a careful selection of sequence parameters and volunteers, significant variations of the measured T2 values were observed between field strengths, MR sequences and myocardial segments. Therefore, we present segmental T2 values for each sequence at 1.5 and 3T with the inherent potential to serve as reference values for future studies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
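Whatever the acquisition scheme, each T2-mapping sequence ultimately yields per-pixel T2 estimates from a mono-exponential decay fit across echo times, which can be sketched as follows (echo times and signal values are illustrative, not study data).

```python
import numpy as np

def fit_t2(te_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE/T2).
    Returns (S0, T2), with T2 in the same units as te_ms."""
    te = np.asarray(te_ms, dtype=float)
    y = np.log(np.asarray(signals, dtype=float))
    slope, intercept = np.polyfit(te, y, 1)   # highest-degree coefficient first
    return np.exp(intercept), -1.0 / slope

# Hypothetical multi-echo measurement of myocardium with T2 ~ 50 ms
te = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
s0_true, t2_true = 1000.0, 50.0
signal = s0_true * np.exp(-te / t2_true)

s0_fit, t2_fit = fit_t2(te, signal)
```

In practice magnitude noise biases the log-linear fit at long echo times, which is one reason different sequences and fitting choices yield systematically different T2 values, as the study observes.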
Reliable and fast volumetry of the lumbar spinal cord using cord image analyser (Cordial).
Tsagkas, Charidimos; Altermatt, Anna; Bonati, Ulrike; Pezold, Simon; Reinhard, Julia; Amann, Michael; Cattin, Philippe; Wuerfel, Jens; Fischer, Dirk; Parmar, Katrin; Fischmann, Arne
2018-04-30
To validate the precision and accuracy of the semi-automated cord image analyser (Cordial) for lumbar spinal cord (SC) volumetry in 3D T1w MRI data of healthy controls (HC). 40 3D T1w images of 10 HC (w/m: 6/4; age range: 18-41 years) were acquired at one 3T scanner in two MRI sessions (time interval 14.9±6.1 days). Each subject was scanned twice per session, allowing determination of test-retest reliability both in back-to-back (intra-session) and scan-rescan images (inter-session). Cordial was applied for lumbar cord segmentation twice per image by two raters, allowing for assessment of intra- and inter-rater reliability, and compared to a manual gold standard. While manually segmented volumes were larger (mean: 2028±245 mm³ vs. Cordial: 1636±300 mm³, p<0.001), accuracy assessments between manually and semi-automatically segmented images showed a mean Dice coefficient of 0.88±0.05. Calculation of within-subject coefficients of variation (COV) demonstrated high intra-session (1.22-1.86%), inter-session (1.26-1.84%), as well as intra-rater (1.73-1.83%) reproducibility. No significant difference was shown between intra- and inter-session reproducibility or between intra-rater reliabilities. Although inter-rater reproducibility (COV: 2.87%) was slightly lower compared to all other reproducibility measures, between-rater consistency was very strong (intraclass correlation coefficient: 0.974). While under-estimating the lumbar SC volume, Cordial still provides excellent inter- and intra-session reproducibility, showing high potential for application in longitudinal trials. • Lumbar spinal cord segmentation using the semi-automated cord image analyser (Cordial) is feasible. • The lumbar spinal cord was defined as the 40-mm cord segment 60 mm above the conus medullaris. • Cordial provides excellent inter- and intra-session reproducibility in the lumbar spinal cord region. • Cordial shows high potential for application in longitudinal trials.
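The two metrics used above, the Dice coefficient for accuracy and the within-subject coefficient of variation for reproducibility, can be computed as follows. The masks and volumes are invented examples, and the exact COV pooling used in the paper is an assumption here.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def within_subject_cov(repeats):
    """Within-subject coefficient of variation (%) across repeated volume
    measurements: RMS of per-subject SD/mean, expressed in percent."""
    repeats = np.asarray(repeats, dtype=float)   # shape (subjects, scans)
    cvs = repeats.std(axis=1, ddof=1) / repeats.mean(axis=1)
    return 100.0 * np.sqrt(np.mean(cvs**2))

# Toy masks: manual segmentation vs. a smaller semi-automated one
manual = np.zeros((10, 10), bool)
manual[2:8, 2:8] = True          # 36 voxels
auto = np.zeros((10, 10), bool)
auto[3:8, 3:8] = True            # 25 voxels, fully inside the manual mask
d = dice(manual, auto)

# Invented repeated lumbar cord volumes (mm^3), two scans per subject
volumes = np.array([[1630.0, 1650.0],
                    [1600.0, 1580.0]])
cov = within_subject_cov(volumes)
```

Note how Dice penalizes the systematic volume underestimation (the automated mask is entirely inside the manual one yet Dice is well below 1), while the COV only reflects scan-to-scan scatter, which is why the paper can report excellent reproducibility alongside a volume bias.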
Non-laser-based scanner for three-dimensional digitization of historical artifacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Daniel V.; Baldwin, Kevin C.; Duncan, Donald D
2007-05-20
A 3D scanner, based on incoherent illumination techniques, and associated data-processing algorithms are presented that can be used to scan objects at lateral resolutions ranging from 5 to 100 µm (or more) and depth resolutions of approximately 2 µm. The scanner was designed with the specific intent to scan cuneiform tablets but can be utilized for other applications. Photometric stereo techniques are used to obtain both a surface normal map and a parameterized model of the object's bidirectional reflectance distribution function. The normal map is combined with height information, gathered by structured light techniques, to form a consistent 3D surface. Data from Lambertian and specularly diffuse spherical objects are presented and used to quantify the accuracy of the techniques. Scans of a cuneiform tablet are also presented. All presented data are at a lateral resolution of 26.8 µm as this is approximately the minimum resolution deemed necessary to accurately represent cuneiform.
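The photometric stereo step can be sketched for a single Lambertian surface point under known distant lights; the light directions, normal, and albedo below are invented for the example.

```python
import numpy as np

def photometric_stereo_normal(light_dirs, intensities):
    """Recover the surface normal and albedo of a Lambertian point from
    >= 3 images under known distant lights, via the linear model
    I = L @ (albedo * n), solved in least squares."""
    L = np.asarray(light_dirs, dtype=float)
    I = np.asarray(intensities, dtype=float)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * n
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Hypothetical point: normal tilted in x, albedo 0.8, three known lights
n_true = np.array([0.3, 0.0, 1.0])
n_true /= np.linalg.norm(n_true)
rho = 0.8
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
I = rho * L @ n_true            # simulated Lambertian intensities

n_est, rho_est = photometric_stereo_normal(L, I)
```

Repeating this per pixel yields the normal map that the scanner then fuses with the structured-light height data; non-Lambertian materials require the richer BRDF model mentioned in the abstract.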
NASA Astrophysics Data System (ADS)
Lachat, E.; Landes, T.; Grussenmeyer, P.
2017-08-01
Handheld 3D scanners can be used to complete large scale models with the acquisition of occluded areas or small artefacts. This may be of interest for digitization projects in the field of Cultural Heritage, where detailed areas may require a specific treatment. Such sensors have the advantage of being easily portable in the field and usable even without specialist knowledge. In this paper, the Freestyle3D handheld scanner launched on the market in 2015 by FARO is investigated. Different experiments are described, covering various topics such as the influence of range or color on the measurements, as well as the precision achieved for geometrical primitive digitization. These laboratory experiments are completed by acquisitions performed on engraved and sculpted stone blocks. This practical case study is useful to investigate which acquisition protocol is the most suitable and leads to precise results. The produced point clouds are compared to photogrammetric surveys to assess their accuracy.
Multispectral scanner flight model (F-1) radiometric calibration and alignment handbook
NASA Technical Reports Server (NTRS)
1981-01-01
This handbook on the calibration of the MSS-D flight model (F-1) provides both the relevant data and a summary description of how the data were obtained for the system radiometric calibration, system relative spectral response, and the filter response characteristics for all 24 channels of the four band MSS-D F-1 scanner. The calibration test procedure and resulting test data required to establish the reference light levels of the MSS-D internal calibration system are discussed. The final set of data ("nominal" calibration wedges for all 24 channels) for the internal calibration system is given. The system relative spectral response measurements for all 24 channels of MSS-D F-1 are included. These data are the spectral response of the complete scanner, which is the composite of the spectral responses of the scan mirror, primary and secondary telescope mirrors, fiber optics, optical filters, and detectors. Unit level test data on the measurements of the individual channel optical transmission filters are provided. Measured performance is compared to specification values.
Simulated thought insertion: Influencing the sense of agency using deception and magic.
Olson, Jay A; Landry, Mathieu; Appourchaux, Krystèle; Raz, Amir
2016-07-01
In order to study the feeling of control over decisions, we told 60 participants that a neuroimaging machine could read and influence their thoughts. While inside a mock brain scanner, participants chose arbitrary numbers in two similar tasks. In the Mind-Reading Task, the scanner appeared to guess the participants' numbers; in the Mind-Influencing Task, it appeared to influence their choice of numbers. We predicted that participants would feel less voluntary control over their decisions when they believed that the scanner was influencing their choices. As predicted, participants felt less control and made slower decisions in the Mind-Influencing Task compared to the Mind-Reading Task. A second study replicated these findings. Participants' experience of the ostensible influence varied, with some reporting an unknown source directing them towards specific numbers. This simulated thought insertion paradigm can therefore influence feelings of voluntary control and may help model symptoms of mental disorders. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Baker, Jameson Todd
The complex dose patterns produced in Intensity Modulated Radiation Therapy make the typical QA practice of a second dose calculation insufficient for ensuring safe treatment of patients. Many facilities choose to deliver the treatment to film inserted in a phantom and calculate the dose delivered as an additional check of the treatment plan. Radiochromic films allow for measurements without the use of a processor in the current digital age. International Specialty Products developed Gafchromic EBT film, a radiochromic film with a useful range of 1–800 cGy. EBT film properties are fully analyzed, including studies of uniformity, spectral absorption, exposure sensitivity, energy dependence, and post-exposure density growth. Dosimetric performance on commercially available digitizers is studied, with specific attention to their shortcomings. Finally, a custom-designed scanner is built specifically for EBT film and its unique properties. Performance of the EBT digitizer is analyzed and compared against currently available scanners.
NASA Astrophysics Data System (ADS)
Shepard, Lauren; Sommer, Kelsey; Izzo, Richard; Podgorsak, Alexander; Wilson, Michael; Said, Zaid; Rybicki, Frank J.; Mitsouras, Dimitrios; Rudin, Stephen; Angel, Erin; Ionita, Ciprian N.
2017-03-01
Purpose: Accurate patient-specific phantoms for device testing or endovascular treatment planning can be 3D printed. We expand the applicability of this approach for cardiovascular disease, in particular, for CT-geometry derived benchtop measurements of Fractional Flow Reserve, the reference standard for determination of significant individual coronary artery atherosclerotic lesions. Materials and Methods: Coronary CT Angiography (CTA) images during a single heartbeat were acquired with a 320 × 0.5 mm detector-row scanner (Toshiba Aquilion ONE). These coronary CTA images were used to create 4 patient-specific cardiovascular models with various grades of stenosis: severe, <75% (n=1); moderate, 50-70% (n=1); and mild, <50% (n=2). DICOM volumetric images were segmented using a 3D workstation (Vitrea, Vital Images); the output was used to generate STL files (using AutoDesk Meshmixer), and further processed to create 3D printable geometries for flow experiments. Multi-material printed models (Stratasys Connex3) were connected to a programmable pulsatile pump, and the pressure was measured proximal and distal to the stenosis using pressure transducers. Compliance chambers were used before and after the model to modulate the pressure wave. A flow sensor was used to ensure flow rates within physiologically reported values. Results: 3D model based FFR measurements correlated well with stenosis severity. FFR measurements for each stenosis grade were: 0.8 severe, 0.7 moderate and 0.88 mild. Conclusions: 3D printed models of patient-specific coronary arteries allow for accurate benchtop measurement of FFR. This approach can be used as a future diagnostic tool or for testing CT image-based FFR methods.
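The benchtop FFR computation itself reduces to a ratio of cycle-averaged pressures across the stenosis; here is a sketch with synthetic transducer traces (the waveform and the stenosis pressure drop are invented).

```python
import numpy as np

def ffr(p_proximal, p_distal):
    """Fractional Flow Reserve: ratio of cycle-averaged distal to proximal
    pressure across a stenosis (values at or below ~0.80 are commonly read
    as hemodynamically significant)."""
    return np.mean(p_distal) / np.mean(p_proximal)

# Synthetic pulsatile pressure traces (mmHg) over one cycle
t = np.linspace(0.0, 1.0, 200)
p_prox = 100.0 + 20.0 * np.sin(2 * np.pi * t)
p_dist = 0.7 * p_prox      # invented stenosis dropping ~30% of the pressure

ffr_value = ffr(p_prox, p_dist)
```

With transducers sampling proximal and distal to the printed stenosis, this ratio is exactly what the study reports per model (0.8, 0.7, and 0.88).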
Silva, Guilherme; Martins, Cristina; Moreira da Silva, Nádia; Vieira, Duarte; Costa, Dias; Rego, Ricardo; Fonseca, José; Silva Cunha, João Paulo
2017-08-01
Background and purpose: We evaluated two methods to identify mesial temporal sclerosis (MTS): visual inspection by experienced epilepsy neuroradiologists based on structural magnetic resonance imaging sequences and automated hippocampal volumetry provided by a processing pipeline based on the FMRIB Software Library. Methods: This retrospective study included patients from the epilepsy monitoring unit database of our institution. All patients underwent brain magnetic resonance imaging in 1.5T and 3T scanners with protocols that included thin coronal T2, T1 and fluid-attenuated inversion recovery and isometric T1 acquisitions. Two neuroradiologists with experience in epilepsy and blinded to clinical data evaluated magnetic resonance images for the diagnosis of MTS. The diagnosis of MTS based on an automated method included the calculation of a volumetric asymmetry index between the two hippocampi of each patient and a threshold value to define the presence of MTS obtained through statistical tests (receiver operating characteristics curve). Hippocampi were segmented for volumetric quantification using the FIRST tool and fslstats from the FMRIB Software Library. Results: The final cohort included 19 patients with unilateral MTS (14 left side): 14 women and a mean age of 43.4 ± 10.4 years. Neuroradiologists had a sensitivity of 100% and specificity of 73.3% to detect MTS (gold standard, k = 0.755). Automated hippocampal volumetry had a sensitivity of 84.2% and specificity of 86.7% (k = 0.704). Combined, these methods had a sensitivity of 84.2% and a specificity of 100% (k = 0.825). Conclusions: Automated volumetry of the hippocampus could play an important role in temporal lobe epilepsy evaluation, namely on confirmation of unilateral MTS diagnosis in patients with radiologically suggestive findings.
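The asymmetry-index classifier can be sketched as follows. The threshold and the hippocampal volumes are illustrative only; the paper derives its actual threshold from a ROC analysis.

```python
def asymmetry_index(vol_left, vol_right):
    """Volumetric asymmetry index: absolute hippocampal volume difference
    normalized by the mean of the two volumes."""
    return abs(vol_left - vol_right) / (0.5 * (vol_left + vol_right))

def classify_mts(vol_left, vol_right, threshold=0.10):
    """Flag MTS when the asymmetry index exceeds a ROC-derived threshold.
    The 0.10 default here is purely illustrative."""
    return asymmetry_index(vol_left, vol_right) > threshold

# Invented FIRST-segmented hippocampal volumes (mm^3)
symmetric_case = classify_mts(3900.0, 4000.0)   # small asymmetry: negative
left_mts_case = classify_mts(2800.0, 4000.0)    # marked left atrophy: positive
```

Normalizing by the mean makes the index independent of overall head size, so a single threshold can be applied across patients.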
Age-Related Differences and Heritability of the Perisylvian Language Networks.
Budisavljevic, Sanja; Dell'Acqua, Flavio; Rijsdijk, Frühling V; Kane, Fergus; Picchioni, Marco; McGuire, Philip; Toulopoulou, Timothea; Georgiades, Anna; Kalidindi, Sridevi; Kravariti, Eugenia; Murray, Robin M; Murphy, Declan G; Craig, Michael C; Catani, Marco
2015-09-16
Acquisition of language skills depends on the progressive maturation of specialized brain networks that are usually lateralized in adult population. However, how genetic and environmental factors relate to the age-related differences in lateralization of these language pathways is still not known. We recruited 101 healthy right-handed subjects aged 9-40 years to investigate age-related differences in the anatomy of perisylvian language pathways and 86 adult twins (52 monozygotic and 34 dizygotic) to understand how heritability factors influence language anatomy. Diffusion tractography was used to dissect and extract indirect volume measures from the three segments of the arcuate fasciculus connecting Wernicke's to Broca's region (i.e., long segment), Broca's to Geschwind's region (i.e., anterior segment), and Wernicke's to Geschwind's region (i.e., posterior segment). We found that the long and anterior arcuate segments are lateralized before adolescence and their lateralization remains stable throughout adolescence and early adulthood. Conversely, the posterior segment shows right lateralization in childhood but becomes progressively bilateral during adolescence, driven by a reduction in volume in the right hemisphere. Analysis of the twin sample showed that genetic and shared environmental factors influence the anatomy of those segments that lateralize earlier, whereas specific environmental effects drive the variability in the volume of the posterior segment that continues to change in adolescence and adulthood. Our results suggest that the age-related differences in the lateralization of the language perisylvian pathways are related to the relative contribution of genetic and environmental effects specific to each segment. 
Our study shows that, by early childhood, frontotemporal (long segment) and frontoparietal (anterior segment) connections of the arcuate fasciculus are left and right lateralized, respectively, and remain lateralized throughout adolescence and early adulthood. In contrast, temporoparietal (posterior segment) connections are right lateralized in childhood, but become progressively bilateral during adolescence. Preliminary twin analysis suggested that lateralization of the arcuate fasciculus is a heterogeneous process that depends on the interplay between genetic and environment factors specific to each segment. Tracts that exhibit higher age effects later in life (i.e., posterior segment) appear to be influenced more by specific environmental factors. Copyright © 2015 Budisavljevic et al.
Age-Related Differences and Heritability of the Perisylvian Language Networks
Dell'Acqua, Flavio; Rijsdijk, Frühling V.; Kane, Fergus; Picchioni, Marco; McGuire, Philip; Toulopoulou, Timothea; Georgiades, Anna; Kalidindi, Sridevi; Kravariti, Eugenia; Murray, Robin M.; Murphy, Declan G.; Craig, Michael C.
2015-01-01
Acquisition of language skills depends on the progressive maturation of specialized brain networks that are usually lateralized in adult population. However, how genetic and environmental factors relate to the age-related differences in lateralization of these language pathways is still not known. We recruited 101 healthy right-handed subjects aged 9–40 years to investigate age-related differences in the anatomy of perisylvian language pathways and 86 adult twins (52 monozygotic and 34 dizygotic) to understand how heritability factors influence language anatomy. Diffusion tractography was used to dissect and extract indirect volume measures from the three segments of the arcuate fasciculus connecting Wernicke's to Broca's region (i.e., long segment), Broca's to Geschwind's region (i.e., anterior segment), and Wernicke's to Geschwind's region (i.e., posterior segment). We found that the long and anterior arcuate segments are lateralized before adolescence and their lateralization remains stable throughout adolescence and early adulthood. Conversely, the posterior segment shows right lateralization in childhood but becomes progressively bilateral during adolescence, driven by a reduction in volume in the right hemisphere. Analysis of the twin sample showed that genetic and shared environmental factors influence the anatomy of those segments that lateralize earlier, whereas specific environmental effects drive the variability in the volume of the posterior segment that continues to change in adolescence and adulthood. Our results suggest that the age-related differences in the lateralization of the language perisylvian pathways are related to the relative contribution of genetic and environmental effects specific to each segment. 
SIGNIFICANCE STATEMENT Our study shows that, by early childhood, frontotemporal (long segment) and frontoparietal (anterior segment) connections of the arcuate fasciculus are left and right lateralized, respectively, and remain lateralized throughout adolescence and early adulthood. In contrast, temporoparietal (posterior segment) connections are right lateralized in childhood, but become progressively bilateral during adolescence. Preliminary twin analysis suggested that lateralization of the arcuate fasciculus is a heterogeneous process that depends on the interplay between genetic and environmental factors specific to each segment. Tracts that exhibit age-related effects later in life (i.e., the posterior segment) appear to be influenced more by specific environmental factors. PMID:26377454
Dental scanning in CAD/CAM technologies: laser beams
NASA Astrophysics Data System (ADS)
Sinescu, Cosmin; Negrutiu, Meda; Faur, Nicolae; Negru, Radu; Romînu, Mihai; Cozarov, Dalibor
2008-02-01
Scanning, also called digitizing, is the process of gathering the requisite data from an object. Many different technologies are used to collect three-dimensional data, ranging from mechanical and very slow to radiation-based and highly automated. Each technology has its advantages and disadvantages, and their applications and specifications overlap. The aims of this study are to establish a viable method for digitally representing artifacts of dental casts, to propose a suitable scanner and post-processing software, and to obtain 3D models for dental applications. The method consists of scanning procedures performed with different scanners on the implicated materials. Scanners are the medium of data capture: 3D scanners measure and record the relative distance between the object's surface and a known point in space, and this geometric data is represented in the form of point cloud data. Both contact and non-contact scanners were evaluated. The results show that contact scanning uses a touch probe to record the relative position of points on the object's surface; this procedure is commonly used in reverse engineering applications. Its merit is efficiency for objects with low geometric surface detail, but it is time-consuming, making it impractical for digitizing artifacts. Non-contact scanning comprises laser scanning (laser triangulation technology) and photogrammetry. It can be concluded that different types of dental structures need different scanning procedures in order to obtain a competitive, complex 3D virtual model that can be used in CAD/CAM technologies.
Time-optimized laser micro machining by using a new high dynamic and high precision galvo scanner
NASA Astrophysics Data System (ADS)
Jaeggi, Beat; Neuenschwander, Beat; Zimmermann, Markus; Zecherle, Markus; Boeckler, Ernst W.
2016-03-01
High accuracy, quality and throughput are key factors in laser micro machining. To achieve these goals, the ablation process, the machining strategy and the scanning device have to be optimized. Precision is influenced by the accuracy of the galvo scanner and can be further enhanced by synchronizing the movement of the mirrors with the laser pulse train. To maintain a high machining quality, i.e. minimum surface roughness, the pulse-to-pulse distance must also be optimized. The highest ablation efficiency is obtained by choosing the proper laser peak fluence together with the highest specific removal rate. Throughput can then be enhanced by simultaneously increasing the average power, the repetition rate and the scanning speed so as to preserve the fluence and the pulse-to-pulse distance. A high scanning speed is therefore essential. To guarantee the required accuracy even at high scanning speeds, a new interferometry-based encoder technology was used that provides a high-quality signal for closed-loop control of the galvo scanner position. The low-inertia encoder design enables a very dynamic scanner system, which can be driven to very high line speeds by a specially adapted control solution. We present results with marking speeds up to 25 m/s using an f = 100 mm objective, obtained with the new scanning system and scanner tuning, while maintaining a precision of about 5 μm. Furthermore, it is shown that, especially for short line lengths, the machining time can be minimized by choosing the proper speed, which need not be the maximum one.
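The throughput-scaling relation described in this abstract (increase average power, repetition rate and scanning speed together so that both fluence and pulse-to-pulse distance are preserved) can be sketched as follows; the numerical values are illustrative, not the authors' machine parameters:

```python
def pulse_to_pulse_distance(scan_speed_m_s, rep_rate_hz):
    """Spatial spacing between consecutive pulses on the workpiece."""
    return scan_speed_m_s / rep_rate_hz

def scaled_parameters(avg_power_w, rep_rate_hz, scan_speed_m_s, factor):
    """Scale power, repetition rate and speed by the same factor so that
    the pulse energy (avg_power / rep_rate, hence the fluence) and the
    pulse-to-pulse distance (speed / rep_rate) both stay constant."""
    return avg_power_w * factor, rep_rate_hz * factor, scan_speed_m_s * factor

# Example: 25 m/s marking speed at a (hypothetical) 1 MHz repetition rate
# gives a 25 µm pulse-to-pulse distance.
d = pulse_to_pulse_distance(25.0, 1e6)
```

Doubling power, repetition rate and speed together leaves both conserved quantities unchanged, which is the scaling argument the abstract makes.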
Shimizu, Sakura; Shinya, Akikazu; Kuroda, Soichi; Gomi, Harunori
2017-07-26
The accuracy of prostheses affects clinical success and is, in turn, affected by the accuracy of the scanner and CAD programs, so their accuracy is important. The first aim of this study was to evaluate the accuracy of an intraoral scanner with active triangulation (Cerec Omnicam), an intraoral scanner with a confocal laser (3Shape Trios), and an extraoral scanner with active triangulation (D810). The second aim was to compare the accuracy of digital crowns designed with two different scanner/CAD combinations. The accuracy of both the intraoral scanners and the extraoral scanner was clinically acceptable. The marginal and internal fit of the digital crowns fabricated using the intraoral scanners and CAD programs was inferior to that of crowns fabricated using the extraoral scanner and CAD programs.
Automated size-specific CT dose monitoring program: assessing variability in CT dose.
Christianson, Olav; Li, Xiang; Frush, Donald; Samei, Ehsan
2012-11-01
The potential health risks associated with low levels of ionizing radiation have created a movement in the radiology community to optimize computed tomography (CT) imaging protocols to use the lowest radiation dose possible without compromising the diagnostic usefulness of the images. Despite efforts to use appropriate and consistent radiation doses, studies suggest that a great deal of variability in radiation dose exists both within and between institutions for CT imaging. In this context, the authors have developed an automated size-specific radiation dose monitoring program for CT and used this program to assess variability in size-adjusted effective dose from CT imaging. The authors' radiation dose monitoring program operates on an independent Health Insurance Portability and Accountability Act (HIPAA)-compliant dosimetry server. Digital Imaging and Communications in Medicine (DICOM) routing software is used to isolate dose report screen captures and scout images for all incoming CT studies. Effective dose conversion factors (k-factors) are determined based on the protocol, and optical character recognition is used to extract the CT dose index and dose-length product. The patient's thickness is obtained by applying an adaptive thresholding algorithm to the scout images and is used to calculate the size-adjusted effective dose (ED(adj)). The radiation dose monitoring program was used to collect data on 6351 CT studies from three scanner models (GE Lightspeed Pro 16, GE Lightspeed VCT, and GE Definition CT750 HD) and two institutions over a one-month period and to analyze the variability in ED(adj) between scanner models and across institutions. No significant difference was found between computer measurements of patient thickness and observer measurements (p = 0.17), and the average difference between the two methods was less than 4%. Applying the size correction resulted in ED(adj) values that differed by up to 44% from effective dose estimates that were not adjusted for patient size.
Additionally, considerable differences were noted in ED(adj) distributions between scanners, with scanners employing iterative reconstruction exhibiting significantly lower ED(adj) (range: 9%-64%). Finally, a significant difference (up to 59%) in ED(adj) distributions was observed between institutions, indicating the potential for dose reduction. The authors developed a robust automated size-specific radiation dose monitoring program for CT. Using this program, significant differences in ED(adj) were observed between scanner models and across institutions. This new dose monitoring program offers a unique tool for improving quality assurance and standardization both within and across institutions.
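As a rough sketch of the size-adjustment idea described above (effective dose from k-factor and dose-length product, corrected for patient thickness): the k-factor, reference thickness and exponential correction coefficient below are hypothetical placeholders, not the calibration used by the authors' program.

```python
import math

def effective_dose(k_factor_msv_per_mgy_cm, dlp_mgy_cm):
    """Standard effective dose estimate: ED = k-factor * DLP."""
    return k_factor_msv_per_mgy_cm * dlp_mgy_cm

def size_adjusted_dose(ed_msv, thickness_cm,
                       reference_thickness_cm=30.0, coeff=0.04):
    """Illustrative exponential size correction (coefficients are
    assumptions): patients thinner than the reference size the k-factor
    assumes absorb a higher dose per unit DLP, so ED is scaled up."""
    return ed_msv * math.exp(coeff * (reference_thickness_cm - thickness_cm))
```

A reference-sized patient is left unchanged, while a thinner patient's estimate is scaled up, which is the direction of correction the abstract describes.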
Implementation of Size-Dependent Local Diagnostic Reference Levels for CT Angiography.
Boere, Hub; Eijsvoogel, Nienke G; Sailer, Anna M; Wildberger, Joachim E; de Haan, Michiel W; Das, Marco; Jeukens, Cecile R L P N
2018-05-01
Diagnostic reference levels (DRLs) are established for standard-sized patients; however, patient dose in CT depends on patient size. The purpose of this study was to introduce a method for setting size-dependent local diagnostic reference levels (LDRLs) and to evaluate these LDRLs in comparison with size-independent LDRLs and with respect to image quality. One hundred eighty-four aortic CT angiography (CTA) examinations performed on either a second-generation or third-generation dual-source CT scanner were included; we refer to the second-generation dual-source CT scanner as "CT1" and the third-generation dual-source CT scanner as "CT2." The volume CT dose index (CTDIvol) and patient diameter (i.e., the water-equivalent diameter) were retrieved by dose-monitoring software. Size-dependent DRLs based on a linear regression of the CTDIvol versus patient size were set by scanner type. Size-independent DRLs were set by the 5th and 95th percentiles of the CTDIvol values. Objective image quality was assessed using the signal-to-noise ratio (SNR), and subjective image quality was assessed using a 4-point Likert scale. The CTDIvol depended on patient size and scanner type (R² = 0.72 and 0.78, respectively; slope = 0.05 and 0.02 mGy/mm; p < 0.001). Of the outliers identified by size-independent DRLs, 30% (CT1) and 67% (CT2) were adequately dosed when considering patient size. Alternatively, 30% (CT1) and 70% (CT2) of the outliers found with size-dependent DRLs were not identified using size-independent DRLs. A negative correlation was found between SNR and CTDIvol (R² = 0.36 for CT1 and 0.45 for CT2). However, all outliers had a subjective image quality score of sufficient or better. We introduce a method for setting size-dependent LDRLs in CTA. Size-dependent LDRLs are relevant for assessing the appropriateness of the radiation dose for an individual patient on a specific CT scanner.
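The size-dependent DRL construction described above (a linear regression of CTDIvol against patient water-equivalent diameter, with exams flagged relative to the size-predicted dose) might be sketched as follows. The fixed fractional tolerance band is an illustrative assumption; the study itself derives its reference bands from the regression, not from a single fraction.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of CTDIvol (ys, mGy) against patient
    water-equivalent diameter (xs, mm)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def size_dependent_outliers(xs, ys, tolerance=0.25):
    """Flag exams whose CTDIvol deviates from the size-predicted value
    by more than `tolerance` (fractional; value is illustrative)."""
    slope, intercept = fit_line(xs, ys)
    return [i for i, (x, y) in enumerate(zip(xs, ys))
            if abs(y - (slope * x + intercept))
            > tolerance * (slope * x + intercept)]
```

Unlike a size-independent percentile cut, a high-dose exam on a large patient is not flagged here, while the same dose on a small patient would be.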
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafiq ul Hassan, M; Zhang, G; Oliver, J
Purpose: To investigate the impact of the reconstruction field of view (FOV) on radiomics features in computed tomography (CT) using a texture phantom. Methods: A rectangular Credence Cartridge Radiomics (CCR) phantom, composed of 10 different cartridges, was scanned on four different CT scanners from two manufacturers. A pre-defined scanning protocol was adopted for consistency. A slice thickness and reconstruction interval of 1.5 mm was used on all scanners. The reconstruction FOV was varied, resulting in voxel sizes ranging from 0.38 to 0.98 mm. A spherical region of interest (ROI) was contoured on the shredded-rubber cartridge of the CCR phantom CT scans. Ninety-three radiomics features were extracted from the ROI using an in-house program, comprising 10 shape, 22 intensity, 26 GLCM, 11 GLZSM, 11 RLM, 5 NGTDM and 8 fractal-dimension features. To evaluate the inter-scanner variability across three scanners, a coefficient of variation (COV) was calculated for each feature group. Each group was further classified according to the COV by calculating the percentage of features in each of the following categories: COV ≤ 5%, between 5% and 10%, and ≥ 10%. Results: Shape features were the most robust, as expected, because of the spherical contouring of the ROI. Intensity features were the second most robust, with 54.5 to 64% of features having COV < 5%. GLCM features ranged from 31 to 35% for the same category. RLM features were sensitive to the specific scanner, with the percentage of features with COV < 5% ranging from 9 to 54%. Almost all GLZSM and NGTDM features showed COV ≥ 10% among the scanners. The dependence of fractal-dimension features on FOV was not consistent across scanners. Conclusion: We conclude that the reconstruction FOV strongly influences radiomics features; GLZSM and NGTDM features are highly sensitive to FOV. Funded in part by Grant NIH/NCI R01CA190105-01.
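The inter-scanner COV grouping used in this analysis can be sketched as:

```python
from statistics import mean, stdev

def cov_percent(values):
    """Coefficient of variation (%) of one feature across scanners."""
    return 100.0 * stdev(values) / mean(values)

def categorize(feature_values_by_name):
    """Bucket features by inter-scanner COV into the three categories
    used in the phantom study: <=5%, 5-10%, >=10%."""
    buckets = {"<=5%": [], "5-10%": [], ">=10%": []}
    for name, vals in feature_values_by_name.items():
        c = cov_percent(vals)
        key = "<=5%" if c <= 5 else ("5-10%" if c < 10 else ">=10%")
        buckets[key].append(name)
    return buckets
```

A feature that barely changes between scanners lands in the "<=5%" bucket; one that swings widely lands in ">=10%", mirroring the robustness ranking the abstract reports.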
A micron resolution optical scanner for characterization of silicon detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shukla, R. A.; Dugad, S. R., E-mail: dugad@cern.ch; Gopal, A. V.
2014-02-15
The emergence of high-position-resolution (∼10 μm) silicon detectors in recent times has highlighted the urgent need for new automated optical scanners with micron-level resolution suited to characterizing the microscopic features of these detectors, more specifically the newly developed silicon photomultipliers (SiPMs), which are compact and possess excellent photon detection efficiency with gain comparable to a photomultiplier tube. In the short time since their invention, SiPMs have already become widely used as the photon readout element in several high-energy physics and astrophysics experiments. The SiPM is a high-quantum-efficiency, multi-pixel photon-counting detector with fast timing and high gain. The wide variety of photosensitive silicon detectors with high spatial resolution requires that their performance evaluation be carried out with photon beams of very compact spot size. We have designed a high-resolution optical scanner that provides a monochromatic focused beam on a target plane. The transverse size of the beam was measured by the knife-edge method to be 1.7 μm at the 1σ level. Since the beam size is an order of magnitude smaller than the typical feature size of silicon detectors, this optical scanner can be used for selective excitation of these detectors. The design and operational details of the optical scanner, the high-precision programmed movement of the target plane (0.1 μm), and the integrated general-purpose data acquisition system developed for recording the static and transient response of photosensitive silicon detectors are reported in this paper. The entire functionality of the scanner was validated by using it for selective excitation of individual pixels in a SiPM and identifying the response of active and dead regions within the SiPM. Results from these studies are presented in this paper.
Laser-based structural sensing and surface damage detection
NASA Astrophysics Data System (ADS)
Guldur, Burcu
Damage due to age or accumulated damage from hazards on existing structures poses a worldwide problem. In order to evaluate the current status of aging, deteriorating and damaged structures, it is vital to accurately assess the present conditions. It is possible to capture the in situ condition of structures by using laser scanners that create dense three-dimensional point clouds. This research investigates the use of high resolution three-dimensional terrestrial laser scanners with image capturing abilities as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now capturing over 1,000,000 texture-mapped points per second with an accuracy of ~2 mm. However, automatically extracting meaningful information from point clouds remains a challenge, and the current state-of-the-art requires significant user interaction. The first objective of this research is to use widely accepted point cloud processing steps such as registration, feature extraction, segmentation, surface fitting and object detection to divide laser scanner data into meaningful object clusters and then apply several damage detection methods to these clusters. This required establishing a process for extracting important information from raw laser-scanned data sets such as the location, orientation and size of objects in a scanned region, and location of damaged regions on a structure. For this purpose, first a methodology for processing range data to identify objects in a scene is presented and then, once the objects from model library are correctly detected and fitted into the captured point cloud, these fitted objects are compared with the as-is point cloud of the investigated object to locate defects on the structure. The algorithms are demonstrated on synthetic scenes and validated on range data collected from test specimens and test-bed bridges. 
The second objective of this research is to combine useful information extracted from laser scanner data with color information, which provides information in the fourth dimension that enables detection of damage types such as cracks, corrosion, and related surface defects that are generally difficult to detect using only laser scanner data; moreover, the color information also helps to track volumetric changes on structures such as spalling. Although using images with varying resolution to detect cracks is an extensively researched topic, damage detection using laser scanners with and without color images is a new research area that holds many opportunities for enhancing the current practice of visual inspections. The aim is to combine the best features of laser scans and images to create an automatic and effective surface damage detection method, which will reduce the need for skilled labor during visual inspections and allow automatic documentation of related information. This work enables developing surface damage detection strategies that integrate existing condition rating criteria for a wide range of damage types that are collected under three main categories: small deformations already existing on the structure (cracks); damage types that induce larger deformations, but where the initial topology of the structure has not changed appreciably (e.g., bent members); and large deformations where localized changes in the topology of the structure have occurred (e.g., rupture, discontinuities and spalling). The effectiveness of the developed damage detection algorithms is validated by comparing the detection results with the measurements taken from test specimens and test-bed bridges.
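The model-versus-as-is comparison used for damage detection above can be reduced to a minimal sketch: points of the captured cloud that deviate from the fitted model surface (a plane, for simplicity) by more than a threshold become candidate damage locations. The plane representation and threshold value are illustrative assumptions.

```python
def flag_defects(points, plane, threshold_m=0.005):
    """Compare an as-is point cloud against a fitted model surface, here a
    plane a*x + b*y + c*z + d = 0 with unit normal (a, b, c): return the
    indices of points farther from the plane than the threshold."""
    a, b, c, d = plane
    return [i for i, (x, y, z) in enumerate(points)
            if abs(a * x + b * y + c * z + d) > threshold_m]

# Example: flat wall fitted as the plane z = 0.
damaged = flag_defects([(0.0, 0.0, 0.001),
                        (1.0, 2.0, 0.020),
                        (3.0, 1.0, -0.050)],
                       (0.0, 0.0, 1.0, 0.0))
```

In practice the fitted object would come from the model library and the distances from a nearest-surface query, but the flagging logic is the same.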
Occurrence and characteristics of mutual interference between LIDAR scanners
NASA Astrophysics Data System (ADS)
Kim, Gunzung; Eom, Jeongsook; Park, Seonghyeon; Park, Yongwan
2015-05-01
The LIDAR scanner is at the heart of object detection for the self-driving car. Mutual interference between LIDAR scanners has not been regarded as a problem because the percentage of vehicles equipped with LIDAR scanners was very small. With a growing number of autonomous vehicles equipped with LIDAR scanners operating close to each other at the same time, a LIDAR scanner may receive laser pulses from other LIDAR scanners. In this paper, three types of experiments and their results are presented, according to the arrangement of two LIDAR scanners. We show the probability that any two LIDAR scanners will interfere mutually by considering spatial and temporal overlaps, present some typical mutual interference scenarios, and report an analysis of the interference mechanism.
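A crude temporal-overlap estimate of the kind discussed can be sketched as follows; this is an illustrative back-of-the-envelope model, not the probability analysis from the paper, and all parameter values are assumptions.

```python
def interference_probability(pulse_rate_hz, gate_s, n_interferers=1):
    """Rough temporal-overlap estimate: each interfering scanner fires
    pulse_rate_hz pulses per second, and the victim's receiver is open
    for gate_s after each of its own pulses, so the chance that a foreign
    pulse lands inside one gate is roughly pulse_rate_hz * gate_s,
    compounded over independent interferers."""
    p_single = min(1.0, pulse_rate_hz * gate_s)
    return 1.0 - (1.0 - p_single) ** n_interferers
```

The estimate grows with pulse rate, gate length, and the number of nearby scanners, matching the abstract's point that interference becomes likely as LIDAR-equipped vehicles proliferate.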
Automated measurement of hippocampal subfields in PTSD: Evidence for smaller dentate gyrus volume.
Hayes, Jasmeet P; Hayes, Scott; Miller, Danielle R; Lafleche, Ginette; Logue, Mark W; Verfaellie, Mieke
2017-12-01
Smaller hippocampal volume has been consistently observed as a biomarker of posttraumatic stress disorder (PTSD). However, less is known about individual volumes of the subfields composing the hippocampus such as the dentate gyrus and cornu ammonis (CA) fields 1-4 in PTSD. The aim of the present study was to examine the hypothesis that volume of the dentate gyrus, a region putatively involved in distinctive encoding of similar events, is smaller in individuals with PTSD versus trauma-exposed controls. Ninety-seven recent war veterans underwent structural imaging on a 3T scanner and were assessed for PTSD using the Clinician-Administered PTSD Scale. The hippocampal subfield automated segmentation program available through FreeSurfer was used to segment the CA4/dentate gyrus, CA1, CA2/3, presubiculum, and subiculum of the hippocampus. Results showed that CA4/dentate gyrus subfield volume was significantly smaller in veterans with PTSD and scaled inversely with PTSD symptom severity. These results support the view that dentate gyrus abnormalities are associated with symptoms of PTSD, although additional evidence is necessary to determine whether these abnormalities underlie fear generalization and other memory alterations in PTSD. Published by Elsevier Ltd.
A method for evaluating the murine pulmonary vasculature using micro-computed tomography.
Phillips, Michael R; Moore, Scott M; Shah, Mansi; Lee, Clara; Lee, Yueh Z; Faber, James E; McLean, Sean E
2017-01-01
Significant mortality and morbidity are associated with alterations in the pulmonary vasculature. While techniques have been described for quantitative morphometry of whole-lung arterial trees in larger animals, no methods have been described in mice. We report a method for the quantitative assessment of murine pulmonary arterial vasculature using high-resolution computed tomography scanning. Mice were harvested at 2 weeks, 4 weeks, and 3 months of age. The pulmonary artery vascular tree was pressure perfused to maximal dilation with a radio-opaque casting material with viscosity and pressure set to prevent capillary transit and venous filling. The lungs were fixed and scanned on a specimen computed tomography scanner at 8-μm resolution, and the vessels were segmented. Vessels were grouped into categories based on lumen diameter and branch generation. Robust high-resolution segmentation was achieved, permitting detailed quantitation of pulmonary vascular morphometrics. As expected, postnatal lung development was associated with progressive increase in small-vessel number and arterial branching complexity. These methods for quantitative analysis of the pulmonary vasculature in postnatal and adult mice provide a useful tool for the evaluation of mouse models of disease that affect the pulmonary vasculature. Copyright © 2016 Elsevier Inc. All rights reserved.
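The grouping step described above (vessels tallied by lumen diameter and branch generation) might look like the following sketch; the bin edges are hypothetical, not those used in the study.

```python
def group_vessels(vessels, diameter_bins_um=(0, 20, 50, 100, 500)):
    """Tally segmented vessels into (diameter-bin, branch-generation)
    categories. Each vessel is a (diameter_um, generation) pair; the
    bin edges are illustrative placeholders."""
    counts = {}
    for diameter_um, generation in vessels:
        for lo, hi in zip(diameter_bins_um, diameter_bins_um[1:]):
            if lo <= diameter_um < hi:
                key = ((lo, hi), generation)
                counts[key] = counts.get(key, 0) + 1
                break
    return counts

counts = group_vessels([(10, 1), (30, 2), (10, 1)])
```

Comparing such tallies across ages would show the increase in small-vessel number and branching complexity the abstract reports.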
Hybrid overlay metrology for high order correction by using CDSEM
NASA Astrophysics Data System (ADS)
Leray, Philippe; Halder, Sandip; Lorusso, Gian; Baudemprez, Bart; Inoue, Osamu; Okagawa, Yutaka
2016-03-01
Overlay control has become one of the most critical issues for semiconductor manufacturing. Advanced lithographic scanners use high-order corrections or correction per exposure to reduce the residual overlay. Traditional feedback based on overlay measurements of after-development-inspection (ADI) wafers is not sufficient, because overlay error also depends on other processes (etching, film stress, etc.); high-accuracy overlay measurement on after-etch-inspection (AEI) wafers is needed. WIS (wafer-induced shift) is the main issue for optical overlay, both IBO (image-based overlay) and DBO (diffraction-based overlay). We design dedicated SEM overlay targets for the dual damascene process of N10 by i-ArF multi-patterning; the pattern is locally the same as the device pattern. Optical overlay tools select segmented patterns to reduce the WIS, but segmentation has limits, especially for via patterns, in keeping sensitivity and accuracy. We evaluate the difference at AEI between the via pattern and relaxed-pitch gratings, which are similar to optical overlay targets. CDSEM can estimate the asymmetry properties of a target from the image of the pattern edge. We compare the full map of SEM overlay to the full map of optical overlay for high-order correction (correctables and residual fingerprints).
Intelligent image processing for vegetation classification using multispectral LANDSAT data
NASA Astrophysics Data System (ADS)
Santos, Stewart R.; Flores, Jorge L.; Garcia-Torales, G.
2015-09-01
We propose an intelligent computational technique for the analysis of vegetation images acquired with a multispectral scanner (MSS) sensor. This work focuses on intelligent and adaptive artificial neural network (ANN) methodologies that allow segmentation and classification of spectral remote sensing (RS) signatures, in order to obtain a high-resolution map in which we can delimit the wooded areas and quantify the amount of combustible material present in these areas. This could provide important information to prevent fires and deforestation of wooded areas. The spectral RS input data acquired by the MSS sensor are treated as a randomly propagated remotely sensed scene with unknown statistics for each Thematic Mapper (TM) band. By performing high-resolution reconstruction and augmenting these spectral values with neighbor-pixel information from each TM band, we can include contextual information in an ANN. The biggest challenge for conventional classifiers is how to reduce the number of components in the feature vector while preserving the major information contained in the data, especially when the dimensionality of the feature space is high. Preliminary results show that the adaptive modified neural network method is a promising and effective spectral method for segmentation and classification of RS images acquired with an MSS sensor.
Ebert, Lars C; Heimer, Jakob; Schweitzer, Wolf; Sieberth, Till; Leipner, Anja; Thali, Michael; Ampanozi, Garyfalia
2017-12-01
Post mortem computed tomography (PMCT) can be used as a triage tool to better identify cases with a possibly non-natural cause of death, especially when high caseloads make it impossible to perform autopsies on all cases. Substantial data can be generated by modern medical scanners, especially in a forensic setting where the entire body is documented at high resolution. A solution for the resulting issues could be the use of deep learning techniques for automatic analysis of radiological images. In this article, we test the feasibility of such methods for forensic imaging by hypothesizing that deep learning methods can detect and segment a hemopericardium in PMCT. For deep learning image analysis software, we used the ViDi Suite 2.0. We retrospectively selected 28 cases with, and 24 cases without, hemopericardium. Based on these data, we trained two separate deep learning networks. The first one classified images into hemopericardium/not hemopericardium, and the second one segmented the blood content. We randomly selected 50% of the data for training and 50% for validation. This process was repeated 20 times. The best performing classification network classified all cases of hemopericardium from the validation images correctly with only a few false positives. The best performing segmentation network tended to underestimate the amount of blood in the pericardium, as did most networks. This is the first study to show that deep learning has potential for automated image analysis of radiological images in forensic medicine.
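The repeated 50/50 random split protocol used for validation can be sketched as (case count and seed are illustrative):

```python
import random

def repeated_holdout(case_ids, n_repeats=20, seed=0):
    """Yield (train, validation) 50/50 splits of the case list,
    re-randomized on each repeat, as in the study's 20 repetitions."""
    rng = random.Random(seed)
    for _ in range(n_repeats):
        shuffled = case_ids[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        yield shuffled[:half], shuffled[half:]

# 28 + 24 = 52 cases, split 20 times:
splits = list(repeated_holdout(list(range(52))))
```

Reporting the best (and spread of) network performance across the 20 splits guards against a single lucky partition of such a small dataset.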
Naumovich, S S; Naumovich, S A; Goncharenko, V G
2015-01-01
The objective of the present study was the development and clinical testing of a three-dimensional (3D) reconstruction method for the teeth and bone tissue of the jaws on the basis of CT images of the maxillofacial region. 3D reconstruction was performed using specially designed original software based on the watershed transformation. Computed tomograms in Digital Imaging and Communications in Medicine (DICOM) format obtained on multispiral CT and CBCT scanners were used to create 3D models of the teeth and jaws. The processing algorithm is realized as stepwise threshold image segmentation, with markers placed in multiplanar-projection mode in areas corresponding to the teeth and bone tissue. The developed software initially creates coarse 3D models of the entire dentition and the jaw; subsequent procedures then refine the model of the jaw and cut the dentition into separate teeth. Proper selection of the segmentation threshold is very important for CBCT images, which have low contrast and a high noise level. The developed semi-automatic algorithm for processing multispiral and cone-beam computed tomograms allows 3D models of teeth to be created, separating them from the bone tissue of the jaws. The software is easy to install in a dentist's workplace, has an intuitive interface and takes little processing time. The obtained 3D models can be used to solve a wide range of scientific and clinical tasks.
Computerized image analysis: estimation of breast density on mammograms
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
2000-06-01
An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm × 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using the BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
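The final stage above (percent dense area relative to the breast region, given an automatically chosen gray-level threshold) reduces to a simple calculation; the pixel values, mask, and threshold below are illustrative.

```python
def percent_density(pixels, breast_mask, threshold):
    """Fraction of breast-region pixels at or above the gray-level
    threshold, expressed as a percentage of the breast area. `pixels`
    and `breast_mask` are flattened, parallel sequences."""
    breast = [p for p, inside in zip(pixels, breast_mask) if inside]
    dense = sum(1 for p in breast if p >= threshold)
    return 100.0 * dense / len(breast)
```

Background pixels outside the segmented breast boundary are excluded first, so the percentage is relative to the breast area rather than the whole image.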
Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa
2015-04-13
Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry
2015-11-01
In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
Fuzzy pulmonary vessel segmentation in contrast enhanced CT data
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til
2008-03-01
Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
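The fuzzy connectedness idea at the heart of the method can be sketched compactly: the strength of a path is its weakest affinity link, and each voxel receives the strongest path strength over all paths from the seed set. The sketch below uses a single intensity-similarity affinity on a 2D image (an assumption for brevity; published fuzzy connectedness formulations combine several affinity components and run in 3D), computed with a Dijkstra-style priority queue.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seeds):
    """Per-pixel fuzzy connectedness to a seed set: max over paths of the
    minimum link affinity, with affinity = 1 - |dI| / intensity range."""
    img = np.asarray(image, dtype=float)
    rng = (img.max() - img.min()) or 1.0
    h, w = img.shape
    conn = np.zeros((h, w))
    heap = []
    for s in seeds:
        conn[s] = 1.0
        heapq.heappush(heap, (-1.0, s))
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        strength = -neg
        if strength < conn[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                affinity = 1.0 - abs(img[y, x] - img[ny, nx]) / rng
                cand = min(strength, affinity)
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn
```

Thresholding the returned map at a minimum vessel probability gives the a posteriori sensitivity/specificity trade-off described in the abstract.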
Geometric and Colour Data Fusion for Outdoor 3D Models
Merchán, Pilar; Adán, Antonio; Salamanca, Santiago; Domínguez, Vicente; Chacón, Ricardo
2012-01-01
This paper deals with the generation of accurate, dense and coloured 3D models of outdoor scenarios from scanners. This is a challenging research field in which several problems still remain unsolved. In particular, the process of 3D model creation in outdoor scenes may be inefficient if the scene is digitalized under unsuitable technical (specific scanner on-board camera) and environmental (rain, dampness, changing illumination) conditions. We address our research towards the integration of images and range data to produce photorealistic models. Our proposal is based on decoupling the colour integration and geometry reconstruction stages, making them independent and controlled processes. This issue is approached from two different viewpoints. On the one hand, given a complete model (geometry plus texture), we propose a method to modify the original texture provided by the scanner on-board camera with the colour information extracted from external images taken at given moments and under specific environmental conditions. On the other hand, we propose an algorithm to directly assign external images onto the complete geometric model, thus avoiding tedious on-line calibration processes. We present the work conducted on two large Roman archaeological sites dating from the first century A.D., namely, the Theatre of Segobriga and the Fori Porticus of Emerita Augusta, both in Spain. The results obtained demonstrate that our approach could be useful in the digitalization and 3D modelling fields. PMID:22969327
Real Time Coincidence Detection Engine for High Count Rate Timestamp Based PET
NASA Astrophysics Data System (ADS)
Tetrault, M.-A.; Oliver, J. F.; Bergeron, M.; Lecomte, R.; Fontaine, R.
2010-02-01
Coincidence engines follow two main implementation flows: timestamp based systems and AND-gate based systems. The latter have been more widespread in recent years because of their lower cost and high efficiency. However, they are highly dependent on the selected electronic components, they have limited flexibility once assembled, and they are customized to fit a specific scanner's geometry. Timestamp based systems are gathering more attention lately, especially with high channel count fully digital systems. These new systems must however cope with very high singles count rates. One option is to record every detected event and postpone coincidence detection offline. For daily use systems, a real time engine is preferable because it dramatically reduces data volume and hence image preprocessing time and raw data management. This paper presents the timestamp based coincidence engine for the LabPET™, a small animal PET scanner with up to 4608 individual readout avalanche photodiode channels. The engine can handle up to 100 million single events per second and has extensive flexibility because it resides in programmable logic devices. It can be adapted for any detector geometry or channel count, can be ported to newer, faster programmable devices, and can have extra modules added to take advantage of scanner-specific features. Finally, the user can select between full processing mode for imaging protocols and minimum processing mode to study different approaches for coincidence detection with offline software.
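The essence of timestamp-based coincidence detection is matching singles whose timestamps fall within a coincidence window. A minimal software sketch (pairing only adjacent events two at a time, with hypothetical names; a real engine also rejects multiples, applies geometric criteria, and runs in programmable logic):

```python
def find_coincidences(events, window):
    """events: time-sorted (timestamp, channel) singles.
    Pair consecutive events on different channels whose timestamps
    differ by at most `window` (a simplified two-at-a-time scheme)."""
    pairs = []
    i = 0
    while i < len(events) - 1:
        t0, c0 = events[i]
        t1, c1 = events[i + 1]
        if t1 - t0 <= window and c0 != c1:
            pairs.append((events[i], events[i + 1]))
            i += 2  # consume both singles of the pair
        else:
            i += 1
    return pairs
```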
Parker, Brent C.; Neck, Daniel W.; Henkelmann, Greg; Rosen, Isaac I.
2010-01-01
The purpose of this study was to quantify the performance and assess the utility of two different types of scanners for radiochromic EBT film dosimetry: a commercial flatbed document scanner and a widely used radiographic film scanner. We evaluated the Epson Perfection V700 Photo flatbed scanner and the Vidar VXR Dosimetry Pro Advantage scanner as measurement devices for radiochromic EBT film. Measurements were made of scan orientation effects, response uniformity, and scanner noise. Scanners were tested using films irradiated with eight separate 3×3 cm2 fields to doses ranging from 0.115 to 5.119 Gy. ImageJ and RIT software were used for analyzing the Epson and Vidar scans, respectively. For repeated scans of a single film, the measurements in each dose region were reproducible to within ±0.3% standard deviation (SD) with both scanners. Film‐to‐film variations for corresponding doses were measured to be within ±0.4% SD for both the Epson and Vidar scanners. Overall, the Epson scanner showed a 10% smaller range of pixel value compared to the Vidar scanner. Scanner noise was small: ±0.3% SD for the Epson and ±0.2% for the Vidar. Overall measurement uniformity for blank film in both systems was better than ±0.2%, provided that the leading and trailing 2 cm film edges were neglected in the Vidar system. In this region artifacts are attributed to the film rollers. Neither system demonstrated a clear measurement advantage. The Epson scanner is a relatively inexpensive method for analyzing radiochromic film, but there is a lack of commercially available software. For a clinic already using a Vidar scanner, applying it to radiochromic film is attractive because commercial software is available. However, care must be taken to avoid using the leading and trailing film edges. PACS number: 87.55.Qr
Multi-atlas segmentation for abdominal organs with Gaussian mixture models
NASA Astrophysics Data System (ADS)
Burke, Ryan P.; Xu, Zhoubing; Lee, Christopher P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.
2015-03-01
Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been used extensively in medical image segmentation, most notably in the brain for cerebrospinal fluid / gray matter / white matter differentiation. Because abdominal CT exhibits strong localized intensity characteristics, GMM have recently been incorporated in multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and increasingly rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood with the target-specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.
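The a posteriori combination described here is a per-voxel product of a registered spatial prior and an intensity likelihood, followed by an argmax over organ labels. A minimal sketch with one Gaussian per organ (an assumption for brevity; the paper fits full mixtures per organ):

```python
import numpy as np

def posterior_labels(intensities, priors, means, sigmas):
    """Label each voxel by argmax_k prior_k(x) * N(I(x); mu_k, sigma_k).
    intensities: (...,) HU values; priors: (..., K) spatial prior maps;
    means, sigmas: (K,) per-organ Gaussian intensity parameters."""
    I = np.asarray(intensities, float)[..., None]           # (..., 1)
    mu = np.asarray(means, float)                           # (K,)
    sg = np.asarray(sigmas, float)
    like = np.exp(-0.5 * ((I - mu) / sg) ** 2) / (sg * np.sqrt(2 * np.pi))
    post = np.asarray(priors, float) * like                 # (..., K)
    return post.argmax(axis=-1)
```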
Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang
2010-01-01
The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially for brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter for generation of a cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population atlas based segmentation methods. Results show that the proposed method is capable of segmenting the neonatal brain with the highest accuracy, compared with the other two methods. PMID:20862268
a Method for the Registration of Hemispherical Photographs and Tls Intensity Images
NASA Astrophysics Data System (ADS)
Schmidt, A.; Schilling, A.; Maas, H.-G.
2012-07-01
Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
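The core fusion step, assigning an RGB value to each 3D point, can be sketched with a plain pinhole projection (an assumption for illustration; the paper registers hemispherical photographs, whose camera model differs, and the function name and calibration inputs below are hypothetical):

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Project 3D points into a calibrated camera image (x = K (R p + t))
    and assign each point the RGB of its nearest pixel; points projecting
    outside the image keep a zero colour."""
    P = (R @ np.asarray(points, float).T).T + t   # camera coordinates
    uv = (K @ P.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    h, w = image.shape[:2]
    cols = np.zeros((len(points), 3))
    for i, (u, v) in enumerate(uv):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            cols[i] = image[vi, ui]
    return cols
```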
Min-cut segmentation of cursive handwriting in tabular documents
NASA Astrophysics Data System (ADS)
Davis, Brian L.; Barrett, William A.; Swingle, Scott D.
2015-01-01
Handwritten tabular documents, such as census, birth, death and marriage records, contain a wealth of information vital to genealogical and related research. Much work has been done in segmenting freeform handwriting, however, segmentation of cursive handwriting in tabular documents is still an unsolved problem. Tabular documents present unique segmentation challenges caused by handwriting overlapping cell-boundaries and other words, both horizontally and vertically, as "ascenders" and "descenders" overlap into adjacent cells. This paper presents a method for segmenting handwriting in tabular documents using a min-cut/max-flow algorithm on a graph formed from a distance map and connected components of handwriting. Specifically, we focus on line, word and first letter segmentation. Additionally, we include the angles of strokes of the handwriting as a third dimension to our graph to enable the resulting segments to share pixels of overlapping letters. Word segmentation accuracy is 89.5% evaluating lines of the data set used in the ICDAR2013 Handwriting Segmentation Contest. Accuracy is 92.6% for a specific application of segmenting first and last names from noisy census records. Accuracy for segmenting lines of names from noisy census records is 80.7%. The 3D graph cutting shows promise in segmenting overlapping letters, although highly convoluted or overlapping handwriting remains an ongoing challenge.
Telescope with a wide field of view internal optical scanner
NASA Technical Reports Server (NTRS)
Zheng, Yunhui (Inventor); Degnan, III, John James (Inventor)
2012-01-01
A telescope with internal scanner utilizing either a single optical wedge scanner or a dual optical wedge scanner and a controller arranged to control a synchronous rotation of the first and/or second optical wedges, the wedges constructed and arranged to scan light redirected by topological surfaces and/or volumetric scatterers. The telescope with internal scanner further incorporates a first converging optical element that receives the redirected light and transmits the redirected light to the scanner, and a second converging optical element within the light path between the first optical element and the scanner arranged to reduce an area of impact on the scanner of the beam collected by the first optical element.
Use of monoclonal antibody-IRDye800CW bioconjugates in the resection of breast cancer
Korb, Melissa L.; Hartman, Yolanda E.; Kovar, Joy; Zinn, Kurt R.; Bland, Kirby I.; Rosenthal, Eben L.
2015-01-01
Background Complete surgical resection of breast cancer is a powerful determinant of patient outcome, and failure to achieve negative margins results in reoperation in between 30% and 60% of patients. We hypothesize that repurposing Food and Drug Administration approved antibodies as tumor-targeting diagnostic molecules can function as optical contrast agents to identify the boundaries of malignant tissue intraoperatively. Materials and methods The monoclonal antibodies bevacizumab, cetuximab, panitumumab, trastuzumab, and tocilizumab were covalently linked to a near-infrared fluorescence probe (IRDye800CW) and in vitro binding assays were performed to confirm ligand-specific binding. Nude mice bearing human breast cancer flank tumors were intravenously injected with the antibody-IRDye800 bioconjugates and imaged over time. Tumor resections were performed using the SPY and Pearl Impulse systems, and the presence or absence of tumor was confirmed by conventional and fluorescence histology. Results Tumor was distinguishable from normal tissue using both SPY and Pearl systems, with both platforms being able to detect tumor as small as 0.5 mg. Serial surgical resections demonstrated that real-time fluorescence can differentiate subclinical segments of disease. Pathologic examination of samples by conventional and optical histology using the Odyssey scanner confirmed that the bioconjugates were specific for tumor cells and allowed accurate differentiation of malignant areas from normal tissue. Conclusions Human breast cancer tumors can be imaged in vivo with multiple optical imaging platforms using near-infrared fluorescently labeled antibodies. These data support additional preclinical investigations for improving the surgical resection of malignancies with the goal of eventual clinical translation. PMID:24360117
Foot roll-over evaluation based on 3D dynamic foot scan.
Samson, William; Van Hamme, Angèle; Sanchez, Stéphane; Chèze, Laurence; Van Sint Jan, Serge; Feipel, Véronique
2014-01-01
Foot roll-over is commonly analyzed to evaluate gait pathologies. The current study utilized a dynamic foot scanner (DFS) to analyze foot roll-over. The right feet of ten healthy subjects were assessed during gait trials with a DFS system integrated into a walkway. A foot sole picture was computed by vertically projecting points from the 3D foot shape which were lower than a threshold height of 15 mm. A 'height' value of these projected points was determined; corresponding to the initial vertical coordinates prior to projection. Similar to pedobarographic analysis, the foot sole picture was segmented into anatomical regions of interest (ROIs) to process mean height (average of height data by ROI) and projected surface (area of the projected foot sole by ROI). Results showed that these variables evolved differently to plantar pressure data previously reported in the literature, mainly due to the specificity of each physical quantity (millimeters vs Pascals). Compared to plantar pressure data arising from surface contact by the foot, the current method takes into account the whole plantar aspect of the foot, including the parts that do not make contact with the support surface. The current approach using height data could contribute to a better understanding of specific aspects of foot motion during walking, such as plantar arch height and the windlass mechanism. Results of this study show the underlying method is reliable. Further investigation is required to validate the DFS measurements within a clinical context, prior to implementation into clinical practice. Copyright © 2013 Elsevier B.V. All rights reserved.
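The projection step described above, keeping 3D foot-shape points below the 15 mm height threshold and deriving mean height and projected surface per region, can be sketched directly. The function name, the grid-cell area estimate, and operating on one region at a time are assumptions for illustration:

```python
import numpy as np

def sole_metrics(points, threshold=15.0, cell=1.0):
    """points: (N, 3) foot-shape coordinates in mm for one region of
    interest. Keep points with z below `threshold`, then return
    (mean height of kept points, projected area on the ground plane),
    estimating area by counting occupied grid cells of size `cell` mm."""
    pts = np.asarray(points, float)
    low = pts[pts[:, 2] < threshold]
    if low.size == 0:
        return 0.0, 0.0
    mean_height = low[:, 2].mean()
    cells = {(int(x // cell), int(y // cell)) for x, y in low[:, :2]}
    return mean_height, len(cells) * cell * cell
```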
Stadelmann, Marc A; Maquer, Ghislain; Voumard, Benjamin; Grant, Aaron; Hackney, David B; Vermathen, Peter; Alkalay, Ron N; Zysset, Philippe K
2018-05-17
Intervertebral disc degeneration is a common disease that is often related to impaired mechanical function, herniations and chronic back pain. The degenerative process induces alterations of the disc's shape, composition and structure that can be visualized in vivo using magnetic resonance imaging (MRI). Numerical tools such as finite element analysis (FEA) have the potential to relate MRI-based information to the altered mechanical behavior of the disc. However, in terms of geometry, composition and fiber architecture, current FE models rely on observations made on healthy discs and might therefore not be well suited to study the degeneration process. To address the issue, we propose a new, more realistic FE methodology based on diffusion tensor imaging (DTI). For this study, a human disc joint was imaged in a high-field MR scanner with proton-density weighted (PD) and DTI sequences. The PD image was segmented and an anatomy-specific mesh was generated. Assuming accordance between local principal diffusion direction and local mean collagen fiber alignment, corresponding fiber angles were assigned to each element. Those element-wise fiber directions and PD intensities allowed the homogenized model to smoothly account for composition and fibrous structure of the disc. The disc's in vitro mechanical behavior was quantified under tension, compression, flexion, extension, lateral bending and rotation. The six resulting load-displacement curves could be replicated by the FE model, which supports our approach as a first proof of concept towards patient-specific disc modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
Magnetic resonance brain tissue segmentation based on sparse representations
NASA Astrophysics Data System (ADS)
Rueda, Andrea
2015-12-01
Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it allows pathologies to be characterized through imaging measures (biomarkers). In brain imaging, segmentation of main tissues or specific structures is challenging, due to the anatomic variability and complexity, and the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are related to tissue labels at the level of small patches, gathering this information in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary, and the same projection can be used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlap of 0.79 for GM and 0.71 for WM (with respect to original segmentations).
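The coupled-dictionary trick is that one set of sparse coefficients is computed against the intensity dictionary and then reused with the segmentation dictionary. A minimal sketch using simple matching pursuit over unit-norm atoms (an illustrative stand-in; the paper's sparse coding and dictionary learning are more elaborate):

```python
import numpy as np

def couple_segment(patch, D_int, D_seg, n_atoms=2):
    """Sparse-code an intensity patch over D_int with matching pursuit,
    then apply the same coefficients to the coupled segmentation
    dictionary D_seg to estimate the patch's labels.
    Columns of D_int are assumed unit-norm."""
    residual = np.asarray(patch, float).copy()
    coeffs = np.zeros(D_int.shape[1])
    for _ in range(n_atoms):
        corr = D_int.T @ residual
        k = np.argmax(np.abs(corr))       # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D_int[:, k]
    return D_seg @ coeffs                 # coupled label estimate
```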
A novel hand-type detection technique with fingerprint sensor
NASA Astrophysics Data System (ADS)
Abe, Narishige; Shinzaki, Takashi
2013-05-01
In large-scale biometric authentication systems such as US-VISIT (USA), a 10-fingerprint scanner that simultaneously captures four fingerprints is used. In traditional systems, specific hand-types (left or right) are indicated, but it is difficult to detect the hand-type due to hand rotation and the opening and closing of fingers. In this paper, we evaluated features extracted from hand images (captured by a general optical scanner) that are considered to be effective for detecting the hand-type. Furthermore, we extended this knowledge to real fingerprint images, and evaluated the accuracy with which it detects the hand-type. We obtained an accuracy of about 80% with only three fingers (index, middle, ring finger).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, S; Wang, Y; Weng, H
Purpose To evaluate the image quality and radiation dose of routine abdomen computed tomography exams performed with the automatic tube current modulation (ATCM) technique on two different brands of 64-slice CT scanners at our site. Materials and Methods A retrospective review of routine abdomen CT exams performed with two scanners (scanner A and scanner B) at our site. The standard deviation within a 12.5 mm × 12.5 mm region of interest at the porta hepatis level represented the image noise. The radiation dose was obtained from the CT DICOM image information, with the computed tomography dose index volume (CTDIvol) used to represent CT radiation dose. The patients in this study were of normal weight (about 65–75 kg). Results The standard deviation for scanner A was smaller than for scanner B, suggesting that scanner A provides better image quality. On the other hand, the radiation dose of scanner A was higher than that of scanner B (by about 50–60%) with ATCM. For both scanners, the radiation dose was under the diagnostic reference level. Conclusion The ATCM systems in modern CT scanners can contribute a significant reduction in radiation dose to the patient, but the reduction achieved by ATCM systems from different CT scanner manufacturers varies slightly. Whatever CT scanner is used, it is necessary to find an acceptable threshold of image quality at the minimum possible radiation exposure to the patient, in agreement with the ALARA principle.
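The noise measurement used here, the pixel standard deviation inside a square ROI, is a one-liner once the ROI is extracted. A sketch with assumed arguments (center pixel and half-size in pixels rather than mm):

```python
import numpy as np

def roi_noise(image, center, half_size):
    """Image noise estimated as the pixel standard deviation inside a
    square ROI centred at `center` (row, col), of side 2*half_size + 1."""
    y, x = center
    roi = np.asarray(image, float)[y - half_size:y + half_size + 1,
                                   x - half_size:x + half_size + 1]
    return roi.std()
```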
Accuracy of complete-arch model using an intraoral video scanner: An in vitro study.
Jeong, Il-Do; Lee, Jae-Jun; Jeon, Jin-Hun; Kim, Ji-Hwan; Kim, Hae-Young; Kim, Woong-Chul
2016-06-01
Information on the accuracy of intraoral video scanners for long-span areas is limited. The purpose of this in vitro study was to evaluate and compare the trueness and precision of an intraoral video scanner, an intraoral still image scanner, and a blue-light scanner for the production of digital impressions. Reference scan data were obtained by scanning a complete-arch model. An identical model was scanned 8 times using an intraoral video scanner (CEREC Omnicam; Sirona) and an intraoral still image scanner (CEREC Bluecam; Sirona), and stone casts made from conventional impressions of the same model were scanned 8 times with a blue-light scanner as a control (Identica Blue; Medit). Accuracy consists of trueness (the extent to which the scan data differ from the reference scan) and precision (the similarity of the data from multiple scans). To evaluate precision, 8 scans were superimposed using 3-dimensional analysis software; the reference scan data were then superimposed to determine the trueness. Differences were analyzed using 1-way ANOVA and post hoc Tukey HSD tests (α=.05). Trueness in the video scanner group was not significantly different from that in the control group. However, the video scanner group showed significantly lower values than those of the still image scanner group for all variables (P<.05), except in tolerance range. The root mean square, standard deviations, and mean negative precision values for the video scanner group were significantly higher than those for the other groups (P<.05). Digital impressions obtained by the intraoral video scanner showed better accuracy for long-span areas than those captured by the still image scanner. However, the video scanner was less accurate than the laboratory scanner. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
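Trueness and precision are both reported here as deviations between superimposed scans, typically summarized as a root-mean-square value. A minimal sketch assuming the point sets are already aligned and in one-to-one correspondence (a full analysis uses surface registration and nearest-neighbour distances):

```python
import numpy as np

def rms_deviation(scan_pts, ref_pts):
    """Root-mean-square of per-point distances between a test scan and a
    reference scan after superimposition (corresponding (N, 3) arrays)."""
    d = np.linalg.norm(np.asarray(scan_pts, float) -
                       np.asarray(ref_pts, float), axis=1)
    return np.sqrt((d ** 2).mean())
```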
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menegotti, L.; Delana, A.; Martignano, A.
Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel value and spatially dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for a fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used to do both calibration and correction. Gafchromic EBT films were read with two flatbed charge-coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. The comparisons with a two-dimensional array of ionization chambers using an 18×18 cm² open field and an inverse pyramid dose pattern show an increase in the percentage of points which pass the gamma analysis (tolerance parameters of 3% and 3 mm), from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18×18 open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and dose range considered for correction and calibration appears to be appropriate for use in IMRT verification. The method proved to be fast, corrected the nonuniformity properly, and has been adopted for routine clinical IMRT dose verification.
Precision analysis of a quantitative CT liver surface nodularity score.
Smith, Andrew; Varney, Elliot; Zand, Kevin; Lewis, Tara; Sirous, Reza; York, James; Florez, Edward; Abou Elkassem, Asser; Howard-Claudio, Candace M; Roda, Manohar; Parker, Ellen; Scortegagna, Eduardo; Joyner, David; Sandlin, David; Newsome, Ashley; Brewster, Parker; Lirette, Seth T; Griswold, Michael
2018-04-26
To evaluate precision of a software-based liver surface nodularity (LSN) score derived from CT images. An anthropomorphic CT phantom was constructed with simulated liver containing smooth and nodular segments at the surface and simulated visceral and subcutaneous fat components. The phantom was scanned multiple times on a single CT scanner with adjustment of image acquisition and reconstruction parameters (N = 34) and on 22 different CT scanners from 4 manufacturers at 12 imaging centers. LSN scores were obtained using a software-based method. Repeatability and reproducibility were evaluated by intraclass correlation (ICC) and coefficient of variation. Using abdominal CT images from 68 patients with various stages of chronic liver disease, inter-observer agreement and test-retest repeatability among 12 readers assessing LSN by software- vs. visual-based scoring methods were evaluated by ICC. There was excellent repeatability of LSN scores (ICC:0.79-0.99) using the CT phantom and routine image acquisition and reconstruction parameters (kVp 100-140, mA 200-400, and auto-mA, section thickness 1.25-5.0 mm, field of view 35-50 cm, and smooth or standard kernels). There was excellent reproducibility (smooth ICC: 0.97; 95% CI 0.95, 0.99; CV: 7%; nodular ICC: 0.94; 95% CI 0.89, 0.97; CV: 8%) for LSN scores derived from CT images from 22 different scanners. Inter-observer agreement for the software-based LSN scoring method was excellent (ICC: 0.84; 95% CI 0.79, 0.88; CV: 28%) vs. good for the visual-based method (ICC: 0.61; 95% CI 0.51, 0.69; CV: 43%). Test-retest repeatability for the software-based LSN scoring method was excellent (ICC: 0.82; 95% CI 0.79, 0.84; CV: 12%). The software-based LSN score is a quantitative CT imaging biomarker with excellent repeatability, reproducibility, inter-observer agreement, and test-retest repeatability.
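Repeatability metrics like those reported (intraclass correlation and coefficient of variation) can be computed from a table of repeated scores; a minimal one-way ICC sketch on made-up numbers (not the study's data):

```python
import statistics

def icc_oneway(scores):
    """ICC(1,1): one-way random-effects intraclass correlation for a
    list of per-subject lists of repeated measurements."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)       # between-subjects
    msw = sum((x - m) ** 2
              for row, m in zip(scores, row_means) for x in row) / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

def cv_percent(values):
    """Coefficient of variation as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical LSN scores: 4 phantom configurations x 3 repeat scans
scores = [[2.1, 2.2, 2.0], [3.5, 3.4, 3.6], [1.2, 1.3, 1.1], [4.0, 4.1, 3.9]]
icc = icc_oneway(scores)   # tight repeats -> ICC near 1
```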
NASA Astrophysics Data System (ADS)
Stratis, A.; Zhang, G.; Jacobs, R.; Bogaerts, R.; Bosmans, H.
2016-12-01
In order to carry out Monte Carlo (MC) dosimetry studies, voxel phantoms modeling human anatomy and organ-based segmentation of CT image data sets are applied to simulation frameworks. The resulting voxel phantoms preserve the patient CT acquisition geometry; in the case of head voxel models built upon head CT images, the head support with which CT scanners are equipped introduces an inclination to the head, and hence to the head voxel model. In dental cone beam CT (CBCT) imaging, patients are always positioned in such a way that the Frankfort line is horizontal, implying that there is no head inclination. The orientation of the head is important, as it influences the distance of critical radiosensitive organs such as the thyroid and the esophagus from the x-ray tube. This work aims to propose a procedure to adjust head voxel phantom orientation, and to investigate the impact of head inclination on organ doses in dental CBCT MC dosimetry studies. The female adult ICRP phantom and three in-house-built paediatric voxel phantoms were used in this study. An EGSnrc MC framework was employed to simulate two commonly used protocols: a standard-resolution protocol on a Morita Accuitomo 170 dental CBCT scanner (FOVs: 60 × 60 mm² and 80 × 80 mm²) and a 3D Teeth protocol (FOV: 100 × 90 mm²) on a Planmeca ProMax 3D Max scanner. Result analysis revealed large absorbed organ dose differences in radiosensitive organs between the original and the geometrically corrected voxel models of this study, ranging from -45.6% to 39.3%. Therefore, accurate dental CBCT MC dose calculations require geometrical adjustments to be applied to head voxel models.
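The geometrical adjustment described, removing the inclination of a head voxel model, amounts to a rigid rotation of the voxel array. A crude 2-D nearest-neighbour sketch (real frameworks rotate the 3-D array with interpolation and resampling):

```python
import math

def rotate_slice(vol, angle_deg, fill=0):
    """Rotate a 2-D voxel slice (list of lists) about its center by
    angle_deg using inverse nearest-neighbour mapping: a crude stand-in
    for the geometric correction applied to a head voxel model."""
    rows, cols = len(vol), len(vol[0])
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = [[fill] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # inverse rotation: find which source voxel maps to (x, y)
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < cols and 0 <= iy < rows:
                out[y][x] = vol[iy][ix]
    return out

slice_ = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]   # a horizontal "bar" of tissue
rotated = rotate_slice(slice_, 90)            # becomes a vertical bar
```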
NASA Technical Reports Server (NTRS)
1974-01-01
The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.
Seo, Yeong-Hyeon; Hwang, Kyungmin; Jeong, Ki-Hun
2018-02-19
We report a 1.65 mm diameter forward-viewing confocal endomicroscopic catheter using a flip-chip bonded electrothermal MEMS fiber scanner. Lissajous scanning was implemented with the electrothermal MEMS fiber scanner, which was precisely fabricated to facilitate flip-chip connection and bonded to a printed circuit board. The scanner was successfully combined with a fiber-based confocal imaging system, and a two-dimensional reflectance image of the metal pattern 'OPTICS' was successfully obtained with it. The flip-chip bonding minimizes the electrical packaging dimensions: the inner diameter of the flip-chip bonded MEMS fiber scanner is 1.3 mm. The scanner is fully packaged with a 1.65 mm diameter housing tube, a 1 mm diameter GRIN lens, and a single-mode optical fiber. The packaged confocal endomicroscopic catheter can provide a new breakthrough for diverse in vivo endomicroscopic applications.
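A Lissajous scan like the one the fiber scanner implements is just two sinusoids at different frequencies driving the two axes; a sketch of the trajectory sampling (illustrative frequency ratio and phase, not the device's actual drive parameters):

```python
import math

def lissajous(fx, fy, phase, n_points, amp_x=1.0, amp_y=1.0):
    """Sample one period of a Lissajous scan trajectory: the pattern the
    two drive axes of a resonant fiber scanner trace out. fx, fy are
    integer frequency ratios; phase offsets the y axis."""
    pts = []
    for i in range(n_points):
        t = i / n_points  # one period of the combined pattern for integer fx, fy
        x = amp_x * math.sin(2 * math.pi * fx * t)
        y = amp_y * math.sin(2 * math.pi * fy * t + phase)
        pts.append((x, y))
    return pts

# e.g. a 3:4 frequency ratio with a 90-degree phase offset
pts = lissajous(3, 4, math.pi / 2, 1000)
```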
van der Palen, Roel L F; Roest, Arno A W; van den Boogaard, Pieter J; de Roos, Albert; Blom, Nico A; Westenberg, Jos J M
2018-05-26
The aim was to investigate scan-rescan reproducibility and observer variability of segmental aortic 3D systolic wall shear stress (WSS) by phase-specific segmentation with 4D flow MRI in healthy volunteers. Ten healthy volunteers (age 26.5 ± 2.6 years) underwent aortic 4D flow MRI twice. Maximum 3D systolic WSS (WSSmax) and mean 3D systolic WSS (WSSmean) for five thoracic aortic segments over five systolic cardiac phases by phase-specific segmentations were calculated. Scan-rescan analysis and observer reproducibility analysis were performed. Scan-rescan data showed overall good reproducibility for WSSmean (coefficient of variation, COV 10-15%) with moderate-to-strong intraclass correlation coefficient (ICC 0.63-0.89). The variability in WSSmax was high (COV 16-31%) with moderate-to-good ICC (0.55-0.79) for different aortic segments. Intra- and interobserver reproducibility was good-to-excellent for regional aortic WSSmax (ICC ≥ 0.78; COV ≤ 17%) and strong-to-excellent for WSSmean (ICC ≥ 0.86; COV ≤ 11%). In general, ascending aortic segments showed more WSSmax/WSSmean variability compared to aortic arch or descending aortic segments for scan-rescan, intraobserver and interobserver comparison. Scan-rescan reproducibility was good for WSSmean and moderate for WSSmax for all thoracic aortic segments over multiple systolic phases in healthy volunteers. Intra/interobserver reproducibility for segmental WSS assessment was good-to-excellent. Variability of WSSmax is higher and should be taken into account in case of individual follow-up or in comparative rest-stress studies to avoid misinterpretation.
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Bülow, Robin; Völzke, Henry
2017-07-01
To develop the first fully automated 3D spleen segmentation framework derived from T1-weighted magnetic resonance (MR) imaging data and to verify its performance for spleen delineation and volumetry. This approach addresses the issue of low contrast between the spleen and adjacent tissue in non-contrast-enhanced MR images. Native T1-weighted MR volume data were acquired on a 1.5 T MR system in an epidemiological study. We analyzed random subsamples of MR examinations without pathologies to develop and verify the spleen segmentation framework. The framework is modularized to include different kinds of prior knowledge in the segmentation pipeline. Classification by support vector machines differentiates between five different shape types in computed foreground probability maps and recognizes characteristic spleen regions in axial slices of MR volume data. A spleen-shape space generated by training produces subject-specific prior shape knowledge that is then incorporated into a final 3D level set segmentation method. Individually adapted shape-driven forces, as well as image-driven forces resulting from refined foreground probability maps, steer the level set successfully to segment the spleen. The framework achieves promising segmentation results with mean Dice coefficients of nearly 0.91 and low volumetric mean errors of 6.3%. The presented spleen segmentation approach can delineate spleen tissue in native MR volume data. Several kinds of prior shape knowledge, including subject-specific 3D prior shape knowledge, can be used to guide segmentation processes, achieving promising results.
WE-EF-207-05: Monte Carlo Dosimetry for a Dedicated Cone-Beam CT Head Scanner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A; Zbijewski, W; Xu, J
Purpose: Cone-Beam CT (CBCT) is an attractive platform for point-of-care imaging of traumatic brain injury and intracranial hemorrhage. This work implements and evaluates a fast Monte-Carlo (MC) dose estimation engine for development of a dedicated head CBCT scanner, optimization of acquisition protocols, geometry, bowtie filter designs, and patient-specific dosimetry. Methods: Dose scoring with a GPU-based MC CBCT simulator was validated on an imaging bench using a modified 16 cm CTDI phantom with 7 ion chamber shafts along the central ray for 80–100 kVp (+2 mm Al, +0.2 mm Cu). Dose distributions were computed in a segmented CBCT reconstruction of an anthropomorphic head phantom with 4 × 10⁵ tracked photons per scan (5 min runtime). Circular orbits with angular span ranging from short scan (180° + fan angle) to full rotation (360°) were considered for fixed total mAs per scan. Two aluminum filters were investigated: an aggressive bowtie and a moderate bowtie (matched to 16 cm and 32 cm water cylinders, respectively). Results: MC dose estimates showed strong agreement with measurements (RMSE < 0.001 mGy/mAs). A moderate (aggressive) bowtie reduced the dose, per total mAs, by 20% (30%) at the center of the head, by 40% (50%) at the eye lens, and by 70% (80%) at the posterior skin entrance. For the no-bowtie configuration, a short scan reduced the eye lens dose by 62% (from 0.08 mGy/mAs to 0.03 mGy/mAs) compared to a full scan, although the dose to spinal bone marrow increased by 40%. For both bowties, the short scan resulted in a similar 40% increase in bone marrow dose, but the reduction at the eye lens was more pronounced: 70% (90%) for the moderate (aggressive) bowtie. Conclusions: Dose maps obtained with validated MC simulation demonstrated dose reduction in sensitive structures (eye lens and bone marrow) through a combination of short-scan trajectories and bowtie filters. Xiaohui Wang and David Foos are employees of Carestream Health.
Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid
2016-01-01
Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. A template database of 195 (81 males, 114 females; age range 32-67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Among the four automated methods, ABSS generated the most accurate results. 
For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, -4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland-Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study.
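The Dice coefficient and Hausdorff distance used in the comparison can be computed directly from the voxel sets of two segmentations; a toy sketch (hypothetical 2-D masks, not the study's data):

```python
import math

def dice(a, b):
    """Dice coefficient between two segmentations given as sets of voxel
    coordinates: 2|A ∩ B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (brute force)."""
    def directed(p, q):
        # largest distance from any point of p to its nearest point in q
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}      # hypothetical automated mask
manual = {(0, 0), (0, 1), (1, 0), (2, 0)}    # hypothetical manual benchmark
```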
3D ultrasound system to investigate intraventricular hemorrhage in preterm neonates
NASA Astrophysics Data System (ADS)
Kishimoto, J.; de Ribaupierre, S.; Lee, D. S. C.; Mehta, R.; St. Lawrence, K.; Fenster, A.
2013-11-01
Intraventricular hemorrhage (IVH) is a common disorder among preterm neonates that is routinely diagnosed and monitored by 2D cranial ultrasound (US). The cerebral ventricles of patients with IVH often have a period of ventricular dilation (ventriculomegaly). This initial increase in ventricle size can either resolve spontaneously, which often presents clinically as a period of stabilization in ventricle size and an eventual decline back towards a more normal size, or progress without stabilizing, which may require interventional therapy to reduce symptoms related to increased intracranial pressure. To improve the characterization of ventricle dilation, we developed a 3D US imaging system that can be used with a conventional clinical US scanner to image the ventricular system of preterm neonates at risk of ventriculomegaly. A motorized transducer housing was designed specifically for hand-held use inside an incubator using a transducer commonly used for cranial 2D US scans. This system was validated using geometric phantoms, US/MRI-compatible ventricle volume phantoms, and patient images to determine 3D reconstruction accuracy and inter- and intra-observer volume estimation variability. 3D US geometric reconstruction was found to be accurate, with an error of <0.2%. Measured volumes of a US/MRI-compatible ventricle-like phantom were within 5% of gold-standard water displacement measurements. The intra-class correlation for the three observers was 0.97, showing very high agreement between observers. The coefficient of variation was between 1.8% and 6.3% for repeated segmentations of the same patient. The minimum detectable difference was calculated to be 0.63 cm³ for a single observer. Results from ANOVA for three observers segmenting three patients of IVH grade II did not show any significant differences (p > 0.05) in the measured ventricle volumes between observers.
This 3D US system can reliably produce 3D US images of the neonatal ventricular system. There is the potential to use this system to monitor the progression of ventriculomegaly over time in patients with IVH.
Gueret, Pascal; Deux, Jean-François; Bonello, Laurent; Sarran, Anthony; Tron, Christophe; Christiaens, Luc; Dacher, Jean-Nicolas; Bertrand, David; Leborgne, Laurent; Renard, Cedric; Caussin, Christophe; Cluzel, Philippe; Helft, Gerard; Crochet, Dominique; Vernhet-Kovacsik, Hélène; Chabbert, Valérie; Ferrari, Emile; Gilard, Martine; Willoteaux, Serge; Furber, Alain; Barone-Rochette, Gilles; Jankowski, Adrien; Douek, Philippe; Mousseaux, Elie; Sirol, Marc; Niarra, Ralph; Chatellier, Gilles; Laissy, Jean-Pierre
2013-02-15
Computed tomographic coronary angiography (CTCA) has been proposed as a noninvasive test for significant coronary artery disease (CAD), but only limited data are available from prospective multicenter trials. The goal of this study was to establish the diagnostic accuracy of CTCA compared to coronary angiography (CA) in a large population of symptomatic patients with clinical indications for coronary imaging. This national, multicenter study was designed to prospectively evaluate stable patients able to undergo CTCA followed by conventional CA. Data from CTCA and CA were analyzed in a blinded fashion at central core laboratories. The main outcome was the evaluation of patient-, vessel-, and segment-based diagnostic performance of CTCA to detect or rule out significant CAD (≥50% luminal diameter reduction). Of 757 patients enrolled, 746 (mean age 61 ± 12 years, 71% men) were analyzed. They underwent CTCA followed by CA 1.7 ± 0.8 days later using a 64-detector scanner. The prevalence of significant CAD in native coronary vessels by CA was 54%. The rate of nonassessable segments by CTCA was 6%. In a patient-based analysis, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios of CTCA were 91%, 50%, 68%, 83%, 1.82, and 0.18, respectively. The strongest predictors of false-negative results on CTCA were high estimated pretest probability of CAD (odds ratio [OR] 1.97, p <0.001), male gender (OR 1.5, p <0.002), diabetes (OR 1.5, p <0.0001), and age (OR 1.2, p <0.0001). In conclusion, in this large multicenter study, CTCA identified significant CAD with high sensitivity. However, in routine clinical practice, each patient should be individually evaluated, and the pretest probability of obstructive CAD should be taken into account when deciding which method, CTCA or CA, to use to diagnose its presence and severity. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wiemker, Rafael; Rogalla, Patrik; Opfer, Roland; Ekin, Ahmet; Romano, Valentina; Bülow, Thomas
2006-03-01
The performance of computer aided lung nodule detection (CAD) and computer aided nodule volumetry is compared between standard-dose (70-100 mAs) and ultra-low-dose CT images (5-10 mAs). A direct quantitative performance comparison was possible, since for each patient both an ultra-low-dose and a standard-dose CT scan were acquired within the same examination session. The data sets were recorded with a multi-slice CT scanner at the Charité University Hospital Berlin with 1 mm slice thickness. Our computer aided nodule detection and segmentation algorithms were deployed on both ultra-low-dose and standard-dose CT data without any dose-specific fine-tuning or preprocessing. As a reference standard, 292 nodules from 20 patients were visually identified, each nodule in both the ultra-low-dose and standard-dose data sets. The CAD performance was analyzed by means of multiple FROC curves for different lower thresholds of the nodule diameter. For nodules with a volume-equivalent diameter equal to or larger than 4 mm (149 nodule pairs), we observed a detection rate of 88% at a median false positive rate of 2 per patient in standard-dose images, and an 86% detection rate in ultra-low-dose images, also at 2 FPs per patient. Including even smaller nodules equal to or larger than 2 mm (272 nodule pairs), we observed a detection rate of 86% in standard-dose images and an 84% detection rate in ultra-low-dose images, both at a rate of 5 FPs per patient. Moreover, we observed a correlation of 94% between the volume-equivalent nodule diameter as automatically measured on ultra-low-dose versus standard-dose images, indicating that ultra-low-dose CT is also feasible for growth-rate assessment in follow-up examinations. The comparable performance of lung nodule CAD in ultra-low-dose and standard-dose images is of particular interest with respect to lung cancer screening of asymptomatic patients.
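The volume-equivalent diameter used as the nodule-size threshold above is the diameter of the sphere having the same volume as the segmented nodule, d = (6V/π)^(1/3); a minimal sketch:

```python
import math

def volume_equivalent_diameter(volume_mm3):
    """Diameter (mm) of the sphere with the same volume as the
    segmented nodule: d = (6V/pi)^(1/3)."""
    return (6.0 * volume_mm3 / math.pi) ** (1.0 / 3.0)

def sphere_volume(d_mm):
    """Volume (mm^3) of a sphere of diameter d_mm."""
    return math.pi * d_mm ** 3 / 6.0

# round-trip check: a 4 mm sphere maps back to a 4 mm equivalent diameter
d = volume_equivalent_diameter(sphere_volume(4.0))
```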
Validation of GATE Monte Carlo simulations of the GE Advance/Discovery LS PET scanners.
Schmidtlein, C Ross; Kirov, Assen S; Nehmeh, Sadek A; Erdi, Yusuf E; Humm, John L; Amols, Howard I; Bidaut, Luc M; Ganin, Alex; Stearns, Charles W; McDaniel, David L; Hamacher, Klaus A
2006-01-01
The recently developed GATE (GEANT4 application for tomographic emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of PET images. The objective of this study is to validate a model within GATE of the General Electric (GE) Advance/Discovery Light Speed (LS) PET scanner. Our three-dimensional PET simulation model of the scanner consists of 12 096 detectors grouped into blocks, which are grouped into modules as per the vendor's specifications. The GATE results are compared to experimental data obtained in accordance with the National Electrical Manufacturers Association/Society of Nuclear Medicine (NEMA/SNM), NEMA NU 2-1994, and NEMA NU 2-2001 protocols. The respective phantoms are also accurately modeled, thus allowing us to simulate the sensitivity, scatter fraction, count rate performance, and spatial resolution. In-house software was developed to produce and analyze sinograms from the simulated data. With our model of the GE Advance/Discovery LS PET scanner, the ratio of the sensitivities with sources radially offset 0 and 10 cm from the scanner's main axis is reproduced to within 1% of measurements. Similarly, the simulated scatter fraction for the NEMA NU 2-2001 phantom agrees to within less than 3% of measured values (the measured scatter fractions are 44.8% and 40.9 ± 1.4%, and the simulated scatter fraction is 43.5 ± 0.3%). The simulated count rate curves were made to match the experimental curves by using deadtimes as fit parameters. This resulted in deadtime values of 625 and 332 ns at the block and coincidence levels, respectively.
The experimental peak true count rate of 139.0 kcps and the peak activity concentration of 21.5 kBq/cc were matched by the simulated results to within 0.5% and 0.1% respectively. The simulated count rate curves also resulted in a peak NECR of 35.2 kcps at 10.8 kBq/cc compared to 37.6 kcps at 10.0 kBq/cc from averaged experimental values. The spatial resolution of the simulated scanner matched the experimental results to within 0.2 mm.
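The noise-equivalent count rate (NECR) quoted above follows the standard NEMA definition NECR = T²/(T + S + R); a sketch with illustrative count rates (not the measured values):

```python
def necr(trues, scatter, randoms):
    """Noise-equivalent count rate: NECR = T^2 / (T + S + R),
    where T, S, R are the true, scattered, and random coincidence
    rates. Illustrative values only, not the study's data."""
    return trues ** 2 / (trues + scatter + randoms)

# hypothetical rates in kcps
rate = necr(trues=100.0, scatter=45.0, randoms=60.0)
```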
A new scanning device in CT with dose reduction potential
NASA Astrophysics Data System (ADS)
Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2006-03-01
The amount of x-ray radiation currently applied in CT practice is not utilized optimally. A portion of the radiation traversing the patient is either not detected at all or is used ineffectively. The reason lies partly in the reconstruction algorithms and partly in the geometry of the CT scanners designed specifically for these algorithms. In fact, the reconstruction methods widely used in CT are intended to invert data that correspond to ideal straight lines. However, the collection of such data is often not accurate due to the likely movement of the source/detector system of the scanner in the time interval during which all the detectors are read. In this paper, a new design of the scanner geometry is proposed that is immune to the movement of the CT system and collects all radiation traversing the patient. The proposed scanning design has the potential to reduce the patient dose by a factor of two. Furthermore, it can be used with existing reconstruction algorithms and is particularly suitable for OPED, a new robust reconstruction algorithm.
Yörük, Barış K
2014-07-01
Underage drinkers often use false identification to purchase alcohol or gain access into bars. In recent years, several states have introduced laws that provide incentives to retailers and bar owners who use electronic scanners to ensure that the customer is 21 years or older and uses a valid identification to purchase alcohol. This paper is the first to investigate the effects of these laws using confidential data from the National Longitudinal Survey of Youth, 1997 Cohort (NLSY97). Using a difference-in-differences methodology, I find that the false ID laws with scanner provision significantly reduce underage drinking, including up to a 0.22 drink decrease in the average number of drinks consumed by underage youth per day. This effect is observed particularly in the short-run and more pronounced for non-college students and those who are relatively younger. These results are also robust under alternative model specifications. The findings of this paper highlight the importance of false ID laws in reducing alcohol consumption among underage youth. Copyright © 2014 Elsevier B.V. All rights reserved.
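The difference-in-differences logic can be illustrated in its simplest 2×2 form, comparing mean outcome changes in adopting vs. non-adopting states (hypothetical numbers, not NLSY97 estimates; the paper's specification additionally includes covariates and fixed effects):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """2x2 difference-in-differences estimate from group means:
    (change in the treated group) minus (change in the control group)."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical mean drinks per day: states adopting scanner-provision
# laws (treated) vs. non-adopting states (control), before and after
effect = diff_in_diff(
    treat_pre=[1.0, 1.2, 0.8], treat_post=[0.7, 0.9, 0.8],
    ctrl_pre=[1.1, 0.9, 1.0], ctrl_post=[1.1, 1.0, 0.9],
)
```

A negative estimate, as here, would indicate reduced drinking attributable to the law under the parallel-trends assumption.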
Pediatric Chest and Abdominopelvic CT: Organ Dose Estimation Based on 42 Patient Models
Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Paulson, Erik K.; Frush, Donald P.
2014-01-01
Purpose To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. Materials and Methods The institutional review board approved this HIPAA–compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0–16 years; weight range, 2–80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDIvol). The relationships between CTDIvol-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. Results For organs within the image coverage, CTDIvol-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R2 > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%–32%) mainly because of the effect of overranging. Conclusion It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDIvol. These CTDIvol-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. 
This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles across patient populations within a practice. © RSNA, 2013 PMID:24126364
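The exponential relationship reported between CTDIvol-normalized organ dose coefficients and average patient diameter can be fitted by linear least squares in log space; a sketch on synthetic, noise-free values (not the study's coefficients):

```python
import math

def fit_exponential(diameters, coefficients):
    """Fit coef = a * exp(-b * d) by linear least squares on
    ln(coef) = ln(a) - b * d. Returns (a, b)."""
    xs, ys = diameters, [math.log(c) for c in coefficients]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Hypothetical CTDIvol-normalized dose coefficients vs. patient diameter (cm)
ds = [10.0, 15.0, 20.0, 25.0]
coefs = [2.0 * math.exp(-0.05 * d) for d in ds]   # synthetic, noise-free
a, b = fit_exponential(ds, coefs)                 # recovers a=2.0, b=0.05
```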
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Olsen, J.; Parikh, P.
2014-06-01
Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as the ground truth. Values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of single image frames, all methods successfully segmented the bladder and kidney, but only FKM, KHM, and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting the bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT.
Future studies include a selection of conformal segmentation methods based on image/organ-specific information, different filtering methods, and their influences on the segmentation results. Parag Parikh receives a research grant from ViewRay. Sasa Mutic has consulting and research agreements with ViewRay. Yanle Hu receives travel reimbursement from ViewRay. Iwan Kawrakow and James Dempsey are ViewRay employees.
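The overlap metrics used in the comparison above can be computed directly from binary masks. A self-contained sketch on toy masks (not the study data):

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Sensitivity, specificity, Jaccard similarity and Dice coefficient
    for a predicted vs. ground-truth binary segmentation mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    jacc = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sens, spec, jacc, dice

# Toy example: 4x4 ground truth (4 pixels) vs. an over-segmented prediction.
truth = np.zeros((4, 4), int); truth[1:3, 1:3] = 1
pred  = np.zeros((4, 4), int); pred[1:3, 1:4] = 1
sens, spec, jacc, dice = overlap_metrics(pred, truth)
```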
Venkatraman, Vijay K; Gonzalez, Christopher E.; Landman, Bennett; Goh, Joshua; Reiter, David A.; An, Yang; Resnick, Susan M.
2017-01-01
Diffusion tensor imaging (DTI) measures are commonly used as imaging markers to investigate individual differences in relation to behavioral and health-related characteristics. However, the ability to detect reliable associations in cross-sectional or longitudinal studies is limited by the reliability of the diffusion measures. Several studies have examined the reliability of diffusion measures within (i.e., intra-site) and across (i.e., inter-site) scanners, with mixed results. Our study compares the test-retest reliability of diffusion measures within and across scanners and field strengths in cognitively normal older adults with a follow-up interval of less than 2.25 years. Intra-class correlation (ICC) and coefficient of variation (CoV) of fractional anisotropy (FA) and mean diffusivity (MD) were evaluated in sixteen white matter and twenty-six gray matter bilateral regions. The ICCs for intra-site reliability (0.32 to 0.96 for FA and 0.18 to 0.95 for MD in white matter regions; 0.27 to 0.89 for MD and 0.03 to 0.79 for FA in gray matter regions) and inter-site reliability (0.28 to 0.95 for FA in white matter regions, 0.02 to 0.86 for MD in gray matter regions) with longer follow-up intervals were similar to those of earlier studies using shorter follow-up intervals. The reliability of comparisons across field strengths was lower than the intra- and inter-site reliability. Within- and across-scanner comparisons showed that diffusion measures were more stable in larger white matter regions (>1500 mm³). For gray matter regions, the MD measure showed stability in specific regions and was not dependent on region size. A linear correction factor estimated from cross-sectional or longitudinal data improved the reliability across field strengths.
Our findings indicate that investigations relating diffusion measures to external variables must consider variable reliability across the distinct regions of interest and that correction factors can be used to improve consistency of measurement across field strengths. An important result of this work is that inter-scanner and field strength effects can be partially mitigated with linear correction factors specific to regions of interest. These data-driven linear correction techniques can be applied in cross-sectional or longitudinal studies. PMID:26146196
Steinmeier, R; Fahlbusch, R; Ganslandt, O; Nimsky, C; Buchfelder, M; Kaus, M; Heigl, T; Lenz, G; Kuth, R; Huk, W
1998-10-01
Intraoperative magnetic resonance imaging (MRI) is now available with the General Electric MRI system for dedicated intraoperative use. Alternatively, non-dedicated MRI systems require fewer specific adaptations of instrumentation and surgical techniques. In this report, clinical experiences with such a system are presented. All patients were surgically treated in a "twin operating theater," consisting of a conventional operating theater with complete neuronavigation equipment (StealthStation and MKM), which allowed surgery with magnetically incompatible instruments, conventional instrumentation and operating microscope, and a radiofrequency-shielded operating room designed for use with an intraoperative MRI scanner (Magnetom Open; Siemens AG, Erlangen, Germany). The Magnetom Open is a 0.2-T MRI scanner with a resistive magnet and specific adaptations that are necessary to integrate the scanner into the surgical environment. The operating theaters lie close together, and patients can be intraoperatively transported from one room to the other. This retrospective analysis includes 55 patients with cerebral lesions, all of whom were surgically treated between March 1996 and September 1997. Thirty-one patients with supratentorial tumors were surgically treated (with navigational guidance) in the conventional operating room, with intraoperative MRI for resection control. For 5 of these 31 patients, intraoperative resection control revealed significant tumor remnants, which led to further tumor resection guided by the information provided by intraoperative MRI. Intraoperative MRI resection control was performed in 18 transsphenoidal operations. In cases with suspected tumor remnants, the surgeon reexplored the sellar region; additional tumor tissue was removed in three of five cases. Follow-up scans were obtained for all patients 1 week and 2 to 3 months after surgery. 
For 14 of the 18 patients, the images obtained intraoperatively were comparable to those obtained after 2 to 3 months. Intraoperative MRI was also used for six patients undergoing temporal lobe resections for treatment of pharmacoresistant seizures. For these patients, the extent of neocortical and mesial resection was tailored to fit the preoperative findings of morphological and electrophysiological alterations, as well as intraoperative electrocorticographic findings. Intraoperative MRI with the Magnetom Open provides considerable additional information to optimize resection during surgical treatment of supratentorial tumors, pituitary adenomas, and epilepsy. The twin operating theater is a true alternative to a dedicated MRI system. Additional efforts are necessary to improve patient transportation time and instrument guidance within the scanner.
Recent micro-CT scanner developments at UGCT
NASA Astrophysics Data System (ADS)
Dierick, Manuel; Van Loo, Denis; Masschaele, Bert; Van den Bulcke, Jan; Van Acker, Joris; Cnudde, Veerle; Van Hoorebeke, Luc
2014-04-01
This paper describes two X-ray micro-CT scanners which were recently developed to extend the experimental possibilities of microtomography research at the Centre for X-ray Tomography (www.ugct.ugent.be) of the Ghent University (Belgium). The first scanner, called Nanowood, is a wide-range CT scanner with two X-ray sources (160 kVmax) and two detectors, resolving features down to 0.4 μm in small samples, but allowing samples up to 35 cm to be scanned. This is a sample size range of 3 orders of magnitude, making this scanner well suited for imaging multi-scale materials such as wood, stone, etc. Besides the traditional cone-beam acquisition, Nanowood supports helical acquisition, and it can generate images with significant phase-contrast contributions. The second scanner, known as the Environmental micro-CT scanner (EMCT), is a gantry based micro-CT scanner with variable magnification for scanning objects which are not easy to rotate in a standard micro-CT scanner, for example because they are physically connected to external experimental hardware such as sensor wiring, tubing or others. This scanner resolves 5 μm features, covers a field-of-view of about 12 cm wide with an 80 cm vertical travel range. Both scanners will be extensively described and characterized, and their potential will be demonstrated with some key application results.
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at a different time, which may result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built with no normalization processing.
We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve the image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher quality Chinese brain template.
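A simplified version of the two-step scheme above (intensity scaling to reference LIR/HIR landmarks, then stretching the input histogram onto them) might look like this; the percentile-based landmark choice is an assumption for illustration, not necessarily the paper's exact LIR/HIR definition:

```python
import numpy as np

def normalize_to_reference(low_q, ref, p_lo=1, p_hi=99):
    """Map the intensity range of `low_q` onto that of `ref` by matching
    low/high intensity-region landmarks (a sketch of the IS + HN steps)."""
    lir, hir = np.percentile(ref, [p_lo, p_hi])   # reference LIR/HIR landmarks
    lo, hi = np.percentile(low_q, [p_lo, p_hi])   # input-image landmarks
    out = (low_q - lo) / (hi - lo) * (hir - lir) + lir
    return np.clip(out, lir, hir)

# Toy example: an input shifted +50 relative to the reference.
ref = np.linspace(0, 100, 101)
low_q = np.linspace(50, 150, 101)
out = normalize_to_reference(low_q, ref)
```

After normalization the input intensities lie in the reference [LIR, HIR] range, so downstream registration or segmentation sees comparable histograms.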
Development of a fully automatic scheme for detection of masses in whole breast ultrasound images.
Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako
2007-11-01
Ultrasonography has been used for breast cancer screening in Japan. Screening using a conventional hand-held probe is operator dependent, and thus it is possible that some areas of the breast may not be scanned. To overcome such problems, a mechanical whole breast ultrasound (US) scanner has been proposed and developed for screening purposes. However, another issue is that radiologists might tire while interpreting all images in a large-volume screening; this increases the likelihood that masses may remain undetected. Therefore, the aim of this study is to develop a fully automatic scheme for the detection of masses in whole breast US images in order to assist the interpretations of radiologists and potentially improve the screening accuracy. The authors' database comprised 109 whole breast US images, which include 36 masses (16 malignant masses, 5 fibroadenomas, and 15 cysts). A whole breast US image with 84 slice images (interval between two slice images: 2 mm) was obtained by the ASU-1004 US scanner (ALOKA Co., Ltd., Japan). A feature based on the edge directions in each slice and a method for subtracting between the slice images were used for the detection of masses in the authors' proposed scheme. The Canny edge detector was applied to detect edges in US images; these edges were classified as near-vertical edges or near-horizontal edges using a morphological method. The positions of mass candidates were located using the near-vertical edges as a cue. Then, the located positions were segmented by the watershed algorithm, and mass candidate regions were detected using the segmented regions and the low-density regions extracted by the slice subtraction method. For the removal of false positives (FPs), rule-based schemes and a quadratic discriminant analysis were applied for the discrimination between masses and FPs. As a result, the sensitivity of the authors' scheme for the detection of masses was 80.6% (29/36) with 3.8 FPs per whole breast image.
The authors' computer-aided detection scheme may be useful in improving screening performance and efficiency.
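The near-vertical/near-horizontal edge split used as a cue above can be illustrated with gradient orientations. This sketch uses simple central differences rather than the paper's Canny detector and morphological classification:

```python
import numpy as np

def classify_edge_orientations(img, mag_thresh=0.2):
    """Split strong edge pixels into near-vertical and near-horizontal sets
    based on the local gradient direction (central differences)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[1:-1, 1:-1] = (img[1:-1, 2:] - img[1:-1, :-2]) / 2.0
    gy[1:-1, 1:-1] = (img[2:, 1:-1] - img[:-2, 1:-1]) / 2.0
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180
    edges = mag > mag_thresh * mag.max()
    # A vertical edge has a near-horizontal gradient (angle near 0/180 deg).
    vertical_like = (ang < 45) | (ang > 135)
    near_vertical = edges & vertical_like
    near_horizontal = edges & ~vertical_like
    return near_vertical, near_horizontal

# Demo: a vertical intensity step yields only near-vertical edge pixels.
img = np.zeros((10, 10)); img[:, 5:] = 1.0
nv, nh = classify_edge_orientations(img)
```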
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, Adam C.; Zankl, Maria; DeMarco, John J.
2010-04-15
Purpose: Monte Carlo radiation transport techniques have made it possible to accurately estimate the radiation dose to radiosensitive organs in patient models from scans performed with modern multidetector row computed tomography (MDCT) scanners. However, there is considerable variation in organ doses across scanners, even when similar acquisition conditions are used. The purpose of this study was to investigate the feasibility of a technique to estimate organ doses that would be scanner independent. This was accomplished by assessing the ability of CTDIvol measurements to account for differences in MDCT scanners that lead to organ dose differences. Methods: Monte Carlo simulations of 64-slice MDCT scanners from each of the four major manufacturers were performed. An adult female patient model from the GSF family of voxelized phantoms was used in which all ICRP Publication 103 radiosensitive organs were identified. A 120 kVp, full-body helical scan with a pitch of 1 was simulated for each scanner using similar scan protocols across scanners. From each simulated scan, the radiation dose to each organ was obtained on a per mA s basis (mGy/mA s). In addition, CTDIvol values were obtained from each scanner for the selected scan parameters. Then, to demonstrate the feasibility of generating organ dose estimates from scanner-independent coefficients, the simulated organ dose values resulting from each scanner were normalized by the CTDIvol value for those acquisition conditions. Results: CTDIvol values across scanners showed considerable variation as the coefficient of variation (CoV) across scanners was 34.1%. The simulated patient scans also demonstrated considerable differences in organ dose values, which varied by up to a factor of approximately 2 between some of the scanners. The CoV across scanners for the simulated organ doses ranged from 26.7% (for the adrenals) to 37.7% (for the thyroid), with a mean CoV of 31.5% across all organs.
However, when organ doses are normalized by CTDIvol values, the differences across scanners become very small. For the CTDIvol-normalized dose values, the CoVs across scanners for different organs ranged from a minimum of 2.4% (for skin tissue) to a maximum of 8.5% (for the adrenals), with a mean of 5.2%. Conclusions: This work has revealed that there is considerable variation among modern MDCT scanners in both CTDIvol and organ dose values. Because these variations are similar, CTDIvol can be used as a normalization factor with excellent results. This demonstrates the feasibility of establishing scanner-independent organ dose estimates by using CTDIvol to account for the differences between scanners.
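The normalization effect reported above is easy to reproduce in miniature: compute the CoV of organ doses across scanners before and after dividing by CTDIvol. The per-scanner numbers below are illustrative only, not the study's values:

```python
import numpy as np

def cov_percent(values):
    """Coefficient of variation across scanners, in percent."""
    values = np.asarray(values, float)
    return 100 * values.std(ddof=1) / values.mean()

# Hypothetical per-scanner organ dose per mA s (mGy/mA s) and the matching
# CTDIvol per mA s for the same protocol on four scanners.
organ_dose = np.array([0.060, 0.090, 0.075, 0.110])
ctdi_vol   = np.array([0.080, 0.118, 0.101, 0.145])

cov_raw  = cov_percent(organ_dose)              # large spread across scanners
cov_norm = cov_percent(organ_dose / ctdi_vol)   # much smaller after normalization
```

Because dose and CTDIvol vary across scanners in roughly the same proportion, the ratio is nearly scanner independent, which is exactly the premise of the coefficient approach.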
The new frontiers of multimodality and multi-isotope imaging
NASA Astrophysics Data System (ADS)
Behnam Azad, Babak; Nimmagadda, Sridhar
2014-06-01
Technological advances in imaging systems and the development of target-specific imaging tracers have grown rapidly over the past two decades. Recent progress in "all-in-one" imaging systems that allow for automated image coregistration has significantly added to the growth of this field. These developments include ultra-high-resolution PET and SPECT scanners that can be integrated with CT or MR, resulting in PET/CT, SPECT/CT, SPECT/PET, and PET/MRI scanners for simultaneous high-resolution, high-sensitivity anatomical and functional imaging. These technological developments have also resulted in drastic enhancements in image quality and acquisition time while eliminating cross-compatibility issues between modalities. Furthermore, the most cutting-edge technology, though mostly preclinical, also allows for simultaneous multimodality multi-isotope image acquisition and image reconstruction based on radioisotope decay characteristics. These scientific advances, in conjunction with the explosion in the development of highly specific multimodality molecular imaging agents, may aid in realizing simultaneous imaging of multiple biological processes and pave the way towards more efficient diagnosis and improved patient care.
Monterey Bay study. [Analysis of Landsat 1 multispectral band scanner data]
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Wade, L. C.
1975-01-01
The multispectral scanner capabilities of LANDSAT 1 were tested over California's Monterey Bay area and portions of the San Joaquin Valley. Using both computer aided and image interpretive processing techniques, the LANDSAT 1 data were analyzed to determine their potential application in terms of land use and agriculture. Utilizing LANDSAT 1 data, analysts were able to provide the identifications and areal extent of the individual land use categories ranging from very general to highly specific levels (e.g., from agricultural lands to specific field crop types and even the different stages of growth). It is shown that the LANDSAT system is useful in the identification of major crop species and the delineation of numerous land use categories on a global basis and that repeated surveillance would permit the monitoring of changes in seasonal growth characteristics of crops as well as the assessment of various cultivation practices with a minimum of onsite observation. The LANDSAT system is demonstrated to be useful in the planning and development of resource programs on earth.
The influence of focal spot blooming on high-contrast spatial resolution in CT imaging.
Grimes, Joshua; Duan, Xinhui; Yu, Lifeng; Halaweish, Ahmed F; Haag, Nicole; Leng, Shuai; McCollough, Cynthia
2015-10-01
The objective of this work was to investigate focal spot blooming effects on the spatial resolution of CT images and to evaluate an x-ray tube that uses dynamic focal spot control for minimizing focal spot blooming. The influence of increasing tube current at a fixed tube potential of 80 kV on the high-contrast spatial resolution of seven different CT scanner models (scanners A-G), including one scanner that uses dynamic focal spot control to reduce focal spot blooming (scanner A), was evaluated. Spatial resolution was assessed using a wire phantom for the modulation transfer function (MTF) calculation and a copper disc phantom for measuring the slice sensitivity profile (SSP). The impact of varying the tube potential was investigated on two scanner models (scanners A and B) by measuring the MTF and SSP and also by using the resolution bar pattern module of the ACR CT phantom. The phantoms were scanned at 70-150 kV on scanner A and 80-140 kV on scanner B, with tube currents from 100 mA up to the maximum tube current available on each scanner. The images were reconstructed using a slice thickness of 0.6 mm with both smooth and sharp kernels. Additionally, focal spot size at varying tube potentials and currents was directly measured using pinhole and slit camera techniques. Evaluation of the MTF and SSP data from the seven CT scanner models demonstrated decreased focal spot blooming for newer scanners, as evidenced by decreasing deviations in MTF and SSP as tube current varied. For scanners A and B, where focal spot blooming effects as a function of tube potential were assessed, the spatial resolution variation in the axial plane was much smaller on scanner A than on scanner B as tube potential and current changed. On scanner A, the 50% MTF never decreased by more than 2% from the 50% MTF measured at 100 mA. On scanner B, the 50% MTF decreased by as much as 19% from the 50% MTF measured at 100 mA.
Assessments of the SSP, the bar patterns in the ACR phantom and the pinhole and slit camera measurements were consistent with the MTF calculations. Focal spot blooming has a noticeable effect on spatial resolution in CT imaging. The focal spot shaping technology of scanner A greatly reduced blooming effects.
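A wire measurement like the one above reduces to computing the MTF as the normalized Fourier magnitude of a line spread function (LSF). The Gaussian LSFs below are a sketch of the blooming effect (a wider effective focal spot gives a wider LSF and a lower 50% MTF frequency), not measured data:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """MTF as the normalized magnitude of the FFT of a line spread function."""
    lsf = lsf - lsf.min()                      # remove baseline offset
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                              # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)   # cycles/mm
    return freqs, mtf

def freq_at(level, freqs, mtf):
    """Spatial frequency where the MTF first drops to a given level."""
    return freqs[np.argmax(mtf <= level)]

# Synthetic LSFs sampled at 0.1 mm: "narrow" vs. "bloomed" focal spot.
x = np.arange(-64, 64) * 0.1
narrow = np.exp(-x**2 / (2 * 0.3**2))
wide   = np.exp(-x**2 / (2 * 0.6**2))
f, m_narrow = mtf_from_lsf(narrow, 0.1)
_, m_wide   = mtf_from_lsf(wide, 0.1)
```

Comparing the 50% MTF frequency of the two curves mirrors the paper's metric for quantifying blooming-induced resolution loss.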
Accuracy of single-abutment digital cast obtained using intraoral and cast scanners.
Lee, Jae-Jun; Jeong, Ii-Do; Park, Jin-Young; Jeon, Jin-Hun; Kim, Ji-Hwan; Kim, Woong-Chul
2017-02-01
Scanners are frequently used in the fabrication of dental prostheses. However, the accuracy of these scanners is variable, and little information is available. The purpose of this in vitro study was to compare the accuracy of cast scanners with that of intraoral scanners by using different image impression techniques. A poly(methyl methacrylate) master model was fabricated to replicate a maxillary first molar single-abutment tooth model. The master model was scanned with an accurate engineering scanner to obtain a true value (n=1) and with 2 intraoral scanners (CEREC Bluecam and CEREC Omnicam; n=6 each). The cast scanners scanned the master model and the dental stone cast duplicated from the master model (n=6). The trueness and precision of the data were measured using a 3-dimensional analysis program. The Kruskal-Wallis test was used to compare the different sets of scanning data, followed by a post hoc Mann-Whitney U test with a significance level modified by Bonferroni correction (α/6=.0083). The type 1 error level (α) was set at .05. The trueness value (root mean square: mean ± standard deviation) was 17.5 ± 1.8 μm for the Bluecam, 13.8 ± 1.4 μm for the Omnicam, 17.4 ± 1.7 μm for cast scanner 1, and 12.3 ± 0.1 μm for cast scanner 2. The differences between the Bluecam and cast scanner 1 and between the Omnicam and cast scanner 2 were not statistically significant (P>.0083), but a statistically significant difference was found between all the other pairs (P<.0083). The precision of the scanners was 12.7 ± 2.6 μm for the Bluecam, 12.5 ± 3.7 μm for the Omnicam, 9.2 ± 1.2 μm for cast scanner 1, and 6.9 ± 2.6 μm for cast scanner 2. The differences between the Bluecam and Omnicam and between the Omnicam and cast scanner 1 were not statistically significant (P>.0083), but there was a statistically significant difference between all the other pairs (P<.0083). The Omnicam in video image impression had better trueness than a cast scanner but with a similar level of precision.
Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
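The statistical pipeline above (a global Kruskal-Wallis test, then pairwise Mann-Whitney U tests at a Bonferroni-corrected α = .05/6) can be sketched as follows. The samples are synthetic draws loosely based on the reported trueness means and SDs, so the resulting p-values are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical trueness samples (um) for the four scanners, n=6 each.
groups = {
    "Bluecam": rng.normal(17.5, 1.8, 6),
    "Omnicam": rng.normal(13.8, 1.4, 6),
    "cast 1":  rng.normal(17.4, 1.7, 6),
    "cast 2":  rng.normal(12.3, 0.1, 6),
}

# Global test across all four groups.
h, p_global = stats.kruskal(*groups.values())

# Pairwise post hoc tests with Bonferroni-corrected alpha = .05/6 = .0083.
alpha = 0.05 / 6
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        u, p = stats.mannwhitneyu(groups[names[i]], groups[names[j]],
                                  alternative="two-sided")
        print(f"{names[i]} vs {names[j]}: p={p:.4f}, significant={p < alpha}")
```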
Smart markers for watershed-based cell segmentation.
Koyuncu, Can Fahrettin; Arslan, Salim; Durmaz, Irem; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2012-01-01
Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation that greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations even by human subjects. The approaches that can incorporate this knowledge into their segmentation algorithms have potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which the identification of its markers is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.
Lárraga-Gutiérrez, José Manuel; García-Garduño, Olivia Amanda; Treviño-Palacios, Carlos; Herrera-González, José Alfredo
2018-03-01
Flatbed scanners are the most frequently used readout instruments for radiochromic film dosimetry because of their low cost and high spatial resolution, among other advantages. These scanners use a fluorescent lamp and a CCD array as light source and detector, respectively. Recently, manufacturers of flatbed scanners replaced the fluorescent lamp with light-emitting diodes (LEDs) as the light source. The goal of this work is to evaluate the performance of a commercial flatbed scanner with an LED-based light source for radiochromic film dosimetry. Film readout consistency, response uniformity, film-scanner sensitivity, long-term stability, and total dose uncertainty were evaluated. Overall, the performance of the LED flatbed scanner is comparable to that of a cold cathode fluorescent lamp (CCFL). There are important spectral differences between LED and CCFL lamps that result in a higher sensitivity of the LED scanner in the green channel. Total dose uncertainty, film response reproducibility, and long-term stability of the LED scanner are slightly better than those of the CCFL. However, the LED-based scanner has a strongly non-uniform response, up to 9%, that must be adequately corrected for radiotherapy dosimetry QA. The differences in light emission spectra between LED and CCFL lamps and their potential impact on film-scanner sensitivity suggest that the design of a dedicated flatbed scanner with LEDs may improve sensitivity and dose uncertainty in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
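Radiochromic film readout on such scanners is commonly expressed as net optical density computed from pixel values in transmission mode. A minimal sketch; the 16-bit pixel values below are hypothetical:

```python
import numpy as np

def net_optical_density(pv_unexposed, pv_exposed):
    """Net optical density from scanner pixel values (transmission mode):
    netOD = log10(PV_before / PV_after). A larger dose darkens the film,
    lowering the transmitted pixel value and raising netOD."""
    return np.log10(np.asarray(pv_unexposed, float) /
                    np.asarray(pv_exposed, float))

# Illustrative green-channel 16-bit readings before and after irradiation.
net_od = net_optical_density(52000, 39000)
```

A calibration curve (dose vs. netOD) built per channel is then what the sensitivity comparison between LED and CCFL scanners rests on.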
Comparison of Cyberware PX and PS 3D human head scanners
NASA Astrophysics Data System (ADS)
Carson, Jeremy; Corner, Brian D.; Crockett, Eric; Li, Peng; Paquette, Steven
2008-02-01
A common limitation of laser line three-dimensional (3D) scanners is the inability to scan objects with surfaces that are either parallel to the laser line or that self-occlude. Filling in missing areas adds some unwanted inaccuracy to the 3D model. Capturing the human head with a Cyberware PS Head Scanner is an example of obtaining a model where the incomplete areas are difficult to fill accurately. The PS scanner uses a single vertical laser line to illuminate the head and is unable to capture data at the top of the head, where the line of sight is tangent to the surface, and under the chin, an area occluded by the chin when the subject looks straight ahead. The Cyberware PX Scanner was developed to obtain this missing 3D head data. The PX scanner uses two cameras offset at different angles to provide a more detailed head scan that captures surfaces missed by the PS scanner. The PX scanner cameras also use new technology to obtain color maps that are of higher resolution than those of the PS scanner. The two scanners were compared in terms of the amount of surface captured (surface area and volume) and the quality of head measurements when compared to direct measurements obtained through standard anthropometry methods. Relative to the PS scanner, the PX head scans were more complete and provided the full set of head measurements, but actual measurement values, when available from both scanners, were about the same.
An Algorithm to Automate Yeast Segmentation and Tracking
Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.
2013-01-01
Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
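The "set of thresholds" idea above can be sketched as majority voting over per-threshold segmentations. This sketch omits the yeast-specific priors (immobility, growth rate) that the published algorithm also incorporates:

```python
import numpy as np

def robust_threshold_segmentation(img, thresholds, min_votes=None):
    """Segment with a set of thresholds and keep pixels that a majority
    of the thresholds agree on, instead of trusting one optimized value."""
    thresholds = np.asarray(thresholds, float)
    if min_votes is None:
        min_votes = (len(thresholds) + 1) // 2   # simple majority
    votes = sum((img > t).astype(int) for t in thresholds)
    return votes >= min_votes

# Toy 2x2 "image": only pixels above a majority of thresholds survive.
img = np.array([[0.1, 0.4],
                [0.6, 0.9]])
mask = robust_threshold_segmentation(img, [0.3, 0.5, 0.7])
```

Combining many thresholds this way makes the final segmentation less sensitive to any single, possibly ill-chosen, cutoff.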
NASA Astrophysics Data System (ADS)
Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter
2008-03-01
Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is the evaluation of a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing, and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps, including state-of-the-art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83, p < 10^-13) measurements. It demonstrated high accuracy and at the same time maximized observer independence and time reduction, and thus its usefulness for clinical routine.
A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques
NASA Astrophysics Data System (ADS)
Schiabel, Homero; Matheus, Bruno R. N.; Angelo, Michele F.; Patrocínio, Ana Claudia; Ventura, Liliane
2011-03-01
As all women over the age of 40 are recommended to undergo mammographic exams every two years, the demands on radiologists to evaluate mammographic images in short periods of time have increased considerably. As tools to improve quality and accelerate analysis, CADe/Dx (computer-aided detection/diagnosis) schemes have been investigated, but very few complete CADe/Dx schemes have been developed, and most are restricted to detection rather than diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme developed by our research group, integrated with an imaging quality evaluation process. The basic structure consists of pre-processing modules based on image acquisition and digitization procedures (FFDM, CR, or film + scanner), a segmentation tool to detect clustered microcalcifications and suspect masses, and a classification scheme, which evaluates the presence of microcalcification clusters as well as possible malignant masses based on their contour. The aim is to provide not only information on the detected structures but also a pre-report with a BI-RADS classification. At this time the system still lacks an interface integrating all the modules. Despite this, it is functional as a prototype for clinical practice testing, with results comparable to others reported in the literature.
Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner
NASA Astrophysics Data System (ADS)
Isnaini, Ismet; Obi, Takashi; Yoshida, Eiji; Yamaya, Taiga
2014-07-01
Conventional PET scanners can image the whole body using many bed positions. An entire-body PET scanner with an extended axial field-of-view (FOV), which can trace whole-body uptake in a single acquisition and improve sensitivity for dynamic imaging, has therefore been desired. Such a scanner must process a large amount of data effectively; as a result, it suffers high dead time at the multiplexed detector grouping stage, and it has many oblique lines of response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated entire-body PET scanner, based on depth-of-interaction detectors, has a 2016-mm axial FOV and an 80-cm ring diameter. Since the entire-body PET scanner has higher single-event data loss than a conventional PET scanner at the grouping circuits, its NECR decreases. However, this data loss is mitigated by separating the axially arranged detectors into multiple parts: our choice of three groups of axially arranged detectors increased the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) also maintains high sensitivity and a high peak NECR while reducing the data size. The extremely oblique lines of response of the large axial FOV do not contribute much to the performance of the scanner: the total sensitivity with full MRD was only 15% higher than with about half the MRD, and the peak NECR saturated at about half the MRD. The entire-body PET scanner thus promises a large axial FOV and sufficient performance without using the full data.
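The NECR figure of merit used above is commonly defined as NECR = T²/(T + S + R) from the true (T), scattered (S), and random (R) coincidence rates; a minimal sketch (whether this study's variant weights randoms as R or 2R is an assumption left open):

```python
def necr(trues, scatters, randoms):
    """Noise-equivalent count rate: NECR = T^2 / (T + S + R).
    Some definitions use 2R when randoms are estimated from delayed
    coincidences; that choice is an assumption, not from the paper."""
    total = trues + scatters + randoms
    return trues ** 2 / total if total > 0 else 0.0
```

The quadratic numerator is why single-event losses at the grouping circuits depress NECR disproportionately.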
Jun, Sanghoon; Kim, Namkug; Seo, Joon Beom; Lee, Young Kyung; Lynch, David A
2017-12-01
We propose the use of ensemble classifiers to overcome inter-scanner variations in the differentiation of regional disease patterns in high-resolution computed tomography (HRCT) images of diffuse interstitial lung disease patients obtained from different scanners. A total of 600 rectangular 20 × 20-pixel regions of interest (ROIs) on HRCT images obtained from two different scanners (GE and Siemens) and the whole lung area of 92 HRCT images were classified as one of six regional pulmonary disease patterns by two expert radiologists. Textural and shape features were extracted from each ROI and the whole lung parenchyma. For automatic classification, individual and ensemble classifiers were trained and tested with the ROI dataset. We designed the following three experimental sets: an intra-scanner study in which the training and test sets were from the same scanner, an integrated scanner study in which the data from the two scanners were merged, and an inter-scanner study in which the training and test sets were acquired from different scanners. In the ROI-based classification, the ensemble classifiers showed better (p < 0.001) accuracy (89.73%, SD = 0.43) than the individual classifiers (88.38%, SD = 0.31) in the integrated scanner test. The ensemble classifiers also showed partial improvements in the intra- and inter-scanner tests. In the whole lung classification experiment, the quantification accuracies of the ensemble classifiers with integrated training (49.57%) were higher (p < 0.001) than those of the individual classifiers (48.19%). Furthermore, the ensemble classifiers also showed better performance in both the intra- and inter-scanner experiments. We concluded that the ensemble classifiers provide better performance when using integrated scanner images.
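One simple way to realize the ensemble idea above is majority voting over the individual classifiers (a generic sketch; the study's actual combination rule, base classifiers, and feature sets are not specified here):

```python
from collections import Counter

def ensemble_predict(classifiers, roi_features):
    """Majority vote over individual classifiers. Each classifier is a
    callable mapping an ROI's features to one of the six regional
    disease-pattern labels (labels and classifiers here are
    placeholders, not the study's)."""
    votes = Counter(clf(roi_features) for clf in classifiers)
    return votes.most_common(1)[0][0]
```

Voting dampens the scanner-specific biases of any single classifier, which is the intuition behind the inter-scanner gains reported.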
NASA Astrophysics Data System (ADS)
Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.
2016-09-01
There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, and variance filters and the Gaussian and Kuwahara filters.
Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82-0.98), specificity of 0.89 (range: 0.70-0.98), and accuracy of 0.90 (range: 0.76-0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus and scan parameters.
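The DSC used for these comparisons is the standard overlap measure, DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch for binary masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks:
    DSC = 2|A n B| / (|A| + |B|); returns 1.0 for two empty masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A DSC of 0.86 thus means the automated and manual masks share 86% of their combined area-weighted extent.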
2013-10-01
Scope: A major outcome is expected to be improved detection (specificity) in differentiating malignant from benign prostate cancer using a novel... Digital Rectal Examination, prostate-specific antigen, Four-Dimensional (4D) Echo-Planar J-Resolved Spectroscopic Imaging (EP-JRESI); Citrate, Choline... prostate biopsy ranged from 3 to 8, while prostate-specific antigen varied from 2.8 to 20.6 ng/mL (mean of 6.84 ng/mL). A Siemens 3T MRI Scanner with
High throughput laser processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harley, Gabriel; Pass, Thomas; Cousins, Peter John
A solar cell is formed using a solar cell ablation system. The ablation system includes a single laser source and several laser scanners. The laser scanners include a master laser scanner, with the rest of the laser scanners being slaved to the master laser scanner. A laser beam from the laser source is split into several laser beams, with the laser beams being scanned onto corresponding wafers using the laser scanners in accordance with one or more patterns. The laser beams may be scanned on the wafers using the same or different power levels of the laser source.
NASA Astrophysics Data System (ADS)
Jia, F.; Lichti, D.
2017-09-01
The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA), and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and to compare their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it based on scanning-geometry constraints. The goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime, and repeatability. The experimental environment was simulated from a room on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results show that PSO and GA provide similar solutions, while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and its fewer parameters to tune.
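The coverage objective described above (fewest viewpoints seeing every wall segment) can be made concrete with a greedy baseline. The paper itself uses SA, GA, and PSO, so this sketch only illustrates the objective, not their method; the `visibility` score table is a stand-in for the paper's, and the incidence-angle term is ignored:

```python
def greedy_viewpoints(visibility, n_segments):
    """Repeatedly pick the viewpoint covering the most still-uncovered
    wall segments; `visibility[v]` is the set of segment indices
    visible from viewpoint v. Stops when no viewpoint adds coverage."""
    uncovered = set(range(n_segments))
    chosen = []
    while uncovered:
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        if not visibility[best] & uncovered:
            break  # remaining segments are occluded from every viewpoint
        chosen.append(best)
        uncovered -= visibility[best]
    return chosen
```

Greedy set cover gives a quick feasible plan; the heuristics in the paper additionally trade off the number of viewpoints against the summed incidence angles.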
Ciernik, I Frank; Brown, Derek W; Schmid, Daniel; Hany, Thomas; Egli, Peter; Davis, J Bernard
2007-02-01
Volumetric assessment of PET signals is becoming increasingly relevant for radiotherapy (RT) planning. Here, we investigate the utility of 18F-choline PET signals to serve as a structure for semi-automatic segmentation for forward treatment planning of prostate cancer. 18F-choline PET and CT scans of ten patients with histologically proven prostate cancer without extracapsular growth were acquired using a combined PET/CT scanner. Target volumes were manually delineated on CT images using standard software. Volumes were also obtained from 18F-choline PET images using an asymmetrical segmentation algorithm. Planning target volumes (PTVs) were derived from CT- and 18F-choline PET-based clinical target volumes (CTVs) by automatic expansion, and comparative planning was performed. As a read-out for dose given to non-target structures, dose to the rectal wall was assessed. PTVs derived from CT and 18F-choline PET yielded comparable results. Optimal matching of CT- and 18F-choline PET-derived volumes in the lateral and cranial-caudal directions was obtained using a background-subtracted signal threshold of 23.0+/-2.6%. In the antero-posterior direction, where adaptation compensating for rectal signal overflow was required, optimal matching was achieved with a threshold of 49.5+/-4.6%. 3D-conformal planning with CT or 18F-choline PET resulted in comparable doses to the rectal wall. Choline PET signals of the prostate provide adequate spatial information amenable to standardized asymmetrical region-growing algorithms for PET-based target volume definition for external beam RT.
Multi-modal and targeted imaging improves automated mid-brain segmentation
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; D'Haese, Pierre F.; Pallavaram, Srivatsan; Newton, Allen T.; Claassen, Daniel O.; Dawant, Benoit M.; Landman, Bennett A.
2017-02-01
The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. In order to manually trace these structures, a combination of high-resolution and specialized sequences at 7T is used, but it is not feasible to scan clinical patients in those scanners. Targeted imaging sequences at 3T, such as FGATIR and other optimized inversion recovery sequences, have been presented which enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7T can be used to accurately segment these structures at 3T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice coefficient over 0.88 and a mean surface distance less than 1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a Dice coefficient over 0.75 and a mean surface distance less than 1.2 mm were achieved using a combination of T1 and FGATIR imaging sequences. In the substantia nigra and sub-thalamic nucleus, a Dice coefficient over 0.6 and a mean surface distance of less than 1.0 mm were achieved using the optimized inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together produced significantly better segmentation results than any individual modality (p < 0.05, Wilcoxon signed-rank test).
NASA Astrophysics Data System (ADS)
Maalek, R.; Lichti, D. D.; Ruwanpura, J.
2015-08-01
The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions, and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely, a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained in the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
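The coplanar/collinear classification step rests on the eigenvalue spectrum of a local covariance matrix; below is a plain (non-robust) PCA sketch of that test. The robust down-weighting of outliers that is the paper's actual contribution is omitted, and the 0.05 ratio thresholds are assumptions:

```python
import numpy as np

def local_shape(points):
    """Classify a neighborhood of 3-D points as 'planar', 'linear', or
    'volumetric' from the sorted eigenvalues of its covariance matrix.
    A dominant eigenvalue means linear; variance confined to a plane
    means planar; otherwise volumetric."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    w = np.sort(np.linalg.eigvalsh(cov))  # ascending: w[0] <= w[1] <= w[2]
    if w[2] > 0 and w[1] / w[2] < 0.05:
        return "linear"      # one dominant direction of variance
    if w[1] > 0 and w[0] / w[1] < 0.05:
        return "planar"      # negligible variance normal to a plane
    return "volumetric"
```

A robust variant would replace `np.cov` with an outlier-resistant covariance estimate so that dust and moving-object returns do not tilt the fitted plane.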
Complete-arch accuracy of intraoral scanners.
Treesh, Joshua C; Liacouras, Peter C; Taft, Robert M; Brooks, Daniel I; Raiciulescu, Sorana; Ellert, Daniel O; Grant, Gerald T; Ye, Ling
2018-04-30
Intraoral scanners have shown varied results in complete-arch applications. The purpose of this in vitro study was to evaluate the complete-arch accuracy of 4 intraoral scanners based on trueness and precision measurements compared with a known reference (trueness) and with each other (precision). Four intraoral scanners were evaluated: CEREC Bluecam, CEREC Omnicam, TRIOS Color, and Carestream CS 3500. A complete-arch reference cast was created and printed using a 3-dimensional dental cast printer with photopolymer resin. The reference cast was digitized using a laboratory-based white light 3-dimensional scanner. The printed reference cast was scanned 10 times with each intraoral scanner. The digital standard tessellation language (STL) files from each scanner were then registered to the reference file and compared for differences in trueness and precision using a 3-dimensional modeling software. Additionally, scanning time was recorded for each scan performed. The Wilcoxon signed rank, Kruskal-Wallis, and Dunn tests were used to detect differences in trueness, precision, and scanning time (α=.05). Carestream CS 3500 had lower overall trueness and precision than Bluecam and TRIOS Color. The fourth scanner, Omnicam, had intermediate trueness and precision. All of the scanners tended to underestimate the size of the reference file, with the exception of the Carestream CS 3500, which was more variable. Based on visual inspection of the color rendering of signed differences, the greatest amount of error tended to be in the posterior aspects of the arch, with local errors exceeding 100 μm for all scans. The single capture scanner Carestream CS 3500 had the overall longest scan times and was significantly slower than the continuous capture scanners TRIOS Color and Omnicam. Significant differences in both trueness and precision were found among the scanners. Scan times of the continuous capture scanners were faster than the single capture scanners.
Published by Elsevier Inc.
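Trueness and precision as defined in such studies can be sketched with point-wise deviations (a simplified stand-in for the registered 3D mesh comparisons actually performed; arrays of corresponding distances replace the meshes here):

```python
import numpy as np

def trueness_and_precision(scans, reference):
    """Trueness: mean absolute deviation of each repeated scan from the
    reference; precision: mean absolute pairwise deviation between the
    repeated scans themselves."""
    scans = [np.asarray(s, dtype=float) for s in scans]
    ref = np.asarray(reference, dtype=float)
    trueness = float(np.mean([np.mean(np.abs(s - ref)) for s in scans]))
    pairs = [np.mean(np.abs(a - b))
             for i, a in enumerate(scans) for b in scans[i + 1:]]
    precision = float(np.mean(pairs)) if pairs else 0.0
    return trueness, precision
```

The split explains how a scanner can be precise yet not true: its repeats agree with each other while all deviating from the reference in the same direction.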
Shape-specific perceptual learning in a figure-ground segregation task.
Yi, Do-Joon; Olson, Ingrid R; Chun, Marvin M
2006-03-01
What does perceptual experience contribute to figure-ground segregation? To study this question, we trained observers to search for symmetric dot patterns embedded in random dot backgrounds. Training improved shape segmentation, but learning did not completely transfer either to untrained locations or to untrained shapes. Such partial specificity persisted for a month after training. Interestingly, training on shapes in empty backgrounds did not help segmentation of the trained shapes in noisy backgrounds. Our results suggest that perceptual training increases the involvement of early sensory neurons in the segmentation of trained shapes, and that successful segmentation requires perceptual skills beyond shape recognition alone.
Spectral characterization of the LANDSAT-D multispectral scanner subsystems
NASA Technical Reports Server (NTRS)
Markham, B. L. (Principal Investigator); Barker, J. L.
1982-01-01
Relative spectral response data for the multispectral scanner subsystems (MSS) to be flown on LANDSAT-D and LANDSAT-D backup, the protoflight and flight models, respectively, are presented and compared to similar data for the Landsat 1, 2, and 3 subsystems. Channel-by-channel (six channels per band) outputs for soil and soybean targets were simulated and compared within each band and between scanners. The two LANDSAT-D scanners proved to be nearly identical in mean spectral response, but they exhibited some differences from the previous MSSs. Principal differences between the spectral responses of the D-scanners and previous scanners were: (1) a mean upper-band edge in the green band of 606 nm compared to previous means of 593 to 598 nm; (2) an average upper-band edge of 697 nm in the red band compared to previous averages of 701 to 710 nm; and (3) an average bandpass for the first near-IR band of 702-814 nm compared to a range of 693-793 to 697-802 nm for previous scanners. These differences caused the simulated D-scanner outputs to be 3 to 10 percent lower in the red band and 3 to 11 percent higher in the first near-IR band than previous scanners for the soybean target. Otherwise, outputs from soil and soybean targets were only slightly affected. The D-scanners were generally more uniform from channel to channel within bands than previous scanners.
Performance of an improved first generation optical CT scanner for 3D dosimetry
NASA Astrophysics Data System (ADS)
Qian, Xin; Adamovics, John; Wuu, Cheng-Shie
2013-12-01
Performance analysis of a modified 3D dosimetry optical scanner based on the first-generation optical CT scanner OCTOPUS is presented. The system consists of PRESAGE™ dosimeters, the modified 3D scanner, and a newly developed in-house user control panel written in LabVIEW, which provides more flexibility to optimize the mechanical control and data acquisition technique. The total scanning time has been significantly reduced from the initial 8 h to ∼2 h by using the modified scanner. The functional performance of the modified scanner has been evaluated in terms of the mechanical integrity and uncertainty of the data acquisition process. A comparison of optical density distributions between the modified scanner, OCTOPUS, and the treatment planning system has been carried out. The agreement between the modified scanner and treatment plans is shown to be comparable with that between OCTOPUS and treatment plans.
Bister, K; Löliger, H C; Duesberg, P H
1979-01-01
RNA and protein of the defective avian acute leukemia virus CMII, which causes myelocytomas in chickens, and of CMII-associated helper virus (CMIIAV) were investigated. The RNA of CMII measured 6 kilobases (kb) and that of CMIIAV measured 8.5 kb. By comparing more than 20 mapped oligonucleotides of CMII RNA with mapped and nonmapped oligonucleotides of acute leukemia viruses MC29 and MH2 and with mapped oligonucleotides of CMIIAV and other nondefective avian tumor viruses, three segments were distinguished in the oligonucleotide map of CMII RNA: (i) a 5' group-specific segment of 1.5 kb which was conserved among CMII, MC29, and MH2 and also homologous with gag-related oligonucleotides of CMIIAV and other helper viruses (hence, group specific); (ii) an internal segment of 2 kb which was conserved specifically among CMII, MC29, and MH2 and whose presence in CMII lends new support to the view that this class of genetic elements is essential for oncogenicity, because it was absent from an otherwise isogenic, nontransforming helper, CMIIAV; and (iii) a 3' group-specific segment of 2.5 kb which shared 13 of 14 oligonucleotides with CMIIAV and included env oligonucleotides of other nondefective viruses of the avian tumor virus group (hence, group specific). This segment and analogous map segments of MC29 and MH2 were not conserved at the level of shared oligonucleotides. CMII-transformed cells contained a nonstructural, gag gene-related protein of 90,000 daltons, distinguished by its size from 110,000-dalton MC29 and 100,000-dalton MH2 counterparts. The gag relatedness and similarity to the 110,000-dalton MC29 counterpart indicated that the 90,000-dalton CMII protein is translated from the 5' and internal segments of CMII RNA. The existence of conserved 5' and internal RNA segments and conserved nonstructural protein products in CMII, MC29, and MH2 indicates that these viruses belong to a related group, termed here the MC29 group.
Viruses of the MC29 group differ from one another mainly in their 3' RNA segments and in minor variations of their conserved RNA segments as well as by strain-specific size markers of their gag-related proteins. Because (i) the conserved 5' gag-related and internal RNA segments and their gag-related, nonvirion protein products correlate with the conserved oncogenic spectra of the MC29 group of viruses and because (ii) the internal RNA sequences and nonvirion proteins are not found in nondefective viruses, we propose that the conserved RNA and protein elements are necessary for oncogenicity and probably are the onc gene products of the MC29 group of viruses. PMID:232172
Jiang, Jun; Wu, Yao; Huang, Meiyan; Yang, Wei; Chen, Wufan; Feng, Qianjin
2013-01-01
Brain tumor segmentation is a clinical requirement for brain tumor diagnosis and radiotherapy planning. Automating this process is a challenging task due to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this paper, we propose a method to construct a graph by learning the population- and patient-specific feature sets of multimodal magnetic resonance (MR) images and by utilizing the graph-cut to achieve a final segmentation. The probabilities that each pixel belongs to the foreground (tumor) and the background are estimated by global and custom classifiers trained on the population- and patient-specific feature sets, respectively. The proposed method is evaluated using 23 glioma image sequences, and the segmentation results are compared with other approaches. The encouraging evaluation results obtained, i.e., DSC (84.5%), Jaccard (74.1%), sensitivity (87.2%), and specificity (83.1%), show that the proposed method can effectively make use of both population- and patient-specific information. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
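The graph-cut formulation alluded to above needs per-pixel unary costs derived from the classifier probabilities; below is a hedged sketch of one common construction (the linear blending of the two classifiers, the weight `w`, and the negative-log form are assumptions, not the paper's stated energy):

```python
import math

def unary_costs(p_global, p_custom, w=0.5):
    """Blend the population (global) and patient-specific (custom)
    foreground probabilities and convert them into the negative-log
    unary terms a graph-cut solver minimizes; returns
    (foreground_cost, background_cost) for one pixel."""
    eps = 1e-9  # avoid log(0)
    p = w * p_global + (1.0 - w) * p_custom
    return -math.log(p + eps), -math.log(1.0 - p + eps)
```

A max-flow/min-cut solver would then combine these unary terms with pairwise smoothness costs between neighboring pixels to produce the final segmentation.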
The open for business model of the bithorax complex in Drosophila.
Maeda, Robert K; Karch, François
2015-09-01
After nearly 30 years of effort, Ed Lewis published his 1978 landmark paper in which he described the analysis of a series of mutations that affect the identity of the segments that form along the anterior-posterior (AP) axis of the fly (Lewis 1978). The mutations behaved in a non-canonical fashion in complementation tests, forming what Ed Lewis called a "pseudo-allelic" series. Because of this, he never thought that the mutations represented segment-specific genes. As all of these mutations were grouped to a particular area of the Drosophila third chromosome, the locus became known as the bithorax complex (BX-C). One of the key findings of Lewis' article was that it revealed for the first time, to a wide scientific audience, that there was a remarkable correlation between the order of the segment-specific mutations along the chromosome and the order of the segments they affected along the AP axis. In Ed Lewis' eyes, the mutants he discovered affected "segment-specific functions" that were sequentially activated along the chromosome as one moves from anterior to posterior along the body axis (the colinearity concept now cited in elementary biology textbooks). The nature of the "segment-specific functions" started to become clear when the BX-C was cloned through the pioneering chromosomal walk initiated in the mid 1980s by the Hogness and Bender laboratories (Bender et al. 1983a; Karch et al. 1985). Through this molecular biology effort, and along with genetic characterizations performed by Gines Morata's group in Madrid (Sanchez-Herrero et al. 1985) and Robert Whittle's in Sussex (Tiong et al. 1985), it soon became clear that the whole BX-C encoded only three protein-coding genes (Ubx, abd-A, and Abd-B). Later, immunostaining against the Ubx protein hinted that the segment-specific functions could, in fact, be cis-regulatory elements regulating the expression of the three protein-coding genes.
In 1987, Peifer, Karch, and Bender proposed a comprehensive model of the functioning of the BX-C, in which the "segment-specific functions" appear as segment-specific enhancers regulating Ubx, abd-A, or Abd-B (Peifer et al. 1987). Key to their model was that the segmental address of these enhancers was not an inherent ability of the enhancers themselves, but was determined by the chromosomal location in which they lay. In their view, the sequential activation of the segment-specific functions resulted from the sequential opening of chromatin domains along the chromosome as one moves from anterior to posterior. This model soon became known as the open for business model. While the open for business model is quite easy to visualize at a conceptual level, molecular evidence to validate this model has been missing for almost 30 years. The recent publication describing the outstanding, joint effort from the Bender and Kingston laboratories now provides the missing proof to support this model (Bowman et al. 2014). The purpose of this article is to review the open for business model and take the reader through the genetic arguments that led to its elaboration.
Arizona TeleMedicine Network: System Procurement Specifications.
ERIC Educational Resources Information Center
Atlantic Research Corp., Alexandria, VA.
Providing general specifications and system descriptions for segments within the Arizona TeleMedicine Project (a telecommunication system designed to deliver health services to rurally isolated American Indians in Arizona), this document, when used with the appropriate route segment document, will completely describe the project's required…
An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.
Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero
2017-04-01
The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
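A time series of daily phantom measurements lends itself to simple automated alerting; the control rule below (flag a measurement deviating from the historical baseline by more than k standard deviations) is a generic sketch, not the workflow's documented analysis:

```python
import statistics

def qa_alert(history, new_value, k=3.0):
    """Return True when a new daily QA metric (e.g., phantom SNR)
    deviates from the mean of `history` by more than k standard
    deviations; `history` needs at least two prior measurements."""
    baseline = statistics.mean(history)
    spread = statistics.stdev(history)
    return abs(new_value - baseline) > k * spread
```

Running such a rule per metric and per scanner is one way to turn the recorded time series into the "decreased reaction time" the abstract claims for automation.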
The Influenza A Virus PB2, PA, NP, and M Segments Play a Pivotal Role during Genome Packaging
Gao, Qinshan; Chou, Yi-Ying; Doğanay, Sultan; Vafabakhsh, Reza; Ha, Taekjip
2012-01-01
The genomes of influenza A viruses consist of eight negative-strand RNA segments. Recent studies suggest that influenza viruses are able to specifically package their segmented genomes into the progeny virions. Segment-specific packaging signals of influenza virus RNAs (vRNAs) are located in the 5′ and 3′ noncoding regions, as well as in the terminal regions, of the open reading frames. How these packaging signals function during genome packaging remains unclear. Previously, we generated a 7-segmented virus in which the hemagglutinin (HA) and neuraminidase (NA) segments of the influenza A/Puerto Rico/8/34 virus were replaced by a chimeric influenza C virus hemagglutinin/esterase/fusion (HEF) segment carrying the HA packaging sequences. The robust growth of the HEF virus suggested that the NA segment is not required for the packaging of other segments. In this study, in order to determine the roles of the other seven segments during influenza A virus genome assembly, we continued to use this HEF virus as a tool and analyzed the effects of replacing the packaging sequences of other segments with those of the NA segment. Our results showed that deleting the packaging signals of the PB1, HA, or NS segment had no effect on the growth of the HEF virus, while growth was greatly impaired when the packaging sequence of the PB2, PA, nucleoprotein (NP), or matrix (M) segment was removed. These results indicate that the PB2, PA, NP, and M segments play a more important role than the remaining four vRNAs during the genome-packaging process. PMID:22532680
Taylor, Isaiah; Wang, Ying; Seitz, Kati; Baer, John; Bennewitz, Stefan; Mooney, Brian P.; Walker, John C.
2016-01-01
Receptor-like protein kinases (RLKs) are the largest family of plant transmembrane signaling proteins. Here we present functional analysis of HAESA, an RLK that regulates floral organ abscission in Arabidopsis. Through in vitro and in vivo analysis of HAE phosphorylation, we provide evidence that a conserved phosphorylation site on a region of the HAE protein kinase domain known as the activation segment positively regulates HAE activity. Additional analysis has identified another putative activation segment phosphorylation site common to multiple RLKs that potentially modulates HAE activity. Comparative analysis suggests that phosphorylation of this second activation segment residue is an RLK specific adaptation that may regulate protein kinase activity and substrate specificity. A growing number of RLKs have been shown to exhibit biologically relevant dual specificity toward serine/threonine and tyrosine residues, but the mechanisms underlying dual specificity of RLKs are not well understood. We show that a phospho-mimetic mutant of both HAE activation segment residues exhibits enhanced tyrosine auto-phosphorylation in vitro, indicating phosphorylation of this residue may contribute to dual specificity of HAE. These results add to an emerging framework for understanding the mechanisms and evolution of regulation of RLK activity and substrate specificity. PMID:26784444
Izquierdo-Garcia, David; Catana, Ciprian
2018-01-01
Attenuation correction (AC) is one of the most important challenges in the recently introduced combined positron emission tomography/magnetic resonance imaging (PET/MR) scanners. PET/MR AC (MR-AC) approaches aim to develop methods that allow accurate estimation of the linear attenuation coefficients (LACs) of the tissues and other components located in the PET field of view (FoV). MR-AC methods can be divided into three main categories: segmentation-, atlas-, and PET-based. This review aims to provide a comprehensive list of the state-of-the-art MR-AC approaches as well as their pros and cons. The main sources of artifacts, such as body truncation and metallic implants, as well as hardware correction, will be presented. Finally, this review will discuss the current status of MR-AC approaches for clinical applications. PMID:26952727
Decadal Changes in Global Ocean Chlorophyll
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Conkright, Margarita E.; Koblinsky, Chester J. (Technical Monitor)
2001-01-01
The global ocean chlorophyll archive produced by the Coastal Zone Color Scanner (CZCS) was revised using algorithms compatible with the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), and both records were blended with in situ data. This methodology permitted a quantitative comparison of decadal changes in global ocean chlorophyll between the CZCS (1979-1986) and SeaWiFS (Sep. 1997-Dec. 2000) records. Global seasonal means of ocean chlorophyll decreased between the two observational segments, by 8% in winter to 16% in autumn. Chlorophyll in the high latitudes was responsible for most of the decadal change. Conversely, chlorophyll concentrations in the low latitudes increased. The differences and similarities of the two data records provide evidence of how the Earth's climate may be changing and how ocean biota respond. Furthermore, the results have implications for the ocean carbon cycle.
A comparative evaluation of intraoral and extraoral digital impressions: An in vivo study.
Sason, Gursharan Kaur; Mistry, Gaurang; Tabassum, Rubina; Shetty, Omkar
2018-01-01
The accuracy of a dental impression is determined by two factors: "trueness" and "precision." The scanners used in dentistry are relatively new to the market, and very few studies have compared the precision and trueness of intraoral scanners with those of extraoral scanners. The aim of this study was to evaluate and compare the accuracy of intraoral and extraoral digital impressions. Ten dentulous participants (male/female) aged 18-45 years, each with an asymptomatic, endodontically treated mandibular first molar with adjacent teeth present, were selected for this study. The test tooth was prepared, and dimples were made using a round diamond point on the bucco-occlusal, mesio-occlusal, disto-occlusal, and linguo-occlusal line angles; these were measured intraorally with a digital Vernier caliper to obtain reference datasets. The test tooth was then scanned three times with the intraoral (IO) scanner (CS 3500, Carestream Dental). Impressions were also made using addition silicone impression material (3M™ ESPE), and dental casts were poured in Type IV dental stone (Kalrock-Kalabhai Karson India Pvt. Ltd., India), which were later scanned three times with the extraoral (EO) scanner (LAVA™ Scan ST Design System, 3M™ ESPE). The datasets obtained from the intraoral and extraoral scanners were exported to Dental Wings software, and readings were obtained. The datasets were divided into four groups and statistically analyzed: the repeated-measures ANOVA test was used to compare differences between the groups, the independent t-test for comparison between the readings of the intraoral and extraoral scanners, and the least significant difference test for comparison of the reference datasets with the intraoral and extraoral scanners, respectively.
A level of statistical significance of P < 0.05 was set. The precision values ranged from 20.7 to 33.35 μm for the intraoral scanner and from 19.5 to 37 μm for the extraoral scanner. The mean deviations were 19.6 μm mesiodistally (MD) and 16.4 μm buccolingually (BL) for the intraoral scanner, and 24.0 μm MD and 22.5 μm BL for the extraoral scanner. The mean trueness values of the intraoral scanner (413 μm) were closer to the actual measurements (459 μm) than those of the extraoral scanner (396 μm). The intraoral scanner showed higher "precision" and "trueness" than the extraoral scanner.
NASA Astrophysics Data System (ADS)
Grochocka, M.
2013-12-01
Mobile laser scanning (MLS) is a dynamically developing measurement technology that is becoming increasingly widespread for acquiring three-dimensional spatial information. Continuous technical progress, based on the use of new tools and on better use of existing resources, reveals new horizons for the extensive use of MLS technology. Mobile laser scanning systems are usually used for mapping linear objects, in particular for the inventory of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is made from the perspective of the user of the object and does not interfere with normal movement and work. This paper presents initial results of segmenting data acquired by MLS. The data used in this work were obtained as part of an inventory measurement of railway line infrastructure. The point clouds were measured using profile scanners installed on a railway platform. To process the data, the tools of the open-source Point Cloud Library (PCL) were used; these tools are provided as C++ template libraries. PCL is an open, independent, large-scale project for 2D/3D image and point cloud processing. PCL is released under the terms of the BSD (Berkeley Software Distribution) license, which means it is free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. The data segmentation is based on the pcl_segmentation library, which contains segmentation algorithms for separating point clusters. These algorithms are best suited to point clouds consisting of a number of spatially isolated regions. The library performs cluster extraction based on model fitting with the sample consensus method for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operations are carried out using the Eigen library, a set of C++ templates for linear algebra.
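The cluster-extraction idea behind pcl_segmentation can be illustrated without PCL itself. The sketch below implements a naive Euclidean clustering in Python: points closer than a tolerance are grouped, so spatially isolated regions fall into separate clusters (PCL's EuclideanClusterExtraction performs the same grouping, but accelerated with a k-d tree). The points and tolerance are illustrative values, not data from the railway survey.

```python
import numpy as np

def euclidean_clusters(points, tol):
    """Group points so that any two points closer than `tol` end up in the
    same cluster (naive O(n^2) flood fill; PCL uses a k-d tree instead)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) < tol]
            for j in near:
                unvisited.remove(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(sorted(cluster))
    return clusters

# Two spatially isolated groups of 3D points, e.g. two trackside objects.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                [5.0, 5.0, 5.0], [5.1, 5.0, 5.0]])
print(sorted(len(c) for c in euclidean_clusters(pts, tol=0.5)))  # → [2, 3]
```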
51. View of upper radar scanner switch in radar scanner ...
51. View of upper radar scanner switch in radar scanner building 105 from upper catwalk level showing emanating waveguides from upper switch (upper one-fourth of photograph) and emanating waveguides from lower radar scanner switch in vertical runs. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
High throughput solar cell ablation system
Harley, Gabriel; Pass, Thomas; Cousins, Peter John; Viatella, John
2014-10-14
A solar cell is formed using a solar cell ablation system. The ablation system includes a single laser source and several laser scanners. The laser scanners include a master laser scanner, with the rest of the laser scanners being slaved to the master laser scanner. A laser beam from the laser source is split into several laser beams, with the laser beams being scanned onto corresponding wafers using the laser scanners in accordance with one or more patterns. The laser beams may be scanned on the wafers using the same or different power levels of the laser source.
High throughput solar cell ablation system
Harley, Gabriel; Pass, Thomas; Cousins, Peter John; Viatella, John
2012-09-11
A solar cell is formed using a solar cell ablation system. The ablation system includes a single laser source and several laser scanners. The laser scanners include a master laser scanner, with the rest of the laser scanners being slaved to the master laser scanner. A laser beam from the laser source is split into several laser beams, with the laser beams being scanned onto corresponding wafers using the laser scanners in accordance with one or more patterns. The laser beams may be scanned on the wafers using the same or different power levels of the laser source.
NASA Technical Reports Server (NTRS)
Cook, M.
1990-01-01
Qualification testing of Combustion Engineering's AMDATA Intraspect/98 Data Acquisition and Imaging System, as applied to the redesigned solid rocket motor field joint capture feature case-to-insulation bondline inspection, was performed. Testing was performed at M-111, the Thiokol Corp. Inert Parts Preparation Building. The purpose of the inspection was to verify the integrity of the capture feature area case-to-insulation bondline. The capture feature scanner was calibrated over an intentional 1.0 by 1.0 in. case-to-insulation unbond. The capture feature scanner was then used to scan 60 deg of a capture feature field joint. Calibration of the capture feature scanner was then rechecked over the intentional unbond to ensure that the calibration settings did not change during the case scan. This procedure was successfully performed five times to qualify the unbond detection capability of the capture feature scanner. The capture feature scanner qualified in this test contains many points of mechanical instability that can affect the overall ultrasonic signal response. A new-generation scanner, designated the sigma scanner, should be implemented to replace the current configuration scanner. The sigma scanner eliminates the unstable connection points of the current scanner and has additional inspection capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.
Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for the following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE), and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm, and 0.22 cm (female, prone); 0.85, 0.28 cm, and 0.22 cm (female, supine); 0.89, 0.21 cm, and 0.17 cm (male, supine); and 0.88, 0.23 cm, and 0.17 cm (male, prone), respectively.
Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape model based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method needs only two pretreatment imaging data sets as prior knowledge, is independent of patient gender and patient treatment position, and allows the segmentation to be adapted manually at a local level.
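The Dice similarity coefficient used in this validation is a standard overlap measure between an automatic segmentation and a manual delineation. A minimal sketch with toy 2D masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two boolean masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "bladder" masks: an automatic segmentation shifted one row
# relative to the manual delineation.
manual = np.zeros((10, 10), bool); manual[2:8, 2:8] = True  # 36 voxels
auto   = np.zeros((10, 10), bool); auto[3:9, 2:8]  = True   # shifted down
print(round(dice(manual, auto), 3))  # → 0.833
```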
Clarkson, Sean; Wheat, Jon; Heller, Ben; Choppin, Simon
2016-01-01
Use of anthropometric data to infer sporting performance is increasing in popularity, particularly within elite sport programmes. Measurement typically follows standards set by the International Society for the Advancement of Kinanthropometry (ISAK). However, such techniques are time consuming, which reduces their practicality. Schranz et al. recently suggested 3D body scanners could replace current measurement techniques; however, current systems are costly. Recent interest in natural user interaction has led to a range of low-cost depth cameras capable of producing 3D body scans, from which anthropometrics can be calculated. A scanning system comprising 4 depth cameras was used to scan 4 cylinders, representative of the body segments. Girth measurements were calculated from the 3D scans and compared to gold standard measurements. Requirements of a Level 1 ISAK practitioner were met in all 4 cylinders, and ISO standards for scan-derived girth measurements were met in the 2 larger cylinders only. A fixed measurement bias was identified that could be corrected with a simple offset factor. Further work is required to determine comparable performance across a wider range of measurements performed upon living participants. Nevertheless, findings of the study suggest such a system offers many advantages over current techniques, having a range of potential applications.
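A scan-derived girth measurement of the kind evaluated here can be sketched as slicing the point cloud at a given height, ordering the slice points by polar angle about their centroid, and summing the perimeter of the resulting polygon. The synthetic cylinder slice below (radius 50 mm) stands in for a scanned body segment; the sampling density is an assumption of the example.

```python
import numpy as np

def girth_from_slice(points_xy):
    """Approximate the girth of a cross-section by ordering the slice
    points by polar angle about their centroid and summing the closed
    polygon's edge lengths."""
    c = points_xy.mean(axis=0)
    ang = np.arctan2(points_xy[:, 1] - c[1], points_xy[:, 0] - c[0])
    ordered = points_xy[np.argsort(ang)]
    closed = np.vstack([ordered, ordered[:1]])  # close the polygon
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()

# Synthetic cylinder slice of radius 50 mm: true girth = 2*pi*50 ≈ 314.16 mm.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
slice_pts = np.column_stack([50 * np.cos(theta), 50 * np.sin(theta)])
print(round(girth_from_slice(slice_pts), 1))  # → 314.1
```

On noisy depth-camera data the same idea would need outlier filtering first, which is where the fixed measurement bias noted in the abstract can enter.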
Patient-specific dose estimation for pediatric chest CT
Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Frush, Donald P.
2008-01-01
Current methods for organ and effective dose estimations in pediatric CT are largely patient generic. Physical phantoms and computer models have only been developed for standard/limited patient sizes at discrete ages (e.g., 0, 1, 5, 10, 15 years old) and do not reflect the variability of patient anatomy and body habitus within the same size/age group. In this investigation, full-body computer models of seven pediatric patients in the same size/protocol group (weight: 11.9–18.2 kg) were created based on the patients’ actual multi-detector array CT (MDCT) data. Organs and structures in the scan coverage were individually segmented. Other organs and structures were created by morphing existing adult models (developed from visible human data) to match the framework defined by the segmented organs, referencing the organ volume and anthropometry data in ICRP Publication 89. Organ and effective dose of these patients from a chest MDCT scan protocol (64 slice LightSpeed VCT scanner, 120 kVp, 70 or 75 mA, 0.4 s gantry rotation period, pitch of 1.375, 20 mm beam collimation, and small body scan field-of-view) was calculated using a Monte Carlo program previously developed and validated to simulate radiation transport in the same CT system. The seven patients had normalized effective dose of 3.7–5.3 mSv/100 mAs (coefficient of variation: 10.8%). Normalized lung dose and heart dose were 10.4–12.6 mGy/100 mAs and 11.2–13.3 mGy/100 mAs, respectively. Organ dose variations across the patients were generally small for large organs in the scan coverage (<7%), but large for small organs in the scan coverage (9%–18%) and for partially or indirectly exposed organs (11%–77%). Normalized effective dose correlated weakly with body weight (correlation coefficient: r = −0.80).
Normalized lung dose and heart dose correlated strongly with mid-chest equivalent diameter (lung: r=−0.99, heart: r=−0.93); these strong correlation relationships can be used to estimate patient-specific organ dose for any other patient in the same size/protocol group who undergoes the chest scan. In summary, this work reported the first assessment of dose variations across pediatric CT patients in the same size/protocol group due to the variability of patient anatomy and body habitus and provided a previously unavailable method for patient-specific organ dose estimation, which will help in assessing patient risk and optimizing dose reduction strategies, including the development of scan protocols. PMID:19175138
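The proposed use of the strong dose-diameter correlation amounts to fitting a simple linear model and evaluating it at a new patient's mid-chest equivalent diameter. The sketch below illustrates the idea with invented numbers that merely mimic the reported ranges; they are not the paper's data.

```python
import numpy as np

# Hypothetical (mid-chest equivalent diameter [cm],
# normalized lung dose [mGy/100 mAs]) pairs, invented to mimic the
# strong negative correlation reported in the study.
diam = np.array([14.0, 14.8, 15.5, 16.1, 16.9, 17.6, 18.2])
lung = np.array([12.6, 12.2, 11.8, 11.4, 11.0, 10.7, 10.4])

slope, intercept = np.polyfit(diam, lung, 1)  # least-squares line

def predict_lung_dose(d):
    """Estimate normalized lung dose for a patient with mid-chest
    equivalent diameter d (cm), under the fitted linear model."""
    return slope * d + intercept

print(round(predict_lung_dose(16.5), 2))
```

Multiplying the prediction by the scan's actual mAs/100 would then give the patient-specific organ dose estimate described in the text.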
Patient-specific dose estimation for pediatric chest CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Xiang; Samei, Ehsan; Segars, W. Paul
2008-12-15
Current methods for organ and effective dose estimations in pediatric CT are largely patient generic. Physical phantoms and computer models have only been developed for standard/limited patient sizes at discrete ages (e.g., 0, 1, 5, 10, 15 years old) and do not reflect the variability of patient anatomy and body habitus within the same size/age group. In this investigation, full-body computer models of seven pediatric patients in the same size/protocol group (weight: 11.9-18.2 kg) were created based on the patients' actual multi-detector array CT (MDCT) data. Organs and structures in the scan coverage were individually segmented. Other organs and structures were created by morphing existing adult models (developed from visible human data) to match the framework defined by the segmented organs, referencing the organ volume and anthropometry data in ICRP Publication 89. Organ and effective dose of these patients from a chest MDCT scan protocol (64 slice LightSpeed VCT scanner, 120 kVp, 70 or 75 mA, 0.4 s gantry rotation period, pitch of 1.375, 20 mm beam collimation, and small body scan field-of-view) was calculated using a Monte Carlo program previously developed and validated to simulate radiation transport in the same CT system. The seven patients had normalized effective dose of 3.7-5.3 mSv/100 mAs (coefficient of variation: 10.8%). Normalized lung dose and heart dose were 10.4-12.6 mGy/100 mAs and 11.2-13.3 mGy/100 mAs, respectively. Organ dose variations across the patients were generally small for large organs in the scan coverage (<7%), but large for small organs in the scan coverage (9%-18%) and for partially or indirectly exposed organs (11%-77%). Normalized effective dose correlated weakly with body weight (correlation coefficient: r=-0.80).
Normalized lung dose and heart dose correlated strongly with mid-chest equivalent diameter (lung: r=-0.99, heart: r=-0.93); these strong correlation relationships can be used to estimate patient-specific organ dose for any other patient in the same size/protocol group who undergoes the chest scan. In summary, this work reported the first assessment of dose variations across pediatric CT patients in the same size/protocol group due to the variability of patient anatomy and body habitus and provided a previously unavailable method for patient-specific organ dose estimation, which will help in assessing patient risk and optimizing dose reduction strategies, including the development of scan protocols.
Matsumoto, Keiichi; Kitamura, Keishi; Mizuta, Tetsuro; Shimizu, Keiji; Murase, Kenya; Senda, Michio
2006-02-20
Transmission scanning can be successfully performed with a Cs-137 single-photon-emitting point source for three-dimensional PET imaging. This method is effective for postinjection transmission scanning because of the difference in photon energies. However, scatter contamination in the transmission data lowers the measured attenuation coefficients. The purpose of this study was to investigate the influence of object scatter on the accuracy of the attenuation coefficients measured on transmission images. We also compared the results with the conventional germanium line-source method. Two different types of PET scanner, the SET-3000 G/X (Shimadzu Corp.) and the ECAT EXACT HR+ (Siemens/CTI), were used. For transmission scanning, the SET-3000 G/X used a Cs-137 point source and the ECAT HR+ a Ge-68/Ga-68 line source. With the SET-3000 G/X, we performed transmission measurements at two energy gate settings, the standard 600-800 keV as well as 500-800 keV. The energy gate setting of the ECAT HR+ was 350-650 keV. The effects of scatter in uniform phantoms with cross-sectional areas of 201 cm(2), 314 cm(2), 628 cm(2) (two 20 cm diameter phantoms in apposition), and 943 cm(2) (three 20 cm diameter phantoms stacked) were acquired without emission activity. First, we evaluated the attenuation coefficients of the two types of transmission scan using region-of-interest (ROI) analysis. In addition, we evaluated the attenuation coefficients with and without segmentation for the Cs-137 transmission images using the same analysis. The segmentation method was a histogram-based soft-tissue segmentation process that can also be applied to reconstructed transmission images. In the Cs-137 experiment, the maximum underestimation was 3% without segmentation, which was reduced to less than 1% with segmentation at the center of the largest phantom.
In the Ge-68/Ga-68 experiment, the difference in mean attenuation coefficients was stable across all phantoms. We evaluated the accuracy of the attenuation coefficients of Cs-137 single-photon transmission scans. The results for Cs-137 suggest that the scattered-photon contribution depends on object size. Although the Cs-137 transmission scans contained scattered photons, the attenuation coefficient error could be reduced by using the segmentation method.
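A histogram-based soft-tissue segmentation of the kind described can be sketched as resetting voxels whose measured coefficient lies near the soft-tissue peak to the known reference value, which removes a scatter-induced underestimation. The numbers below are illustrative: 0.096 cm^-1 is the approximate attenuation coefficient of water/soft tissue at the 511 keV PET annihilation energy, the window width is an assumption, and the 3% bias mirrors the underestimation reported in the abstract.

```python
import numpy as np

MU_SOFT = 0.096  # approx. soft-tissue mu at 511 keV, cm^-1 (illustrative)

def segment_soft_tissue(mu_image, window=0.02):
    """Histogram-style segmentation sketch: voxels whose measured mu falls
    within `window` of the soft-tissue peak are reset to the known value,
    suppressing scatter-induced bias; other voxels are left unchanged."""
    out = mu_image.copy()
    out[np.abs(mu_image - MU_SOFT) < window] = MU_SOFT
    return out

# Uniform phantom whose measured coefficients read ~3% low due to scatter.
measured = np.full((64, 64), MU_SOFT * 0.97)
corrected = segment_soft_tissue(measured)
print(round(float(corrected.mean()), 3))  # → 0.096
```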
Lago, M A; Rupérez, M J; Monserrat, C; Martínez-Martínez, F; Martínez-Sanchis, S; Larra, E; Díez-Ajenjo, M A; Peris-Martínez, C
2015-11-01
The purpose of this study was the simulation of the implantation of intrastromal corneal-ring segments for patients with keratoconus. The aim of the study was the prediction of the corneal curvature recovery after this intervention. Seven patients diagnosed with keratoconus and treated by implantation of intrastromal corneal-ring segments were enrolled in the study. The 3D geometry of the cornea of each patient was obtained from its specific topography, and a hyperelastic model was assumed to characterize its mechanical behavior. To simulate the intervention, the intrastromal corneal-ring segments were modeled and placed at the same location at which they were placed in the surgery. The finite element method was then used to simulate the deformation of the cornea after the ring segment insertion. Finally, the predicted curvature was compared with the real curvature after the intervention. The simulation of the ring segment insertion was validated by comparing the curvature change with the data after the surgery. Results showed a flattening of the cornea which was in consonance with the real improvement of the corneal curvature. The mean difference obtained was 0.74 mm using properties of healthy corneas. For the first time, a patient-specific model of the cornea has been used to predict the outcomes of surgery after intrastromal corneal-ring segment implantation in real patients. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Halse, M. R.; Hudson, W. J.
1986-01-01
Describes an X-Y scanner used to create acoustic holograms. Scanner is computer controlled and can be adapted to digitize pictures. Scanner geometry is discussed. An appendix gives equipment details. The control program in ATOM BASIC and 6502 machine code is available from the authors. (JM)
Frazee, D
2001-01-01
I hope that you will find the product matrix to be a useful tool for making comparisons between vendors and scanners. Please keep in mind that the vendors have directly provided the specific answers to the questions within the matrix. Neither the author nor Radiology Management shall be held responsible for any misrepresented or erroneous data.
NASA Technical Reports Server (NTRS)
Barker, J. L.
1983-01-01
Tables and graphs show the results of the spectral, radiometric, and geometric characterization of LANDSAT 4 sensors associated with imagery and of the imagery associated with sensors and processing. Specifications for the various parameters are compared with the preflight and flight values.
ERIC Educational Resources Information Center
Polka, Linda; Orena, Adriel John; Sundara, Megha; Worrall, Jennifer
2017-01-01
Previous research shows that word segmentation is a language-specific skill. Here, we tested segmentation of bi-syllabic words in two languages (French; English) within the same infants in a single test session. In Experiment 1, monolingual 8-month-olds (French; English) segmented bi-syllabic words in their native language, but not in an…
Ultra-High-Resolution Computed Tomography of the Lung: Image Quality of a Prototype Scanner.
Kakinuma, Ryutaro; Moriyama, Noriyuki; Muramatsu, Yukio; Gomi, Shiho; Suzuki, Masahiro; Nagasawa, Hirobumi; Kusumoto, Masahiko; Aso, Tomohiko; Muramatsu, Yoshihisa; Tsuchida, Takaaki; Tsuta, Koji; Maeshima, Akiko Miyagi; Tochigi, Naobumi; Watanabe, Shun-Ichi; Sugihara, Naoki; Tsukagoshi, Shinsuke; Saito, Yasuo; Kazama, Masahiro; Ashizawa, Kazuto; Awai, Kazuo; Honda, Osamu; Ishikawa, Hiroyuki; Koizumi, Naoya; Komoto, Daisuke; Moriya, Hiroshi; Oda, Seitaro; Oshiro, Yasuji; Yanagawa, Masahiro; Tomiyama, Noriyuki; Asamura, Hisao
2015-01-01
The image noise and image quality of a prototype ultra-high-resolution computed tomography (U-HRCT) scanner were evaluated and compared with those of conventional high-resolution CT (C-HRCT) scanners. This study was approved by the institutional review board. A U-HRCT scanner prototype with 0.25 mm x 4 rows and operating at 120 mAs was used. The C-HRCT images were obtained using a 0.5 mm x 16 or 0.5 mm x 64 detector-row CT scanner operating at 150 mAs. Images from both scanners were reconstructed at 0.1-mm intervals; the slice thickness was 0.25 mm for the U-HRCT scanner and 0.5 mm for the C-HRCT scanners. For both scanners, the display field of view was 80 mm. The image noise of each scanner was evaluated using a phantom. A total of 53 U-HRCT and C-HRCT image sets selected from 37 lung nodules were then observed and graded using a 5-point score by 10 board-certified thoracic radiologists. The images were presented to the observers randomly and in a blinded manner. The image noise for U-HRCT (100.87 ± 0.51 Hounsfield units [HU]) was greater than that for C-HRCT (40.41 ± 0.52 HU; P < .0001). The image quality of U-HRCT was graded as superior to that of C-HRCT (P < .0001) for all of the following parameters that were examined: margins of subsolid and solid nodules, edges of solid components and pulmonary vessels in subsolid nodules, air bronchograms, pleural indentations, margins of pulmonary vessels, edges of bronchi, and interlobar fissures. Despite larger image noise, the prototype U-HRCT scanner had significantly better image quality than the C-HRCT scanners.
Scanner OPC signatures: automatic vendor-to-vendor OPE matching
NASA Astrophysics Data System (ADS)
Renwick, Stephen P.
2009-03-01
As 193nm lithography continues to be stretched and the k1 factor decreases, optical proximity correction (OPC) has become a vital part of the lithographer's tool kit. Unfortunately, as is now well known, the design variations of lithographic scanners from different vendors cause them to have slightly different optical-proximity effect (OPE) behavior, meaning that they print features through pitch in distinct ways. This in turn means that their response to OPC is not the same, and that an OPC solution designed for a scanner from Company 1 may or may not work properly on a scanner from Company 2. Since OPC is not inexpensive, that causes trouble for chipmakers using more than one brand of scanner. Clearly a scanner-matching procedure is needed to meet this challenge. Previously, automatic matching has only been reported for scanners of different tool generations from the same manufacturer. In contrast, scanners from different companies have been matched using expert tuning and adjustment techniques, frequently requiring laborious test exposures. Automatic matching between scanners from Company 1 and Company 2 has remained an unsettled problem. We have recently solved this problem and introduce a novel method to perform the automatic matching. The success in meeting this challenge required three enabling factors. First, we recognized the strongest drivers of OPE mismatch and are thereby able to reduce the information needed about a tool from another supplier to that information readily available from all modern scanners. Second, we developed a means of reliably identifying the scanners' optical signatures, minimizing dependence on process parameters that can cloud the issue. Third, we carefully employed standard statistical techniques, checking for robustness of the algorithms used and maximizing efficiency. The result is an automatic software system that can predict an OPC matching solution for scanners from different suppliers without requiring expert intervention.
Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-10-01
The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
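The robustness hypothesis, that small boundary errors barely move a region's mean dose, can be seen in a minimal sketch of mean-dose computation over a labelled dose map. The arrays below are toy values, not the study's Monte Carlo data.

```python
import numpy as np

def mean_organ_doses(dose_map, labels):
    """Mean dose per labelled region (label 0 = background). Because the
    mean averages over all region voxels, a few misassigned boundary
    voxels shift it only slightly."""
    return {int(r): float(dose_map[labels == r].mean())
            for r in np.unique(labels) if r != 0}

# Toy dose map with two labelled "organs".
dose = np.array([[1.0, 1.0, 4.0],
                 [1.0, 2.0, 4.0],
                 [1.0, 2.0, 4.0]])
labels = np.array([[1, 1, 2],
                   [1, 0, 2],
                   [1, 0, 2]])
print(mean_organ_doses(dose, labels))  # → {1: 1.0, 2: 4.0}
```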
Structural analysis of vibroacoustical processes
NASA Technical Reports Server (NTRS)
Gromov, A. P.; Myasnikov, L. L.; Myasnikova, Y. N.; Finagin, B. A.
1973-01-01
The method of automatic identification of acoustical signals by means of segmentation was used to investigate noises and vibrations in machines and mechanisms for cybernetic diagnostics. The structural analysis consists of presenting a noise or vibroacoustical signal as a sequence of segments, determined by time quantization, in which each segment is characterized by specific spectral characteristics. The structural spectrum is plotted as a histogram of the segments, i.e., the probability density of appearance of a segment as a function of segment type. It is assumed that the conditions of ergodic processes are maintained.
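The structural analysis described above can be sketched as three steps: quantize the signal in time, assign each segment a type from a spectral descriptor, and histogram the types. A toy version, using zero-crossing rate as a crude stand-in for the spectral classifier (thresholds and labels are invented for illustration):

```python
import math
from collections import Counter

def segment_types(signal, seg_len, thresholds=(0.2, 0.5)):
    """Split a signal into fixed-length segments and label each one
    'low', 'mid', or 'high' by its zero-crossing rate (a crude
    stand-in for a full spectral classifier)."""
    types = []
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[start:start + seg_len]
        crossings = sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0)
        rate = crossings / (seg_len - 1)
        if rate < thresholds[0]:
            types.append("low")
        elif rate < thresholds[1]:
            types.append("mid")
        else:
            types.append("high")
    return types

def structural_spectrum(types):
    """Histogram of segment types, normalized to a probability density."""
    counts = Counter(types)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

# A slow sine followed by a fast sine: two distinct segment types.
sig = [math.sin(2 * math.pi * 1 * t / 100) for t in range(100)] + \
      [math.sin(2 * math.pi * 20 * t / 100) for t in range(100)]
print(structural_spectrum(segment_types(sig, 50)))
```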
FormScanner: Open-Source Solution for Grading Multiple-Choice Exams
NASA Astrophysics Data System (ADS)
Young, Chadwick; Lo, Glenn; Young, Kaisa; Borsetta, Alberto
2016-01-01
The multiple-choice exam remains a staple for many introductory physics courses. In the past, people have graded these by hand or even with flaming needles. Today, one usually grades the exams with a form scanner that utilizes optical mark recognition (OMR). Several companies provide these scanners and matching forms, such as the eponymous "Scantron." OMR scanners combine hardware and software—a scanner and an OMR program—to read and grade student-filled forms.
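Once OMR has converted a sheet into a per-question set of detected marks, grading is a straightforward comparison against the key. A minimal sketch (the scoring rule that double marks and blanks score zero is an illustrative choice, not FormScanner's documented behavior):

```python
def grade(detected_marks, answer_key):
    """Score one sheet: detected_marks maps question -> set of filled
    bubbles; a question is correct only if exactly the keyed bubble
    is filled (double marks and blanks score zero)."""
    correct = 0
    for question, key in answer_key.items():
        if detected_marks.get(question, set()) == {key}:
            correct += 1
    return correct, len(answer_key)

key = {1: "B", 2: "D", 3: "A"}
sheet = {1: {"B"}, 2: {"C"}, 3: {"A", "B"}}  # Q3 double-marked
score, total = grade(sheet, key)
print(f"{score}/{total}")  # 1/3
```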
Human body surface area database and estimation formula.
Yu, Chi-Yuang; Lin, Ching-Hua; Yang, Yi-Hsueh
2010-08-01
This study established a human body surface area (BSA) database and estimation formula based on three-dimensional (3D) scanned data. For each gender, 135 subjects were drawn. The sampling was stratified across five stature heights and three body weights according to a previous survey. The 3D body surface shape was measured using an innovative 3D body scanner and a high-resolution hand/foot scanner; the total body surface area (BSA) and segmental body surface areas (SBSA) were computed by summing the tiny triangular facets of the triangular meshes of the scanned surface, and the error of the BSA measurement is below 1%. The results for BSA and sixteen SBSAs were tabulated in fifteen strata for the Male, the Female, and the Total (two genders combined). The %SBSA data were also used to revise new Lund and Browder Charts. The comparison of BSA shows that the BSA of this study is comparable with that of Du Bois and Du Bois but smaller than that of Tikuisis et al. The difference might be attributed to body size differences between the samples. The comparison of SBSA shows that the differences in SBSA between this study and the Lund and Browder Chart range between 0.00% and 2.30%. A new BSA estimation formula, BSA = 71.3989 × H^0.7437 × W^0.4040, was obtained. An accuracy test showed that this formula has a smaller estimation error than the Du Bois and Du Bois formula and is significantly better than other BSA estimation formulae.
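Restated in code, the new formula is a two-parameter power law. The units below (H in cm, W in kg, BSA in cm²) are an assumption by analogy with the Du Bois and Du Bois formula; the abstract does not state them:

```python
def bsa_cm2(height_cm, weight_kg):
    """BSA estimate from the study's formula (assumed units: cm, kg, cm^2)."""
    return 71.3989 * height_cm ** 0.7437 * weight_kg ** 0.4040

def bsa_dubois_m2(height_cm, weight_kg):
    """Classic Du Bois and Du Bois formula, for comparison (result in m^2)."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

# A 170 cm, 60 kg subject: both formulas land near 1.7 m^2.
print(bsa_cm2(170, 60) / 1e4, bsa_dubois_m2(170, 60))
```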
NASA Astrophysics Data System (ADS)
Degnan, J. J.; Wells, D. N.; Huet, H.; Chauvet, N.; Lawrence, D. W.; Mitchell, S. E.; Eklund, W. D.
2005-12-01
A 3D imaging lidar system, developed for the University of Florida at Gainesville and operating at the water transmissive wavelength of 532 nm, is designed to contiguously map underlying terrain and/or perform shallow water bathymetry on a single overflight from an altitude of 600 m with a swath width of 225 m and a horizontal spatial resolution of 20 cm. Each 600 psec pulse from a frequency-doubled, low power (~3 microjoules @ 8 kHz = 24 mW), passively Q-switched Nd:YAG microchip laser is passed through a holographic element which projects a 10x10 array of spots onto a 2m x 2m target area. The individual ground spots are then imaged onto individual anodes within a 10x10 segmented anode photomultiplier. The latter is followed by a 100 channel multistop ranging receiver with a range resolution of about 4 cm. The multistop feature permits single photon detection in daylight with wide range gates as well as multiple single photon returns per pixel per laser fire from volumetric scatterers such as tree canopies or turbid water columns. The individual single pulse 3D images are contiguously mosaiced together through the combined action of the platform velocity and a counter-rotating dual wedge optical scanner whose rotations are synchronized to the laser pulse train. The paper provides an overview of the lidar opto-mechanical design, the synchronized dual wedge scanner and servo controller, and the experimental results obtained to date.
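The quoted average power and the scanner's pixel rate follow directly from the pulse energy, repetition rate, and spot-array size given above; a quick sanity check using only those numbers:

```python
pulse_energy_j = 3e-6      # ~3 microjoules per pulse
rep_rate_hz = 8e3          # 8 kHz pulse train
avg_power_w = pulse_energy_j * rep_rate_hz
print(avg_power_w)         # 0.024 W, i.e. the quoted 24 mW

spots_per_pulse = 10 * 10  # holographic 10x10 spot array
pixel_rate = spots_per_pulse * rep_rate_hz
print(pixel_rate)          # 800,000 range samples per second
```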
Fazeli Dehkordy, Soudabeh; Fowler, Kathryn J; Wolfson, Tanya; Igarashi, Saya; Lamas Constantino, Carolina P; Hooker, Jonathan C; Hong, Cheng W; Mamidipalli, Adrija; Gamst, Anthony C; Hemming, Alan; Sirlin, Claude B
2017-10-31
Gadoxetate-disodium (Gd-EOB-DTPA)-enhanced 3D T1-weighted (T1w) MR cholangiography (MRC) is an efficient method to evaluate biliary anatomy due to T1 shortening of excreted contrast in the bile. A method that exploits both T1 shortening and T2* effects may produce even greater bile duct conspicuity. The aim of our study is to determine the feasibility and compare the diagnostic performance of two-dimensional (2D) T1w multi-echo (ME) spoiled gradient-recalled-echo (SPGR) derived R2* maps against T1w MRC for bile duct visualization in living liver donor candidates. Ten potential living liver donor candidates underwent pretransplant 3T MRI and were included in our study. Following injection of Gd-EOB-DTPA and a 20-min delay, 3D T1w MRC and 2D T1w ME SPGR images were acquired. 2D R2* maps were generated inline by the scanner assuming exponential decay. The 3D T1w MRC and 2D R2* maps were retrospectively and independently reviewed in two separate sessions by three radiologists. Visualization of eight bile duct segments was scored using a 4-point ordinal scale. The scores were compared using a mixed-effects regression model. Imaging was tolerated by all donors and R2* maps were successfully generated in all cases. Visualization scores of 2D R2* maps were significantly higher than 3D T1w MRC for right anterior (p = 0.003) and posterior (p = 0.0001), segment 2 (p < 0.0001), segment 3 (p = 0.0001), and segment 4 (p < 0.0001) ducts. Gd-EOB-DTPA-enhanced 2D R2* mapping is a feasible method for evaluating the bile ducts in living donors and may be a valuable addition to the living liver donor MR protocol for delineating intrahepatic biliary anatomy.
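The inline R2* maps described here assume monoexponential decay, S(TE) = S0·exp(−R2*·TE), so R2* is the negative slope of ln S versus echo time. A minimal per-voxel log-linear fit on synthetic magnitudes (the echo times and R2* value are made up, not from the study):

```python
import math

def fit_r2star(echo_times_ms, signals):
    """Least-squares slope of ln(signal) vs. TE; R2* = -slope (1/ms)."""
    logs = [math.log(s) for s in signals]
    n = len(echo_times_ms)
    mean_te = sum(echo_times_ms) / n
    mean_log = sum(logs) / n
    num = sum((te - mean_te) * (ls - mean_log)
              for te, ls in zip(echo_times_ms, logs))
    den = sum((te - mean_te) ** 2 for te in echo_times_ms)
    return -num / den

# Synthetic multi-echo magnitudes with known R2* = 0.25 / ms.
tes = [1.2, 2.4, 3.6, 4.8]
sigs = [100.0 * math.exp(-0.25 * te) for te in tes]
print(fit_r2star(tes, sigs))  # recovers ~0.25
```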
NASA Astrophysics Data System (ADS)
Huang, Yishuo; Chiang, Chih-Hung; Hsu, Keng-Tsang
2018-03-01
Defects present on the facades of a building have a profound impact on the building's life cycle. How to identify these defects is a crucial issue; destructive and non-destructive methods are usually employed to identify defects present in a building. Destructive methods cause permanent damage to the examined objects; on the other hand, non-destructive testing (NDT) methods have been widely applied to detect defects present in the exterior layers of a building. However, NDT methods alone cannot provide efficient and reliable information for identifying defects because of the huge areas to be examined. Infrared thermography is often applied to quantitative energy performance measurements of building envelopes. Defects in the exterior layer of buildings may be caused by several factors: ventilation losses, conduction losses, thermal bridging, defective services, moisture condensation, moisture ingress, and structural defects. Analyzing the collected thermal images can be quite difficult when the spatial variations of surface temperature are small. In this paper, the authors employ image segmentation to cluster pixels with similar surface temperatures such that the processed thermal images are composed of a limited number of groups. The surface temperature distribution within each segmented group is homogeneous. In doing so, the regional boundaries of the segmented regions can be identified and extracted. A terrestrial laser scanner (TLS) is widely used to collect the point clouds of a building, and those point clouds are used to reconstruct a 3D model of the building. A mapping model is constructed such that the segmented thermal images can be projected onto the 2D image of the specified 3D building model. In this paper, the administrative building on the Chaoyang University campus is used as an example. The experimental results not only provide the defect information but also give the corresponding spatial locations in the 3D model.
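The segmentation step described in this abstract amounts to clustering pixels by surface temperature so that each group is thermally homogeneous. A toy 1D k-means sketch, a stand-in for whatever clustering the authors actually used (all temperature values are illustrative):

```python
def kmeans_1d(values, centers, iters=20):
    """Lloyd's algorithm on scalar temperatures: assign each pixel to
    the nearest center, then recompute centers as cluster means."""
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Facade pixels: background near 20 C, a warm patch near 26 C.
temps = [19.8, 20.1, 20.3, 19.9, 26.2, 25.8, 26.0, 20.0]
centers, clusters = kmeans_1d(temps, [19.0, 27.0])
print(centers)
```

With two centers, the warm patch separates cleanly from the background facade temperature, and the cluster boundaries correspond to the regional boundaries the authors extract.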
Effect of beam hardening on transmural myocardial perfusion quantification in myocardial CT imaging
NASA Astrophysics Data System (ADS)
Fahmi, Rachid; Eck, Brendan L.; Levi, Jacob; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2016-03-01
The detection of subendocardial ischemia exhibiting an abnormal transmural perfusion gradient (TPG) may help identify ischemic conditions due to micro-vascular dysfunction. We evaluated the effect of beam hardening (BH) artifacts on TPG quantification using myocardial CT perfusion (CTP). We used a prototype spectral detector CT scanner (Philips Healthcare) to acquire dynamic myocardial CTP scans in a porcine ischemia model with partial occlusion of the left anterior descending (LAD) coronary artery guided by pressure wire-derived fractional flow reserve (FFR) measurements. Conventional 120 kVp and 70 keV projection-based mono-energetic images were reconstructed from the same projection data and used to compute myocardial blood flow (MBF) using the Johnson-Wilson model. Under moderate LAD occlusion (FFR ~ 0.7), we used three 5 mm short axis slices and divided the myocardium into three LAD segments and three remote segments. For each slice and each segment, we characterized TPG as the mean "endo-to-epi" transmural flow ratio (TFR). BH-induced hypoenhancement on the ischemic anterior wall at 120 kVp resulted in a significantly lower mean TFR value as compared to the 70 keV TFR value (0.29 ± 0.01 vs. 0.55 ± 0.01, p < 1e-05). No significant difference was measured between 120 kVp and 70 keV mean TFR values on segments moderately affected or unaffected by BH. In the entire ischemic LAD territory, 120 kVp mean endocardial flow was significantly reduced as compared to mean epicardial flow (15.80 ± 10.98 vs. 40.85 ± 23.44 ml/min/100g; p < 1e-04). At 70 keV, BH was effectively minimized, resulting in a mean endocardial MBF of 40.85 ± 15.34 ml/min/100g vs. 74.09 ± 5.07 ml/min/100g (p = 0.0054) in the epicardium. We also found that BH artifact in the conventional 120 kVp images resulted in falsely reduced MBF measurements even under non-ischemic conditions.
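The "endo-to-epi" TFR used above is simply the ratio of mean endocardial to mean epicardial MBF for a segment; a sketch with made-up voxel flows (not the paper's data):

```python
def transmural_flow_ratio(endo_mbf, epi_mbf):
    """'Endo-to-epi' TFR: mean endocardial MBF / mean epicardial MBF
    (both in ml/min/100g) for one myocardial segment."""
    return (sum(endo_mbf) / len(endo_mbf)) / (sum(epi_mbf) / len(epi_mbf))

# Illustrative voxel flows for an ischemic LAD segment:
endo = [16.0, 14.5, 17.0]
epi = [40.0, 42.0, 41.0]
print(round(transmural_flow_ratio(endo, epi), 3))
```

A healthy segment has TFR near 1; BH-induced hypoenhancement of the endocardium artificially depresses the numerator and hence the ratio.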
Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert
2010-10-01
Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition, the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate, and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature), and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations; therefore, they cannot be considered good candidates for tumor segmentation.
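The variability measure in this study (the spread of a feature across the five reconstructions, relative to its average) and the three-bin classification can be sketched as follows; the bin edges follow the abstract, but the feature values are invented:

```python
def range_of_variation(values):
    """Spread of a feature across reconstructions, as % of its mean."""
    mean = sum(values) / len(values)
    return 100.0 * (max(values) - min(values)) / mean

def classify(rov_percent):
    """Three bins used in the abstract: small (<= 5%), intermediate
    (10-25%), large (> 30%); values between bins are left unbinned."""
    if rov_percent <= 5:
        return "small"
    if 10 <= rov_percent <= 25:
        return "intermediate"
    if rov_percent > 30:
        return "large"
    return "unbinned"

# An entropy-like feature measured under five reconstruction settings:
entropy_vals = [4.02, 4.05, 3.98, 4.00, 4.04]
rov = range_of_variation(entropy_vals)
print(rov, classify(rov))
```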
Applications of Optical Scanners in an Academic Center.
ERIC Educational Resources Information Center
Molinari, Carol; Tannenbaum, Robert S.
1995-01-01
Describes optical scanners, including how the technology works; applications in data management and research; development of instructional materials; and providing community services. Discussion includes the three basic types of optical scanners: optical character recognition (OCR), optical mark readers (OMR), and graphic scanners. A sidebar…
NASA Technical Reports Server (NTRS)
Biehl, L. L.; Silva, L. F.
1975-01-01
Skylab multispectral scanner data, digitized Skylab color infrared (IR) photography, digitized Skylab black and white multiband photography, and Earth Resources Technology Satellite (ERTS) multispectral scanner data collected within a 24-hr time period over an area in south-central Indiana near Bloomington on June 9 and 10, 1973, were compared in a machine-aided land use analysis of the area. The overall classification performance results, obtained with nine land use classes, were 87% correct classification using the 'best' 4 channels of the Skylab multispectral scanner, 80% for the channels on the Skylab multispectral scanner which are spectrally comparable to the ERTS multispectral scanner, 88% for the ERTS multispectral scanner, 83% for the digitized color IR photography, and 76% for the digitized black and white multiband photography. The results indicate that the Skylab multispectral scanner may yield even higher classification accuracies when a noise-filtered multispectral scanner data set becomes available in the near future.
Efficient system modeling for a small animal PET scanner with tapered DOI detectors.
Zhang, Mengxi; Zhou, Jian; Yang, Yongfeng; Rodríguez-Villafuerte, Mercedes; Qi, Jinyi
2016-01-21
A prototype small animal positron emission tomography (PET) scanner for mouse brain imaging has been developed at UC Davis. The new scanner uses tapered detector arrays with depth of interaction (DOI) measurement. In this paper, we present an efficient system model for the tapered PET scanner using matrix factorization and a virtual scanner geometry. The factored system matrix mainly consists of two components: a sinogram blurring matrix and a geometric matrix. The geometric matrix is based on a virtual scanner geometry. The sinogram blurring matrix is estimated by matrix factorization. We investigate the performance of different virtual scanner geometries. Both a simulation study and real data experiments are performed in the fully 3D mode to study the image quality under different system models. The results indicate that the proposed matrix factorization can maintain image quality while substantially reducing the image reconstruction time and system matrix storage cost. The proposed method can also be applied to other PET scanners with DOI measurement.
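The factorization described above replaces the full system matrix P with a product of a sinogram blurring matrix B and a geometric matrix G, so forward projection y = Px becomes y = B(Gx) and only the two smaller factors need be stored. A toy dense sketch of that equivalence (the matrices are illustrative, not a real scanner model):

```python
def matvec(m, x):
    """Dense matrix-vector product over lists of lists."""
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in m]

def matmul(a, b):
    """Dense matrix-matrix product, used to build the full P = B G."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Toy factors: G maps 3 image voxels to 2 ideal sinogram bins,
# B blurs the 2 sinogram bins into each other.
G = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
B = [[0.9, 0.1],
     [0.1, 0.9]]

x = [2.0, 4.0, 1.0]                    # image
y_factored = matvec(B, matvec(G, x))   # y = B (G x): two small products
P = matmul(B, G)                       # full system matrix, built once
y_full = matvec(P, x)                  # y = P x: one big product
print(y_factored, y_full)              # identical up to rounding
```

At realistic scanner dimensions, B and G together are far smaller and sparser than P, which is where the storage and reconstruction-time savings come from.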
Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhbardeh, Alireza; Jacobs, Michael A.; Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
2012-04-15
Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, locally linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions.
Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy, and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.