Multisample conversion of water to hydrogen by zinc for stable isotope determination
Kendall, C.; Coplen, T.B.
1985-01-01
Two techniques for the conversion of water to hydrogen for stable isotope ratio determination have been developed that are especially suited for automated multisample analysis. Both procedures involve reaction of zinc shot with a water sample at 450 °C. In one method, designed for water samples in bottles, the water is put in capillaries and is reduced by zinc in reaction vessels; overall savings in sample preparation labor of 75% have been realized over the standard uranium reduction technique. The second technique is for waters evolved under vacuum and is a sealed-tube method employing 9 mm o.d. quartz tubing. Problems inherent in zinc reduction include surface inhomogeneity of the zinc and exchange of hydrogen both with the zinc and with the glass walls of the vessels. For best results, the water/zinc and water/glass surface area ratios of the vessels should be kept as large as possible.
Green, Michael V; Seidel, Jurgen; Choyke, Peter L; Jagoda, Elaine M
2017-10-01
We describe a simple fixture that can be added to the imaging bed of a small-animal PET scanner that allows for automated counting of multiple organ or tissue samples from mouse-sized animals and counting of injection syringes prior to administration of the radiotracer. The combination of imaging and counting capabilities in the same machine offers advantages in certain experimental settings. A polyethylene block of plastic, sculpted to mate with the animal imaging bed of a small-animal PET scanner, is machined to receive twelve 5-ml containers, each capable of holding an entire organ from a mouse-sized animal. In addition, a triangular cross-section slot is machined down the centerline of the block to secure injection syringes from 1 ml to 3 ml in size. The sample holder is scanned in PET whole-body mode to image all samples or in one bed position to image a filled injection syringe. Total radioactivity in each sample or syringe is determined from the reconstructed images of these objects using volume re-projection of the coronal images and a single region-of-interest for each. We tested the accuracy of this method by comparing PET estimates of sample and syringe activity with well counter and dose calibrator estimates of these same activities. PET and well counting of the same samples gave near-identical results (in MBq, R² = 0.99, slope = 0.99, intercept = 0.00 MBq). PET syringe and dose calibrator measurements of syringe activity in MBq were also similar (R² = 0.99, slope = 0.99, intercept = -0.22 MBq). A small-animal PET scanner can be easily converted into a multi-sample and syringe counting device by the addition of a sample block constructed for that purpose. This capability, combined with live animal imaging, can improve efficiency and flexibility in certain experimental settings. Copyright © 2017 Elsevier Inc. All rights reserved.
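The agreement statistics reported above (slope, intercept, R²) come from a simple least-squares comparison of two sets of activity measurements. The sketch below, using hypothetical well-counter and PET activity values, is illustrative only and is not the authors' analysis code:

```python
import numpy as np

def regression_agreement(reference, estimate):
    """Least-squares fit estimate = slope*reference + intercept, plus R^2,
    as used to compare two activity-measurement methods."""
    slope, intercept = np.polyfit(reference, estimate, 1)
    predicted = slope * reference + intercept
    ss_res = np.sum((estimate - predicted) ** 2)
    ss_tot = np.sum((estimate - np.mean(estimate)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

# Hypothetical well-counter (reference) vs. PET-image (estimate) activities in MBq
well = np.array([0.12, 0.35, 0.80, 1.50, 2.10, 3.40])
pet = 0.99 * well + 0.001 * np.random.default_rng(0).normal(size=well.size)
slope, intercept, r2 = regression_agreement(well, pet)
print(f"slope={slope:.2f}, intercept={intercept:.2f} MBq, R^2={r2:.2f}")
```

A slope near 1 with a near-zero intercept and high R², as in the abstract, indicates that the two devices measure essentially the same quantity.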
Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David
2003-06-01
To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss); for each condition, the number of test locations classified as defective (suprathreshold strategies) or having a pattern deviation probability of less than 5% (full-threshold strategy) was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss. Both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss. Conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
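The "three seen or three missed" pass criterion described above can be analyzed with elementary probability: classification is a race to three successes or three failures. A minimal sketch, assuming independent stimulus presentations with a fixed probability of seeing (an idealization; the study's simulations are more detailed):

```python
from math import comb

def p_classified_normal(p_seen, n_required=3):
    """Probability a test location is classified 'normal' under a
    'first to n_required seen vs. n_required missed' rule, given an
    independent per-stimulus probability of seeing (p_seen).
    Sums over the number of misses k that occur before the final success."""
    return sum(
        comb(k + n_required - 1, k) * p_seen**n_required * (1 - p_seen)**k
        for k in range(n_required)
    )

# A healthy location (p_seen = 0.95) is almost always passed;
# a damaged one (p_seen = 0.30) is almost always flagged.
print(round(p_classified_normal(0.95), 4))
print(round(p_classified_normal(0.30), 4))
```

The symmetry check p_classified_normal(0.5) = 0.5 confirms the criterion is unbiased for a location seen exactly half the time.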
A multi-level pore-water sampler for permeable sediments
Martin, J.B.; Hartl, K.M.; Corbett, D.R.; Swarzenski, P.W.; Cable, J.E.
2003-01-01
The construction and operation of a multi-level piezometer (multisampler) designed to collect pore water from permeable sediments up to 230 cm below the sediment-water interface is described. Multisamplers are constructed from 1 1/2 inch schedule 80 PVC pipe. One-quarter-inch flexible PVC tubing leads from eight ports at variable depths to a 1 1/2 inch tee fitting at the top of the PVC pipe. Multisamplers are driven into the sediments using standard fence-post drivers. Water is pumped from the PVC tubing with a peristaltic pump. Field tests in Banana River Lagoon, Florida, demonstrate the utility of multisamplers. These tests include collection of multiple samples from the permeable sediments and reveal mixing between shallow pore water and overlying lagoon water.
ERIC Educational Resources Information Center
Mann, Heather M.; Rutstein, Daisy W.; Hancock, Gregory R.
2009-01-01
Multisample measured variable path analysis is used to test whether causal/structural relations among measured variables differ across populations. Several invariance testing approaches are available for assessing cross-group equality of such relations, but the associated test statistics may vary considerably across methods. This study is a…
2016-11-01
FINAL REPORT: Development of a Passive Multisampling Method to Measure Dioxins/Furans and Other Contaminant Bioavailability in Aquatic...
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion, and pre-combined tension-torsion are measured, with the single-sample and multi-sample methods applied respectively to determine the yield stresses at a specified offset strain. The rules and characteristics of the evolution of the subsequent yield surface are investigated. Under different pre-strain conditions, the influence of the number of test points, the test sequence, and the specified offset strain on the measured subsequent yield surface, as well as the concavity observed in measured yield surfaces, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are as follows: (1) for both the single-sample and multi-sample methods, the measured subsequent yield surfaces differ remarkably from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the results of the two methods: the multi-sample method is not influenced by the number of test points, the test order, or the cumulative residual plastic strain from other test points, whereas all of these strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave; for the single-sample method this concavity can be removed by changing the test sequence, while for the multi-sample method it disappears when a larger offset strain is specified.
A new approach to accelerated drug-excipient compatibility testing.
Sims, Jonathan L; Carreira, Judith A; Carrier, Daniel J; Crabtree, Simon R; Easton, Lynne; Hancock, Stephen A; Simcox, Carol E
2003-01-01
The purpose of this study was to develop a method of qualitatively predicting the most likely degradants in a formulation, or probing specific drug-excipient interactions, in a significantly shorter time frame than the typical 1-month storage test. In the example studied, accelerated storage testing of a solid dosage form at 50 °C, the drug substance SB-243213-A degraded via the formation of two oxidative impurities. These impurities reached a level of 1% PAR after 3 months. Various stressing methods were examined to try to recreate this degradation and, in doing so, provide a practical and reliable method capable of predicting drug-excipient interactions. The technique developed was able to mimic 1 month of accelerated degradation in just 1 hour. The method was suitable for automated analysis, capable of multisample stressing, and ideal for use in drug-excipient compatibility screening.
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually process each RNA-seq sample individually, ignoring the fact that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse relationship between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.
Analysis of Invasion Dynamics of Matrix-Embedded Cells in a Multisample Format.
Van Troys, Marleen; Masuzzo, Paola; Huyck, Lynn; Bakkali, Karima; Waterschoot, Davy; Martens, Lennart; Ampe, Christophe
2018-01-01
In vitro tests of cancer cell invasion are the "first line" tools of preclinical researchers for screening the multitude of chemical compounds or cell perturbations that may aid in halting or treating cancer malignancy. In order to have predictive value or to contribute to designing personalized treatment regimes, these tests need to take into account the cancer cell environment and measure effects on invasion in sufficient detail. The in vitro invasion assays presented here are a trade-off between feasibility in a multisample format and mimicking the complexity of the tumor microenvironment. They allow testing multiple samples and conditions in parallel using 3D-matrix-embedded cells and deal with the heterogeneous behavior of an invading cell population in time. We describe the steps to take, the technical problems to tackle and useful software tools for the entire workflow: from the experimental setup to the quantification of the invasive capacity of the cells. The protocol is intended to guide researchers to standardize experimental set-ups and to annotate their invasion experiments in sufficient detail. In addition, it provides options for image processing and a solution for storage, visualization, quantitative analysis, and multisample comparison of acquired cell invasion data.
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Rivers, Thane D.
1992-06-01
An automated scanning monochromator was developed using an Acton Research Corporation (ARC) monochromator, an Ealing photomultiplier tube, and a Macintosh PC in conjunction with LabVIEW software. The LabVIEW virtual instrument written to operate the ARC monochromator is a mouse-driven, user-friendly program developed for automated spectral data measurements. The resolution and sensitivity of the automated scanning monochromator system were determined experimentally. The automated monochromator was then used for spectral measurements of a platinum lamp. Additionally, the reflectivity curve of a BaSO4-coated screen was measured. The reflectivity measurements indicate a large discrepancy with expected results; further analysis of the reflectivity experiment is required for conclusive results.
Multisample cross-validation of a model of childhood posttraumatic stress disorder symptomatology.
Anthony, Jason L; Lonigan, Christopher J; Vernberg, Eric M; Greca, Annette M La; Silverman, Wendy K; Prinstein, Mitchell J
2005-12-01
This study is the latest advancement of our research aimed at best characterizing children's posttraumatic stress reactions. In a previous study, we compared existing nosologic and empirical models of PTSD dimensionality and determined the superior model was a hierarchical one with three symptom clusters (Intrusion/Active Avoidance, Numbing/Passive Avoidance, and Arousal; Anthony, Lonigan, & Hecht, 1999). In this study, we cross-validate this model in two populations. Participants were 396 fifth graders who were exposed to either Hurricane Andrew or Hurricane Hugo. Multisample confirmatory factor analysis demonstrated the model's factorial invariance across populations who experienced traumatic events that differed in severity. These results show the model's robustness to characterize children's posttraumatic stress reactions. Implications for diagnosis, classification criteria, and an empirically supported theory of PTSD are discussed.
Mansberger, Steven L; Menda, Shivali A; Fortune, Brad A; Gardiner, Stuart K; Demirel, Shaban
2017-02-01
To characterize the error of optical coherence tomography (OCT) measurements of retinal nerve fiber layer (RNFL) thickness when using automated retinal layer segmentation algorithms without manual refinement. Cross-sectional study. This study was set in a glaucoma clinical practice, and the dataset included 3490 scans from 412 eyes of 213 individuals with a diagnosis of glaucoma or glaucoma suspect. We used spectral domain OCT (Spectralis) to measure RNFL thickness in a 6-degree peripapillary circle, and exported the native "automated segmentation only" results. In addition, we exported the results after "manual refinement" to correct errors in the automated segmentation of the anterior (internal limiting membrane) and the posterior boundary of the RNFL. Our outcome measures included differences in RNFL thickness and glaucoma classification (i.e., normal, borderline, or outside normal limits) between scans with automated segmentation only and scans using manual refinement. Automated segmentation only resulted in a thinner global RNFL thickness (1.6 μm thinner, P < .001) when compared to manual refinement. When adjusted by operator, a multivariate model showed increased differences with decreasing RNFL thickness (P < .001), decreasing scan quality (P < .001), and increasing age (P < .03). Manual refinement changed 298 of 3486 (8.5%) scans to a different global glaucoma classification, wherein 146 of 617 (23.7%) borderline classifications became normal. Superior and inferior temporal clock hours had the largest differences. Automated segmentation without manual refinement resulted in reduced global RNFL thickness and overestimated the classification of glaucoma. Differences increased in eyes with a thinner RNFL thickness, older age, and decreased scan quality. Operators should inspect and manually refine OCT retinal layer segmentation when assessing RNFL thickness in the management of patients with glaucoma. Copyright © 2016 Elsevier Inc. All rights reserved.
Smits, Loek P.; van Wijk, Diederik F.; Duivenvoorden, Raphael; Xu, Dongxiang; Yuan, Chun; Stroes, Erik S.; Nederveen, Aart J.
2016-01-01
Purpose: To study the interscan reproducibility of manual versus automated segmentation of carotid artery plaque components, and the agreement between both methods, in high- and lower-quality MRI scans. Methods: 24 patients with 30–70% carotid artery stenosis were planned for 3T carotid MRI, followed by a rescan within 1 month. A multicontrast protocol (T1w, T2w, PDw and TOF sequences) was used. After co-registration and delineation of the lumen and outer wall, segmentation of plaque components (lipid-rich necrotic cores (LRNC) and calcifications) was performed both manually and automatically. Scan quality was assessed using a visual quality scale. Results: Agreement for the detection of LRNC (Cohen's kappa (κ) = 0.04) and calcification (κ = 0.41) between the manual and automated segmentation methods was poor. In the high-quality scans (visual quality score ≥ 3), the agreement between manual and automated segmentation increased to κ = 0.55 and κ = 0.58 for, respectively, the detection of LRNC and calcification larger than 1 mm². Both manual and automated analysis showed good interscan reproducibility for the quantification of LRNC (intraclass correlation coefficient (ICC) of 0.94 and 0.80, respectively) and calcified plaque area (ICC of 0.95 and 0.77, respectively). Conclusion: Agreement between manual and automated segmentation of LRNC and calcifications was poor, despite the good interscan reproducibility of both methods. The agreement between both methods increased to moderate in high-quality scans. These findings indicate that image quality is a critical determinant of the performance of both manual and automated segmentation of carotid artery plaque components. PMID:27930665
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Wang, Xingwei; Chen, Xiaodong; Li, Yuhua; Liu, Hong; Li, Shibo; Zheng, Bin
2010-02-01
Visually searching for analyzable metaphase chromosome cells under microscopes is quite time-consuming and difficult. To improve detection efficiency, consistency, and diagnostic accuracy, an automated microscopic image scanning system was developed and tested to directly acquire digital images with sufficient spatial resolution for clinical diagnosis. A computer-aided detection (CAD) scheme was also developed and integrated into the image scanning system to search for and detect the regions of interest (ROIs) that contain analyzable metaphase chromosome cells in the large volume of scanned images acquired from one specimen. Thus, the cytogeneticists only need to observe and interpret a limited number of ROIs. In this study, the high-resolution microscopic image scanning and CAD performance was investigated and evaluated using nine sets of images scanned from either bone marrow (three) or blood (six) specimens for diagnosis of leukemia. The automated CAD-selection results were compared with visual selection. In the experiment, the cytogeneticists first visually searched for the analyzable metaphase chromosome cells from specimens under microscopes. The specimens were also automatically scanned, after which the CAD scheme was applied to detect and save ROIs containing analyzable cells while deleting the others. The automatically selected ROIs were then examined by a panel of three cytogeneticists. From the scanned images, CAD selected more analyzable cells than the cytogeneticists' initial visual examinations in both blood and bone marrow specimens. In general, CAD had higher performance in analyzing blood specimens. In the three bone marrow specimens, CAD selected 50, 22, and 9 ROIs, respectively. In addition to matching the 9, 7, and 5 analyzable cells initially selected visually in these three specimens, the cytogeneticists also identified 41, 15, and 4 new analyzable cells that had been missed in the initial visual search.
This experiment showed the feasibility of applying this CAD-guided high-resolution microscopic image scanning system to prescreen and select ROIs that may contain analyzable metaphase chromosome cells. The success and the further improvement of this automated scanning system may have great impact on the future clinical practice in genetic laboratories to detect and diagnose diseases.
Validation of automated white matter hyperintensity segmentation.
Smart, Sean D; Firbank, Michael J; O'Brien, John T
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we obtained an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion.
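Voxel overlap between two segmentations is commonly quantified with the Dice similarity coefficient; whether this study used Dice or a related ratio is not stated here, so the following is an illustrative sketch on toy masks:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks: manual covers 4 voxels, automated covers 6, sharing 4
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:3] = True
auto = np.zeros((4, 4), dtype=bool);   auto[1:3, 1:4] = True
print(dice_overlap(manual, auto))  # 2*4/(4+6) = 0.8
```

An overlap of ~0.6, as reported above, is typical for small scattered lesions like WMHs, where boundary voxels dominate the disagreement.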
Finding the ’RITE’ Acquisition Environment for Navy C2 Software
2015-05-01
Boilerplate contract language (Government purpose rights); adding an expectation of quality to contracting language; template SOWs created. Pr...Debugger; MCCABE IQ: static analysis, cyclomatic complexity and KSLOC, all languages; HP Fortify: security scan, STIG and vulnerabilities, Security & IA; GSSAT (GOTS): security scan, STIG and vulnerabilities; AutoIT: automated test scripting, engine for automation of functional testing; TestComplete: automated...
NASA Astrophysics Data System (ADS)
Lu, Peter J.; Hoehl, Melanie M.; Macarthur, James B.; Sims, Peter A.; Ma, Hongshen; Slocum, Alexander H.
2012-09-01
We present a portable multi-channel, multi-sample UV/vis absorption and fluorescence detection device, which has no moving parts, can operate wirelessly and on batteries, interfaces with smart mobile phones or tablets, and has the sensitivity of commercial instruments costing an order of magnitude more. We use UV absorption to measure the concentration of ethylene glycol in water solutions at all levels above those deemed unsafe by the United States Food and Drug Administration; in addition we use fluorescence to measure the concentration of d-glucose. Both wavelengths can be used concurrently to increase measurement robustness and increase detection sensitivity. Our small robust economical device can be deployed in the absence of laboratory infrastructure, and therefore may find applications immediately following natural disasters, and in more general deployment for much broader-based testing of food, agricultural and household products to prevent outbreaks of poisoning and disease.
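Converting a UV absorbance reading into a concentration, as the device above does for ethylene glycol, typically relies on the Beer-Lambert law. A minimal sketch with hypothetical calibration values (the device's actual calibration constants are not given here):

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm):
    """Beer-Lambert law: A = epsilon * l * c,  so  c = A / (epsilon * l)."""
    return absorbance / (molar_absorptivity * path_length_cm)

# Hypothetical values: molar absorptivity in L/(mol*cm), 1 cm optical path
c = concentration_from_absorbance(0.42, molar_absorptivity=140.0, path_length_cm=1.0)
print(f"{c:.4f} mol/L")  # 0.0030 mol/L
```

In practice the absorptivity is obtained from a calibration curve of known standards at the instrument's working wavelength.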
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barstow, Del R; Patlolla, Dilip Reddy; Mann, Christopher J
Abstract: The data captured by existing standoff biometric systems typically yield lower biometric recognition performance than their close-range counterparts due to imaging challenges, pose challenges, and other factors. To assist in overcoming these limitations, systems typically operate in a multi-modal capacity, such as Honeywell's Combined Face and Iris (CFAIRS) system [21]. While this improves performance, standoff systems have yet to be proven as accurate as their close-range equivalents. We present a standoff system capable of operating at ranges up to 7 meters. Unlike many systems such as the CFAIRS, our system captures high-quality 12 MP video, allowing for multi-sample as well as multi-modal comparison. We found that for standoff systems, multi-sample comparison improved performance more than multi-modal comparison. For a small test group of 50 subjects, we achieved 100% rank-one recognition performance with our system.
Poon, Candice C; Ebacher, Vincent; Liu, Katherine; Yong, Voon Wee; Kelly, John James Patrick
2018-05-03
Automated slide scanning and segmentation of fluorescently-labeled tissues is the most efficient way to analyze whole slides or large tissue sections. Unfortunately, many researchers spend large amounts of time and resources developing and optimizing workflows that are only relevant to their own experiments. In this article, we describe a protocol that can be used by those with access to a widefield high-content analysis system (WHCAS) to image any slide-mounted tissue, with options for customization within pre-built modules found in the associated software. Although the WHCAS was not originally intended for slide scanning, the steps detailed in this article make it possible to acquire slide-scanning images that can be imported into the associated software. In this example, the automated segmentation of brain tumor slides is demonstrated, but the automated segmentation of any fluorescently-labeled nuclear or cytoplasmic marker is possible. Furthermore, a variety of other quantitative software modules can be run, including assays for protein localization/translocation, cellular proliferation/viability/apoptosis, and angiogenesis. This technique will save researchers time and effort and creates an automated protocol for slide analysis.
Automated volumetric segmentation of retinal fluid on optical coherence tomography
Wang, Jie; Zhang, Miao; Pechauer, Alex D.; Liu, Liang; Hwang, Thomas S.; Wilson, David J.; Li, Dengwang; Jia, Yali
2016-01-01
We propose a novel automated volumetric segmentation method to detect and quantify retinal fluid on optical coherence tomography (OCT). The fuzzy level set method was introduced for identifying the boundaries of fluid filled regions on B-scans (x and y-axes) and C-scans (z-axis). The boundaries identified from three types of scans were combined to generate a comprehensive volumetric segmentation of retinal fluid. Then, artefactual fluid regions were removed using morphological characteristics and by identifying vascular shadowing with OCT angiography obtained from the same scan. The accuracy of retinal fluid detection and quantification was evaluated on 10 eyes with diabetic macular edema. Automated segmentation had good agreement with manual segmentation qualitatively and quantitatively. The fluid map can be integrated with OCT angiogram for intuitive clinical evaluation. PMID:27446676
Improved design of electrophoretic equipment for rapid sickle-cell-anemia screening
NASA Technical Reports Server (NTRS)
Reddick, J. M.; Hirsch, I.
1974-01-01
Effective mass screening may be accomplished by modifying existing electrophoretic equipment in conjunction with a multisample applicator used with cellulose-acetate-matrix test paper. Using this method, approximately 20 to 25 samples can undergo electrophoresis in 5 to 6 minutes.
NASA Astrophysics Data System (ADS)
Liu, Jingbin; Liang, Xinlian; Hyyppä, Juha; Yu, Xiaowei; Lehtomäki, Matti; Pyörälä, Jiri; Zhu, Lingli; Wang, Yunsheng; Chen, Ruizhi
2017-04-01
Terrestrial laser scanning has been widely used to analyze the 3D structure of a forest in detail and to generate data at the level of a reference plot for forest inventories without destructive measurements. Multi-scan terrestrial laser scanning is more commonly applied to collect plot-level data so that all of the stems can be detected and analyzed. However, it is necessary to match the point clouds of multiple scans to yield a point cloud with automated processing. Mismatches between datasets will lead to errors during the processing of multi-scan data. Classic registration methods based on flat surfaces cannot be directly applied in forest environments; therefore, artificial reference objects have conventionally been used to assist with scan matching. The use of artificial references requires additional labor and expertise, as well as greatly increasing the cost. In this study, we present an automated processing method for plot-level stem mapping that matches multiple scans without artificial references. In contrast to previous studies, the registration method developed in this study exploits the natural geometric characteristics among a set of tree stems in a plot and combines the point clouds of multiple scans into a unified coordinate system. Integrating multiple scans improves the overall performance of stem mapping in terms of the correctness of tree detection, as well as the bias and the root-mean-square errors of forest attributes such as diameter at breast height and tree height. In addition, the automated processing method makes stem mapping more reliable and consistent among plots, reduces the costs associated with plot-based stem mapping, and enhances the efficiency.
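Once stem correspondences between two scans are known, the rigid transform aligning the scans can be computed in closed form. The sketch below applies the standard Kabsch/Procrustes solution to synthetic 2D stem positions; it assumes the correspondences are given, whereas the method described above also derives them from the natural geometric relationships among stems:

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    2D stem positions `src` onto matched positions `dst` (Kabsch method)."""
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic stem maps: scan B is scan A rotated 25 degrees and shifted
rng = np.random.default_rng(1)
stems = rng.uniform(0, 20, size=(8, 2))
theta = np.deg2rad(25.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scan_b = stems @ R_true.T + np.array([3.0, -1.5])
R, t = rigid_align_2d(stems, scan_b)
print(np.allclose(stems @ R.T + t, scan_b))  # True
```

With noisy real data the same solver gives the best-fit alignment in the least-squares sense, and residuals indicate matching quality.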
Automated CT Scan Scores of Bronchiectasis and Air Trapping in Cystic Fibrosis
Swiercz, Waldemar; Heltshe, Sonya L.; Anthony, Margaret M.; Szefler, Paul; Klein, Rebecca; Strain, John; Brody, Alan S.; Sagel, Scott D.
2014-01-01
Background: Computer analysis of high-resolution CT (HRCT) scans may improve the assessment of structural lung injury in children with cystic fibrosis (CF). The goal of this cross-sectional pilot study was to validate automated, observer-independent image analysis software to establish objective, simple criteria for bronchiectasis and air trapping. Methods: HRCT scans of the chest were performed in 35 children with CF and compared with scans from 12 disease control subjects. Automated image analysis software was developed to count visible airways on inspiratory images and to measure a low attenuation density (LAD) index on expiratory images. Among the children with CF, relationships among automated measures, Brody HRCT scanning scores, lung function, and sputum markers of inflammation were assessed. Results: The number of total, central, and peripheral airways on inspiratory images and LAD (%) on expiratory images were significantly higher in children with CF compared with control subjects. Among subjects with CF, peripheral airway counts correlated strongly with Brody bronchiectasis scores by two raters (r = 0.86, P < .0001; r = 0.91, P < .0001), correlated negatively with lung function, and were positively associated with sputum free neutrophil elastase activity. LAD (%) correlated with Brody air trapping scores (r = 0.83, P < .0001; r = 0.69, P < .0001) but did not correlate with lung function or sputum inflammatory markers. Conclusions: Quantitative airway counts and LAD (%) on HRCT scans appear to be useful surrogates for bronchiectasis and air trapping in children with CF. Our automated methodology provides objective quantitative measures of bronchiectasis and air trapping that may serve as end points in CF clinical trials. PMID:24114359
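The low attenuation density (LAD) index described above amounts to the fraction of lung voxels falling below an attenuation threshold on expiratory images. A minimal sketch; the -856 HU cutoff shown is a commonly used air-trapping threshold, not necessarily the value used by the study's software:

```python
import numpy as np

def low_attenuation_density(hu_volume, lung_mask, threshold_hu=-856):
    """Percent of lung voxels below an attenuation threshold (in Hounsfield
    units) on an expiratory CT volume, a surrogate for air trapping."""
    lung = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold_hu) / lung.size

# Toy volume: 1000 masked voxels, 250 of them below threshold
hu = np.full((10, 10, 10), -700.0)
hu.flat[:250] = -900.0
mask = np.ones_like(hu, dtype=bool)
print(low_attenuation_density(hu, mask))  # 25.0
```

The lung mask would come from an upstream segmentation step; here it is trivially the whole toy volume.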
Automated planning of MRI scans of knee joints
NASA Astrophysics Data System (ADS)
Bystrov, Daniel; Pekar, Vladimir; Young, Stewart; Dries, Sebastian P. M.; Heese, Harald S.; van Muiswinkel, Arianne M.
2007-03-01
A novel and robust method for automatic scan planning of MRI examinations of knee joints is presented. Clinical knee examinations require acquisition of a 'scout' image, in which the operator manually specifies the scan volume orientations (off-centres, angulations, field-of-view) for the subsequent diagnostic scans. This planning task is time-consuming and requires skilled operators. The proposed automated planning system determines orientations for the diagnostic scan by using a set of anatomical landmarks derived by adapting active shape models of the femur, patella and tibia to the acquired scout images. The expert knowledge required to position scan geometries is learned from previous manually planned scans, allowing individual preferences to be taken into account. The system is able to automatically discriminate between left and right knees. This makes it possible to use and merge training data from both left and right knees, and to automatically transform all learned scan geometries to the side for which a plan is required, providing convenient integration of the automated scan planning system into the clinical routine. Assessment of the method on the basis of 88 images from 31 different individuals exhibiting strong anatomical and positional variability demonstrates the success, robustness and efficiency of all parts of the proposed approach, which thus has the potential to significantly improve the clinical workflow.
Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.
Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R
2012-06-01
The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, the largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average, although not shifted, when FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch.
With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
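The uptake measurement described above reduces, for each structure, to a mean standardized uptake value over a segmented region. As a minimal illustrative sketch (not the authors' implementation; function and parameter names are assumed), the mean SUV can be computed from decay-corrected voxel values, injected dose, and body weight:

```python
def mean_suv(voxel_bq_per_ml, injected_dose_bq, body_weight_g):
    """Mean standardized uptake value (SUV) over a segmented region.

    voxel_bq_per_ml: decay-corrected activity concentrations (Bq/mL) of
    the voxels inside the segmented structure (e.g., a liver mask).
    SUV normalizes tissue concentration by injected dose per gram of
    body weight, assuming a tissue density of about 1 g/mL.
    """
    mean_concentration = sum(voxel_bq_per_ml) / len(voxel_bq_per_ml)
    return mean_concentration / (injected_dose_bq / body_weight_g)
```

For example, a 70 kg patient injected with 370 MBq and a mean regional concentration of 15 kBq/mL gives an SUV of roughly 2.8.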
D. L. Johnson; D. J. Nowak; V. A. Jouraeva
1999-01-01
Leaves from twenty-three deciduous tree species and five conifer species were collected within a limited geographic range (1 km radius) and evaluated for possible application of scanning electron microscopy and X-ray microanalysis techniques of individual particle analysis (IPA). The goal was to identify tree species with leaves suitable for the automated...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linguraru, Marius George; Panjwani, Neil; Fletcher, Joel G.
2011-12-15
Purpose: To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. Methods: An automated colon cleansing algorithm was designed to detect and subtract tagged stool, accounting for heterogeneity and poor tagging, to be used in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparation were acquired for testing and training the parameters. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run comparatively with and without the stool subtraction algorithm. Results: The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The results are robust on fine details around folds, thin stool linings on the colonic wall, near polyps and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without using the stool subtraction module. This detection improved significantly (p = 0.009) after automated colon cleansing on cathartic-free data to 86.4% true positive rate at 5.75 false positives/scan. Conclusions: An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD by approximately 15%.
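The operating points quoted above pair a per-polyp sensitivity with a false-positive rate per scan. A small sketch of how such figures are derived (illustrative only, not taken from the paper's code):

```python
def cad_operating_point(true_positives, total_polyps, false_positives, n_scans):
    """Per-polyp sensitivity and false positives per scan for a CAD system."""
    sensitivity = true_positives / total_polyps
    fp_per_scan = false_positives / n_scans
    return sensitivity, fp_per_scan
```

For instance, detecting 31 of 44 polyps gives the 70.5% per-polyp sensitivity reported without stool subtraction.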
Zwaenepoel, Karen; Merkle, Dennis; Cabillic, Florian; Berg, Erica; Belaud-Rotureau, Marc-Antoine; Grazioli, Vittorio; Herelle, Olga; Hummel, Michael; Le Calve, Michele; Lenze, Dido; Mende, Stefanie; Pauwels, Patrick; Quilichini, Benoit; Repetti, Elena
2015-02-01
In the past several years we have observed a significant increase in our understanding of molecular mechanisms that drive lung cancer. Specifically in the non-small cell lung cancer sub-types, ALK gene rearrangements represent a sub-group of tumors that are targetable by the tyrosine kinase inhibitor Crizotinib, resulting in significant reductions in tumor burden. Phase II and III clinical trials were performed using an ALK break-apart FISH probe kit, making FISH the gold standard for identifying ALK rearrangements in patients. FISH is often considered a labor- and cost-intensive molecular technique, and in this study we aimed to demonstrate the feasibility of automating ALK FISH testing, to improve laboratory workflow and ease of testing. This involved automation of the pre-treatment steps of the ALK assay using various protocols on the VP 2000 instrument, and facilitating automated scanning of the fluorescent FISH specimens for simplified enumeration on various backend scanning and analysis systems. The results indicated that ALK FISH can be automated. Significantly, both the Ikoniscope and BioView automated FISH scanning and analysis systems provided robust analysis algorithms to define ALK rearrangements. In addition, the BioView system facilitated consultation of difficult cases via the internet. Copyright © 2015 Elsevier Inc. All rights reserved.
Preliminary Full-Scale Tests of the Center for Automated Processing of Hardwoods' Auto-Image
Philip A. Araman; Janice K. Wiedenbeck
1995-01-01
Automated lumber grading and yield optimization using computer controlled saws will be plausible for hardwoods if and when lumber scanning systems can reliably identify all defects by type. Existing computer programs could then be used to grade the lumber, identify the best cut-up solution, and control the sawing machines. The potential value of a scanning grading...
Mansouri, Kaweh; Medeiros, Felipe A.; Tatham, Andrew J.; Marchase, Nicholas; Weinreb, Robert N.
2017-01-01
PURPOSE To determine the repeatability of automated retinal and choroidal thickness measurements with swept-source optical coherence tomography (SS OCT) and the frequency and type of scan artifacts. DESIGN Prospective evaluation of new diagnostic technology. METHODS Thirty healthy subjects were recruited prospectively and underwent imaging with a prototype SS OCT instrument. Undilated scans of 54 eyes of 27 subjects (mean age, 35.1 ± 9.3 years) were obtained. Each subject had 4 SS OCT protocols repeated 3 times: 3-dimensional (3D) 6 × 6-mm raster scan of the optic disc and macula, radial, and line scan. Automated measurements were obtained through segmentation software. Interscan repeatability was assessed by intraclass correlation coefficients (ICCs). RESULTS ICCs for choroidal measurements were 0.92, 0.98, 0.80, and 0.91, respectively, for 3D macula, 3D optic disc, radial, and line scans. ICCs for retinal measurements were 0.39, 0.49, 0.71, and 0.69, respectively. Artifacts were present in up to 9% of scans. Signal loss because of blinking was the most common artifact on 3D scans (optic disc scan, 7%; macula scan, 9%), whereas segmentation failure occurred in 4% of radial and 3% of line scans. When scans with image artifacts were excluded, ICCs for choroidal thickness increased to 0.95, 0.99, 0.87, and 0.93 for 3D macula, 3D optic disc, radial, and line scans, respectively. ICCs for retinal thickness increased to 0.88, 0.83, 0.89, and 0.76, respectively. CONCLUSIONS Improved repeatability of automated choroidal and retinal thickness measurements was found with the SS OCT after correction of scan artifacts. Recognition of scan artifacts is important for correct interpretation of SS OCT measurements. PMID:24531020
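The interscan repeatability above is summarized with intraclass correlation coefficients. As a minimal sketch, here is a one-way random-effects ICC for n subjects each scanned k times (one of several ICC variants; the abstract does not specify which formulation the authors used):

```python
def icc_oneway(measurements):
    """One-way random-effects intraclass correlation, ICC(1,1).

    measurements: one list per subject (e.g., per eye), each holding the
    k repeated scan measurements for that subject.
    """
    n = len(measurements)
    k = len(measurements[0])
    grand_mean = sum(sum(row) for row in measurements) / (n * k)
    row_means = [sum(row) / k for row in measurements]
    # Mean squares between subjects and within subjects (across repeats)
    ms_between = k * sum((m - grand_mean) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(measurements, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfectly repeatable scans give an ICC of 1.0; repeat-to-repeat noise pulls the value toward 0.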
Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.
Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R
2018-01-01
Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis, which represents the current clinical standard approach, of the PET phantom scans by four experts. The automated analysis method successfully detected and measured all inserts in all test phantom scans. 
It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom activity ratios 9.7:1, 4:1, and 2:1, respectively. For all phantoms and at all contrast ratios, the average RMS error was found to be significantly lower for the proposed automated method compared to the manual analysis of the phantom scans. The uptake measurements produced by the automated method showed high correlation with the independent reference standard (R² ≥ 0.9987). In addition, the average computing time for the automated method was 30.6 s and was found to be significantly lower (P ≪ 0.001) compared to manual analysis (mean: 247.8 s). The proposed automated approach was found to have less error when measured against the independent reference than the manual approach. It can be easily adapted to other phantoms with spherical inserts. In addition, it eliminates inter- and intraoperator variability in PET phantom analysis and is significantly more time efficient, and therefore represents a promising approach to facilitate and simplify PET standardization and harmonization efforts. © 2017 American Association of Physicists in Medicine.
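The insert-localization errors above are root-mean-square distances between detected and reference insert centers. A small sketch of that evaluation metric (illustrative; the function name and input layout are assumptions):

```python
import math

def detection_rms_error(detected, reference):
    """RMS Euclidean distance between index-matched detected and reference
    3-D insert centers, in the same units as the inputs (e.g., mm)."""
    squared = [sum((d - r) ** 2 for d, r in zip(p, q))
               for p, q in zip(detected, reference)]
    return math.sqrt(sum(squared) / len(squared))
```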
Gordon, N. C.; Wareham, D. W.
2009-01-01
We report the failure of the automated MicroScan WalkAway system to detect carbapenem heteroresistance in Enterobacter aerogenes. Carbapenem resistance has become an increasing concern in recent years, and robust surveillance is required to prevent dissemination of resistant strains. Reliance on automated systems may delay the detection of emerging resistance. PMID:19641071
Note: Automated electrochemical etching and polishing of silver scanning tunneling microscope tips.
Sasaki, Stephen S; Perdue, Shawn M; Rodriguez Perez, Alejandro; Tallarida, Nicholas; Majors, Julia H; Apkarian, V Ara; Lee, Joonhee
2013-09-01
Fabrication of sharp and smooth Ag tips is crucial in optical scanning probe microscope experiments. To ensure reproducible tip profiles, the polishing process is fully automated using a closed-loop laminar flow system to deliver the electrolytic solution to moving electrodes mounted on a motorized translational stage. The repetitive translational motion is controlled precisely on the μm scale with a stepper motor and screw-thread mechanism. The automated setup allows reproducible control over the tip profile and improves smoothness and sharpness of tips (radius 27 ± 18 nm), as measured by ultrafast field emission.
Brudvig, Jean M; Swenson, Cheryl L
2015-12-01
Rapid and precise measurement of total and differential nucleated cell counts is a crucial diagnostic component of cavitary and synovial fluid analyses. The objectives of this study included (1) evaluation of reliability and precision of canine and equine fluid total nucleated cell count (TNCC) determined by the benchtop Abaxis VetScan HM5, in comparison with the automated reference instruments ADVIA 120 and the scil Vet abc, respectively, and (2) comparison of automated with manual canine differential nucleated cell counts. The TNCC and differential counts in canine pleural and peritoneal, and equine synovial fluids were determined on the Abaxis VetScan HM5 and compared with the ADVIA 120 and Vet abc analyzer, respectively. Statistical analyses included correlation, least squares fit linear regression, Passing-Bablok regression, and Bland-Altman difference plots. In addition, precision of the total cell count generated by the VetScan HM5 was determined. Agreement was excellent without significant constant or proportional bias for canine cavitary fluid TNCC. Automated and manual differential counts had R² < 0.5 for individual cell types (least squares fit linear regression). Equine synovial fluid TNCC agreed but with some bias due to the VetScan HM5 overestimating TNCC compared to the Vet abc. Intra-assay precision of the VetScan HM5 in 3 fluid samples ranged from 2% to 31%. The Abaxis VetScan HM5 provided rapid, reliable TNCC for canine and equine fluid samples. The differential nucleated cell count should be verified microscopically, as counts from the VetScan HM5 and also from the ADVIA 120 were often incorrect in canine fluid samples. © 2015 American Society for Veterinary Clinical Pathology.
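Intra-assay precision figures like the 2-31% above are typically coefficients of variation across replicate runs of the same sample. A minimal sketch, assuming CV% is the intended statistic:

```python
import statistics

def intra_assay_cv_percent(replicate_counts):
    """Coefficient of variation (%) of repeated total nucleated cell
    counts measured on the same fluid sample."""
    return (100.0 * statistics.stdev(replicate_counts)
            / statistics.mean(replicate_counts))
```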
Method for semi-automated microscopy of filtration-enriched circulating tumor cells.
Pailler, Emma; Oulhen, Marianne; Billiot, Fanny; Galland, Alexandre; Auger, Nathalie; Faugeroux, Vincent; Laplace-Builhé, Corinne; Besse, Benjamin; Loriot, Yohann; Ngo-Camus, Maud; Hemanda, Merouan; Lindsay, Colin R; Soria, Jean-Charles; Vielh, Philippe; Farace, Françoise
2016-07-14
Circulating tumor cell (CTC)-filtration methods capture high numbers of CTCs in non-small-cell lung cancer (NSCLC) and metastatic prostate cancer (mPCa) patients, and hold promise as a non-invasive technique for treatment selection and disease monitoring. However, filters have drawbacks that make the automation of microscopy challenging. We report the semi-automated microscopy method we developed to analyze filtration-enriched CTCs from NSCLC and mPCa patients. Spiked cell lines in normal blood and CTCs were enriched by ISET (isolation by size of epithelial tumor cells). Fluorescent staining was carried out using epithelial (pan-cytokeratins, EpCAM), mesenchymal (vimentin, N-cadherin), leukocyte (CD45) markers and DAPI. Cytomorphological staining was carried out with Mayer-Hemalun or Diff-Quik. ALK-, ROS1-, ERG-rearrangement were detected by filter-adapted-FISH (FA-FISH). Microscopy was carried out using an Ariol scanner. Two combined assays were developed. The first assay sequentially combined four-color fluorescent staining, scanning, automated selection of CD45− cells, cytomorphological staining, then scanning and analysis of CD45− cell phenotypical and cytomorphological characteristics. CD45− cell selection was based on DAPI and CD45 intensity, and a nuclear area >55 μm². The second assay sequentially combined fluorescent staining, automated selection of CD45− cells, FISH scanning on CD45− cells, then analysis of CD45− cell FISH signals. Specific scanning parameters were developed to deal with the uneven surface of filters and CTC characteristics. Thirty z-stacks spaced 0.6 μm apart were defined as the optimal setting, scanning 82%, 91%, and 95% of CTCs in ALK-, ROS1-, and ERG-rearranged patients respectively. A multi-exposure protocol consisting of three separate exposure times for green and red fluorochromes was optimized to analyze the intensity, size and thickness of FISH signals.
The semi-automated microscopy method reported here increases the feasibility and reliability of filtration-enriched CTC assays and can help progress towards their validation and translation to the clinic.
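The CD45-negative selection step described above gates candidate cells on fluorescence intensity and nuclear area. A hedged sketch of such a gate follows: the 55 μm² area cut comes from the abstract, but the intensity thresholds and field names here are invented placeholders, not the published parameters.

```python
def select_cd45_negative(cells, dapi_min=10.0, cd45_max=30.0, area_min_um2=55.0):
    """Keep candidate CTCs: DAPI-positive, CD45-negative, large nucleus.

    cells: dicts with mean 'dapi' and 'cd45' intensities (arbitrary units;
    thresholds are illustrative) and 'nuclear_area' in square micrometers.
    """
    return [c for c in cells
            if c['dapi'] >= dapi_min
            and c['cd45'] < cd45_max
            and c['nuclear_area'] > area_min_um2]
```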
Modified Brown-Forsythe Procedure for Testing Interaction Effects in Split-Plot Designs
ERIC Educational Resources Information Center
Vallejo, Guillermo; Ato, Manuel
2006-01-01
The standard univariate and multivariate methods are conventionally used to analyze continuous data from groups by trials repeated measures designs, in spite of being extremely sensitive to departures from the multisample sphericity assumption when group sizes are unequal. However, in the last 10 years several authors have offered alternative…
ERIC Educational Resources Information Center
Hodis, Flaviu A.
2015-01-01
Understanding human motivation requires gauging individuals' strivings to be effective in controlling goal pursuits and establishing the truth about themselves and their experiences. Two constructs, assessment and locomotion, capture well truth and control strivings, respectively. The validation process of the instruments measuring assessment and…
Individual, Familial, Friends-Related and Contextual Predictors of Early Sexual Intercourse
ERIC Educational Resources Information Center
Boislard P., Marie-Aude; Poulin, Francois
2011-01-01
This study examined the unique and simultaneous contribution of adolescents' characteristics, parent-child relationship and friends' characteristics on early sexual intercourse, while accounting for family status. A longitudinal multi-sample design was used. The first sample was recruited in a suburban context (n = 265; 62% girls) and the second…
Population-based structural variation discovery with Hydra-Multi.
Lindberg, Michael R; Hall, Ira M; Quinlan, Aaron R
2015-04-15
Current strategies for SNP and INDEL discovery incorporate sequence alignments from multiple individuals to maximize sensitivity and specificity. It is widely accepted that this approach also improves structural variant (SV) detection. However, multisample SV analysis has been stymied by the fundamental difficulties of SV calling, e.g. library insert size variability, SV alignment signal integration and detecting long-range genomic rearrangements involving disjoint loci. Extant tools suffer from poor scalability, which limits the number of genomes that can be co-analyzed and complicates analysis workflows. We have developed an approach that enables multisample SV analysis in hundreds to thousands of human genomes using commodity hardware. Here, we describe Hydra-Multi and measure its accuracy, speed and scalability using publicly available datasets provided by The 1000 Genomes Project and by The Cancer Genome Atlas (TCGA). Hydra-Multi is written in C++ and is freely available at https://github.com/arq5x/Hydra. aaronquinlan@gmail.com or ihall@genome.wustl.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
Takahashi; Nakazawa; Watanabe; Konagaya
1999-01-01
We have developed automated processing algorithms for 2-dimensional (2-D) electrophoretograms of genomic DNA based on the RLGS (Restriction Landmark Genomic Scanning) method, which uses restriction enzyme recognition sites as landmarks and maps them onto a 2-D electrophoresis gel. Our algorithms enable automated spot recognition in RLGS electrophoretograms and automated comparison of large numbers of such images. In the final stage of the automated processing, a master spot pattern, onto which all the spots in the RLGS images are mapped at once, is obtained. Spot pattern variations that appear specific to pathogenic DNA changes can be easily detected by simply looking over the master spot pattern. When we applied our algorithms to the analysis of 33 RLGS images derived from human colon tissues, we successfully detected several colon tumor-specific spot pattern changes.
An Automated Medical Information Management System (OpScan-MIMS) in a Clinical Setting
Margolis, S.; Baker, T.G.; Ritchey, M.G.; Alterescu, S.; Friedman, C.
1981-01-01
This paper describes an automated medical information management system within a clinic setting. The system includes an optically scanned data entry system (OpScan), a generalized, interactive retrieval and storage software system (Medical Information Management System, MIMS) and the use of time-sharing. The system has the advantages of minimal hardware purchase and maintenance, rapid data entry and retrieval, and user-created programs; it requires no user knowledge of computer languages or technology and is cost-effective. The OpScan-MIMS system has been operational for approximately 16 months in a sexually transmitted disease clinic. The system's applications to medical audit, quality assurance, clinic management and clinical training are demonstrated.
Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery
NASA Astrophysics Data System (ADS)
Pozdin, Maksym A.; Skrinjar, Oskar
2005-04-01
This paper presents an automated algorithm for extraction of the Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids, and given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from a post-implant MRI scan, i.e., finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…
ERIC Educational Resources Information Center
Ribeiro, Rui P. P. L.; Silva, Ricardo J. S.; Esteves, Isabel A. A. C.; Mota, Jose´ P. B.
2015-01-01
The construction of a simple volumetric adsorption apparatus is highlighted. The setup is inexpensive and provides a clear demonstration of gas phase adsorption concepts. The topic is suitable for undergraduate chemistry and chemical engineering students. Moreover, this unit can also provide quantitative data that can be used by young researchers…
ERIC Educational Resources Information Center
Immekus, Jason C.; Maller, Susan J.
2010-01-01
Multisample confirmatory factor analysis (MCFA) and latent mean structures analysis (LMS) were used to test measurement invariance and latent mean differences on the Kaufman Adolescent and Adult Intelligence Scale[TM] (KAIT) across males and females in the standardization sample. MCFA found that the parameters of the KAIT two-factor model were…
Note: Automated optical focusing on encapsulated devices for scanning light stimulation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bitzer, L. A.; Benson, N., E-mail: niels.benson@uni-due.de; Schmechel, R.
Recently, a scanning light stimulation system with an automated, adaptive focus correction during the measurement was introduced. Here, its application to encapsulated devices is discussed. This includes the changes an encapsulating optical medium introduces to the focusing process as well as to the subsequent light stimulation measurement. Further, the focusing method is modified to compensate for the influence of refraction and to maintain a minimum beam diameter on the sample surface.
Automated Reporting of DXA Studies Using a Custom-Built Computer Program.
England, Joseph R; Colletti, Patrick M
2018-06-01
Dual-energy x-ray absorptiometry (DXA) scans are a critical population health tool and relatively simple to interpret but can be time-consuming to report, often requiring manual transfer of bone mineral density and associated statistics into commercially available dictation systems. We describe here a custom-built computer program for automated reporting of DXA scans using Pydicom, an open-source package built in the Python computer language, and regular expressions to mine DICOM tags for patient information and bone mineral density statistics. This program, easy to emulate by any novice computer programmer, has doubled our efficiency at reporting DXA scans and has eliminated dictation errors.
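The authors' program pairs Pydicom for tag access with regular expressions for mining the values. As a stdlib-only sketch of the regex-mining half (the report string, field layout, and pattern below are made up for illustration; a real implementation would pull this text from DICOM tags read with Pydicom):

```python
import re

# Hypothetical DXA result string standing in for text read from DICOM tags.
report = "Region: AP Spine L1-L4  BMD: 0.982 g/cm2  T-score: -1.3  Z-score: -0.4"

# Named groups capture the bone mineral density and associated statistics.
pattern = re.compile(
    r"BMD:\s*(?P<bmd>[\d.]+)\s*g/cm2\s*"
    r"T-score:\s*(?P<t>-?[\d.]+)\s*Z-score:\s*(?P<z>-?[\d.]+)"
)

match = pattern.search(report)
bmd = float(match.group("bmd"))
t_score = float(match.group("t"))
```

Extracted values can then be dropped directly into a report template, avoiding manual transcription.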
Automated coronary artery calcification detection on low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180 HU within the mask region are considered as CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method risk category agreed with manual markings of gated scans for 24 cases, while 15 cases were one category off. For low-dose scans, the automatic method agreed for 33 cases, while 7 cases were one category off.
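The Agatston score computed above weights each calcified lesion's area by its peak attenuation. The sketch below shows only the conventional scoring arithmetic, not the authors' pipeline (which additionally uses a 180 HU candidate threshold suited to low-dose scans):

```python
def agatston_weight(peak_hu):
    """Conventional Agatston density factor from peak attenuation (HU)."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Sum of lesion area (mm^2) times density factor.

    lesions: (area_mm2, peak_hu) pairs, one per lesion per slice.
    """
    return sum(area * agatston_weight(peak) for area, peak in lesions)
```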
Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.
2016-01-01
OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1, manual segmentation of all scans; Pipeline 2, automated segmentation of all scans; Pipeline 3, automated segmentation of all scans with manual segmentation for outliers flagged by a rudimentary visual quality check; and Pipelines 4 and 5, volumes derived from a unidimensional measurement of craniocaudal spleen length and from three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2-5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) was compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1-5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, an average absolute volume deviation of 23.7 cm³, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient of 0.98, an absolute deviation of 46.92 cm³, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
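Segmentation accuracy above is reported in part as a Dice similarity coefficient (DSC). A minimal sketch of the metric over voxel index sets:

```python
def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two segmentations, each given
    as a set of voxel indices; 1.0 means perfect overlap, 0.0 none."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0  # both empty: treat as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))
```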
ERIC Educational Resources Information Center
Keith, Timothy Z.; Fine, Jodene Goldenring; Taub, Gordon E.; Reynolds, Matthew R.; Kranzler, John H.
2006-01-01
The recently published fourth edition of the Wechsler Intelligence Scale for Children (WISC-IV) represents a considerable departure from previous versions of the scale. The structure of the instrument has changed, and some subtests have been added and others deleted. The technical manual for the WISC-IV provided evidence supporting this new…
ERIC Educational Resources Information Center
Seco, Guillermo Vallejo; Izquierdo, Marcelino Cuesta; Garcia, M. Paula Fernandez; Diez, F. Javier Herrero
2006-01-01
The authors compare the operating characteristics of the bootstrap-F approach, a direct extension of the work of Berkovits, Hancock, and Nevitt, with Huynh's improved general approximation (IGA) and the Brown-Forsythe (BF) multivariate approach in a mixed repeated measures design when normality and multisample sphericity assumptions do not hold.…
Automated Image Analysis Corrosion Working Group Update: February 1, 2018
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides for the Automated Image Analysis Corrosion Working Group update. The overall goals were to automate the detection and quantification of features in images (faster, more accurate), to establish how to do this (obtain data, analyze data), and to focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, optical plus laser RGB).
Tamnes, Christian K; Herting, Megan M; Goddings, Anne-Lise; Meuwese, Rosa; Blakemore, Sarah-Jayne; Dahl, Ronald E; Güroğlu, Berna; Raznahan, Armin; Sowell, Elizabeth R; Crone, Eveline A; Mills, Kathryn L
2017-03-22
Before we can assess and interpret how developmental changes in human brain structure relate to cognition, affect, and motivation, and how these processes are perturbed in clinical or at-risk populations, we must first precisely understand typical brain development and how changes in different structural components relate to each other. We conducted a multisample magnetic resonance imaging study to investigate the development of cortical volume, surface area, and thickness, as well as their inter-relationships, from late childhood to early adulthood (7-29 years) using four separate longitudinal samples including 388 participants and 854 total scans. These independent datasets were processed and quality-controlled using the same methods, but analyzed separately to study the replicability of the results across sample and image-acquisition characteristics. The results consistently showed widespread and regionally variable nonlinear decreases in cortical volume and thickness and comparably smaller steady decreases in surface area. Further, the dominant contributor to cortical volume reductions during adolescence was thinning. Finally, complex regional and topological patterns of associations between changes in surface area and thickness were observed. Positive relationships were seen in sulcal regions in prefrontal and temporal cortices, while negative relationships were seen mainly in gyral regions in more posterior cortices. Collectively, these results help resolve previous inconsistencies regarding the structural development of the cerebral cortex from childhood to adulthood, and provide novel insight into how changes in the different dimensions of the cortex in this period of life are inter-related. SIGNIFICANCE STATEMENT Different measures of brain anatomy develop differently across adolescence. 
Their precise trajectories and how they relate to each other throughout development are important to know if we are to fully understand both typical development and disorders involving aberrant brain development. However, our understanding of such trajectories and relationships is still incomplete. To provide accurate characterizations of how different measures of cortical structure develop, we performed an MRI investigation across four independent datasets. The most profound anatomical change in the cortex during adolescence was thinning, with the largest decreases observed in the parietal lobe. There were complex regional patterns of associations between changes in surface area and thickness, with positive relationships seen in sulcal regions in prefrontal and temporal cortices, and negative relationships seen mainly in gyral regions in more posterior cortices. Copyright © 2017 Tamnes et al.
Chavez, Sofia; Viviano, Joseph; Zamyadi, Mojdeh; Kingsley, Peter B; Kochunov, Peter; Strother, Stephen; Voineskos, Aristotle
2018-02-01
The aim was to develop a quality assurance (QA) tool (acquisition guidelines and automated processing) for diffusion tensor imaging (DTI) data using a common agar-based phantom used for fMRI QA. The goal is to produce a comprehensive set of automated, sensitive and robust QA metrics. A readily available agar phantom was scanned with and without parallel imaging reconstruction. Other scanning parameters were matched to the human scans. A central slab, made up of either a thick slice or an average of a few slices, was extracted and all processing was performed on that image. The proposed QA relies on the creation of two ROIs for processing: (i) a preset central circular region of interest (ccROI) and (ii) a signal mask for all images in the dataset. The ccROI enables computation of average signal for SNR calculations as well as average FA values. The production of the signal masks enables automated measurements of eddy current and B0 inhomogeneity induced distortions by exploiting the sphericity of the phantom. Also, the signal masks allow automated background localization to assess levels of Nyquist ghosting. The proposed DTI-QA was shown to produce eleven metrics that are robust yet sensitive to image quality changes within site and differences across sites. It can be performed in a reasonable amount of scan time (~15 min) and the code for automated processing has been made publicly available. A novel DTI-QA tool has been proposed. It has been applied successfully to data from several scanners/platforms. The novelty lies in the exploitation of the sphericity of the phantom for distortion measurements. Other novel contributions are: the computation of an SNR value per gradient direction for the diffusion weighted images (DWIs) and an SNR value per non-DWI, an automated background detection for the Nyquist ghosting measurement and an error metric reflecting the contribution of EPI instability to the eddy current induced shape changes observed for DWIs.
Copyright © 2017 Elsevier Inc. All rights reserved.
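One of the QA metrics described, a per-image SNR from the central circular ROI (ccROI) and a background region, might be computed as below. This is a hedged sketch of a generic ROI-based SNR, not necessarily the authors' exact definition:

```python
import numpy as np

def snr(image, cc_roi_mask, background_mask):
    """Simple SNR estimate: mean signal in a central circular ROI divided
    by the standard deviation in a signal-free background region."""
    signal = image[cc_roi_mask].mean()
    noise = image[background_mask].std()
    return signal / noise
```

Computed once per diffusion-weighted image and once per non-DWI, this yields the per-gradient-direction SNR values the abstract mentions.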
Hamill, Daniel; Buscombe, Daniel; Wheaton, Joseph M
2018-01-01
Side scan sonar in low-cost 'fishfinder' systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and offer low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into two to five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of similar maps derived from multibeam sonar.
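The second-order texture statistics mentioned above are typically derived from a grey-level co-occurrence matrix (GLCM). The sketch below shows that feature-extraction step only (the Gaussian Mixture Model clustering stage is omitted), and is an illustration rather than the paper's implementation:

```python
import numpy as np

def glcm(q, levels, offset=(0, 1)):
    """Normalised grey-level co-occurrence matrix of an integer-quantised
    patch q (values in 0..levels-1) for a single pixel offset."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[q[r, c], q[r + dr, c + dc]] += 1
    return m / m.sum()

def contrast(p):
    # Haralick contrast: large when neighbouring pixels differ strongly.
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def homogeneity(p):
    # Haralick homogeneity: large for smooth, uniform texture.
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())
```

Feature vectors of such statistics, computed per echogram patch, would then be clustered into the two to five grain-size classes.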
Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-10-01
The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
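The core comparison in this study, mean dose per segmented region from a voxelwise dose map, reduces to masked averaging. A minimal sketch (function names are illustrative):

```python
import numpy as np

def mean_organ_doses(dose_map, label_map, organ_ids):
    """Mean dose per segmented region, given a voxelwise Monte Carlo
    dose map and an integer label map from (auto)segmentation."""
    return {oid: float(dose_map[label_map == oid].mean()) for oid in organ_ids}

def percent_error(auto_dose, expert_dose):
    # Relative deviation of the auto-segmentation dose estimate
    # from the expert-segmentation dose estimate.
    return 100.0 * (auto_dose - expert_dose) / expert_dose
```

Because the mean is taken over the whole region, small boundary disagreements between auto and expert masks shift it only slightly, which is the paper's hypothesis.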
Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-01-01
The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070
Automated search method for AFM and profilers
NASA Astrophysics Data System (ADS)
Ray, Michael; Martin, Yves C.
2001-08-01
A new automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip obtained from a chip-design database or an image of the area. The user interface requires a user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high-speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.
Automated Status Notification System
NASA Technical Reports Server (NTRS)
2005-01-01
NASA Lewis Research Center's Automated Status Notification System (ASNS) was born out of need. To prevent "hacker attacks," Lewis' telephone system needed to monitor communications activities 24 hr a day, 7 days a week. With decreasing staff resources, this continuous monitoring had to be automated. By utilizing existing communications hardware, a UNIX workstation, and NAWK (a pattern scanning and processing language), we implemented a continuous monitoring system.
Toward Automated Intraocular Laser Surgery Using a Handheld Micromanipulator
Yang, Sungwook; MacLachlan, Robert A.; Riviere, Cameron N.
2014-01-01
This paper presents a technique for automated intraocular laser surgery using a handheld micromanipulator known as Micron. The novel handheld manipulator enables automated scanning of a laser probe within a cylinder 4 mm long and 4 mm in diameter. For the automation, the surface of the retina is reconstructed using a stereomicroscope, and then preplanned targets are placed on the surface. The laser probe is precisely located on the target via visual servoing of the aiming beam, while maintaining a specific distance above the surface. In addition, the system is capable of tracking the surface of the eye in order to compensate for any eye movement introduced during the operation. We compared the performance of the automated scanning using various control thresholds, in order to find the most effective threshold in terms of accuracy and speed. Given the selected threshold, we conducted the handheld operation above a fixed target surface. The average error and execution time are reduced by 63.6% and 28.5%, respectively, compared to the unaided trials. Finally, automated laser photocoagulation was also demonstrated in an eye phantom, including compensation for eye movement. PMID:25893135
Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M
2015-09-01
The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.
RootScan: Software for high-throughput analysis of root anatomical traits
USDA-ARS?s Scientific Manuscript database
RootScan is a program for semi-automated image analysis of anatomical phenes in root cross-sections. RootScan uses pixel value thresholds to separate the cross-section from its background and to visually dissect it into tissue regions. Area measurements and object counts are performed within various...
Evaluation of a laser scanning sensor for variable-rate tree sprayer development
USDA-ARS?s Scientific Manuscript database
Accurate canopy measurement capabilities are prerequisites to automate variable-rate sprayers. A 270° radial range laser scanning sensor was tested for its scanning accuracy to detect tree canopy profiles. Signals from the laser sensor and a ground speed sensor were processed with an embedded comput...
Design Modification of Electrophoretic Equipment
NASA Technical Reports Server (NTRS)
Reddick, J. M.; Hirsch, I.
1973-01-01
An improved design is reported for a zone electrophoretic sampler that can be used in mass screening for hemoglobin S, the cause of sickle cell anemia. A high-voltage multicell cellulose acetate device requiring 5- to 6-minute electrophoresis periods is considered; cells may be activated individually or simultaneously. A multisample hemoglobin applicator standardizes the amount of sample applied and transfers the hemolysate to the electrical wires.
Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients.
Mayer, Markus A; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P
2010-11-08
Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis.
Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients
Mayer, Markus A.; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2010-01-01
Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis. PMID:21258556
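The energy-minimisation idea behind such layer segmentation can be illustrated with a simple dynamic-programming boundary search over a per-pixel cost image. This is a stand-in for the paper's gradient-plus-local-smoothing energy, not the authors' algorithm:

```python
import numpy as np

def segment_boundary(cost, smooth_penalty=1.0, max_jump=1):
    """Left-to-right minimal path (one row index per column) trading off
    a per-pixel cost (e.g. a negative gradient term) against a smoothness
    penalty on row jumps, solved exactly by dynamic programming."""
    rows, cols = cost.shape
    dp = np.full((rows, cols), np.inf)   # best cost to reach (r, c)
    back = np.zeros((rows, cols), dtype=int)
    dp[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = dp[lo:hi, c - 1] + smooth_penalty * np.abs(np.arange(lo, hi) - r)
            k = int(np.argmin(prev))
            dp[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(dp[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]
```

Applied per A-scan column of a B-scan, the low-cost path traces a layer boundary; the smoothness term plays the role of the local smoothing term in the paper's energy function.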
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H; Liang, X; Kalbasi, A
2014-06-01
Purpose: Advanced radiotherapy (RT) techniques such as proton pencil beam scanning (PBS) and photon-based volumetric modulated arc therapy (VMAT) have dosimetric advantages in the treatment of head and neck malignancies. However, anatomic or alignment changes during treatment may limit robustness of PBS and VMAT plans. We assess the feasibility of automated deformable registration tools for robustness evaluation in adaptive PBS and VMAT RT of oropharyngeal cancer (OPC). Methods: We treated 10 patients with bilateral OPC with advanced RT techniques and obtained verification CT scans with physician-reviewed target and OAR contours. We generated 3 advanced RT plans for each patient: a proton PBS plan using 2 posterior oblique fields (2F), a proton PBS plan using an additional third low-anterior field (3F), and a photon VMAT plan using 2 arcs (Arc). For each of the planning techniques, we forward calculated initial (Ini) plans on the verification scans to create verification (V) plans. We extracted DVH indicators based on physician-generated contours for 2 target and 14 OAR structures to investigate the feasibility of two automated tools (contour propagation (CP) and dose deformation (DD)) as surrogates for routine clinical plan robustness evaluation. For each verification scan, we compared DVH indicators of V, CP and DD plans in a head-to-head fashion using Student's t-test. Results: We performed 39 verification scans; each patient underwent 3 to 6 verification scans. We found no differences in doses to target or OAR structures between V and CP, V and DD, and CP and DD plans across all patients (p > 0.05). Conclusions: Automated robustness evaluation tools, CP and DD, accurately predicted dose distributions of verification (V) plans using physician-generated contours.
These tools may be further developed as a potential robustness screening tool in the workflow for adaptive treatment of OPC using advanced RT techniques, reducing the need for physician-generated contours.
Automated aortic calcification detection in low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose non-contrast, non-ECG-gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are, respectively, 98.46% and 98.28% correlated with the reference mass and volume scores.
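The candidate-detection and volume-scoring steps described above amount to thresholding inside the aorta mask and counting voxels. A minimal sketch under those assumptions (the filtering and non-aortic-voxel elimination steps are omitted):

```python
import numpy as np

def detect_aortic_calcium(image_hu, aorta_mask, threshold=160):
    """Calcification candidates: voxels at or above an elevated HU
    threshold (to suppress noise in low-dose scans) that lie inside
    the segmented aorta mask."""
    return (image_hu >= threshold) & aorta_mask

def volume_score(calc_mask, voxel_volume_mm3):
    # Volume score: number of calcified voxels times the voxel volume.
    return float(calc_mask.sum() * voxel_volume_mm3)
```

The Agatston and mass scores would be computed from the same candidate mask using the per-lesion density weighting and calibrated mean attenuation, respectively.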
NASA Astrophysics Data System (ADS)
Huang, Alex S.; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M.; Weinreb, Robert N.
2017-06-01
The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm's canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian Ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images with a cross multiplication tool developed to initiate SC/CC detection automated through a fuzzy hidden Markov Chain approach. Automatic segmentation of SC and initial CC's was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, only <1% of images (5114 total B-scans) were ungradable. Automatic segmentation algorithm performed well with SC detection 98.3% of the time and <0.1% false positive detection compared to expert grader consensus. CC was detected 84.2% of the time with 1.4% false positive detection. 3-D representation of AHO pathways demonstrated variably thicker and thinner SC with some clear CC roots. Circumferential (360 deg), automated, and validated AHO detection of angle structures in the living human eye with reconstruction was possible.
NASA Astrophysics Data System (ADS)
Sadhasivam, Jayakumar; Alamelu, M.; Radhika, R.; Ramya, S.; Dharani, K.; Jayavel, Senthil
2017-11-01
Nowadays people's use of the Automated Teller Machine (ATM) has been increasing, even in rural areas. At present, the only security provided by most banks is the ATM PIN, and hackers can easily identify the PIN and withdraw money if they have stolen the ATM card. ATMs are also physically broken open and the money stolen. To overcome these disadvantages, we propose an approach, the "Automated Secure Tracking System", to secure ATMs and track tampering. In this approach, when the bank account is created, the bank scans the customer's iris (including its characteristic movements) and fingerprint; the scans record the position and movement of the eye and identify the fingerprint from a minimal set of measurements. When the card is swiped, the ATM requests the PIN, scans the iris, and recognizes the fingerprint before allowing the customer to withdraw money. If somebody tries to break open the ATM, an alert message is sent to the nearest police station and the ATM shutter is automatically closed. This helps to stop hackers who withdraw money with stolen ATM cards and also helps the government identify criminals easily.
AN ULTRAVIOLET-VISIBLE SPECTROPHOTOMETER AUTOMATION SYSTEM. PART I: FUNCTIONAL SPECIFICATIONS
This document contains the project definition, the functional requirements, and the functional design for a proposed computer automation system for scanning spectrophotometers. The system will be implemented on a Data General computer using the BASIC language. The system is a rea...
Scanning System -- Technology Worth a Look
Philip A. Araman; Daniel L. Schmoldt; Richard W. Conners; D. Earl Kline
1995-01-01
In an effort to help automate the inspection for lumber defects, optical scanning systems are emerging as an alternative to the human eye. Although still in its infancy, scanning technology is being explored by machine companies and universities. This article was excerpted from "Machine Vision Systems for Grading and Processing Hardwood Lumber," by Philip...
Automated inspection of gaps on the free-form shape parts by laser scanning technologies
NASA Astrophysics Data System (ADS)
Zhou, Sen; Xu, Jian; Tao, Lei; An, Lu; Yu, Yan
2018-01-01
In industrial manufacturing processes, the dimensional inspection of gaps on free-form shape parts is critical and challenging, and is directly associated with subsequent assembly and terminal product quality. In this paper, a fast measuring method for automated gap inspection based on laser scanning technologies is presented. The proposed measuring method consists of three steps: first, the relative position is determined according to the geometric feature of the measured gap, taking into account constraints that exist in a laser scanning operation. Second, in order to acquire a complete gap profile, a fast and effective scanning path is designed. Finally, the dimensions of the gaps on the free-form shape parts, including width, depth and flush, are described in a virtual environment. In the future, a machine based on the proposed method will be developed for on-line dimensional inspection of gaps on automobile or aerospace production lines.
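Extracting gap width and depth from a single scanned laser-line profile can be sketched as below. This is an illustrative simplification (flush measurement and the paper's actual profile processing are not shown), and the thresholding scheme is an assumption:

```python
import numpy as np

def gap_metrics(x, z, depth_threshold):
    """Width and depth of a gap in one laser-line profile.
    x : lateral positions along the scan line
    z : surface heights; the gap is where z drops below the surrounding
        surface level by more than depth_threshold."""
    surface = np.median(z)               # robust estimate of the part surface
    in_gap = z < surface - depth_threshold
    idx = np.flatnonzero(in_gap)
    if idx.size == 0:
        return 0.0, 0.0                  # no gap detected in this profile
    width = x[idx[-1]] - x[idx[0]]
    depth = surface - z[idx].min()
    return float(width), float(depth)
```

Repeating this per profile along the scanning path yields the gap dimensions over the whole free-form seam.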
The retention of manual flying skills in the automated cockpit.
Casner, Stephen M; Geven, Richard W; Recker, Matthias P; Schooler, Jonathan W
2014-12-01
The aim of this study was to understand how the prolonged use of cockpit automation is affecting pilots' manual flying skills. There is an ongoing concern about a potential deterioration of manual flying skills among pilots who assume a supervisory role while cockpit automation systems carry out tasks that were once performed by human pilots. We asked 16 airline pilots to fly routine and nonroutine flight scenarios in a Boeing 747-400 simulator while we systematically varied the level of automation that they used, graded their performance, and probed them about what they were thinking about as they flew. We found pilots' instrument scanning and manual control skills to be mostly intact, even when pilots reported that they were infrequently practiced. However, when pilots were asked to manually perform the cognitive tasks needed for manual flight (e.g., tracking the aircraft's position without the use of a map display, deciding which navigational steps come next, recognizing instrument system failures), we observed more frequent and significant problems. Furthermore, performance on these cognitive tasks was associated with measures of how often pilots engaged in task-unrelated thought when cockpit automation was used. We found that while pilots' instrument scanning and aircraft control skills are reasonably well retained when automation is used, the retention of cognitive skills needed for manual flying may depend on the degree to which pilots remain actively engaged in supervising the automation.
Multi-Sample Cluster Analysis Using Akaike’s Information Criterion.
1982-12-20
A comprehensive quality control workflow for paired tumor-normal NGS experiments.
Schroeder, Christopher M; Hilke, Franz J; Löffler, Markus W; Bitzer, Michael; Lenz, Florian; Sturm, Marc
2017-06-01
Quality control (QC) is an important part of all NGS data analysis stages. Many available tools calculate QC metrics from different analysis steps of single-sample experiments (raw reads, mapped reads and variant lists). Multi-sample experiments, such as sequencing of tumor-normal pairs, require additional QC metrics to ensure validity of results. These multi-sample QC metrics still lack standardization. We therefore suggest a new workflow for QC of DNA sequencing of tumor-normal pairs. With this workflow, well-known single-sample QC metrics and additional metrics specific for tumor-normal pairs can be calculated. The division into separate tools offers high flexibility and allows reuse for other purposes. All tools produce qcML, a generic XML format for QC of -omics experiments. qcML uses quality metrics defined in an ontology, which was adapted for NGS. All QC tools are implemented in C++ and run both under Linux and Windows. Plotting requires python 2.7 and matplotlib. The software is available under the 'GNU General Public License version 2' as part of the ngs-bits project: https://github.com/imgag/ngs-bits. christopher.schroeder@med.uni-tuebingen.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
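One representative multi-sample QC metric for tumor-normal pairs is the correlation of variant allele frequencies at shared SNPs, commonly used to flag sample swaps or mix-ups. This example is illustrative and not necessarily one of the metrics in the ngs-bits workflow:

```python
import numpy as np

def sample_correlation(af_tumor, af_normal):
    """Pearson correlation of variant allele frequencies at SNPs shared
    by the tumor and normal samples; values far below ~1 suggest the two
    samples do not come from the same patient (a possible swap)."""
    t = np.asarray(af_tumor, float)
    n = np.asarray(af_normal, float)
    t = t - t.mean()
    n = n - n.mean()
    return float((t * n).sum() / np.sqrt((t * t).sum() * (n * n).sum()))
```

In practice such a check would run on germline SNP allele frequencies extracted from both BAM files, with a pair-specific threshold for raising a QC warning.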
High-throughput hyperpolarized 13C metabolic investigations using a multi-channel acquisition system
NASA Astrophysics Data System (ADS)
Lee, Jaehyuk; Ramirez, Marc S.; Walker, Christopher M.; Chen, Yunyun; Yi, Stacey; Sandulache, Vlad C.; Lai, Stephen Y.; Bankson, James A.
2015-11-01
Magnetic resonance imaging and spectroscopy of hyperpolarized (HP) compounds such as [1-13C]-pyruvate have shown tremendous potential for offering new insight into disease and response to therapy. New applications of this technology in clinical research and care will require extensive validation in cells and animal models, a process that may be limited by the high cost and modest throughput associated with dynamic nuclear polarization. The relatively wide spectral separation between [1-13C]-pyruvate and its chemical endpoints in vivo is conducive to simultaneous multi-sample measurements, even in the presence of a suboptimal global shim. Multi-channel acquisitions could conserve costs and accelerate experiments by allowing acquisition from multiple independent samples following a single dissolution. Unfortunately, many existing preclinical MRI systems are equipped with only a single channel for broadband acquisitions. In this work, we examine the feasibility of this concept using a broadband multi-channel digital receiver extension and detector arrays that allow concurrent measurement of dynamic spectroscopic data from ex vivo enzyme phantoms, in vitro anaplastic thyroid carcinoma cells, and in vivo in tumor-bearing mice. Throughput and the cost of consumables were improved by up to a factor of four. These preliminary results demonstrate the potential for efficient multi-sample studies employing hyperpolarized agents.
Enhancing reproducibility of ultrasonic measurements by new users
NASA Astrophysics Data System (ADS)
Pramanik, Manojit; Gupta, Madhumita; Krishnan, Kajoli Banerjee
2013-03-01
Operator perception influences ultrasound image acquisition and processing. Lower costs are attracting new users to medical ultrasound. Anticipating an increase in this trend, we conducted a study to quantify the variability in ultrasonic measurements made by novice users and to identify methods to reduce it. We designed a protocol with four presets and trained four new users to scan and manually measure the head circumference of a fetal phantom with an ultrasound scanner. In the first phase, the users followed this protocol in seven distinct sessions. They then received feedback on the quality of the scans from an expert. In the second phase, two of the users repeated the entire protocol aided by visual cues provided to them during scanning. We performed off-line measurements on all the images using a fully automated algorithm capable of measuring the head circumference from fetal phantom images. The ground truth (198.1±1.6 mm) was based on sixteen scans and measurements made by an expert. Our analysis shows that: (1) the inter-observer variability of manual measurements was 5.5 mm, whereas the inter-observer variability of automated measurements was only 0.6 mm in the first phase; (2) consistency of image appearance improved and mean manual measurements were 4-5 mm closer to the ground truth in the second phase; and (3) automated measurements were more precise, accurate and less sensitive to different presets compared to manual measurements in both phases. Our results show that visual aids and automation can bring more reproducibility to ultrasonic measurements made by new users.
Brown, Treva T.; LeJeune, Zorabel M.; Liu, Kai; Hardin, Sean; Li, Jie-Ren; Rupnik, Kresimir; Garno, Jayne C.
2010-01-01
Controllers for scanning probe instruments can be programmed for automated lithography to generate desired surface arrangements of nanopatterns of organic thin films, such as n-alkanethiol self-assembled monolayers (SAMs). In this report, atomic force microscopy (AFM) methods of lithography known as nanoshaving and nanografting are used to write nanopatterns within organic thin films. Commercial instruments provide software to control the length, direction, speed, and applied force of the scanning motion of the tip. For nanoshaving, higher forces are applied to an AFM tip to selectively remove regions of the matrix monolayer, exposing bare areas of the gold substrate. Nanografting is accomplished by force-induced displacement of molecules of a matrix SAM, followed immediately by the surface self-assembly of n-alkanethiol molecules from solution. Advancements in AFM automation enable rapid protocols for nanolithography, which can be accomplished within the tight time restraints of undergraduate laboratories. Example experiments with scanning probe lithography (SPL) will be described in this report that were accomplished by undergraduate students during laboratory course activities and research internships in the chemistry department of Louisiana State University. Students were introduced to principles of surface analysis and gained “hands-on” experience with nanoscale chemistry. PMID:21483651
Buscombe, Daniel; Wheaton, Joseph M.
2018-01-01
Side scan sonar in low-cost ‘fishfinder’ systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two to five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% compared to similar maps derived from multibeam sonar. PMID:29538449
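The Gaussian Mixture Model step described in the sonar abstract above can be sketched in miniature. This is a hedged illustration, not the authors' pipeline: a plain-EM fit of a two-component, one-dimensional mixture to a synthetic texture statistic, with quantile-based initialization (the data and parameters are invented for the example):

```python
import numpy as np

def fit_gmm_1d(x, k=2, n_iter=100):
    """Fit a k-component 1-D Gaussian mixture by plain EM.
    Means are initialized at evenly spaced quantiles of the data."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var() / k)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate weights, means and variances.
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return mu, var, w

def classify(x, mu, var, w):
    """Assign each sample to its most probable mixture component."""
    p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return p.argmax(axis=1)

# Toy "texture statistic" with two populations (e.g. sand vs. gravel patches).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 300), rng.normal(5.0, 0.5, 300)])
mu, var, w = fit_gmm_1d(x, k=2)
labels = classify(x, mu, var, w)
```

In the paper the features are second-order texture statistics computed from echogram patches; here a single scalar feature stands in for them to keep the mechanics visible.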
Vivas, J.; Sáa, A. I.; Tinajas, A.; Barbeyto, L.; Rodríguez, L. A.
2000-01-01
This study was performed to compare the MicroScan WalkAway automated identification system in conjunction with the new MicroScan Combo Negative type 1S panels with conventional biochemical methods for identifying 85 environmental, clinical, and reference strains of eight Aeromonas species. PMID:10742279
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and is too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
NASA Astrophysics Data System (ADS)
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey
2012-12-01
This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific and is performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
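The difference-of-Gaussians (DoG) filtering used above for contour delineation is straightforward to reproduce. A minimal sketch using scipy.ndimage; the test image and sigma values are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_low, sigma_high):
    """Band-pass filter: subtract a wide Gaussian blur from a narrow one.
    Flat regions cancel to ~0, while edges between the two scales survive."""
    return gaussian_filter(image, sigma_low) - gaussian_filter(image, sigma_high)

# Synthetic "image": a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
dog = difference_of_gaussians(img, 1.0, 3.0)
```

The resulting response is near zero inside homogeneous regions and peaks along boundaries, which is why DoG output is a convenient edge map to initialize snakes on.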
3D model assisted fully automated scanning laser Doppler vibrometer measurements
NASA Astrophysics Data System (ADS)
Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve
2017-12-01
In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.
New developments in electron microscopy for serial image acquisition of neuronal profiles.
Kubota, Yoshiyuki
2015-02-01
Recent developments in electron microscopy largely automate the continuous acquisition of serial electron micrographs (EMGs), previously achieved by laborious manual serial ultrathin sectioning using an ultramicrotome and ultrastructural image capture process with transmission electron microscopy. The new systems cut thin sections and capture serial EMGs automatically, allowing for acquisition of large data sets in a reasonably short time. The new methods are focused ion beam/scanning electron microscopy, ultramicrotome/serial block-face scanning electron microscopy, automated tape-collection ultramicrotome/scanning electron microscopy and transmission electron microscope camera array. In this review, their positive and negative aspects are discussed. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Computer system for scanning tunneling microscope automation
NASA Astrophysics Data System (ADS)
Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.
1987-03-01
A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC) either an XT or an AT, which performs the control, data acquisition and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements as well as the inclusion of user routines for data analysis.
Towards Automated Nanomanipulation under Scanning Electron Microscopy
NASA Astrophysics Data System (ADS)
Ye, Xutao
Robotic nanomaterial manipulation inside scanning electron microscopes (SEMs) is useful for prototyping functional devices and characterizing the properties of one-dimensional nanomaterials. Conventionally, manipulation of nanowires has been performed via teleoperation, which is time-consuming and highly skill-dependent. Manual manipulation also has the limitations of low success rates and poor reproducibility. This research focuses on a robotic system capable of automated pick-and-place of single nanowires. Through SEM visual detection and vision-based motion control, the system transferred individual silicon nanowires from their growth substrate to a microelectromechanical systems (MEMS) device that characterized the nanowires' electromechanical properties. The performance of the nanorobotic pick-up and placement procedures was quantified by experiments. The system demonstrated automated nanowire pick-up and placement with high reliability. A software system for a load-lock-compatible nanomanipulation system was also designed and developed in this research.
Huang, Alex S; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M; Weinreb, Robert N
2017-06-01
The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm’s canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian Ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images, with a cross multiplication tool developed to initiate SC/CC detection automated through a fuzzy hidden Markov chain approach. Automatic segmentation of SC and initial CCs was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, <1% of images (5114 total B-scans) were ungradable. The automatic segmentation algorithm performed well, detecting SC 98.3% of the time with <0.1% false positive detection compared to expert grader consensus. CCs were detected 84.2% of the time with 1.4% false positive detection. The 3-D representation of AHO pathways demonstrated SC of variable caliber with some clear CC roots. Circumferential (360 deg), automated, and validated AHO detection of angle structures in the living human eye with reconstruction was possible.
[Automated detection and volumetric segmentation of the spleen in CT scans].
Hammon, M; Dankerl, P; Kramer, M; Seifert, S; Tsymbal, A; Costa, M J; Janka, R; Uder, M; Cavallaro, A
2012-08-01
To introduce automated detection and volumetric segmentation of the spleen in spiral CT scans with the THESEUS-MEDICO software. The consistency between automated volumetry (aV), estimated volume determination (eV) and manual volume segmentation (mV) was evaluated. Retrospective evaluation of a CAD system based on methods such as "marginal space learning" and boosting algorithms. Three consecutive spiral CT scans (thoraco-abdominal; portal-venous contrast agent phase; 1 or 5 mm slice thickness) of 15 consecutive lymphoma patients were included. The eV (30 cm³ + 0.58 × width × length × thickness of the spleen) and the mV, serving as the reference standard, were determined by an experienced radiologist. The aV could be performed in all CT scans within 15.2 (± 2.4) seconds. The average splenic volume measured by aV was 268.21 ± 114.67 cm³ compared to 281.58 ± 130.21 cm³ in mV and 268.93 ± 104.60 cm³ in eV. The correlation coefficient was 0.99 (coefficient of determination (R²) = 0.98) for aV and mV, 0.91 (R² = 0.83) for mV and eV and 0.91 (R² = 0.82) for aV and eV. There was an almost perfect correlation of the changes in splenic volume measured with the new aV and mV (0.92; R² = 0.84), mV and eV (0.95; R² = 0.91) and aV and eV (0.83; R² = 0.69) between two time points. The automated detection and volumetric segmentation software rapidly provides an accurate measurement of the splenic volume in CT scans. Knowledge about splenic volume and its change between two examinations provides valuable clinical information without effort for the radiologist. © Georg Thieme Verlag KG Stuttgart · New York.
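The estimation formula quoted in the abstract above is simple enough to express directly. A small sketch; the example dimensions are invented for illustration:

```python
def estimated_splenic_volume(width_cm, length_cm, thickness_cm):
    """Estimated splenic volume (eV) in cm^3, per the formula quoted in the
    abstract: eV = 30 cm^3 + 0.58 * (width * length * thickness)."""
    return 30.0 + 0.58 * (width_cm * length_cm * thickness_cm)

# Hypothetical spleen measuring 10 x 11 x 4 cm:
print(estimated_splenic_volume(10, 11, 4))  # ≈ 285.2 cm^3
```

Such an estimate is the quick bedside alternative that the automated volumetry (aV) in the study is compared against.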
Microelectrophoretic apparatus and process
NASA Technical Reports Server (NTRS)
Grunbaum, B. W. (Inventor)
1978-01-01
New gel tray and lid assemblies designed for use in conjunction with slotted electrophoretic membranes were developed to take advantage of recently improved microelectrophoretic accessories, which include a multisample applicator capable of applying up to 10 samples consecutively or simultaneously, and a temperature control plate for dissipating the heat produced by electrophoresis in a gel. The trays and membranes can be marketed ready for use as electrophoretic media or impregnated with various specific substrates and dyes.
NASA Astrophysics Data System (ADS)
Lu, Yiqing; Xi, Peng; Piper, James A.; Huo, Yujing; Jin, Dayong
2012-11-01
We report a new development of orthogonal scanning automated microscopy (OSAM) incorporating time-gated detection to locate rare-event organisms regardless of autofluorescent background. The necessity of using long-lifetime (hundreds of microseconds) luminescent biolabels for time-gated detection implies long integration (dwell) time, resulting in slow scan speed. However, here we achieve high scan speed using a new 2-step orthogonal scanning strategy to realise on-the-fly time-gated detection and precise location of 1-μm lanthanide-doped microspheres with signal-to-background ratio of 8.9. This enables analysis of a 15 mm × 15 mm slide area in only 3.3 minutes. We demonstrate that detection of only a few hundred photoelectrons within 100 μs is sufficient to distinguish a target event in a prototype system using ultraviolet LED excitation. Cytometric analysis of lanthanide labelled Giardia cysts achieved a signal-to-background ratio of two orders of magnitude. Results suggest that time-gated OSAM represents a new opportunity for high-throughput background-free biosensing applications.
Raman, Pavithra; Raman, Raghav; Newman, Beverley; Venkatraman, Raman; Raman, Bhargav; Robinson, Terry E
2010-12-01
To address potential concern for cumulative radiation exposure with serial spiral chest computed tomography (CT) scans in children with chronic lung disease, we developed an approach to match bronchial airways on low-dose spiral and low-dose high-resolution CT (HRCT) chest images to allow serial comparisons. An automated algorithm matches the position and orientation of bronchial airways obtained from HRCT slices with those in the spiral CT scan. To validate this algorithm, we compared manual matching vs automatic matching of bronchial airways in three pediatric patients. The mean absolute percentage difference between the manually matched spiral CT airway and the index HRCT airways were 9.4 ± 8.5% for the internal diameter measurements, 6.0 ± 4.1% for the outer diameter measurements, and 10.1 ± 9.3% for the wall thickness measurements. The mean absolute percentage difference between the automatically matched spiral CT airway measurements and index HRCT airway measurements were 9.2 ± 8.6% for the inner diameter, 5.8 ± 4.5% for the outer diameter, and 9.9 ± 9.5% for the wall thickness. The overall difference between manual and automated methods was 2.1 ± 1.2%, which was significantly less than the interuser variability of 5.1 ± 4.6% (p<0.05). Tests of equivalence had p<0.05, demonstrating no significant difference between the two methods. The time required for matching was significantly reduced in the automated method (p<0.01) and was as accurate as manual matching, allowing efficient comparison of airways obtained on low-dose spiral CT imaging with low-dose HRCT scans.
Binnicker, M. J.; Jespersen, D. J.; Harring, J. A.; Rollins, L. O.; Bryant, S. C.; Beito, E. M.
2008-01-01
The diagnosis of Lyme borreliosis (LB) is commonly made by serologic testing with Western blot (WB) analysis serving as an important supplemental assay. Although specific, the interpretation of WBs for diagnosis of LB (i.e., Lyme WBs) is subjective, with considerable variability in results. In addition, the processing, reading, and interpretation of Lyme WBs are laborious and time-consuming procedures. With the need for rapid processing and more objective interpretation of Lyme WBs, we evaluated the performances of two automated interpretive systems, TrinBlot/BLOTrix (Trinity Biotech, Carlsbad, CA) and BeeBlot/ViraScan (Viramed Biotech AG, Munich, Germany), using 518 serum specimens submitted to our laboratory for Lyme WB analysis. The results of routine testing with visual interpretation were compared to those obtained by BLOTrix analysis of MarBlot immunoglobulin M (IgM) and IgG and by ViraScan analysis of ViraBlot and ViraStripe IgM and IgG assays. BLOTrix analysis demonstrated an agreement of 84.7% for IgM and 87.3% for IgG compared to visual reading and interpretation. ViraScan analysis of the ViraBlot assays demonstrated agreements of 85.7% for IgM and 94.2% for IgG, while ViraScan analysis of the ViraStripe IgM and IgG assays showed agreements of 87.1 and 93.1%, respectively. Testing by the automated systems yielded an average time savings of 64 min/run compared to processing, reading, and interpretation by our current procedure. Our findings demonstrated that automated processing and interpretive systems yield results comparable to those of visual interpretation, while reducing the subjectivity and time required for Lyme WB analysis. PMID:18463211
Automated scanning of plastic nuclear track detectors using the Minnesota star scanner
NASA Technical Reports Server (NTRS)
Fink, P. J.; Waddington, C. J.
1986-01-01
The problems found in an attempt to adapt an automated scanner of astronomical plates, the Minnesota Automated Dual Plate Scanner (APS), to locating and measuring the etch pits produced by ionizing particles in plastic nuclear track detectors (CR-39) are described. A visual study of these pits was made to determine the errors introduced in determining positions and shapes. Measurements made under a low power microscope were compared with those from the APS.
Model Identification of Integrated ARMA Processes
ERIC Educational Resources Information Center
Stadnytska, Tetiana; Braun, Simone; Werner, Joachim
2008-01-01
This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
Automated Instructional Management Systems (AIMS) Version III, Operator's Guide.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
This manual gives the instructions necessary to understand and operate the Automated Instructional Management System (AIMS), utilizing IBM System 360, Model 30/Release 20 Disk Operating System, and the OpScan 100 System Reader and Tape Unit. It covers the AIMS III system initialization, system and operational input, requirements, master response…
Evaluation of an automated hardwood lumber grading system
D. Earl Kline; Philip A. Araman; Chris Surak
2001-01-01
Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading technologies....
Prototyping an automated lumber processing system
Powsiri Klinkhachorn; Ravi Kothari; Henry A. Huber; Charles W. McMillin; K. Mukherjee; V. Barnekov
1993-01-01
The Automated Lumber Processing System (ALPS) is a multi-disciplinary continuing effort directed toward increasing the yield obtained from hardwood lumber boards during their process of remanufacture into secondary products (furniture, etc.). ALPS proposes a nondestructive vision system to scan a board for its dimension and the location and expanse of surface defects on...
Kesner, Adam Leon; Kuntner, Claudia
2010-10-01
Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-driven algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of respiratory signal from PET data in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior in the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, respiratory signal was extracted from a collection of 22 clinical FDG-PET scans using this method, and compared to signal derived from several other software-based methods as well as a signal derived from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real-time while the accuracy surpassed the most accurate of the previously presented algorithms.
PET data inherently contains information about patient motion; information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data in potentially real-time and in a fully automated manner. This signal correlates well with hardware based signal for a large percentage of scans, and avoids the efforts and complications associated with hardware. The proposed method to extract a respiratory signal can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
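The rebin-and-analyze idea described above can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: sinogram time frames are rebinned into coarse regions, each region's time-activity curve is standardized, and the dominant shared temporal mode (first principal component via SVD) is taken as a surrogate respiratory trace. All array shapes and parameters are assumptions:

```python
import numpy as np

def respiratory_trace(frames, block=8):
    """Data-driven gating signal sketch.

    frames: (T, R, A) array of short sinogram time frames
            (T time bins, R radial bins, A angular bins).
    """
    t, r, a = frames.shape
    rb, ab = r // block, a // block
    # Rebin: sum counts inside each coarse block x block region per frame.
    coarse = frames[:, :rb * block, :ab * block]
    coarse = coarse.reshape(t, rb, block, ab, block).sum(axis=(2, 4))
    curves = coarse.reshape(t, -1).astype(float)
    # Standardize each region's time-activity curve.
    curves -= curves.mean(axis=0)
    std = curves.std(axis=0)
    curves[:, std > 0] /= std[std > 0]
    # First principal component over time = dominant shared temporal mode.
    u, s, _ = np.linalg.svd(curves, full_matrices=False)
    return u[:, 0] * s[0]
```

Regions whose counts oscillate together with breathing dominate the first component, while uncorrelated noise in static regions averages out; note the sign of the trace is arbitrary, as it is for any PCA-derived signal.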
Platform for Automated Real-Time High Performance Analytics on Medical Image Data.
Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A
2018-03-01
Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.
Classification of Mobile Laser Scanning Point Clouds from Height Features
NASA Astrophysics Data System (ADS)
Zheng, M.; Lemmens, M.; van Oosterom, P.
2017-09-01
The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time-consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features, two height components and one reflectance value, and achieved an overall accuracy of 73%, which is encouraging for further refining our approach.
Automating the expert consensus paradigm for robust lung tissue classification
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.
2012-03-01
Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single, elusive disease-differentiating metric, which undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection of 15 probability density function (pdf)-based similarity metrics to automatically cluster 976 pattern-specific volumes of interest (VOIs), spanning five patterns, extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts, and the aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.
A multisample study of longitudinal changes in brain network architecture in 4-13-year-old children.
Wierenga, Lara M; van den Heuvel, Martijn P; Oranje, Bob; Giedd, Jay N; Durston, Sarah; Peper, Jiska S; Brown, Timothy T; Crone, Eveline A
2018-01-01
Recent advances in human neuroimaging research have revealed that white-matter connectivity can be described in terms of an integrated network, which is the basis of the human connectome. However, the developmental changes of this connectome in childhood are not well understood. This study made use of two independent longitudinal diffusion-weighted imaging data sets to characterize developmental changes in the connectome by estimating age-related changes in fractional anisotropy (FA) for reconstructed fibers (edges) between 68 cortical regions. The first sample included 237 diffusion-weighted scans of 146 typically developing children (4-13 years old, 74 females) from the Pediatric Longitudinal Imaging, Neurocognition, and Genetics (PLING) study. The second sample included 141 scans of 97 individuals (8-13 years old, 62 females) from the BrainTime project. In both data sets, we compared edges that showed the most substantial age-related change in FA to edges that showed little change in FA. This allowed us to investigate whether developmental changes in white matter reorganize network topology. We observed substantial increases in edges connecting peripheral regions to a set of highly connected hub regions, referred to as the rich club. Together with the observed topological differences between regions connected by edges showing the smallest and largest changes in FA, this indicates that changes in white matter affect network organization, such that highly connected regions become even more strongly embedded in the network. These findings suggest that an important process in brain development involves organizing patterns of inter-regional interactions. Hum Brain Mapp 39:157-170, 2018. © 2017 Wiley Periodicals, Inc.
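The rich-club notion used above can be illustrated with networkx. The toy 68-node graph, the 15% hub threshold, and the unnormalized coefficient below are assumptions for the sketch, not the study's connectome data:

```python
import networkx as nx

# Toy 68-node network standing in for the 68 cortical regions.
G = nx.watts_strogatz_graph(68, k=8, p=0.2, seed=1)

# "Rich club" hubs: here, roughly the 15% highest-degree nodes
# (the threshold is an assumption for this sketch).
deg = dict(G.degree())
threshold = sorted(deg.values(), reverse=True)[int(0.15 * len(deg))]
hubs = [n for n, d in deg.items() if d >= threshold]

# Unnormalized rich-club coefficient phi(k): edge density among nodes
# with degree > k.
phi = nx.rich_club_coefficient(G, normalized=False)
print(f"{len(hubs)} hub nodes; degree threshold {threshold}")
```

In connectome studies the coefficient is usually normalized against degree-preserving random graphs; the unnormalized form is used here to keep the sketch deterministic.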
NASA Astrophysics Data System (ADS)
Bocsi, Jozsef; Mittag, Anja; Varga, Viktor S.; Molnar, Bela; Tulassay, Zsolt; Sack, Ulrich; Lenz, Dominik; Tarnok, Attila
2006-02-01
Scanning fluorescence microscopy (SFM) is a new technique for automated motorized microscopes to measure cells labeled with multiple fluorochromes (Bocsi et al., Cytometry 2004, 61A:1). The ratio of CD4+/CD8+ cells is an important parameter in immune diagnostics of immunodeficiency and HIV. Therefore, a four-color staining protocol (DNA, CD3, CD4, and CD8) for automated SFM analysis of lymphocytes was developed. EDTA-anticoagulated blood was stained with organic and inorganic (quantum dot) fluorochromes in different combinations. Aliquots of samples were measured by flow cytometry (FCM) and SFM. For SFM, specimens were scanned and digitized using four fluorescence filter sets; automated cell detection (based on Hoechst 33342 fluorescence) and CD3, CD4, and CD8 detection were performed, and the CD4/CD8 ratio was calculated. Fluorescence signals were well separable in both SFM and FCM. Passing-Bablok regression of all CD4/CD8 ratios obtained by FCM and SFM (F(X) = 0.0577 + 0.9378x) lies within the 95% confidence interval, and a cusum test showed no significant deviation from linearity (P > 0.10). This comparison indicates that there is no systematic bias between the two methods. In the SFM analyses, the inorganic quantum dot staining was very stable in PBS, in contrast to the organic fluorescent dyes, but bleached shortly after mounting with antioxidant and free-radical-scavenger mounting media; this illustrates the difficulty of combining organic dyes and quantum dots. Slide-based multi-fluorescence labeling and automated SFM are applicable tools for CD4/CD8 ratio determination in peripheral blood samples. Quantum dots are stable inorganic fluorescence labels that may serve as reliable high-resolution dyes for cell labeling.
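A hedged sketch of the method comparison: Passing-Bablok regression is built on a shifted median of pairwise slopes, so the closely related Theil-Sen estimator (available in SciPy) can stand in for it here. The simulated ratios and noise level are invented for illustration, chosen only to mimic the reported fit F(X) = 0.0577 + 0.9378x:

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(2)

# Simulated CD4/CD8 ratios for 40 samples measured by both FCM and SFM.
fcm = rng.uniform(0.5, 3.0, 40)
sfm = 0.06 + 0.94 * fcm + rng.normal(0.0, 0.05, 40)

# Theil-Sen (median of pairwise slopes) as a robust stand-in for
# Passing-Bablok, which uses a shifted median of the same slopes.
slope, intercept, lo, hi = theilslopes(sfm, fcm)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```

A slope near 1 and an intercept near 0, with the confidence band covering those values, is the usual evidence of method agreement.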
An automated image-collection system for crystallization experiments using SBS standard microplates.
Brostromer, Erik; Nan, Jie; Su, Xiao Dong
2007-02-01
As part of a structural genomics platform in a university laboratory, a low-cost, in-house-developed automated imaging system for SBS microplate experiments has been designed and constructed. The imaging system can scan a 96-well microplate in 2-6 min, depending on the plate layout and scanning options. A web-based crystallization database system has been developed, enabling users to follow their crystallization experiments from a web browser. As the system has been designed and built by students and crystallographers using commercially available parts, this report is intended to serve as a do-it-yourself example of laboratory robotics.
Mapping Snow Depth with Automated Terrestrial Laser Scanning - Investigating Potential Applications
NASA Astrophysics Data System (ADS)
Adams, M. S.; Gigele, T.; Fromm, R.
2017-11-01
This contribution presents an automated terrestrial laser scanning (ATLS) setup, which was used during the winter 2016/17 to monitor the snow depth distribution on a NW-facing slope at a high-alpine study site. We collected data at high temporal [(sub-)daily] and spatial resolution (decimetre-range) over 0.8 km² with a Riegl LPM-321, set in a weather-proof glass fibre enclosure. Two potential ATLS-applications are investigated here: monitoring medium-sized snow avalanche events, and tracking snow depth change caused by snow drift. The results show the ATLS data's high explanatory power and versatility for different snow research questions.
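Snow depth from repeat laser scans is commonly derived as a per-cell difference between a snow-on surface model and a snow-free reference surface. The grid size and elevation values below are made up for illustration; this is a generic differencing step, not the authors' processing chain:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy gridded surface models (elevations in m) on a 1 m raster:
# a snow-free reference and a winter scan of the same slope.
snow_free = rng.uniform(2000.0, 2005.0, size=(50, 50))
snow_on = snow_free + rng.uniform(0.2, 1.8, size=(50, 50))

# Snow depth = per-cell elevation difference between the two epochs.
snow_depth = snow_on - snow_free
print(f"mean depth: {snow_depth.mean():.2f} m, max: {snow_depth.max():.2f} m")
```

In practice both scans must first be co-registered in a common coordinate frame; differencing mis-aligned epochs turns registration error directly into apparent depth change.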
Knowledge-based automated technique for measuring total lung volume from CT
NASA Astrophysics Data System (ADS)
Brown, Matthew S.; McNitt-Gray, Michael F.; Mankovich, Nicholas J.; Goldin, Jonathan G.; Aberle, Denise R.
1996-04-01
A robust, automated technique has been developed for estimating total lung volume from chest computed tomography (CT) images. The technique includes a method for segmenting major chest anatomy. A knowledge-based approach automates the calculation of separate volumes for the whole thorax, the lungs, and the central tracheo-bronchial tree from volumetric CT data sets. A simple, explicit 3D model describes properties of the relevant anatomy, such as shape, topology, and X-ray attenuation, which constrain the segmentation of these anatomic structures. Total lung volume is estimated as the sum of the right and left lungs and excludes the central airways. The method requires no operator intervention. In preliminary testing, the system was applied to image data from two healthy subjects and four patients with emphysema who underwent both helical CT and pulmonary function tests. To obtain single breath-hold scans, the healthy subjects were scanned with a collimation of 5 mm and a pitch of 1.5, while the emphysema patients were scanned with a collimation of 10 mm at a pitch of 2.0. CT data were reconstructed as contiguous image sets. Automatically calculated volumes were consistent with body plethysmography results (< 10% difference).
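The core volume computation can be sketched as a Hounsfield-unit windowing step followed by voxel counting. The HU window, voxel spacing, and toy phantom below are assumptions; the paper's knowledge-based model adds shape and topology constraints (and airway removal) that this sketch omits:

```python
import numpy as np

# Toy CT volume in Hounsfield units: soft tissue ~ 40 HU, aerated lung
# ~ -820 HU. Real pipelines add connected-component analysis and
# exclusion of the central airways.
vol = np.full((40, 64, 64), 40.0)          # soft-tissue background
vol[:, 10:30, 8:28] = -820.0               # "right lung"
vol[:, 10:30, 36:56] = -820.0              # "left lung"

lung_mask = (vol > -950) & (vol < -500)    # HU window for aerated lung

voxel_volume_mm3 = 0.7 * 0.7 * 5.0         # in-plane spacing x slice thickness
lung_volume_cm3 = lung_mask.sum() * voxel_volume_mm3 / 1000.0
print(f"lung volume: {lung_volume_cm3:.1f} cm^3")
```

Total volume is simply (number of lung voxels) x (voxel volume), which is why slice thickness and pixel spacing must be read from the scan metadata rather than assumed.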
Multi-Sample Cluster Analysis Using Akaike’s Information Criterion.
1982-12-20
Intra-Tumor Genetic Heterogeneity in Wilms Tumor: Clonal Evolution and Clinical Implications.
Cresswell, George D; Apps, John R; Chagtai, Tasnim; Mifsud, Borbala; Bentley, Christopher C; Maschietto, Mariana; Popov, Sergey D; Weeks, Mark E; Olsen, Øystein E; Sebire, Neil J; Pritchard-Jones, Kathy; Luscombe, Nicholas M; Williams, Richard D; Mifsud, William
2016-07-01
The evolution of pediatric solid tumors is poorly understood. There is conflicting evidence of intra-tumor genetic homogeneity vs. heterogeneity (ITGH) in a small number of studies in pediatric solid tumors. A number of copy number aberrations (CNA) are proposed as prognostic biomarkers to stratify patients, for example 1q+ in Wilms tumor (WT); current clinical trials use only one sample per tumor to profile this genetic biomarker. We multisampled 20 WT cases and assessed genome-wide allele-specific CNA and loss of heterozygosity, and inferred tumor evolution, using Illumina CytoSNP12v2.1 arrays, a custom analysis pipeline, and the MEDICC algorithm. We found remarkable diversity of ITGH and evolutionary trajectories in WT. 1q+ is heterogeneous in the majority of tumors with this change, with variable evolutionary timing. We estimate that at least three samples per tumor are needed to detect >95% of cases with 1q+. In contrast, somatic 11p15 LOH is uniformly an early event in WT development. We find evidence of two separate tumor origins in unilateral disease with divergent histology, and in bilateral WT. We also show subclonal changes related to differential response to chemotherapy. Rational trial design to include biomarkers in risk stratification requires tumor multisampling and reliable delineation of ITGH and tumor evolution. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
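The sampling argument can be illustrated with a simple model: if a subclonal marker is present in a fraction f of the regions one could sample, the chance that at least one of n independent samples detects it is 1 - (1 - f)^n. The value f = 0.65 below is illustrative, not an estimate from the paper:

```python
# Illustrative detection model for a heterogeneous marker such as 1q+;
# this is not the authors' estimation procedure.
def detection_probability(f, n):
    """Chance that at least one of n samples carries the marker."""
    return 1.0 - (1.0 - f) ** n

for n in (1, 2, 3):
    print(n, round(detection_probability(0.65, n), 3))
```

Under this toy model, three samples push detection above 95% while a single sample misses the marker about a third of the time, which echoes why single-sample biomarker profiling is risky for heterogeneous lesions.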
A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments
Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J
2014-01-01
Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. Recently, however, Huggins et al. (2010) presented a pseudo-likelihood for a multi-sample batch-marking study, in which they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert it to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains in precision are made when using unique identifiers and employing the CMAS model; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident of obtaining unbiased abundance estimators. Furthermore, they can design studies to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
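The Horvitz-Thompson-type estimator mentioned above can be sketched in a few lines. The capture probability and population size below are invented for illustration:

```python
import numpy as np

# Horvitz-Thompson-type abundance estimate: with a shared capture
# probability p, each captured animal contributes 1/p, so
# N_hat = n_captured / p.
def horvitz_thompson(n_captured, p):
    return n_captured / p

rng = np.random.default_rng(4)
true_n, p = 500, 0.3
captured = rng.binomial(true_n, p)   # one simulated sampling occasion
print(horvitz_thompson(captured, p))
```

In the batch-marking setting, p itself must first be estimated (via the pseudo-likelihood or full likelihood), and the uncertainty in p propagates into the abundance estimate.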
Yu, Sen-Chi; Yu, Min-Ning
2007-08-01
This study examines whether the Internet-based questionnaire is psychometrically equivalent to the paper-based questionnaire. A random sample of 2,400 teachers in Taiwan was divided into experimental and control groups. The experimental group was invited to complete the electronic form of the Chinese version of the Center for Epidemiologic Studies Depression Scale (CES-D) placed on the Internet, whereas the control group was invited to complete the paper-based CES-D, which they received by mail. The multisample invariance approach, derived from structural equation modeling (SEM), was applied to analyze the collected data. The analytical results show that the two groups have equivalent factor structures in the CES-D; that is, the items in the CES-D function equivalently in the two groups. An equality-of-latent-means test was then performed. The latent means of "depressed mood," "positive affect," and "interpersonal problems" in the CES-D are not significantly different between the two groups. The difference in the "somatic symptoms" latent means, however, is statistically significant at alpha = 0.01, although Cohen's d indicates that this difference does not amount to a meaningful effect size in practice. Both CES-D questionnaires exhibit equal validity, reliability, and factor structures, with only small differences in latent means. Therefore, the Internet-based questionnaire represents a promising alternative to the paper-based questionnaire.
Automated T2 relaxometry of the hippocampus for temporal lobe epilepsy.
Winston, Gavin P; Vos, Sjoerd B; Burdett, Jane L; Cardoso, M Jorge; Ourselin, Sebastien; Duncan, John S
2017-09-01
Hippocampal sclerosis (HS), the most common cause of refractory temporal lobe epilepsy, is associated with hippocampal volume loss and increased T2 signal. These can be identified on quantitative imaging with hippocampal volumetry and T2 relaxometry. Although hippocampal segmentation for volumetry has been automated, T2 relaxometry currently involves subjective and time-consuming manual delineation of regions of interest. In this work, we develop and validate an automated technique for hippocampal T2 relaxometry. Fifty patients with unilateral or bilateral HS and 50 healthy controls underwent T1-weighted and dual-echo fast recovery fast spin echo scans. Hippocampi were automatically segmented using a multi-atlas-based segmentation algorithm (STEPS) and a template database. Voxelwise T2 maps were determined using a monoexponential fit. The hippocampal segmentations were registered to the T2 maps and eroded to reduce partial volume effect. Voxels with T2 > 170 msec were excluded to minimize cerebrospinal fluid (CSF) contamination. Manual determination of T2 values was performed twice in each subject. Twenty controls underwent repeat scans to assess interscan reproducibility. Hippocampal T2 values were reliably determined using the automated method. There was a significant ipsilateral increase in T2 values in HS (p < 0.001), and a smaller but significant contralateral increase. The combination of hippocampal volumes and T2 values separated the groups well. There was a strong correlation between automated and manual methods for hippocampal T2 measurement (0.917 left, 0.896 right, both p < 0.001). Interscan reproducibility was superior for automated compared to manual measurements. Automated hippocampal segmentation can be reliably extended to the determination of hippocampal T2 values, and a combination of hippocampal volumes and T2 values can separate subjects with HS from healthy controls.
There is good agreement with manual measurements, and the technique is more reproducible on repeat scans than manual measurement. This protocol can be readily introduced into a clinical workflow for the assessment of patients with focal epilepsy. © 2017 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.
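The relaxometry step has a closed form for a dual-echo acquisition: from S(TE) = S0·exp(-TE/T2), two echoes give T2 = (TE2 - TE1)/ln(S1/S2). The echo times and T2 value below are illustrative, not those of the study:

```python
import numpy as np

# Voxelwise T2 from a dual-echo monoexponential fit:
#   S(TE) = S0 * exp(-TE / T2)  =>  T2 = (TE2 - TE1) / ln(S1 / S2)
def t2_from_dual_echo(s1, s2, te1, te2):
    return (te2 - te1) / np.log(s1 / s2)

te1, te2 = 30.0, 120.0           # echo times (ms), illustrative
t2_true = 105.0                  # ms, a plausible hippocampal value
s1, s2 = np.exp(-te1 / t2_true), np.exp(-te2 / t2_true)

t2_est = t2_from_dual_echo(s1, s2, te1, te2)
print(f"recovered T2: {t2_est:.1f} ms")
# Voxels with T2 > 170 ms would then be excluded as likely CSF.
```

With more than two echoes the same model is fitted by log-linear or nonlinear least squares rather than this closed form.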
Automated lung volumetry from routine thoracic CT scans: how reliable is the result?
Haas, Matthias; Hamm, Bernd; Niehues, Stefan M
2014-05-01
Today, lung volumes can be easily calculated from chest computed tomography (CT) scans. Modern postprocessing workstations allow automated volume measurement of the acquired data sets. However, there are challenges in using lung volume as an indicator of pulmonary disease when it is obtained from routine CT: intra-individual variation and methodologic aspects have to be considered. Our goal was to assess the reliability of volumetric measurements from routine CT lung scans. Forty adult cancer patients whose lungs were unaffected by the disease underwent routine chest CT scans at 3-month intervals, resulting in a total of 302 chest CT scans. Lung volume was calculated by automatic volumetry software. On average, 7.2 CT scans were successfully evaluable per patient (range 2-15). Intra-individual changes were assessed. In the set of patients investigated, lung volume was approximately normally distributed, with a mean of 5283 cm(3) (standard deviation = 947 cm(3), skewness = -0.34, and kurtosis = 0.16). Across repeat scans of the same patient, the median intra-individual standard deviation in lung volume was 853 cm(3) (16% of the mean lung volume). Automatic lung segmentation of routine chest CT scans allows a technically stable estimation of lung volume. However, substantial intra-individual variations have to be considered: a median intra-individual deviation of 16% in lung volume between different routine scans was found. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
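The intra-individual variability statistic can be reproduced on simulated data. The per-patient scan counts and scatter below are assumptions chosen only to echo the reported figures:

```python
import numpy as np

rng = np.random.default_rng(5)

# 40 simulated patients with 2-15 repeat lung volumes (cm^3) each;
# per-patient mean 5283 and intra-individual scatter ~850 are chosen to
# mimic the reported numbers.
volumes = [rng.normal(5283, 850, size=rng.integers(2, 16)) for _ in range(40)]

# Median across patients of the per-patient (intra-individual) SD.
intra_sd = np.array([v.std(ddof=1) for v in volumes])
median_sd = float(np.median(intra_sd))
print(f"median intra-individual SD: {median_sd:.0f} cm^3 "
      f"({100 * median_sd / 5283:.0f}% of mean volume)")
```

Using the median across patients, as the study does, keeps the summary robust to the few patients with very few repeat scans, whose SD estimates are noisy.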
Chan, Adrian C H; Adachi, Jonathan D; Papaioannou, Alexandra; Wong, Andy Kin On
Lower peripheral quantitative computed tomography (pQCT)-derived leg muscle density has been associated with fragility fractures in postmenopausal women. Limb movement during image acquisition may produce motion streaks in muscle that could dilute this relationship. This cross-sectional study examined a subset of women from the Canadian Multicentre Osteoporosis Study. pQCT leg scans were qualitatively graded (1-5) for motion severity. Muscle and motion streak were segmented using semi-automated (watershed) and fully automated (threshold-based) methods, computing area and density. Binary logistic regression evaluated odds ratios (ORs) for fragility or all-cause fractures related to each of these measures, with covariate adjustment. Among the 223 women examined (mean age: 72.7 ± 7.1 years, body mass index: 26.30 ± 4.97 kg/m²), muscle density was significantly lower after removing motion (p < 0.001) for both methods. Motion streak areas segmented using the semi-automated method correlated better with visual motion grades (rho = 0.90, p < 0.01) than those from the fully automated method (rho = 0.65, p < 0.01). Although the analysis-reanalysis precision of motion streak area segmentation using the semi-automated method is above 5% error (6.44%), motion-corrected muscle density measures remained well within 2% analytical error. The effect of motion correction on strengthening the association between muscle density and fragility fractures was significant when the motion grade was ≥3 (p-interaction < 0.05). This observation was most dramatic for the semi-automated algorithm (OR: 1.62 [0.82, 3.17] before correction vs. 2.19 [1.05, 4.59] after). Although muscle density showed an overall association with all-cause fractures (OR: 1.49 [1.05, 2.12]), the effect of motion correction was again most impactful in individuals whose scans showed grade 3 or above motion.
Correcting for motion in pQCT leg scans strengthened the relationship between muscle density and fragility fractures, particularly in scans with motion grades of 3 or above. Motion streaks are not confounders of the relationship between pQCT-derived leg muscle density and fractures, but they may introduce heterogeneity in muscle density measurements, weakening associations with fractures. Copyright © 2016. Published by Elsevier Inc.
Antony, Bhavna J; Stetson, Paul F; Abramoff, Michael D; Lee, Kyungmoo; Colijn, Johanna M; Buitendijk, Gabriëlle H S; Klaver, Caroline C W; Roorda, Austin; Lujan, Brandon J
2015-07-01
Off-axis acquisition of spectral domain optical coherence tomography (SDOCT) images has been shown to increase total retinal thickness (TRT) measurements. We analyzed the reproducibility of TRT measurements obtained using either the retinal pigment epithelium (RPE) or Bruch's membrane as the reference surface in off-axis scans intentionally acquired through multiple pupil positions. Five volumetric SDOCT scans of the macula were obtained from one eye of 25 normal subjects. One scan was acquired through a central pupil position, while subsequent scans were acquired through four peripheral pupil positions. The internal limiting membrane, the RPE, and Bruch's membrane were segmented using automated approaches. The volumes were registered to each other and the TRT was evaluated in 9 Early Treatment of Diabetic Retinopathy Study (ETDRS) regions. The reproducibility of the TRT obtained using the RPE was computed using the mean difference, coefficient of variation (CV), and intraclass correlation coefficient (ICC), and compared to that obtained using Bruch's membrane as the reference surface. A secondary set of 1545 SDOCT scans was also analyzed to gauge the incidence of off-axis scans in a typical acquisition environment. The photoreceptor tips were dimmer in off-axis images, which affected the RPE segmentation. The overall mean TRT difference and CV obtained using the RPE were 7.04 ± 4.31 μm and 1.46%, respectively, whereas those obtained using Bruch's membrane were 1.16 ± 1.00 μm and 0.32%. The ICCs for subfoveal TRT were 0.982 and 0.999, respectively. Forty-one percent of the scans in the secondary set showed large tilts (> 6%). RPE segmentation is confounded by its proximity to the interdigitation zone, a structure strongly affected by the optical Stiles-Crawford effect. Bruch's membrane, however, is unaffected, leading to a more robust segmentation that is less dependent on pupil position.
The way in which OCT images are acquired can independently affect the accuracy of automated retinal thickness measurements. Assessment of scan angle in a clinical dataset demonstrates that off-axis scans are common, which emphasizes the need for caution when relying on automated thickness parameters when this component of scan acquisition is not controlled for.
What's ahead in automated lumber grading
D. Earl Kline; Richard Conners; Philip A. Araman
1998-01-01
This paper discusses how present scanning technologies are being applied to automatic lumber grading. The presentation focuses on 1) what sensing and scanning devices are needed to measure information for accurate grading feature detection, 2) the hardware and software needed to efficiently process this information, and 3) specific issues related to softwood lumber...
Automated hardwood lumber grading utilizing a multiple sensor machine vision technology
D. Earl Kline; Chris Surak; Philip A. Araman
2003-01-01
Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical and Computer Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading...
Identifying and locating surface defects in wood: Part of an automated lumber processing system
Richard W. Conners; Charles W. McMillin; Kingyao Lin; Ramon E. Vasquez-Espinosa
1983-01-01
Continued increases in the cost of materials and labor make it imperative for furniture manufacturers to control costs through improved yield and increased productivity. This paper describes an Automated Lumber Processing System (ALPS) that employs computed tomography, optical scanning technology, the calculation of an optimum cutting strategy, and a computer-driven laser...
He, Qingwen; Chen, Weiyuan; Huang, Liya; Lin, Qili; Zhang, Jingling; Liu, Rui; Li, Bin
2016-06-21
Carbapenem-resistant Enterobacteriaceae (CRE) are prevalent around the world, and rapid, accurate detection of CRE is urgently needed to guide effective treatment. Automated identification systems have been widely used in clinical microbiology laboratories for rapid, high-efficiency identification of pathogenic bacteria. However, critical evaluation and comparison are needed to determine the specificity and accuracy of different systems. The aim of this study was to evaluate the performance of three commonly used automated identification systems in detecting CRE. A total of 81 non-repetitive clinical CRE isolates were collected from August 2011 to August 2012 in a Chinese university hospital, and all isolates were confirmed to be resistant to carbapenems by the agar dilution method. The potential presence of carbapenemase genotypes in the 81 isolates was determined by PCR and sequencing. Using these isolates, we evaluated and compared the performance of three automated identification systems commonly used in China: MicroScan WalkAway 96 Plus, Phoenix 100, and Vitek 2 Compact. The agar dilution method served as the comparator for identifying CRE, while PCR and sequencing served as the comparator for identifying carbapenemase-producing Enterobacteriaceae (CPE). PCR and sequencing showed that 48 of the 81 CRE isolates carried carbapenemase genes, including 23 (28.4%) IMP-4, 14 (17.3%) IMP-8, 5 (6.2%) NDM-1, and 8 (9.9%) KPC-2. Notably, one Klebsiella pneumoniae isolate produced both IMP-4 and NDM-1, and one Klebsiella oxytoca isolate produced both KPC-2 and IMP-8. Of the 81 clinical CRE isolates, 56 (69.1%), 33 (40.7%), and 77 (95.1%) were identified as CRE by MicroScan WalkAway 96 Plus, Phoenix 100, and Vitek 2 Compact, respectively. The sensitivities/specificities of MicroScan WalkAway, Phoenix 100, and Vitek 2 were 93.8%/42.4%, 54.2%/66.7%, and 75.0%/36.4%, respectively.
The MicroScan WalkAway and Vitek 2 systems are more reliable for clinical identification of CRE, whereas additional tests are required for the Phoenix 100 system. Our study provides a useful guideline for using automated identification systems for CRE identification.
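Sensitivity and specificity follow directly from the confusion counts. The integer counts below are back-calculated from the reported MicroScan WalkAway percentages (93.8%/42.4% over 48 CPE and 33 non-CPE isolates) and are therefore inferred, not quoted:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Inferred counts: 45/48 CPE detected, 14/33 non-CPE correctly negative.
sens, spec = sens_spec(tp=45, fn=3, tn=14, fp=19)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

High sensitivity with low specificity, as seen here, means the system rarely misses a carbapenemase producer but flags many non-producers, so positives need confirmatory testing.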
NASA Astrophysics Data System (ADS)
Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Sugiura, Toshihiko; Tanabe, Nobuhiro; Kusumoto, Masahiko; Eguchi, Kenji; Kaneko, Masahiro
2018-02-01
Chronic thromboembolic pulmonary hypertension (CTEPH) is characterized by obstruction of the pulmonary vasculature by residual organized thrombi. A morphological abnormality inside the mediastinum of CTEPH patients is enlargement of the pulmonary artery. This paper presents an automated assessment of aortic and main pulmonary arterial diameters for predicting CTEPH in low-dose CT lung screening. The distinctive feature of our method is that it segments the aorta and main pulmonary artery using both a prior probability and the vascular direction, estimated from the mediastinal vascular region using principal curvatures of a four-dimensional hypersurface. The method was applied to two datasets, 64 low-dose CT scans from lung cancer screening and 19 normal-dose CT scans of CTEPH patients, after a training phase with 121 low-dose CT scans. This paper demonstrates the effectiveness of our method for predicting CTEPH in low-dose CT screening.
A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.
Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim
2009-04-07
Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol, which provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position that spatially match the starting position are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process is continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume is produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external-marker amplitude and phase-angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC-sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
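The slice-matching criterion can be sketched directly: standardize the two overlapping slices and take the mean of their product as the NCC, then pick the cine image that maximizes it. The image sizes and noise model below are invented for illustration:

```python
import numpy as np

# NCC between two slices: standardize both, then average their product.
def ncc(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(7)
reference = rng.normal(size=(64, 64))   # overlap slice at couch position i

# Cine candidates at position i+1: the anatomically matching image
# (reference plus noise) hidden among unrelated candidates.
candidates = [rng.normal(size=(64, 64)) for _ in range(15)]
candidates.insert(9, reference + 0.3 * rng.normal(size=(64, 64)))

best = max(range(len(candidates)), key=lambda k: ncc(reference, candidates[k]))
print(f"selected candidate index: {best}")
```

Repeating this selection at each couch boundary is the "daisy chaining" step: the chosen image at position i+1 becomes the reference for matching position i+2.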
Automated matching of supine and prone colonic polyps based on PCA and SVMs
NASA Astrophysics Data System (ADS)
Wang, Shijun; Van Uitert, Robert L.; Summers, Ronald M.
2008-03-01
Computed tomographic colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and cancer screening. In current practice, a patient is scanned twice during the CTC examination - once supine and once prone. To assist radiologists in evaluating colon polyp candidates in both scans, we expect the computer-aided detection (CAD) system to provide not only the locations of suspicious polyps, but also the possible matched pairs of polyps in the two scans. In this paper, we propose a new automated matching method based on features extracted from polyps using principal component analysis (PCA) and support vector machines (SVMs). Our dataset comprises 104 CT scans of 52 patients, in supine and prone positions, collected from three medical centers. From it we constructed two groups of matched polyp candidates according to the size of the true polyps: group A contains 12 true polyp pairs (> 9 mm) and 454 false pairs; group B contains 24 true polyp pairs (6-9 mm) and 514 false pairs. Using PCA, we reduced the original data (157 attributes) to 30 dimensions. We performed a leave-one-patient-out test on the two groups. ROC analysis shows that bigger polyps are easier to match than smaller ones. On group A data, at a false alarm probability of 0.18, the SVM achieves a sensitivity of 0.83, which shows that automated matching of polyp candidates is practicable for clinical applications.
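The PCA-then-SVM pipeline can be sketched with scikit-learn. The synthetic features below stand in for the 157 real attributes and the class separation is invented, so the resulting score only demonstrates the mechanics, not the paper's performance:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(11)

# Synthetic stand-in for the polyp-pair attributes: 24 true pairs and
# 514 false pairs (the group B counts), separated along 5 informative axes.
n_true, n_false, n_feat = 24, 514, 157
offset = np.r_[np.full(5, 2.0), np.zeros(n_feat - 5)]
X = np.vstack([rng.normal(size=(n_true, n_feat)) + offset,
               rng.normal(size=(n_false, n_feat))])
y = np.r_[np.ones(n_true), np.zeros(n_false)]

# PCA to 30 dimensions followed by an SVM, mirroring the paper's pipeline
# (evaluated on the training data here, for brevity; the paper uses
# leave-one-patient-out).
model = make_pipeline(StandardScaler(), PCA(n_components=30), SVC())
model.fit(X, y)
auc = roc_auc_score(y, model.decision_function(X))
print(f"training AUC: {auc:.3f}")
```

The PCA step matters because the true-pair class is tiny (24 examples) relative to the 157 raw attributes; reducing to 30 dimensions curbs overfitting before the SVM is trained.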
Roussis, S G
2001-08-01
The automated acquisition of the product ion spectra of all precursor ions in a selected mass range by using a magnetic sector/orthogonal acceleration time-of-flight (oa-TOF) tandem mass spectrometer for the characterization of complex petroleum mixtures is reported. Product ion spectra are obtained by rapid oa-TOF data acquisition and simultaneous scanning of the magnet. An analog signal generator is used for the scanning of the magnet. Slow magnet scanning rates permit the accurate profiling of precursor ion peaks and the acquisition of product ion spectra for all isobaric ion species. The ability of the instrument to perform both high- and low-energy collisional activation experiments provides access to a large number of dissociation pathways useful for the characterization of precursor ions. Examples are given that illustrate the capability of the method for the characterization of representative petroleum mixtures. The structural information obtained by the automated MS/MS experiment is used in combination with high-resolution accurate mass measurement results to characterize unknown components in a polar extract of a refinery product. The exhaustive mapping of all precursor ions in representative naphtha and middle-distillate fractions is presented. Sets of isobaric ion species are separated and their structures are identified by interpretation from first principles or by comparison with standard 70-eV EI libraries of spectra. The utility of the method increases with the complexity of the samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasemir, Kay; Pearson, Matthew R
For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.
Improving the detection efficiency in nuclear emulsion trackers
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Bozza, C.; Buonaura, A.; Consiglio, L.; D'Ambrosio, N.; Lellis, G. De; De Serio, M.; Di Capua, F.; Di Crescenzo, A.; Di Ferdinando, D.; Di Marco, N.; Fini, R. A.; Galati, G.; Giacomelli, G.; Grella, G.; Hosseini, B.; Kose, U.; Lauria, A.; Longhin, A.; Mandrioli, G.; Mauri, N.; Medinaceli, E.; Montesi, M. C.; Paoloni, A.; Pastore, A.; Patrizii, L.; Pozzato, M.; Pupilli, F.; Rescigno, R.; Roda, M.; Rosa, G.; Schembri, A.; Shchedrina, T.; Simone, S.; Sioli, M.; Sirignano, C.; Sirri, G.; Spinetti, M.; Stellacci, S. M.; Tenti, M.; Tioukov, V.
2015-03-01
Nuclear emulsion films are tracking devices with unique spatial resolution. Their use in today's large-scale experiments relies on the availability of automated microscopes operating at very high speed. In this paper we describe the features and the latest improvements of the European Scanning System, a latest-generation automated microscope for emulsion scanning. In particular, we present a new method for the recovery of tracking inefficiencies. Stacks of double-coated emulsion films were exposed to a 10 GeV/c pion beam. Efficiencies as high as 98% have been achieved for minimum ionising particle tracks perpendicular to the emulsion films, and 93% for tracks with tan(θ) ≃ 0.8.
Tips on hybridizing, washing, and scanning Affymetrix microarrays.
Ares, Manuel
2014-02-01
Starting in the late 1990s, Affymetrix, Inc. produced a commercial system for hybridizing, washing, and scanning microarrays that was designed to be easy to operate and reproducible. The system used arrays packaged in a plastic cassette or chamber in which the prefabricated array was mounted and could be filled with fluid through resealable membrane ports either by hand or by an automated "fluidics station" specially designed to handle the arrays. A special rotating hybridization oven and a specially designed scanner were also required. Primarily because of automation and standardization the Affymetrix system was and still remains popular. Here, we provide a skeleton protocol with the potential pitfalls identified. It is designed to augment the protocols provided by Affymetrix.
Integrated microfluidic probe station.
Perrault, C M; Qasaimeh, M A; Brastaviceanu, T; Anderson, K; Kabakibo, Y; Juncker, D
2010-11-01
The microfluidic probe (MFP) consists of a flat, blunt tip with two apertures for the injection and reaspiration of a microjet into a solution--thus hydrodynamically confining the microjet--and is operated atop an inverted microscope that enables live imaging. By scanning across a surface, the microjet can be used for surface processing with the capability of both depositing and removing material; as it operates under immersed conditions, sensitive biological materials and living cells can be processed. During scanning, the MFP is kept immobile and centered over the objective of the inverted microscope, a few micrometers above a substrate that is displaced by moving the microscope stage and that is flushed continuously with the microjet. For consistent and reproducible surface processing, the gap between the MFP and the substrate, the MFP's alignment, the scanning speed, the injection and aspiration flow rates, and the image capture all need to be controlled and synchronized. Here, we present an automated MFP station that integrates all of these functionalities and automates the key operational parameters. A custom software program is used to control an independent motorized Z stage for adjusting the gap, a motorized microscope stage for scanning the substrate, up to 16 syringe pumps for injecting and aspirating fluids, and an inverted fluorescence microscope equipped with a charge-coupled device camera. The parallelism between the MFP and the substrate is adjusted using a manual goniometer at the beginning of the experiment. The alignment of the injection and aspiration apertures along the scanning axis is performed using a newly designed MFP screw holder. We illustrate the integrated MFP station by the programmed, automated patterning of fluorescently labeled biotin on a streptavidin-coated surface.
Development of a highly automated system for the remote evaluation of individual tree parameters
Richard Pollock
2000-01-01
A highly automated procedure for remotely estimating individual tree location, crown diameter, species class, and height has been developed. The procedure involves the use of a multimodal airborne sensing system that consists of a digital frame camera, a scanning laser rangefinder, and a position and orientation measurement system. Data from the multimodal sensing...
Value of Defect Information in Automated Hardwood Edger and Trimmer Systems
Carmen Regalado; D. Earl Kline; Philip A. Araman
1992-01-01
Due to the limited capability of board defect scanners, not all defect information required to make the best edging and trimming decision can be scanned for use in an automated system. The objective of the study presented in this paper was to evaluate the lumber value obtainable from edging and trimming optimization using varying levels of defect information as input....
Evaluation of a multi-sensor machine vision system for automated hardwood lumber grading
D. Earl Kline; Chris Surak; Philip A. Araman
2000-01-01
Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading technologies. The...
Lumber Scanning System for Surface Defect Detection
D. Earl Kline; Y. Jason Hou; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1992-01-01
This paper describes research aimed at developing a machine vision technology to drive automated processes in the hardwood forest products manufacturing industry. An industrial-scale machine vision system has been designed to scan variable-size hardwood lumber for detecting important features that influence the grade and value of lumber such as knots, holes, wane,...
Ensemble LUT classification for degraded document enhancement
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady; Frieder, Ophir
2008-01-01
The fast evolution of scanning and computing technologies has led to the creation of large collections of scanned paper documents. Examples of such collections include historical collections, legal depositories, medical archives, and business archives. Moreover, in many situations, such as legal litigation and security investigations, scanned collections are being used to facilitate systematic exploration of the data. It is almost always the case that scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to estimate local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system, we have labeled a subset of the Frieder diaries collection. This labeled subset was then used to train an ensemble classifier. The component classifiers are based on lookup tables (LUT) in conjunction with the approximated nearest neighbor algorithm. The resulting algorithm is highly efficient. Experimental evaluation results are provided using the Frieder diaries collection.
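A lookup-table (LUT) classifier of the kind the ensemble is built from can be illustrated with a minimal sketch: each pixel's binary 3×3 neighborhood is encoded as an index into a 512-entry table trained on degraded/clean image pairs. The toy images and the majority-vote training rule below are assumptions for illustration; the actual system uses an ensemble of such classifiers together with an approximated nearest-neighbor algorithm.

```python
import numpy as np

def neighborhood_codes(img):
    """Encode each pixel's 3x3 binary neighborhood as an integer 0..511."""
    padded = np.pad(img, 1).astype(int)
    codes = np.zeros(img.shape, dtype=int)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            codes |= padded[1 + dy : 1 + dy + img.shape[0],
                            1 + dx : 1 + dx + img.shape[1]] << bit
            bit += 1
    return codes

def train_lut(degraded, clean):
    """For each neighborhood code, store the majority clean-pixel label."""
    codes = neighborhood_codes(degraded).ravel()
    ones = np.bincount(codes, weights=clean.ravel(), minlength=512)
    total = np.bincount(codes, minlength=512)
    lut = np.zeros(512, dtype=np.uint8)
    seen = total > 0
    lut[seen] = (ones[seen] / total[seen] >= 0.5).astype(np.uint8)
    return lut

def enhance(degraded, lut):
    """Replace each pixel by the label the LUT learned for its pattern."""
    return lut[neighborhood_codes(degraded)]

# Toy example: "degradation" adds an isolated speckle; the LUT undoes it
clean = np.zeros((8, 8), dtype=np.uint8)
clean[2:6, 2:6] = 1
degraded = clean.copy()
degraded[0, 0] = 1                     # speckle noise
lut = train_lut(degraded, clean)
restored = enhance(degraded, lut)
```

In the paper's setting, several such tables trained on different labeled regions would vote as an ensemble.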
Automated extraction of radiation dose information from CT dose report images.
Li, Xinhua; Zhang, Da; Liu, Bob
2011-06-01
The purpose of this article is to describe the development of an automated tool for retrieving text from CT dose report images. Optical character recognition was adopted to perform text recognition of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for the CT dose index and DLP of scanned series.
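The parsing and error-correction stages that follow OCR can be sketched as below. The report layout, regular expressions, and character substitutions (e.g., the letter 'O' misread for the digit '0') are illustrative assumptions, not the authors' implementation, and the OCR step itself is omitted.

```python
import re

def correct_ocr_digits(s):
    """Fix common OCR confusions inside numeric fields (assumed rules)."""
    return s.replace("O", "0").replace("l", "1").replace(",", ".")

def parse_dose_report(text):
    """Extract per-series (CTDIvol, DLP) pairs and the total DLP from
    OCR-recognized CT dose-report text."""
    series = []
    for line in text.splitlines():
        m = re.search(r"CTDIvol\s*[:=]?\s*([O\dl.,]+)\s*mGy.*?"
                      r"DLP\s*[:=]?\s*([O\dl.,]+)", line)
        if m:
            series.append((float(correct_ocr_digits(m.group(1))),
                           float(correct_ocr_digits(m.group(2)))))
    t = re.search(r"Total\s+DLP\s*[:=]?\s*([O\dl.,]+)", text)
    total = float(correct_ocr_digits(t.group(1))) if t else None
    return series, total

# Hypothetical OCR output with a typical 'O'-for-'0' recognition error
report = """Series 2  CTDIvol: 12.4 mGy  DLP: 31O.5
Series 3  CTDIvol: 8.1 mGy  DLP: 120.3
Total DLP: 43O.8"""
series, total = parse_dose_report(report)
```

The extracted tuples could then be written to a spreadsheet, mirroring the tool's export stage.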
A LabVIEW based template for user created experiment automation.
Kim, D J; Fisk, Z
2012-12-01
We have developed an expandable software template to automate user-created experiments. The LabVIEW-based template is easily modifiable to combine user-created measurements, controls, and data logging with virtually any type of laboratory equipment. We use reentrant sequential selection to implement sequence scripting, making it possible to wrap a long series of user-created experiments and execute them in sequence. Details of the software structure and application examples for a scanning probe microscope and automated transport experiments using custom-built laboratory electronics and a cryostat are described.
Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram
2016-01-01
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321
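A basic overlap score of the kind used to compare an automated vessel segmentation against an annotated reference can be sketched as follows; the challenge's actual nine-category scoring system is more elaborate than this single Dice coefficient, which is shown here only as the standard building block.

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice overlap between a binary automated segmentation and a
    binary reference annotation (1.0 = perfect agreement)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both empty: define as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Toy 1D "scan": reference vessel mask vs. a slightly shifted automated mask
ref = np.array([0, 1, 1, 1, 1, 0, 0, 0])
seg = np.array([0, 0, 1, 1, 1, 1, 0, 0])
score = dice_coefficient(seg, ref)  # 2*3 / (4+4) = 0.75
```

In practice such per-scan scores would be aggregated over the 20 reference CT scans and the different anatomical categories.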
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
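The final step, applying a segmented region to a dose map to obtain mean and peak organ dose and comparing automated against expert segmentation, can be sketched with synthetic arrays; the Monte Carlo dose maps and expert contours of the study are not reproduced here, and the one-voxel boundary shift below is an invented stand-in for segmentation error.

```python
import numpy as np

def organ_dose(dose_map, mask):
    """Mean and peak dose within a segmented organ region."""
    organ = dose_map[mask]
    return organ.mean(), organ.max()

rng = np.random.default_rng(1)
dose_map = rng.uniform(5.0, 10.0, size=(16, 16, 16))  # mGy, synthetic

expert = np.zeros(dose_map.shape, dtype=bool)
expert[4:12, 4:12, 4:12] = True
auto = np.zeros_like(expert)
auto[4:12, 4:12, 5:13] = True          # boundary error of one voxel

mean_e, peak_e = organ_dose(dose_map, expert)
mean_a, peak_a = organ_dose(dose_map, auto)
mean_err = abs(mean_a - mean_e) / mean_e * 100  # percent error in mean dose
```

This mirrors the paper's observation that random errors at organ boundaries largely average out in the mean organ dose.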
An automated distinction of DICOM images for lung cancer CAD system
NASA Astrophysics Data System (ADS)
Suzuki, H.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2009-02-01
Automated distinction of medical images is an important preprocessing step in Computer-Aided Diagnosis (CAD) systems. CAD systems have been developed using medical image sets with specific scan conditions and body parts; however, a wide variety of examinations are performed at medical sites. The specification of the examination is contained in the DICOM textual meta-information. Most DICOM textual meta-information can be considered reliable; however, the body-part information cannot always be. In this paper, we describe an automated distinction of DICOM images as a preprocessing step for a lung cancer CAD system. Our approach uses DICOM textual meta-information and low-cost image processing. First, the textual meta-information, such as the scan conditions of the DICOM image, is evaluated. Second, the body parts shown in the DICOM image are identified by image processing. The identification of body parts is based on anatomical structure, which is represented by features of three regions: body tissue, bone, and air. The method is effective for the practical use of a lung cancer CAD system at medical sites.
Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis
Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana
2012-01-01
Objectives Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 Control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features were identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153
2009-01-01
Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracy of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparison to validated test methods. Methods A selection of 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan performed true identification of all A. baumannii strains, while Vitek 2 failed to identify one strain, and Phoenix failed to identify two strains and misidentified two others. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance, with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest that clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, J; Tian, X; Segars, P
2016-06-15
Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms was used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, the DLP of the exam, and the knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.
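The estimation step, multiplying the exam's DLP by a knowledgebase coefficient relating imparted energy to DLP, can be sketched as below. The coefficient values and protocol keys are hypothetical placeholders; the study's knowledgebase also conditions on scanned mass, which is omitted here for brevity.

```python
# Sketch of the final estimation step: imparted energy = (E/DLP) * DLP,
# where E/DLP comes from a knowledgebase keyed by protocol and tube potential.
# All coefficient values below are hypothetical placeholders, not study data.

ENERGY_PER_DLP = {                     # mJ per (mGy*cm), hypothetical
    ("chest", 120): 0.45,
    ("abdomenpelvis", 120): 0.52,
}

def imparted_energy(protocol, kv, dlp_mgy_cm):
    """Estimate imparted energy (mJ) for an exam from its DLP."""
    coeff = ENERGY_PER_DLP[(protocol, kv)]
    return coeff * dlp_mgy_cm

e_chest = imparted_energy("chest", 120, 600.0)   # ~270 mJ with this placeholder
```

In the actual workflow, the coefficient lookup would additionally use the automatically estimated scanned mass.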
Adaptation of the Nelson-Somogyi reducing-sugar assay to a microassay using microtiter plates.
Green, F; Clausen, C A; Highley, T L
1989-11-01
The Nelson-Somogyi assay for reducing sugars was adapted to microtiter plates. The primary advantages of this modified assay are (i) smaller sample and reagent volumes, (ii) elimination of boiling and filtration steps, (iii) automated measurement with a dual-wavelength scanning TLC densitometer, (iv) increased range and reproducibility, and (v) automated colorimetric readings by reflectance rather than absorbance.
Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2017-02-01
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach interactively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior in a fashion to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved 0.9713 Pearson correlation compared with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahi-Anwar, M; Lo, P; Kim, H
Purpose: The use of Quantitative Imaging (QI) methods in clinical trials requires both verification of adherence to a specified protocol and an assessment of scanner performance under that protocol, which are currently accomplished manually. This work introduces automated phantom identification and image QA measure extraction towards a fully-automated CT phantom QA system to perform these functions and facilitate the use of Quantitative Imaging methods in clinical trials. Methods: This study used a retrospective cohort of CT phantom scans from existing clinical trial protocols, totaling 84 phantoms across 3 phantom types, using various scanners and protocols. The QA system identifies the input phantom scan through an ensemble of threshold-based classifiers. Each classifier, corresponding to a phantom type, contains a template slice, which is compared to the input scan on a slice-by-slice basis, resulting in slice-wise similarity metric values for each slice compared. Pre-trained thresholds (established from a training set of phantom images matching the template type) are used to filter the similarity distribution, and the slice with the most optimal local mean similarity, with local neighboring slices meeting the threshold requirement, is chosen as the classifier's matched slice (if it exists). The classifier with the matched slice possessing the most optimal local mean similarity is then chosen as the ensemble's best matching slice. If the best matching slice exists, the image QA algorithm and ROIs corresponding to the matching classifier extract the image QA measures. Results: Automated phantom identification performed with 84.5% accuracy and 88.8% sensitivity on 84 phantoms. Automated image quality measurements (following standard protocol) on identified water phantoms (n=35) matched user QA decisions with 100% accuracy. Conclusion: We provide a fully-automated CT phantom QA system consistent with manual QA performance.
Further work will include a parallel component to automatically verify image acquisition parameters and automated adherence to specifications. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH Grant support from: U01 CA181156.
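The template-slice matching logic described in the Methods above can be sketched as follows. The similarity metric (normalized cross-correlation), the three-slice local window, and the synthetic volume are assumptions chosen for illustration, not the system's exact implementation.

```python
import numpy as np

def slice_similarity(a, b):
    """Normalized cross-correlation between two image slices."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def match_template(scan, template, threshold):
    """Return the index of the best-matching slice, or None.

    scan: (n_slices, h, w) volume; template: (h, w) reference slice.
    A slice qualifies only if its local three-slice neighborhood also
    meets the similarity threshold, mimicking the local-mean filtering.
    """
    sims = np.array([slice_similarity(s, template) for s in scan])
    best, best_mean = None, -np.inf
    for i in range(1, len(sims) - 1):
        local = sims[i - 1 : i + 2]
        if (local >= threshold).all() and local.mean() > best_mean:
            best, best_mean = i, local.mean()
    return best

# Toy volume: the template pattern is embedded around slice 5
rng = np.random.default_rng(2)
template = rng.normal(size=(32, 32))
scan = rng.normal(size=(10, 32, 32)) * 0.2
scan[4:7] += template                  # slices 4-6 resemble the template
idx = match_template(scan, template, threshold=0.5)
```

In the full system, one such matcher runs per phantom type and the ensemble keeps the matcher with the best local mean similarity.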
Murayama, I; Miyano, A; Sasaki, Y; Hirata, T; Ichijo, T; Satoh, H; Sato, S; Furuhama, K
2013-11-01
This study was performed to clarify whether a formula (Holstein equation) based on a single blood sample and the isotonic, nonionic, iodine contrast medium iodixanol, developed in Holstein dairy cows, can be applied to the estimation of glomerular filtration rate (GFR) in beef cattle. To verify the applicability of iodixanol in beef cattle in place of the standard tracer inulin, both agents were coadministered as a bolus intravenous injection to the same animals at doses of 10 mg of I/kg of BW and 30 mg/kg. Blood was collected 30, 60, 90, and 120 min after the injection, and the GFR was determined by conventional multisample strategies. The GFR values from iodixanol agreed well with those from inulin, and no effects of BW, age, or parity on GFR estimates were noted. However, the GFR in cattle weighing less than 300 kg and aged <1 yr fluctuated widely, presumably due to rapid ruminal growth and dynamic changes in renal function at young ages. Using clinically healthy cattle and those with renal failure, the GFR values estimated from the Holstein equation were in good agreement with those obtained by the multisample method using iodixanol (r=0.89, P=0.01). The results indicate that the simplified Holstein equation using iodixanol can be used for estimating the GFR of beef cattle with the same dose regimen as for Holstein dairy cows, and provides a practical and ethical alternative.
Multisample adjusted U-statistics that account for confounding covariates.
Satten, Glen A; Kong, Maiying; Datta, Somnath
2018-06-19
Multisample U-statistics encompass a wide class of test statistics that allow the comparison of 2 or more distributions. U-statistics are especially powerful because they can be applied to both numeric and nonnumeric data, eg, ordinal and categorical data where a pairwise similarity or distance-like measure between categories is available. However, when comparing the distribution of a variable across 2 or more groups, observed differences may be due to confounding covariates. For example, in a case-control study, the distribution of exposure in cases may differ from that in controls entirely because of variables that are related to both exposure and case status and are distributed differently among case and control participants. We propose to use individually reweighted data (ie, using the stratification score for retrospective data or the propensity score for prospective data) to construct adjusted U-statistics that can test the equality of distributions across 2 (or more) groups in the presence of confounding covariates. Asymptotic normality of our adjusted U-statistics is established and a closed form expression of their asymptotic variance is presented. The utility of our approach is demonstrated through simulation studies, as well as in an analysis of data from a case-control study conducted among African-Americans, comparing whether the similarity in haplotypes (ie, sets of adjacent genetic loci inherited from the same parent) occurring in a case and a control participant differs from the similarity in haplotypes occurring in 2 control participants. Copyright © 2018 John Wiley & Sons, Ltd.
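A minimal two-sample instance of such an adjusted U-statistic can be sketched as below, using the Mann-Whitney indicator kernel and per-observation weights standing in for the propensity/stratification-score reweighting; the paper's formulation is more general (arbitrary kernels, more than two groups).

```python
import numpy as np

def weighted_two_sample_u(x, y, wx, wy, kernel=lambda a, b: float(a < b)):
    """Weighted two-sample U-statistic.

    Each (i, j) pair contributes kernel(x[i], y[j]) with weight
    wx[i] * wy[j]; the weights play the role of the inverse-propensity
    (or stratification-score) reweighting that adjusts for confounders.
    """
    num = 0.0
    den = 0.0
    for xi, wi in zip(x, wx):
        for yj, wj in zip(y, wy):
            num += wi * wj * kernel(xi, yj)
            den += wi * wj
    return num / den

# With unit weights this reduces to the Mann-Whitney statistic / (n*m)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.5, 3.5])
u = weighted_two_sample_u(x, y, np.ones(3), np.ones(2))  # 5/6
```

Replacing the indicator kernel by a pairwise similarity between categories extends the same computation to ordinal or categorical data, as the abstract describes.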
Machine learning for the automatic localisation of foetal body parts in cine-MRI scans
NASA Astrophysics Data System (ADS)
Bowles, Christopher; Nowlan, Niamh C.; Hayat, Tayyib T. A.; Malamateniou, Christina; Rutherford, Mary; Hajnal, Joseph V.; Rueckert, Daniel; Kainz, Bernhard
2015-03-01
Being able to automate the location of individual foetal body parts has the potential to dramatically reduce the work required to analyse time resolved foetal Magnetic Resonance Imaging (cine-MRI) scans, for example, for use in the automatic evaluation of the foetal development. Currently, manual preprocessing of every scan is required to locate body parts before analysis can be performed, leading to a significant time overhead. With the volume of scans becoming available set to increase as cine-MRI scans become more prevalent in clinical practice, this stage of manual preprocessing is a bottleneck, limiting the data available for further analysis. Any tools which can automate this process will therefore save many hours of research time and increase the rate of new discoveries in what is a key area in understanding early human development. Here we present a series of techniques which can be applied to foetal cine-MRI scans in order to first locate and then differentiate between individual body parts. A novel approach to maternal movement suppression and segmentation using Fourier transforms is put forward as a preprocessing step, allowing for easy extraction of short movements of individual foetal body parts via the clustering of optical flow vector fields. These body part movements are compared to a labelled database and probabilistically classified before being spatially and temporally combined to give a final estimate for the location of each body part.
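The clustering of optical-flow vector fields into candidate body-part movements can be illustrated with a plain k-means over 2D motion vectors; the flow computation itself, the Fourier-based maternal movement suppression, and the labelled-database matching are omitted, and all vectors below are synthetic.

```python
import numpy as np

def kmeans_2d(vectors, k, iters=50, seed=0):
    """Plain k-means over 2D optical-flow vectors; returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # distance of every vector to every center, then nearest assignment
        d = np.linalg.norm(vectors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            pts = vectors[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return labels

# Synthetic flow field: one body part moving right, another moving up
rng = np.random.default_rng(3)
right = rng.normal([3.0, 0.0], 0.2, size=(40, 2))
up = rng.normal([0.0, 3.0], 0.2, size=(40, 2))
flow = np.vstack([right, up])
labels = kmeans_2d(flow, k=2)
```

Each recovered cluster corresponds to a coherent short movement, which would then be classified against the labelled database of body-part movements.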
Characterization of Visual Scanning Patterns in Air Traffic Control
McClung, Sarah N.; Kang, Ziho
2016-01-01
Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
NASA Astrophysics Data System (ADS)
Gan, Yu; Yao, Xinwen; Chang, Ernest W.; Bin Amir, Syed A.; Hibshoosh, Hanina; Feldman, Sheldon; Hendon, Christine P.
2017-02-01
Breast cancer is the third leading cause of death in women in the United States. In human breast tissue, adipose cells are infiltrated or replaced by cancer cells during the development of a breast tumor. Therefore, an adipose map can be an indicator for identifying cancerous regions. We developed an automated classification method to generate an adipose map within the human breast. To facilitate the automated classification, we first mask the B-scans from OCT volumes by comparing the signal-to-noise ratio with a threshold. Then, each image is divided into multiple blocks with a size of 30 pixels by 30 pixels. In each block, we extracted texture features such as local standard deviation, entropy, homogeneity, and coarseness. The features of each block were input to a probabilistic model, a relevance vector machine (RVM), which was trained prior to the experiment, to classify tissue types. For each block within the B-scan, the RVM identified the regions with adipose tissue. We calculated the adipose ratio as the number of blocks identified as adipose over the total number of blocks within the B-scan. We obtained OCT images from patients (n = 19) at Columbia Medical Center. We automatically generated the adipose maps from 24 B-scans including normal samples (n = 16) and cancerous samples (n = 8). We found that the adipose regions show an isolated pattern in cancerous tissue and a clustered pattern in normal tissue. Moreover, the adipose ratio in normal tissue (52.30 ± 29.42%) was higher than that in cancerous tissue (12.41 ± 10.07%).
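A minimal sketch of the block-wise pipeline described above, with a single standard-deviation texture feature and a fixed threshold standing in for the trained RVM classifier. The 30×30 block size follows the abstract; the feature choice and threshold are arbitrary assumptions:

```python
import numpy as np

def adipose_ratio(bscan, block=30, std_thresh=10.0):
    """Divide a B-scan into block x block tiles, compute a simple texture
    feature (local standard deviation), and flag 'adipose' tiles with a
    threshold standing in for the paper's trained RVM. Returns the adipose
    ratio: flagged tiles over total tiles."""
    h, w = bscan.shape
    flags = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = bscan[r:r + block, c:c + block]
            flags.append(tile.std() > std_thresh)
    return sum(flags) / len(flags)
```

In the real method the per-tile feature vector (standard deviation, entropy, homogeneity, coarseness) would go to the RVM instead of a hard threshold.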
Electrophoretic mobility shift scanning using an automated infrared DNA sequencer.
Sano, M; Ohyama, A; Takase, K; Yamamoto, M; Machida, M
2001-11-01
Electrophoretic mobility shift assay (EMSA) is widely used in the study of sequence-specific DNA-binding proteins, including transcription factors and mismatch binding proteins. We have established a non-radioisotope-based protocol for EMSA that features an automated DNA sequencer with an infrared fluorescent dye (IRDye) detection unit. Our modification of the electrophoresis unit, which includes cooling the gel plates with a reduced well-to-read length, has made it possible to detect shifted bands within 1 h. Further, we have developed a rapid ligation-based method for generating IRDye-labeled probes with an approximately 60% cost reduction. This method has the advantages of real-time scanning, stability of labeled probes, and better safety associated with nonradioactive methods of detection. Analysis of a promoter from an industrially important filamentous fungus, Aspergillus oryzae, in a prototype experiment revealed that the method we describe has potential for use in systematic scanning and identification of the functionally important elements to which cellular factors bind in a sequence-specific manner.
Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2015-03-01
Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
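The Dice similarity coefficient used for evaluation above is straightforward to compute on binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|). Two empty masks count as identical."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A DSC of 0.93, as reported, means the automated and manual fat regions share 93% of their combined area on average.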
Automated kidney morphology measurements from ultrasound images using texture and edge analysis
NASA Astrophysics Data System (ADS)
Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin
2016-04-01
In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution will help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast of the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we use textural information in a machine learning framework using Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases, and the performance against ground truth is measured by computing Dice overlap and percentage error in the major and minor axes of the kidney. The algorithm performs successfully on 80% of cases.
Automated Detection and Analysis of Interplanetary Shocks with Real-Time Application
NASA Astrophysics Data System (ADS)
Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.
2006-12-01
The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. Our goal is to provide an automated code that finds and analyzes interplanetary shocks as they occur, for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. Although these codes can be automated in a reasonable manner to yield solutions not far from those obtained by user-directed interactive analysis, event detection presents an added obstacle and is the first step in a fully automated analysis. We present a fully automated Rankine-Hugoniot analysis code that can scan the ACE science data, find shock candidates, analyze the events, obtain solutions in good agreement with those derived from interactive applications, and dismiss false-positive shock candidates on the basis of the conservation equations. The intent is to make this code available to NOAA for use in real-time space weather applications. The code has the added advantage of being able to scan spacecraft data sets to provide shock solutions for use outside real-time applications and can easily be applied to science-quality data sets from other missions. Use of the code for this purpose will also be explored.
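A fully automated pipeline of this kind needs an event-detection pass before the Rankine-Hugoniot fitting. The following is a minimal, assumption-laden sketch that flags candidate shocks from the density compression ratio between upstream and downstream windows; window length and threshold are arbitrary, and the real code fits the full jump conditions rather than a single ratio:

```python
def shock_candidates(density, window=3, min_ratio=1.5):
    """Minimal event-detection pass: flag indices where the mean density in
    the following (downstream) window exceeds the preceding (upstream)
    window mean by a compression-ratio threshold. A stand-in for full
    Rankine-Hugoniot candidate detection."""
    hits = []
    for i in range(window, len(density) - window):
        up = sum(density[i - window:i]) / window
        down = sum(density[i:i + window]) / window
        if up > 0 and down / up >= min_ratio:
            hits.append(i)
    return hits
```

Candidates found this way would then be passed to the Rankine-Hugoniot solver, which can dismiss false positives via the conservation equations.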
Furlaneto-Maia, Luciana; Rocha, Kátia Real; Siqueira, Vera Lúcia Dias; Furlaneto, Márcia Cristina
2014-01-01
Enterococci are increasingly responsible for nosocomial infections worldwide. This study was undertaken to compare the identification and susceptibility profiles of Enterococcus spp. obtained using an automated MicroScan system, a PCR-based assay and the disk diffusion assay. We evaluated 30 clinical isolates of Enterococcus spp. Isolates were identified by the MicroScan system and a PCR-based assay. The detection of antibiotic resistance genes (vancomycin, gentamicin, tetracycline and erythromycin) was also determined by PCR. Antimicrobial susceptibilities to vancomycin (30 µg), gentamicin (120 µg), tetracycline (30 µg) and erythromycin (15 µg) were tested by the automated system and the disk diffusion method, and were interpreted according to the criteria recommended in CLSI guidelines. Concerning Enterococcus identification, the general agreement between data obtained by the PCR method and by the automated system was 90.0% (27/30). For all isolates of E. faecium and E. faecalis we observed 100% agreement. Resistance frequencies were higher in E. faecium than in E. faecalis. The resistance rates obtained were higher for erythromycin (86.7%), vancomycin (80.0%), tetracycline (43.3%) and gentamicin (33.3%). The correlation between disk diffusion and automation revealed agreement for the majority of the antibiotics, with category agreement rates of > 80%. In the PCR-based assay, the vanA gene was detected in 100% of vancomycin-resistant enterococci. This assay is simple to conduct and reliable in the identification of clinically relevant enterococci. The data obtained reinforced the need for an improvement of the automated system to identify some enterococci. PMID:24626409
Automated 3D renal segmentation based on image partitioning
NASA Astrophysics Data System (ADS)
Yeghiazaryan, Varduhi; Voiculescu, Irina D.
2016-03-01
Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients, true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
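The volume-based and size-based measures named above can be computed directly from binary masks. Sign and normalisation conventions for the relative volume difference vary across the literature, so the one below is an assumption:

```python
import numpy as np

def overlap_measures(seg, gold):
    """Similarity measures named in the abstract, on binary masks:
    Jaccard coefficient, true positive volume fraction (TPVF), and
    relative volume difference (RVD; convention assumed here is
    (|seg| - |gold|) / |gold|)."""
    seg = np.asarray(seg, dtype=bool)
    gold = np.asarray(gold, dtype=bool)
    inter = np.logical_and(seg, gold).sum()
    union = np.logical_or(seg, gold).sum()
    return {
        "jaccard": inter / union,
        "tpvf": inter / gold.sum(),
        "rvd": (seg.sum() - gold.sum()) / gold.sum(),
    }
```

Jaccard and TPVF measure spatial overlap, while RVD can be zero even for disjoint masks of equal size, which is why both families of measures are reported.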
Validity of Automated Choroidal Segmentation in SS-OCT and SD-OCT.
Zhang, Li; Buitendijk, Gabriëlle H S; Lee, Kyungmoo; Sonka, Milan; Springelkamp, Henriët; Hofman, Albert; Vingerling, Johannes R; Mullins, Robert F; Klaver, Caroline C W; Abràmoff, Michael D
2015-05-01
To evaluate the validity of a novel fully automated three-dimensional (3D) method capable of segmenting the choroid from two different optical coherence tomography scanners: swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT). One hundred eight subjects were imaged using SS-OCT and SD-OCT. A 3D method was used to segment the choroid and quantify the choroidal thickness along each A-scan. The segmented choroidal posterior boundary was evaluated by comparing to manual segmentation. Differences were assessed to test the agreement between segmentation results of the same subject. Choroidal thickness was defined as the Euclidian distance between Bruch's membrane and the choroidal posterior boundary, and reproducibility was analyzed using automatically and manually determined choroidal thicknesses. For SS-OCT, the average choroidal thickness of the entire 6- by 6-mm² macular region was 219.5 μm (95% confidence interval [CI], 204.9-234.2 μm), and for SD-OCT it was 209.5 μm (95% CI, 197.9-221.0 μm). The agreement between automated and manual segmentations was high: Average relative difference was less than 5 μm, and average absolute difference was less than 15 μm. Reproducibility of choroidal thickness between repeated SS-OCT scans was high (coefficient of variation [CV] of 3.3%, intraclass correlation coefficient [ICC] of 0.98), and differences between SS-OCT and SD-OCT results were small (CV of 11.0%, ICC of 0.73). We have developed a fully automated 3D method for segmenting the choroid and quantifying choroidal thickness along each A-scan. The method yielded high validity. Our method can be used reliably to study local choroidal changes and may improve the diagnosis and management of patients with ocular diseases in which the choroid is affected.
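Per-A-scan choroidal thickness and the coefficient of variation used for reproducibility can be sketched as follows; the axial-resolution scale factor is illustrative, not specific to either scanner:

```python
import numpy as np

def choroidal_thickness(bruchs, posterior, axial_res_um=3.9):
    """Per-A-scan choroidal thickness: axial distance between Bruch's
    membrane and the posterior choroidal boundary. Boundary positions are
    given as pixel rows per A-scan; the micron-per-pixel factor here is
    an illustrative assumption."""
    return (np.asarray(posterior) - np.asarray(bruchs)) * axial_res_um

def coefficient_of_variation(repeats):
    """CV of repeated mean-thickness measurements, as reported for
    reproducibility (sample standard deviation over the mean)."""
    r = np.asarray(repeats, dtype=float)
    return r.std(ddof=1) / r.mean()
```

A CV of 3.3% between repeated SS-OCT scans, as reported, corresponds to a scan-to-scan spread of only a few microns at these mean thicknesses.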
Automated labeling of log features in CT imagery of multiple hardwood species
Daniel L. Schmoldt; Jing He; A. Lynn Abbott
2000-01-01
Before noninvasive scanning, e.g., computed tomography (CT), becomes feasible in industrial sawmill operations, we need a procedure that can automatically interpret scan information in order to provide the saw operator with information necessary to make proper sawing decisions. To this end, we have worked to develop an approach for automatic analysis of CT images of...
ERIC Educational Resources Information Center
Lafaye, Christophe
2009-01-01
Introduction: The rapid growth of the Internet has modified the boundaries of information acquisition (tracking) in environmental scanning. Despite the numerous advantages of this new medium, information overload is an enormous problem for Internet scanners. In order to help them, intelligent agents (i.e., autonomous, automated software agents…
C. T. Scott; R. Hernandez; C. Frihart; R. Gleisner; T. Tice
2005-01-01
A new method for quantifying percentage wood failure of an adhesively bonded block-shear specimen has been developed. This method incorporates a laser displacement gage with an automated two-axis positioning system that functions as a highly sensitive profilometer. The failed specimen is continuously scanned across its width to obtain a surface failure profile. The...
MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Meyer, Arnd
2010-02-01
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Automated grading, upgrading, and cuttings prediction of surfaced dry hardwood lumber
Sang-Mook Lee; Phil Araman; A.Lynn Abbott; Matthew F. Winn
2010-01-01
This paper concerns the scanning, sawing, and grading of kiln-dried hardwood lumber. A prototype system is described that uses laser sources and a video camera to scan boards. The system automatically detects defects and wane, searches for optimal sawing solutions, and then estimates the grades of the boards that would result. The goal is to derive maximum commercial...
S.A. Bowe; R.L. Smith; D. Earl Kline; Philip A. Araman
2002-01-01
A nationwide survey of advanced scanning and optimizing technology in the hardwood sawmill industry was conducted in the fall of 1999. Three specific hardwood sawmill technologies were examined that included current edger-optimizer systems, future edger-optimizer systems, and future automated grading systems. The objectives of the research were to determine differences...
Portable automated imaging in complex ceramics with a microwave interference scanning system
NASA Astrophysics Data System (ADS)
Goitia, Ryan M.; Schmidt, Karl F.; Little, Jack R.; Ellingson, William A.; Green, William; Franks, Lisa P.
2013-01-01
An improved portable microwave interferometry system has been automated to permit rapid examination of components with minimal operator attendance. Functionalities include stereo and multiplexed frequency-modulated scanning at multiple frequencies, producing layered volumetric images of complex ceramic structures. The technique has been used to image composite ceramic armor and ceramic matrix composite components, as well as other complex dielectric materials. The system utilizes the Evisive Scan microwave interference scanning technique. Validation tests include artificial and in-service damage of ceramic armor, surrogates and ceramic matrix composite samples. Validation techniques include micro-focus x-ray and computed tomography imaging. The microwave interference scanning technique has demonstrated detection of cracks, interior laminar features and variations in material properties such as density. The image yields depth information through phase-angle manipulation, and shows the extent of features and relative dielectric property information. It requires access to only one surface, and no coupling medium. Data are not affected by separation of layers of dielectric material, such as an outer over-wrap. Test panels were provided by the US Army Research Laboratory and the US Army Tank Automotive Research, Development and Engineering Center (TARDEC), who with the US Air Force Research Laboratory have supported this work.
Automated Quantitative Rare Earth Elements Mineralogy by Scanning Electron Microscopy
NASA Astrophysics Data System (ADS)
Sindern, Sven; Meyer, F. Michael
2016-09-01
Increasing industrial demand of rare earth elements (REEs) stems from the central role they play for advanced technologies and the accelerating move away from carbon-based fuels. However, REE production is often hampered by the chemical, mineralogical as well as textural complexity of the ores with a need for better understanding of their salient properties. This is not only essential for in-depth genetic interpretations but also for a robust assessment of ore quality and economic viability. The design of energy and cost-efficient processing of REE ores depends heavily on information about REE element deportment that can be made available employing automated quantitative process mineralogy. Quantitative mineralogy assigns numeric values to compositional and textural properties of mineral matter. Scanning electron microscopy (SEM) combined with a suitable software package for acquisition of backscatter electron and X-ray signals, phase assignment and image analysis is one of the most efficient tools for quantitative mineralogy. The four different SEM-based automated quantitative mineralogy systems, i.e. FEI QEMSCAN and MLA, Tescan TIMA and Zeiss Mineralogic Mining, which are commercially available, are briefly characterized. Using examples of quantitative REE mineralogy, this chapter illustrates capabilities and limitations of automated SEM-based systems. Chemical variability of REE minerals and analytical uncertainty can reduce performance of phase assignment. This is shown for the REE phases parisite and synchysite. In another example from a monazite REE deposit, the quantitative mineralogical parameters surface roughness and mineral association derived from image analysis are applied for automated discrimination of apatite formed in a breakdown reaction of monazite and apatite formed by metamorphism prior to monazite breakdown. SEM-based automated mineralogy fulfils all requirements for characterization of complex unconventional REE ores that will become increasingly important for supply of REEs in the future.
Subtle In-Scanner Motion Biases Automated Measurement of Brain Anatomy From In Vivo MRI
Alexander-Bloch, Aaron; Clasen, Liv; Stockman, Michael; Ronan, Lisa; Lalonde, Francois; Giedd, Jay; Raznahan, Armin
2016-01-01
While the potential for small amounts of motion in functional magnetic resonance imaging (fMRI) scans to bias the results of functional neuroimaging studies is well appreciated, the impact of in-scanner motion on morphological analysis of structural MRI is relatively under-studied. Even among “good quality” structural scans, there may be systematic effects of motion on measures of brain morphometry. In the present study, the subjects’ tendency to move during fMRI scans, acquired in the same scanning sessions as their structural scans, yielded a reliable, continuous estimate of in-scanner motion. Using this approach within a sample of 127 children, adolescents, and young adults, significant relationships were found between this measure and estimates of cortical gray matter volume and mean curvature, as well as trend-level relationships with cortical thickness. Specifically, cortical volume and thickness decreased with greater motion, and mean curvature increased. These effects of subtle motion were anatomically heterogeneous, were present across different automated imaging pipelines, showed convergent validity with effects of frank motion assessed in a separate sample of 274 scans, and could be demonstrated in both pediatric and adult populations. Thus, using different motion assays in two large non-overlapping sets of structural MRI scans, convergent evidence showed that in-scanner motion—even at levels which do not manifest in visible motion artifact—can lead to systematic and regionally specific biases in anatomical estimation. These findings have special relevance to structural neuroimaging in developmental and clinical datasets, and inform ongoing efforts to optimize neuroanatomical analysis of existing and future structural MRI datasets in non-sedated humans. PMID:27004471
SkinScan©: A PORTABLE LIBRARY FOR MELANOMA DETECTION ON HANDHELD DEVICES
Wadhawan, Tarun; Situ, Ning; Lancaster, Keith; Yuan, Xiaojing; Zouridakis, George
2011-01-01
We have developed a portable library for automated detection of melanoma termed SkinScan© that can be used on smartphones and other handheld devices. Compared to desktop computers, embedded processors have limited processing speed, memory, and power, but they have the advantage of portability and low cost. In this study we explored the feasibility of running a sophisticated application for automated skin cancer detection on an Apple iPhone 4. Our results demonstrate that the proposed library with the advanced image processing and analysis algorithms has excellent performance on handheld and desktop computers. Therefore, deployment of smartphones as screening devices for skin cancer and other skin diseases can have a significant impact on health care delivery in underserved and remote areas. PMID:21892382
NASA Technical Reports Server (NTRS)
Newcomb, J. S.
1975-01-01
The present paper describes an automated system for measuring stellar proper motions on the basis of information contained in photographic plates. In this system, the images on a star plate are digitized by a scanning microdensitometer using light from a He-Ne gas laser, and a special-purpose computer arranges the measurements in computer-compatible form on magnetic tape. The scanning and image-reconstruction processes are briefly outlined, and the image-evaluation techniques are discussed. It is shown that the present system has been especially successful in measuring the proper motions of low-luminosity stars, including 119 stars with less than 1/10,000 of the solar bolometric luminosity. Plans for measurements of high-density Milky Way star plates are noted.
Automated scoring system of standard uptake value for torso FDG-PET images
NASA Astrophysics Data System (ADS)
Hara, Takeshi; Kobayashi, Tatsunori; Kawai, Kazunao; Zhou, Xiangrong; Itoh, Satoshi; Katafuchi, Tetsuro; Fujita, Hiroshi
2008-03-01
The purpose of this work was to develop an automated method to calculate the score of SUV for the torso region on FDG-PET scans. The three-dimensional distributions of the mean and standard deviation values of SUV were stored in each volume to score the SUV at the corresponding pixel position within unknown scans. The modeling method is based on an SPM approach using the correction technique of the Euler characteristic and Resel (resolution element). We employed 197 normal cases (male: 143, female: 54) to assemble the normal metabolism distribution of FDG. The physiques were registered to each other in a rectangular parallelepiped shape using affine transformation and a thin-plate-spline technique. The regions of the three organs were determined by a semi-automated procedure. Seventy-three abnormal spots were used to estimate the effectiveness of the scoring method. As a result, the score images correctly represented that the scores for normal cases were between zero and plus/minus 2 SD. Most of the scores of abnormal spots associated with cancer were larger than the upper bound of the SUV interval of normal organs.
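Scoring an unknown scan against stored normal statistics amounts to a voxelwise z-score. A minimal sketch follows (the SPM-style Euler-characteristic/Resel correction from the abstract is omitted; thresholds are the ±2 SD bounds mentioned above):

```python
import numpy as np

def suv_score_map(suv, mean_map, sd_map):
    """Voxelwise score: deviation of a subject's SUV from the stored
    normal-population mean, in units of the stored normal standard
    deviation (a z-score map)."""
    suv = np.asarray(suv, dtype=float)
    return (suv - mean_map) / sd_map

def abnormal_mask(score, n_sd=2.0):
    """Flag voxels whose score falls outside the +/- n_sd normal interval."""
    return np.abs(score) > n_sd
```

Under this scoring, normal-case voxels land within roughly ±2 SD, and hypermetabolic spots associated with cancer exceed the upper bound, matching the behaviour reported above.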
Automated Reconstruction of Historic Roof Structures from Point Clouds - Development and Examples
NASA Astrophysics Data System (ADS)
Pöchtrager, M.; Styhler-Aydın, G.; Döring-Williams, M.; Pfeifer, N.
2017-08-01
The analysis of historic roof constructions is an important task for planning the adaptive reuse of buildings or for maintenance and restoration issues. Current approaches to modeling roof constructions consist of several consecutive operations that need to be done manually or using semi-automatic routines. To increase efficiency and allow the focus to be on analysis rather than on data processing, a set of methods was developed for the fully automated analysis of roof constructions, including integration of architectural and structural modeling. Terrestrial laser scanning permits high-detail surveying of large-scale structures within a short time. Whereas 3-D laser scan data consist of millions of single points on the object surface, we need a geometric description of the structural elements in order to obtain a structural model consisting of beam axes and connections. Preliminary results showed that the developed methods work well for beams in flawless condition with a square cross-section and no bending. Deformations or damage such as cracks and cuts on the wooden beams can lead to incomplete representations in the model. Overall, a high degree of automation was achieved.
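Reducing a cloud of surface points to a beam-axis representation is commonly done by fitting a line through the points via principal component analysis; the following sketch works under that assumption (the paper's actual fitting method may differ):

```python
import numpy as np

def beam_axis(points):
    """Estimate a beam's axis from laser-scan points on its surface:
    returns the centroid and the dominant principal direction. A common
    way to reduce a point cloud to an axis; illustrative only."""
    p = np.asarray(points, dtype=float)
    centroid = p.mean(axis=0)
    # The first right-singular vector of the centred cloud is the
    # direction of maximum variance, i.e. along the beam.
    _, _, vt = np.linalg.svd(p - centroid)
    return centroid, vt[0]
```

Connections between beams can then be located where the fitted axes approach or intersect each other.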
Learning-based scan plane identification from fetal head ultrasound images
NASA Astrophysics Data System (ADS)
Liu, Xiaoming; Annangi, Pavan; Gupta, Mithun; Yu, Bing; Padfield, Dirk; Banerjee, Jyotirmoy; Krishnan, Kajoli
2012-03-01
Acquisition of a clinically acceptable scan plane is a pre-requisite for ultrasonic measurement of anatomical features from B-mode images. In obstetric ultrasound, measurement of gestational age predictors, such as biparietal diameter and head circumference, is performed at the level of the thalami and cavum septi pellucidi. In an accurate scan plane, the head can be modeled as an ellipse, the thalami look like a butterfly, the cavum appears like an empty box, and the falx is a straight line along the major axis of a symmetric ellipse inclined either parallel to or at small angles to the probe surface. Arriving at the correct probe placement on the mother's belly to obtain an accurate scan plane is a task of considerable challenge, especially for a new user of ultrasound. In this work, we present a novel automated learning-based algorithm to identify an acceptable fetal head scan plane. We divide the problem into cranium detection and a template matching to capture the composite "butterfly" structure present inside the head, which mimics the visual cues used by an expert. The algorithm uses state-of-the-art Active Appearance Model techniques from the image processing and computer vision literature and ties them to the presence or absence of the inclusions within the head to automatically compute a score representing the goodness of a scan plane. This automated technique can potentially be used to train and aid new users of ultrasound.
Laser Scanner For Automatic Storage
NASA Astrophysics Data System (ADS)
Carvalho, Fernando D.; Correia, Bento A.; Rebordao, Jose M.; Rodrigues, F. Carvalho
1989-01-01
Automated magazines are being used in industry more and more. One of the problems related to the automation of a storehouse is the identification of the products involved. Already used for stock management, bar codes provide an easy way to identify a product. Applied to automated magazines, bar codes encode a great variety of items in a small code. In order to be used by the national producers of automated magazines, a dedicated laser scanner has been developed. The prototype uses a He-Ne laser whose beam scans a field angle of 75 degrees at 16 Hz. The scene reflectivity is transduced by a photodiode into an electrical signal, which is then binarized. This digital signal is the input to the decoding program. The machine is able to see bar codes and to decode the information. A parallel interface allows communication with the central unit, which is responsible for the management of the automated magazine.
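The binarized photodiode signal is decoded by first measuring the widths of alternating dark bars and light spaces. A minimal sketch of that front end (illustrative only; mapping the measured widths to actual bar-code digits would additionally need the symbology tables):

```python
def run_lengths(signal, threshold=0.5):
    """Binarize a reflectivity trace and return (level, width) runs.

    level 1 = dark bar (low reflectivity), 0 = light space.
    The width sequence is what a decoder matches against the
    symbology's bar/space patterns.
    """
    bits = [1 if s < threshold else 0 for s in signal]
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((b, 1))              # start a new run
    return runs

# A toy trace: light-dark-light with widths 3, 2, 4 samples.
trace = [0.9, 0.9, 0.9, 0.1, 0.1, 0.8, 0.8, 0.8, 0.8]
runs = run_lengths(trace)
```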
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.
Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M
2016-01-01
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01.
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
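The Dice coefficient used throughout these comparisons can be computed directly from two binary label masks. A minimal sketch (the masks here are toy examples, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.

    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

auto   = np.array([[0, 1, 1], [0, 1, 1]])  # e.g. automated segmentation
manual = np.array([[0, 1, 1], [1, 1, 0]])  # e.g. gold-standard mask
score = dice(auto, manual)
```

With 4 voxels in each mask and 3 shared, the score is 2*3/8 = 0.75.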
Farber, Joshua M; Totterman, Saara M S; Martinez-Torteya, Antonio; Tamez-Peña, Jose G
2016-02-01
Subchondral bone (SCB) undergoes changes in the shape of the articulating bone surfaces and is currently recognized as a key target in osteoarthritis (OA) treatment. The aim of this study was to present an automated system that determines the curvature of the SCB regions of the knee and to evaluate its cross-sectional and longitudinal scan-rescan precision. Six subjects with OA and six control subjects were selected from the Osteoarthritis Initiative (OAI) pilot study database. As per OAI protocol, these subjects underwent 3T MRI at baseline and every twelve months thereafter, including a 3D DESS WE sequence. We analyzed the baseline and twenty-four-month images. Each subject was scanned twice at these visits, thus generating scan-rescan information. Images were segmented with an automated multi-atlas framework platform and then 3D renderings of the bone structure were created from the segmentations. Curvature maps were extracted from the 3D renderings and morphed into a reference atlas to determine precision, to generate population statistics, and to visualize cross-sectional and longitudinal curvature changes. The baseline scan-rescan root mean square error values ranged from 0.006mm(-1) to 0.013mm(-1), and from 0.007mm(-1) to 0.018mm(-1) for the SCB of the femur and the tibia, respectively. The standardized response of the mean of the longitudinal changes in curvature in these regions ranged from -0.09 to 0.02 and from -0.016 to 0.015, respectively. The fully automated system produces accurate and precise curvature maps of femoral and tibial SCB, and will provide a valuable tool for the analysis of the curvature changes of articulating bone surfaces during the course of knee OA. Copyright © 2015 Elsevier Ltd. All rights reserved.
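The two precision statistics reported above follow from straightforward formulas; the curvature values below are illustrative numbers, not OAI measurements:

```python
import math

def scan_rescan_rmse(scan, rescan):
    """Root mean square error between paired scan/rescan values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(scan, rescan)) / len(scan))

def srm(changes):
    """Standardized response of the mean: mean change / SD of changes."""
    n = len(changes)
    mean = sum(changes) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in changes) / (n - 1))
    return mean / sd

# Illustrative curvature values (mm^-1) for one SCB region.
scan    = [0.102, 0.098, 0.110, 0.095]
rescan  = [0.104, 0.097, 0.108, 0.096]
rmse = scan_rescan_rmse(scan, rescan)

# Illustrative longitudinal curvature changes over 24 months.
response = srm([-0.02, -0.01, -0.03, 0.00])
```

An SRM near zero, as in the reported ranges, indicates longitudinal change that is small relative to its between-subject variability.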
Liya Thomas; R. Edward Thomas
2011-01-01
We have developed an automated defect detection system and a state-of-the-art Graphic User Interface (GUI) for hardwood logs. The algorithm identifies defects at least 0.5 inch high and at least 3 inches in diameter on barked hardwood log and stem surfaces. To summarize defect features and to build a knowledge base, hundreds of defects were measured, photographed, and...
Chi-Leung So; Thomas L. Eberhardt; Stan T. Lebow; Leslie H. Groom
2006-01-01
Near infrared (NIR) spectroscopy has been previously used in our laboratory to predict copper, chromium, and arsenic levels in samples of chromated copper arsenate (CCA)-treated wood. In the present study, we utilized our custom-made NIR scanning system, NIRVANA (near infrared visual and automated numerical analysis), to scan cross sections of ACQ (alkaline copper quat...
Lewis, Jane Ea; Williams, Paul; Davies, Jane H
2016-01-01
This cross-sectional study aimed to individually and cumulatively compare sensitivity and specificity of the (1) ankle brachial index and (2) pulse volume waveform analysis recorded by the same automated device, with the presence or absence of peripheral arterial disease being verified by ultrasound duplex scan. Patients (n=205) referred for lower limb arterial assessment underwent ankle brachial index measurement and pulse volume waveform recording using volume plethysmography, followed by ultrasound duplex scan. The presence of peripheral arterial disease was recorded if ankle brachial index <0.9; pulse volume waveform was graded as 2, 3 or 4; or if haemodynamically significant stenosis >50% was evident with ultrasound duplex scan. Outcome measure was agreement between the measured ankle brachial index and interpretation of pulse volume waveform for peripheral arterial disease diagnosis, using ultrasound duplex scan as the reference standard. Sensitivity of ankle brachial index was 79%, specificity 91% and overall accuracy 88%. Pulse volume waveform sensitivity was 97%, specificity 81% and overall accuracy 85%. The combined sensitivity of ankle brachial index and pulse volume waveform was 100%, specificity 76% and overall accuracy 85%. Combining these two diagnostic modalities within one device provided a highly accurate method of ruling out peripheral arterial disease, which could be utilised in primary care to safely reduce unnecessary secondary care referrals.
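The combined-modality figures follow from a simple "positive if either test is positive" rule, which raises sensitivity at the cost of specificity. A toy sketch with an invented six-patient cohort (not the study's 205 patients):

```python
def sens_spec(test_pos, disease):
    """Sensitivity and specificity of a binary test against a
    reference standard (here, the ultrasound duplex scan)."""
    tp = sum(t and d for t, d in zip(test_pos, disease))
    fn = sum((not t) and d for t, d in zip(test_pos, disease))
    tn = sum((not t) and (not d) for t, d in zip(test_pos, disease))
    fp = sum(t and (not d) for t, d in zip(test_pos, disease))
    return tp / (tp + fn), tn / (tn + fp)

# Invented cohort: ABI positive, PVW positive, duplex (reference).
abi = [True, False, True, False, False, False]
pvw = [True, True,  True, False, True,  False]
dux = [True, True,  True, False, False, False]

# Combined rule: flag disease if EITHER modality is positive.
combined = [a or p for a, p in zip(abi, pvw)]
sens, spec = sens_spec(combined, dux)
```

In this toy cohort the combined rule reaches 100% sensitivity while specificity drops to 2/3, mirroring the direction of the trade-off reported in the study.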
Automated SEM-EDS GSR Analysis for Turkish Ammunitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cakir, Ismail; Uner, H. Bulent
2007-04-23
In this work, automated scanning electron microscopy with energy-dispersive X-ray spectrometry (SEM-EDS) was used to characterize 7.65 and 9 mm cartridges of Turkish ammunition. All samples were analyzed in a JEOL JSM-5600LV SEM equipped with a BSE detector and a Link ISIS 300 EDS system. A working distance of 20 mm, an accelerating voltage of 20 kV, and gunshot residue software were used in all analyses. The automated search resulted in a high number of analyzed particles containing the elements unique to gunshot residue (GSR) (Pb, Ba, Sb). The obtained data on the definition of characteristic GSR particles were concordant with other studies on this topic.
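The unique-element criterion amounts to a set test on each particle's EDS elements. A minimal sketch; the "consistent"/"environmental" tiers below are an assumption borrowed from common GSR classification practice, not stated in the abstract:

```python
def classify_particle(elements):
    """Classify an EDS particle spectrum for gunshot residue.

    Assumed tiering (common GSR practice, not from the abstract):
    Pb + Ba + Sb together are 'characteristic' of GSR; a subset of
    those elements is only 'consistent' with GSR; none of them
    suggests an environmental particle.
    """
    e = set(elements)
    if {"Pb", "Ba", "Sb"} <= e:
        return "characteristic"
    if e & {"Pb", "Ba", "Sb"}:
        return "consistent"
    return "environmental"

label = classify_particle({"Pb", "Ba", "Sb", "Cu"})
```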
Bibler Zaidi, Nikki L; Santen, Sally A; Purkiss, Joel A; Teener, Carol A; Gay, Steven E
2016-11-01
Most medical schools have either retained a traditional admissions interview or fully adopted an innovative, multisampling format (e.g., the multiple mini-interview) despite there being advantages and disadvantages associated with each format. The University of Michigan Medical School (UMMS) sought to maximize the strengths associated with both interview formats after recognizing that combining the two approaches had the potential to capture additional, unique information about an applicant. In September 2014, the UMMS implemented a hybrid interview model with six 6-minute short-form interviews (highly structured, scenario-based encounters) and two 30-minute semistructured long-form interviews. Five core skills were assessed across both interview formats. Overall, applicants and admissions committee members reported favorable reactions to the hybrid model, supporting continued use of the model. The generalizability coefficients for the six-station short-form and the two-interview long-form formats were estimated to be 0.470 and 0.176, respectively. Different skills were more reliably assessed by different interview formats. Scores from each format seemed to be operating independently as evidenced through moderate to low correlations (r = 0.100-0.403) for the same skills measured across different interview formats; however, after correcting for attenuation, these correlations were much higher. This hybrid model will be revised and optimized to capture the skills most reliably assessed by each format. Future analysis will examine validity by determining whether short-form and long-form interview scores accurately measure the skills intended to be assessed. Additionally, data collected from both formats will be used to establish baselines for entering students' competencies.
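The correction for attenuation mentioned above is the classical disattenuation formula, which divides the observed correlation by the square root of the product of the two score reliabilities. The observed correlation of 0.250 below is an illustrative value drawn from the reported 0.100-0.403 range, not a figure from the paper:

```python
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Disattenuated correlation between two skill scores.

    Classical formula: r_true = r_xy / sqrt(rel_x * rel_y), where
    rel_* are the reliabilities (here, the generalizability
    coefficients) of each interview format's scores.
    """
    return r_xy / math.sqrt(rel_x * rel_y)

# Reported generalizability coefficients: 0.470 (short-form),
# 0.176 (long-form); observed correlation is illustrative.
r_corrected = correct_for_attenuation(0.250, 0.470, 0.176)
```

With such low reliabilities the correction is large, which is why the disattenuated correlations were "much higher" than the observed ones.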
NASA Astrophysics Data System (ADS)
Breier, J. A.; Sheik, C. S.; Gomez-Ibanez, D.; Sayre-McCord, R. T.; Sanger, R.; Rauch, C.; Coleman, M.; Bennett, S. A.; Cron, B. R.; Li, M.; German, C. R.; Toner, B. M.; Dick, G. J.
2014-12-01
A new tool was developed for large volume sampling to facilitate marine microbiology and biogeochemical studies. It was developed for remotely operated vehicle and hydrocast deployments, and allows for rapid collection of multiple sample types from the water column and dynamic, variable environments such as rising hydrothermal plumes. It was used successfully during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Suspended Particulate Rosette V2 large volume multi-sampling system allows for the collection of 14 sample sets per deployment. Each sample set can include filtered material, whole (unfiltered) water, and filtrate. Suspended particulate can be collected on filters up to 142 mm in diameter and pore sizes down to 0.2 μm. Filtration is typically at flowrates of 2 L min-1. For particulate material, filtered volume is constrained only by sampling time and filter capacity, with all sample volumes recorded by digital flowmeter. The suspended particulate filter holders can be filled with preservative and sealed immediately after sample collection. Up to 2 L of whole water, filtrate, or a combination of the two, can be collected as part of each sample set. The system is constructed of plastics with titanium fasteners and nickel alloy spring loaded seals. There are no ferrous alloys in the sampling system. Individual sample lines are prefilled with filtered, deionized water prior to deployment and remain sealed unless a sample is actively being collected. This system is intended to facilitate studies concerning the relationship between marine microbiology and ocean biogeochemistry.
Laser Scanning Reader For Automated Data Entry Operations
NASA Astrophysics Data System (ADS)
Cheng, Charles C. K.
1980-02-01
The use of the Universal Product Code (UPC) in conjunction with the laser-scanner-equipped electronic checkout system has made it technologically possible for supermarket stores to operate more efficiently and accurately. At present, more than 90% of the packages in grocery stores have been marked by the manufacturer with laser-scannable UPC symbols and the installation of laser scanning systems is expected to expand into all major chain stores. Areas to be discussed are: system design features, laser-scanning pattern generation, signal-processing logical considerations, UPC characteristics and encodation.
Ultrasonic inspection and deployment apparatus
Michaels, Jennifer E.; Michaels, Thomas E.; Mech, Jr., Stephen J.
1984-01-01
An ultrasonic inspection apparatus for the inspection of metal structures, especially installed pipes. The apparatus combines a specimen inspection element, an acoustical velocity sensing element, and a surface profiling element, all in one scanning head. A scanning head bellows contains a volume of oil above the pipe surface, serving as acoustical couplant between the scanning head and the pipe. The scanning head is mounted on a scanning truck which is mobile around a circular track surrounding the pipe. The scanning truck has sufficient motors, gears, and position encoders to allow the scanning head six degrees of motion freedom. A computer system continually monitors acoustical velocity, and uses that parameter to process surface profiling and inspection data. The profiling data is used to automatically control scanning head position and alignment and to define a coordinate system used to identify and interpret inspection data. The apparatus is suitable for highly automated, remote application in hostile environments, particularly high temperature and radiation areas.
An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments
NASA Technical Reports Server (NTRS)
Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.
2015-01-01
The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike using calipers to measure, the imaging system obtains non-contact measurements to avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of the debris fragment. In this way, the imaging system can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured and a custom developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
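A common breakup-model definition of characteristic length is the mean of a fragment's extents along three mutually orthogonal directions. The sketch below assumes those directions are the PCA axes of the measured point cloud; the DebriSat pipeline's exact axis-selection convention may differ:

```python
import numpy as np

def characteristic_length(points):
    """Characteristic length of a fragment from its 3-D point cloud.

    Assumed definition: mean of the cloud's extents along its three
    principal (PCA) axes. Illustrative only; the actual measurement
    algorithm extracting L_c from the space-carved cloud is custom.
    """
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # Principal axes of the cloud from the SVD of the centred points.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt.T                      # coordinates in PCA frame
    extents = proj.max(axis=0) - proj.min(axis=0)
    return float(extents.mean())

# Corners of an axis-aligned 4 x 2 x 1 box: mean extent = (4+2+1)/3.
grid = np.array([[x, y, z]
                 for x in (0.0, 4.0)
                 for y in (0.0, 2.0)
                 for z in (0.0, 1.0)])
L_c = characteristic_length(grid)
```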
Shinde, V; Burke, K E; Chakravarty, A; Fleming, M; McDonald, A A; Berger, A; Ecsedy, J; Blakemore, S J; Tirrell, S M; Bowman, D
2014-01-01
Immunohistochemistry-based biomarkers are commonly used to understand target inhibition in key cancer pathways in preclinical models and clinical studies. Automated slide-scanning and advanced high-throughput image analysis software technologies have evolved into a routine methodology for quantitative analysis of immunohistochemistry-based biomarkers. Alongside the traditional pathology H-score based on physical slides, the pathology world is welcoming digital pathology and advanced quantitative image analysis, which have enabled tissue- and cellular-level analysis. An automated workflow was implemented that includes automated staining, slide-scanning, and image analysis methodologies to explore biomarkers involved in 2 cancer targets: Aurora A and NEDD8-activating enzyme (NAE). The 2 workflows highlight the evolution of our immunohistochemistry laboratory and the different needs and requirements of each biological assay. Skin biopsies obtained from MLN8237 (Aurora A inhibitor) phase 1 clinical trials were evaluated for mitotic and apoptotic index, while mitotic index and defects in chromosome alignment and spindles were assessed in tumor biopsies to demonstrate Aurora A inhibition. Additionally, in both preclinical xenograft models and an acute myeloid leukemia phase 1 trial of the NAE inhibitor MLN4924, development of a novel image algorithm enabled measurement of downstream pathway modulation upon NAE inhibition. In the highlighted studies, developing a biomarker strategy based on automated image analysis solutions enabled project teams to confirm target and pathway inhibition and understand downstream outcomes of target inhibition with increased throughput and quantitative accuracy. These case studies demonstrate a strategy that combines a pathologist's expertise with automated image analysis to support oncology drug discovery and development programs.
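Quantitative IHC image analysis commonly reduces per-cell staining-intensity calls to the pathology H-score referenced above. A minimal sketch of that calculation (the exact scoring used in these studies may differ):

```python
def h_score(pct_by_intensity):
    """Histopathology H-score from per-intensity cell percentages.

    pct_by_intensity maps staining intensity (0-3) to the percentage
    of cells at that intensity; score = sum(intensity * pct),
    giving a range of 0-300.
    """
    total = sum(pct_by_intensity.values())
    if abs(total - 100.0) > 1e-6:
        raise ValueError("percentages must sum to 100")
    return sum(i * p for i, p in pct_by_intensity.items())

# Illustrative call rates from an automated per-cell classifier.
score = h_score({0: 40.0, 1: 30.0, 2: 20.0, 3: 10.0})
```

Automated image analysis replaces the pathologist's visual estimate of the percentages with per-cell classification, which is what enables the tissue- and cellular-level quantitation described above.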
Fast, Automated, Photo realistic, 3D Modeling of Building Interiors
2016-09-12
project, we developed two algorithmic pipelines for GPS-denied indoor mobile 3D mapping using an ambulatory backpack system. By mounting scanning...equipment on a backpack system, a human operator can traverse the interior of a building to produce a high-quality 3D reconstruction. In each of our... Final Report (12-09-2016, covering 1-May-2011 to 30-Jun-2015; distribution unlimited): Fast, Automated, Photo-realistic, 3D Modeling of Building Interiors
2011-10-01
International Conference on Robotics and Automation, Pasadena CA, USA, May 19-23, 2008, p 3672-3677. APPENDICES A Socket Breakdown for Scanning...the LimbLogic is the more efficient of the two pumps. These tests also showed that the performance for both pumps was self-consistent over the...Donelan, J. M. Biomechanical Energy Harvesting: Apparatus and Method. IEEE International Conference on Robotics and Automation, May 19-23, 2008. Lyon
Application of an industrial robot to nuclear pharmacy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viola, J.
1994-12-31
Increased patient throughput and lengthened PET scan protocols have increased the radiation dose received by PET technologists. Automated methods of tracer infusion and blood sampling have been introduced to reduce direct contact with the radioisotopes, but significant radiation exposure still occurs during the receipt and dispensing of the patient dose. To address this situation, the authors have developed an automated robotic system which performs these tasks, thus limiting the physical contact between operator and radioisotope.
2016-10-01
workshop, and use case development for automated and autonomous systems for CSS. The scoping study covers key concepts and trends, a technology scan, and...requirements and delimiters for the selected technologies. The report goes on to present detailed use cases for two technologies of interest: semi...selected use cases. As a result of the workshop, the large list of technologies and applications from the scoping study was narrowed down to the top
Automated position control of a surface array relative to a liquid microjunction surface sampler
Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James
2007-11-13
A system and method utilizes an image analysis approach for controlling the probe-to-surface distance of a liquid junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables a hands-free formation of the liquid microjunction used to sample solution composition from the surface and for re-optimization, as necessary, of the microjunction thickness during a surface scan to achieve a fully automated surface sampling system.
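The re-optimization of the microjunction thickness during a surface scan amounts to closed-loop control driven by an image-derived measurement. A toy proportional-feedback sketch; the gain, units, and the assumed error response are illustrative, not from the patent:

```python
def adjust_probe(current_z, measured_thickness, target_thickness, gain=0.5):
    """One step of image-based probe-height feedback.

    Moves the probe by a fraction (gain) of the thickness error; the
    image-analysis step producing measured_thickness is outside this
    sketch. Gain and units are illustrative.
    """
    error = measured_thickness - target_thickness
    return current_z + gain * error

# Converge toward a 0.8 mm junction starting from a 1.4 mm reading.
z, thickness, target = 10.0, 1.4, 0.8
for _ in range(5):
    z = adjust_probe(z, thickness, target)
    # Assumed plant response for the demo: each move halves the error.
    thickness = target + (thickness - target) * 0.5
```

After five iterations the thickness error has shrunk by 2^5, illustrating how periodic image measurements keep the junction at its optimal thickness hands-free.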
A Communication Framework for Collaborative Defense
2009-02-28
been able to provide sufficient automation to be able to build up the most extensive application signature database in the world with a fraction of...perceived. ...techniques that are well understood in the context of databases. These techniques allow users to quickly scan for the existence of a key in a database. To be
Segmentation of the whole breast from low-dose chest CT images
NASA Astrophysics Data System (ADS)
Liu, Shuang; Salvatore, Mary; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2015-03-01
The segmentation of whole breast serves as the first step towards automated breast lesion detection. It is also necessary for automatically assessing the breast density, which is considered to be an important risk factor for breast cancer. In this paper we present a fully automated algorithm to segment the whole breast in low-dose chest CT images (LDCT), which has been recommended as an annual lung cancer screening test. The automated whole breast segmentation and potential breast density readings as well as lesion detection in LDCT will provide useful information for women who have received LDCT screening, especially the ones who have not undergone mammographic screening, by providing them additional risk indicators for breast cancer with no additional radiation exposure. The two main challenges to be addressed are significant range of variations in terms of the shape and location of the breast in LDCT and the separation of pectoral muscles from the glandular tissues. The presented algorithm achieves robust whole breast segmentation using an anatomy directed rule-based method. The evaluation is performed on 20 LDCT scans by comparing the segmentation with ground truth manually annotated by a radiologist on one axial slice and two sagittal slices for each scan. The resulting average Dice coefficient is 0.880 with a standard deviation of 0.058, demonstrating that the automated segmentation algorithm achieves results consistent with manual annotations of a radiologist.
Wang, Jian; Chen, Hong-Ping; Liu, You-Ping; Wei, Zheng; Liu, Rong; Fan, Dan-Qing
2013-05-01
This experiment shows how to use the Automated Mass Spectral Deconvolution and Identification System (AMDIS) to deconvolve overlapped peaks in the total ion chromatogram (TIC) of volatile oil from Chinese materia medica (CMM). The essential oil was obtained by steam distillation. Its TIC was acquired by GC-MS, and the superimposed peaks in the TIC were deconvolved by AMDIS. First, AMDIS can detect the number of components in the TIC through its run function. Then, by analyzing the extracted spectrum at the corresponding scan point of a detected component, the original spectrum at that scan point, and their counterpart spectra in the reference MS library, researchers can ascertain the component's structure accurately or exclude compounds that do not exist in nature. Furthermore, by examining the variability of the characteristic fragment ion peaks of identified compounds, the previous outcome can be confirmed. The results demonstrated that AMDIS can efficiently deconvolve overlapped peaks in the TIC by extracting the spectrum at the matching scan point of a discerned component, leading to exact identification of the component's structure.
Paulus, Stefan; Dupuis, Jan; Riedel, Sebastian; Kuhlmann, Heiner
2014-01-01
Due to the rise of laser scanning the 3D geometry of plant architecture is easy to acquire. Nevertheless, an automated interpretation and, finally, the segmentation into functional groups are still difficult to achieve. Two barley plants were scanned in a time course, and the organs were separated by applying a histogram-based classification algorithm. The leaf organs were represented by meshing algorithms, while the stem organs were parameterized by a least-squares cylinder approximation. We introduced surface feature histograms with an accuracy of 96% for the separation of the barley organs, leaf and stem. This enables growth monitoring in a time course for barley plants. Its reliability was demonstrated by a comparison with manually fitted parameters with a correlation R2 = 0.99 for the leaf area and R2 = 0.98 for the cumulated stem height. A proof of concept has been given for its applicability for the detection of water stress in barley, where the extension growth of an irrigated and a non-irrigated plant has been monitored. PMID:25029283
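The least-squares cylinder parameterization of stem organs reduces, per cross-section, to fitting a circle to 2-D points. A sketch using the algebraic Kåsa method, one of several possible fits; the paper does not specify its solver, and a full cylinder fit would also estimate the axis direction:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic least-squares circle fit (Kåsa method).

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares
    sense, where c = r^2 - cx^2 - cy^2; returns (cx, cy, r).
    """
    xy = np.asarray(xy, dtype=float)
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic stem cross-section: circle of radius 2 centred at (1, -1).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([1 + 2 * np.cos(theta), -1 + 2 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
```

Stacking the fitted radii and centres of successive cross-sections along the stem yields the cylinder parameters from which the cumulated stem height can be read off.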
ERIC Educational Resources Information Center
Jones, Richard M.
1981-01-01
A computer program that utilizes an optical scanning machine is used for ordering supplies in a Louisiana school system. The program provides savings in time and labor, more accurate data, and easy-to-use reports. (Author/MLF)
Development of critical dimension measurement scanning electron microscope for ULSI (S-8000 series)
NASA Astrophysics Data System (ADS)
Ezumi, Makoto; Otaka, Tadashi; Mori, Hiroyoshi; Todokoro, Hideo; Ose, Yoichi
1996-05-01
The semiconductor industry is moving from half-micron to quarter-micron design rules. To support this evolution, Hitachi has developed a new critical dimension measurement scanning electron microscope (CD-SEM), the model S-8800 series, for quality control of quarter-micron process lines. The new CD-SEM provides detailed examination of process conditions with 5 nm resolution and 5 nm repeatability (3 sigma) at an accelerating voltage of 800 V using secondary electron imaging. In addition, a newly developed load-lock system is capable of achieving a high sample throughput of 20 wafers/hour (5 point measurements per wafer) under continuous operation. For user friendliness, the system incorporates a graphical user interface (GUI), an automated pattern recognition system which helps locate measurement points, both manual and semi-automated operation, and user-programmable operating parameters.
Zang, Pengxiao; Gao, Simon S; Hwang, Thomas S; Flaxel, Christina J; Wilson, David J; Morrison, John C; Huang, David; Li, Dengwang; Jia, Yali
2017-03-01
To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm was performed on radial cross-sectional B-scans. The disc boundary was detected by searching for the position of Bruch's membrane opening, and retinal layer boundaries were detected using a dynamic programming-based graph search algorithm on each B-scan without the disc region. A comparison of the disc boundary using our method with that determined by manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and eyes with diabetic retinopathy and glaucoma. The layer segmentation accuracy in the same cases was on average less than one pixel (3.13 μm).
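The dynamic programming-based graph search for a layer boundary can be sketched as a minimum-cost path that crosses the B-scan one column at a time, moving at most one row between columns. The cost values below are toy numbers standing in for (negative) gradient magnitudes, not OCT data:

```python
def dp_boundary(cost):
    """Minimum-cost left-to-right path across image columns.

    cost[r][c] is the cost of the boundary passing through row r at
    column c; transitions are limited to +-1 row per column, which
    enforces boundary smoothness. Returns the row index per column.
    """
    rows, cols = len(cost), len(cost[0])
    # acc[c][r] = (cumulative cost, parent row in column c-1)
    acc = [[(cost[r][0], -1) for r in range(rows)]]
    for c in range(1, cols):
        col = []
        for r in range(rows):
            prev_cost, prev_row = min(
                (acc[c - 1][p][0], p)
                for p in (r - 1, r, r + 1)
                if 0 <= p < rows
            )
            col.append((prev_cost + cost[r][c], prev_row))
        acc.append(col)
    # Backtrack from the cheapest endpoint in the last column.
    r = min(range(rows), key=lambda k: acc[-1][k][0])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = acc[c][r][1]
        path.append(r)
    return path[::-1]

# A cheap "boundary" along row 1 that dips to row 2 in one column.
cost = [
    [9, 9, 9, 9],
    [1, 1, 9, 1],
    [9, 9, 1, 9],
]
path = dp_boundary(cost)
```

Running the search on each radial B-scan, with the disc region excluded as described, gives one smooth boundary trace per scan.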
Automated extraction of single H atoms with STM: tip state dependency
NASA Astrophysics Data System (ADS)
Møller, Morten; Jarvis, Samuel P.; Guérinet, Laurent; Sharp, Peter; Woolley, Richard; Rahe, Philipp; Moriarty, Philip
2017-02-01
The atomistic structure of the tip apex plays a crucial role in performing reliable atomic-scale surface and adsorbate manipulation using scanning probe techniques. We have developed an automated extraction routine for controlled removal of single hydrogen atoms from the H:Si(100) surface. The set of atomic extraction protocols detect a variety of desorption events during scanning tunneling microscope (STM)-induced modification of the hydrogen-passivated surface. The influence of the tip state on the probability for hydrogen removal was examined by comparing the desorption efficiency for various classifications of STM topographs (rows, dimers, atoms, etc). We find that dimer-row-resolving tip apices extract hydrogen atoms most readily and reliably (and with least spurious desorption), while tip states which provide atomic resolution counter-intuitively have a lower probability for single H atom removal.
High speed automated microtomography of nuclear emulsions and recent application
NASA Astrophysics Data System (ADS)
Tioukov, V.; Aleksandrov, A.; Consiglio, L.; De Lellis, G.; Vladymyrov, M.
2015-12-01
The development of high-speed automatic scanning systems was the key factor in the massive and successful application of emulsions in large neutrino experiments like OPERA. The simplicity of the emulsion detector, its unprecedented sub-micron spatial resolution and its unique ability to provide intrinsically 3-dimensional spatial information make it a perfect device for the study of short-lived particles, where the event topology must be precisely reconstructed in a 10-100 um scale vertex region. Recently, exceptional technological progress in image processing and automation, together with intensive R&D done by Italian and Japanese microscopy groups, has permitted an increase of the scanning speed to the m2/day scale, unbelievable only a few years ago, and so greatly extends the range of possible applications for emulsion-based detectors to other fields such as medical imaging, directional dark matter searches, nuclear physics, and geological and industrial applications.
Tang, Xiaoying; Luo, Yuan; Chen, Zhibin; Huang, Nianwei; Johnson, Hans J.; Paulsen, Jane S.; Miller, Michael I.
2018-01-01
In this paper, we present a fully-automated subcortical and ventricular shape generation pipeline that acts on structural magnetic resonance images (MRIs) of the human brain. Principally, the proposed pipeline consists of three steps: (1) automated structure segmentation using the diffeomorphic multi-atlas likelihood-fusion algorithm; (2) study-specific shape template creation based on the Delaunay triangulation; (3) deformation-based shape filtering using the large deformation diffeomorphic metric mapping for surfaces. The proposed pipeline is shown to provide high accuracy, sufficient smoothness, and accurate anatomical topology. Two datasets focused upon Huntington's disease (HD) were used for evaluating the performance of the proposed pipeline. The first of these contains a total of 16 MRI scans, each with a gold standard available, on which the proposed pipeline's outputs were observed to be highly accurate and smooth when compared with the gold standard. Visual examinations and outlier analyses on the second dataset, which contains a total of 1,445 MRI scans, revealed 100% success rates for the putamen, the thalamus, the globus pallidus, the amygdala, and the lateral ventricle in both hemispheres and rates no smaller than 97% for the bilateral hippocampus and caudate. Another independent dataset, consisting of 15 atlas images and 20 testing images, was also used to quantitatively evaluate the proposed pipeline, with high accuracy obtained. In short, the proposed pipeline is herein demonstrated to be effective, both quantitatively and qualitatively, using a large collection of MRI scans. PMID:29867332
Conde, Esther; Suárez-Gauthier, Ana; Benito, Amparo; Garrido, Pilar; García-Campelo, Rosario; Biscuola, Michele; Paz-Ares, Luis; Hardisson, David; de Castro, Javier; Camacho, M. Carmen; Rodriguez-Abreu, Delvys; Abdulkader, Ihab; Ramirez, Josep; Reguart, Noemí; Salido, Marta; Pijuán, Lara; Arriola, Edurne; Sanz, Julián; Folgueras, Victoria; Villanueva, Noemí; Gómez-Román, Javier; Hidalgo, Manuel; López-Ríos, Fernando
2014-01-01
Background Based on the excellent results of the clinical trials with ALK inhibitors, the importance of accurately identifying ALK-positive lung cancer has never been greater. However, an increasing number of recent publications address discordances between FISH and IHC. The controversy is further fuelled by the different regulatory approvals. This situation prompted us to investigate two ALK IHC antibodies (using a novel ultrasensitive detection-amplification kit) and an automated ALK FISH scanning system (FDA-cleared) in a series of non-small cell lung cancer tumor samples. Methods Forty-seven ALK FISH-positive and 56 ALK FISH-negative NSCLC samples were studied. All specimens were screened for ALK expression by two IHC antibodies (clone 5A4 from Novocastra and clone D5F3 from Ventana) and for ALK rearrangement by FISH (Vysis ALK FISH break-apart kit), which was automatically captured and scored using BioView's automated scanning system. Results All cases positive with the IHC antibodies were FISH-positive. There was only one case negative by IHC with both antibodies which showed a FISH-positive result. The overall sensitivity and specificity of IHC in comparison with FISH were 98% and 100%, respectively. Conclusions The specificity of these ultrasensitive IHC assays may obviate the need for FISH confirmation in IHC-positive cases. However, the likelihood of false-negative IHC results strengthens the case for FISH testing, at least in some situations. PMID:25248157
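The reported sensitivity and specificity follow directly from the concordance counts. A minimal sketch, using the counts implied by the abstract (one IHC false negative among 47 FISH-positive samples, no false positives among 56 FISH-negative samples):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Counts implied by the abstract: 47 FISH-positive samples with one
# IHC-negative result (false negative), and 56 FISH-negative samples
# with no IHC-positive result (no false positives).
sens, spec = sensitivity_specificity(tp=46, fn=1, tn=56, fp=0)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 98%, 100%
```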
A User’s Manual for Fiber Diffraction: The Automated Picker and Huber Diffractometers
1990-07-01
Figure 3. Layer line scan of degummed silk (Bombyx mori) showing layers 0 through 6. The smoothing and interpolation is done by a least-squares polynomial fit to segments of the data, originally made at intervals larger than 0.010.
Koh, Victor; Swamidoss, Issac Niwas; Aquino, Maria Cecilia D; Chew, Paul T; Sng, Chelvin
2018-04-27
To develop an algorithm to predict the success of laser peripheral iridotomy (LPI) in primary angle closure suspect (PACS) eyes, using pre-treatment anterior segment optical coherence tomography (ASOCT) scans. A total of 116 eyes with PACS underwent LPI, and time-domain ASOCT scans (temporal and nasal cuts) were performed before and 1 month after LPI. Each post-treatment scan was classified into one of the following categories: (a) both angles open, (b) one of two angles open and (c) both angles closed. Success after LPI is defined as one or more angles changing from closed to open. In the proposed method, the pre- and post-LPI ASOCT scans were registered at the corresponding angles based on similarities between the respective local descriptor features, and the random sample consensus technique was used to identify the largest consensus set of correspondences between the pre- and post-LPI ASOCT scans. Subsequently, features such as the correlation coefficient (CC) and structural similarity index (SSIM) were extracted and correlated with the success of LPI. Of the 116 eyes included, 91 (78.44%) fulfilled the criteria for success after LPI. Using the CC and SSIM index scores from this training set of ASOCT images, our algorithm predicted the success of LPI in eyes with narrow angles with 89.7% accuracy, 95.2% specificity and 36.4% sensitivity based on pre-LPI ASOCT scans only. Using pre-LPI ASOCT scans, our proposed algorithm showed good accuracy in predicting the success of LPI for PACS eyes. This fully-automated algorithm could aid decision making in offering LPI as a prophylactic treatment for PACS.
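The two similarity features named above can be illustrated as follows. This is a hedged sketch: a Pearson correlation coefficient and a simplified, global-statistics form of SSIM; the authors' exact windowed SSIM formulation and the registration step are not reproduced here.

```python
import numpy as np

def correlation_coefficient(x, y):
    """Pearson correlation coefficient between two image patches."""
    x = np.asarray(x, float).ravel()
    y = np.asarray(y, float).ravel()
    return float(np.corrcoef(x, y)[0, 1])

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM using global image statistics (no sliding window)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Identical patches give CC = 1 and SSIM = 1; dissimilar patches score lower.
patch = np.arange(16, dtype=float).reshape(4, 4)
print(correlation_coefficient(patch, patch), global_ssim(patch, patch))
```

In the study's setting, such scores computed between registered pre- and post-LPI angle regions would serve as the inputs to the success predictor.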
Outcomes of role stress: a multisample constructive replication.
Kemery, E R; Bedeian, A G; Mossholder, K W; Touliatos, J
1985-06-01
Responses from four separate samples of accountants and hospital employees provided a constructive replication of the Bedeian and Armenakis (1981) model of the causal nexus between role stress and selected outcome variables. We investigated the relationship between both role ambiguity and role conflict--as specific forms of role stress--and job-related tension, job satisfaction, and propensity to leave, using LISREL IV, a technique capable of providing statistical data for a hypothesized population model, as well as for specific causal paths. Results, which support the Bedeian and Armenakis model, are discussed in light of previous research.
Study of the 20,22Ne+20,22Ne and 10,12,13,14,15C+12C Fusion Reactions with MUSIC
NASA Astrophysics Data System (ADS)
Avila, M. L.; Rehm, K. E.; Almaraz-Calderon, S.; Carnelli, P. F. F.; DiGiovine, B.; Esbensen, H.; Hoffman, C. R.; Jiang, C. L.; Kay, B. P.; Lai, J.; Nusair, O.; Pardo, R. C.; Santiago-Gonzalez, D.; Talwar, R.; Ugalde, C.
2016-05-01
A highly efficient MUlti-Sampling Ionization Chamber (MUSIC) detector has been developed for measurements of fusion reactions. A study of fusion cross sections in the 10,12,13,14,15C+12C and 20,22Ne+20,22Ne systems has been performed at ATLAS. Experimental results and comparison with theoretical predictions are presented. Furthermore, results of direct measurements of the 17O(α, n)20Ne, 23Ne(α, p)26Mg and 23Ne(α, n)26Al reactions will be discussed.
Reproducibility of CT bone densitometry: operator versus automated ROI definition.
Louis, O; Luypaert, R; Kalender, W; Osteaux, M
1988-05-01
Intrasubject reproducibility with repeated determination of vertebral mineral density from a given set of CT images was investigated. The region of interest (ROI) in 10 patient scans was selected by four independent operators either manually or with an automated procedure separating cortical and spongeous bone, the operators being requested to interact in ROI selection. The mean intrasubject variation was found to be much lower with the automated process (0.3 to 0.6%) than with the conventional method (2.5 to 5.2%). In a second study, 10 patients were examined twice to determine the reproducibility of CT slice selection by the operator. The errors were of the same order of magnitude as in ROI selection.
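The intrasubject variation reported above is, in effect, a coefficient of variation across repeated measurements. A minimal sketch with hypothetical density readings (the values below are illustrative, not from the study):

```python
import numpy as np

def intrasubject_cv(measurements):
    """Coefficient of variation (%) across repeated measurements of one subject."""
    m = np.asarray(measurements, float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Hypothetical repeated vertebral-density readings (mg/cm^3) by four operators
manual = [152.0, 148.1, 155.3, 144.9]   # manual ROI selection
auto = [150.2, 150.9, 150.5, 150.0]     # automated ROI selection
print(f"manual CV: {intrasubject_cv(manual):.1f}%")
print(f"automated CV: {intrasubject_cv(auto):.1f}%")
```

A lower coefficient for the automated readings mirrors the study's finding that automated ROI definition is substantially more reproducible than manual selection.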
Automated Telerobotic Inspection Of Surfaces
NASA Technical Reports Server (NTRS)
Balaram, J.; Prasad, K. Venkatesh
1996-01-01
Method of automated telerobotic inspection of surfaces undergoing development. Apparatus implementing method includes video camera that scans over surfaces to be inspected, in manner of mine detector. Images of surfaces compared with reference images to detect flaws. Developed for inspecting external structures of Space Station Freedom for damage from micrometeorites and debris from prior artificial satellites. On Earth, applied to inspection for damage, missing parts, contamination, and/or corrosion on interior surfaces of pipes or exterior surfaces of bridges, towers, aircraft, and ships.
Procedure for Automated Eddy Current Crack Detection in Thin Titanium Plates
NASA Technical Reports Server (NTRS)
Wincheski, Russell A.
2012-01-01
This procedure provides the detailed instructions for conducting Eddy Current (EC) inspections of thin (5-30 mils) titanium membranes with thickness and material properties typical of the development of Ultra-Lightweight diaphragm Tanks Technology (ULTT). The inspection focuses on the detection of part-through, surface breaking fatigue cracks with depths between approximately 0.002" and 0.007" and aspect ratios (a/c) of 0.2-1.0 using an automated eddy current scanning and image processing technique.
CrossTalk: The Journal of Defense Software Engineering. Volume 26, Number 6, November/December 2013
2013-12-01
Automated scanning, which includes automated code-review tools, allows the expert to monitor the system. This enables the validator to leverage the test results for formal validation and verification, and perform a shortened "hybrid" style of IV&V per sprint (1-4 weeks): deliverable product to user, security posture assessed, accredited to field/operate.
Framework for Automated GD&T Inspection Using 3D Scanner
NASA Astrophysics Data System (ADS)
Pathak, Vimal Kumar; Singh, Amit Kumar; Sivadasan, M.; Singh, N. K.
2018-04-01
Geometric Dimensioning and Tolerancing (GD&T) is a symbolic language that helps designers, production personnel and quality monitors convey design specifications in an effective and efficient manner. GD&T has been practiced since the start of machine component assembly, though without being explicitly named, but in recent times industries have placed increasing emphasis on it. One prominent area where most industries struggle is quality inspection. The complete inspection process is largely human-intensive, and the use of conventional gauges and templates for inspection depends heavily on the skill of workers and quality inspectors. In industry, 3D scanning is not new, but it is used mainly for creating 3D drawings or models of physical parts; its potential as a powerful inspection tool is hardly explored. This study centres on designing a procedure for automated inspection using a 3D scanner. Linear, geometric and dimensional inspection of the most popular test bar, the stepped bar, was also carried out as a simple example under the new framework. New-generation engineering industries would welcome this automated inspection procedure, being quick and reliable with reduced human intervention.
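At its core, dimensional inspection against GD&T callouts reduces to checking measured features against nominal values and tolerances. A minimal, hypothetical sketch for a stepped test bar; the dimensions, tolerance and pass/fail logic below are illustrative assumptions, not the paper's framework:

```python
def within_tolerance(measured, nominal, tol):
    """True if a measured dimension lies within nominal +/- tol."""
    return abs(measured - nominal) <= tol

# Hypothetical stepped-bar step diameters (mm) extracted from a scan mesh
nominal_steps = [40.0, 30.0, 20.0]
measured_steps = [40.03, 29.91, 20.12]
tolerance = 0.10  # assumed symmetric dimensional tolerance

report = [(n, m, within_tolerance(m, n, tolerance))
          for n, m in zip(nominal_steps, measured_steps)]
for nominal, measured, ok in report:
    print(f"step {nominal} mm -> {measured} mm: {'PASS' if ok else 'FAIL'}")
```

In a scanner-based pipeline, the measured values would come from fitting geometric primitives to the point cloud rather than from gauges, which removes the dependence on inspector skill noted above.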
A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation
NASA Astrophysics Data System (ADS)
Kıvılcım, C. Ö.; Duran, Z.
2016-06-01
The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. On the other hand, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation purposes has become an active area of research; however, fully automated systems for cultural heritage documentation remain an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open source software environment, using the example project of a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.
A new approach for freezing of aqueous solutions under active control of the nucleation temperature.
Petersen, Ansgar; Schneider, Hendrik; Rau, Guenter; Glasmacher, Birgit
2006-10-01
An experimental setup for controlled freezing of aqueous solutions is introduced. The special feature is a mechanism to actively control the nucleation temperature via electrofreezing: an ice nucleus generated at a platinum electrode by the application of an electric high voltage pulse initiates the crystallization of the sample. Using electrofreezing, the nucleation temperature in pure water can be precisely adjusted to a desired value over the whole temperature range between a maximum temperature Tn(max) close to the melting point and the temperature of spontaneous nucleation. However, the presence of additives can inhibit the nucleus formation. The influence on Tn(max) of hydroxyethyl starch (HES), glucose and glycerol, additives commonly used in cryobiology, and of NaCl was investigated. While the decrease proved moderate for the non-ionic additives, the hindrance of nucleation by ionic NaCl makes the direct application of electrofreezing in solutions with physiological salt concentrations impossible. Therefore, in the multi-sample freezing device presented in this paper, the ice nucleus is produced in a separate volume of pure water inside an electrode cap. This way, the nucleus formation becomes independent of the sample composition. Using electrofreezing rather than conventional seeding methods allows automated freezing of many samples under equal conditions. Experiments performed with model solutions show the reliability and repeatability of this method to start crystallization in the test samples at different specified temperatures. The setup was designed to freeze samples of small volume for basic investigations in the field of cryopreservation and freeze-drying, but the mode of operation might be interesting for many other applications where a controlled nucleation of aqueous solutions is of importance.
NASA Astrophysics Data System (ADS)
Takahashi, Noriyuki; Kinoshita, Toshibumi; Ohmura, Tomomi; Matsuyama, Eri; Toyoshima, Hideto
2018-02-01
The rapid increase in the incidence of Alzheimer's disease (AD) has become a critical issue in low and middle income countries. In general, MR imaging is sufficiently suitable in clinical situations, while CT is uncommonly used in the diagnosis of AD because of its low contrast between brain tissues. However, in those countries CT, which is less costly and more readily available, is desirable as a useful tool for the diagnosis of AD. On CT, the enlargement of the temporal horn of the lateral ventricle (THLV) is one of few findings for the diagnosis of AD. In this paper, we present an automated volumetry of the THLV with segmentation based on Bayes' rule on CT images. In our method, first, all CT data sets are normalized into an atlas by using linear affine transformation and non-linear warping techniques. Next, a probability map of the THLV is constructed in the normalized data. Then, THLV regions are extracted based on Bayes' rule. Finally, the volume of the THLV is evaluated. This scheme was applied to CT scans from 20 AD patients and 20 controls to evaluate the performance of the method for detecting AD. The estimated THLV volume was markedly increased in the AD group compared with the controls (P < .0001), and the area under the receiver operating characteristic curve (AUC) was 0.921. Therefore, this computerized method may have the potential to accurately detect AD on CT images.
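The voxelwise Bayes'-rule step can be sketched as follows: the prior comes from the atlas-space probability map and the likelihoods from assumed intensity models. All numbers below are illustrative, not the authors' models.

```python
import numpy as np

def posterior_map(prior, lik_in, lik_out):
    """Voxelwise posterior P(THLV | intensity) by Bayes' rule.

    prior   : probability-map value P(THLV) at each voxel
    lik_in  : likelihood of the observed intensity given THLV
    lik_out : likelihood of the observed intensity given background
    """
    num = prior * lik_in
    den = num + (1.0 - prior) * lik_out
    return num / den

# Toy 1-D example: prior from an atlas-space probability map,
# likelihoods from assumed intensity models (CSF-like vs. tissue-like)
prior = np.array([0.05, 0.40, 0.90, 0.60])
lik_in = np.array([0.10, 0.80, 0.90, 0.70])
lik_out = np.array([0.90, 0.20, 0.10, 0.30])
post = posterior_map(prior, lik_in, lik_out)
segmented = post > 0.5  # threshold the posterior to extract the region
print(np.round(post, 3), segmented)
```

Summing the segmented voxels times the voxel volume would then give the THLV volume estimate.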
Griffis, Joseph C; Allendorfer, Jane B; Szaflarski, Jerzy P
2016-01-15
Manual lesion delineation by an expert is the standard for lesion identification in MRI scans, but it is time-consuming and can introduce subjective bias. Alternative methods often require multi-modal MRI data, user interaction, scans from a control population, and/or arbitrary statistical thresholding. We present an approach for automatically identifying stroke lesions in individual T1-weighted MRI scans using naïve Bayes classification. Probabilistic tissue segmentation and image algebra were used to create feature maps encoding information about missing and abnormal tissue. Leave-one-case-out training and cross-validation was used to obtain out-of-sample predictions for each of 30 cases with left hemisphere stroke lesions. Our method correctly predicted lesion locations for 30/30 un-trained cases. Post-processing with smoothing (8mm FWHM) and cluster-extent thresholding (100 voxels) was found to improve performance. Quantitative evaluations of post-processed out-of-sample predictions on 30 cases revealed high spatial overlap (mean Dice similarity coefficient=0.66) and volume agreement (mean percent volume difference=28.91; Pearson's r=0.97) with manual lesion delineations. Our automated approach agrees with manual tracing. It provides an alternative to automated methods that require multi-modal MRI data, additional control scans, or user interaction to achieve optimal performance. Our fully trained classifier has applications in neuroimaging and clinical contexts. Copyright © 2015 Elsevier B.V. All rights reserved.
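The classification step can be illustrated with a minimal two-class Gaussian naive Bayes over the kind of feature maps described (missing-tissue and abnormal-tissue scores per voxel). This is a from-scratch sketch with toy data, not the authors' implementation:

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal two-class Gaussian naive Bayes for voxelwise features."""
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.vars = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.priors = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        X = np.asarray(X, float)[:, None, :]            # shape (n, 1, d)
        log_lik = -0.5 * (np.log(2 * np.pi * self.vars)
                          + (X - self.means) ** 2 / self.vars).sum(axis=2)
        return self.classes[np.argmax(log_lik + np.log(self.priors), axis=1)]

# Toy features: [missing-tissue score, abnormal-tissue score] per voxel
X_train = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y_train = [1, 1, 0, 0]                                  # 1 = lesion, 0 = normal
model = GaussianNaiveBayes().fit(X_train, y_train)
print(model.predict([[0.85, 0.75], [0.15, 0.25]]))
```

In the study, such a classifier is trained leave-one-case-out, so each scan is scored by a model that never saw it; the predicted lesion map is then smoothed and cluster-thresholded.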
Butler, Tracy; Zaborszky, Laszlo; Pirraglia, Elizabeth; Li, Jinyu; Wang, Xiuyuan Hugh; Li, Yi; Tsui, Wai; Talos, Delia; Devinsky, Orrin; Kuchna, Izabela; Nowicki, Krzysztof; French, Jacqueline; Kuzniecky, Rubin; Wegiel, Jerzy; Glodzik, Lidia; Rusinek, Henry; DeLeon, Mony J.; Thesen, Thomas
2014-01-01
Septal nuclei, located in basal forebrain, are strongly connected with hippocampi and important in learning and memory, but have received limited research attention in human MRI studies. While probabilistic maps for estimating septal volume on MRI are now available, they have not been independently validated against manual tracing of MRI, typically considered the gold standard for delineating brain structures. We developed a protocol for manual tracing of the human septal region on MRI based on examination of neuroanatomical specimens. We applied this tracing protocol to T1 MRI scans (n=86) from subjects with temporal epilepsy and healthy controls to measure septal volume. To assess the inter-rater reliability of the protocol, a second tracer used the same protocol on 20 scans that were randomly selected from the 72 healthy controls. In addition to measuring septal volume, maximum septal thickness between the ventricles was measured and recorded. The same scans (n=86) were also analysed using septal probabilistic maps and the DARTEL toolbox in SPM. Results show that our manual tracing algorithm is reliable, and that septal volume measurements obtained via manual and automated methods correlate significantly with each other (p<.001). Both manual and automated methods detected significantly enlarged septal nuclei in patients with temporal lobe epilepsy in accord with a proposed compensatory neuroplastic process related to the strong connections between septal nuclei and hippocampi. Septal thickness, which was simple to measure with excellent inter-rater reliability, correlated well with both manual and automated septal volume, suggesting it could serve as an easy-to-measure surrogate for septal volume in future studies. Our results call attention to the important though understudied human septal region, confirm its enlargement in temporal lobe epilepsy, and provide a reliable new manual delineation protocol that will facilitate continued study of this critical region.
PMID:24736183
Tabaqchali, S; Silman, R; Holland, D
1987-01-01
A new rapid automated method for the identification and classification of microorganisms is described. It is based on the incorporation of 35S-methionine into cellular proteins and subsequent separation of the radiolabelled proteins by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE). The protein patterns produced were species specific and reproducible, permitting discrimination between the species. A large number of Gram negative and Gram positive aerobic and anaerobic organisms were successfully tested. Furthermore, there were sufficient differences within species between the protein profiles to permit subdivision of the species. New typing schemes for Clostridium difficile, coagulase negative staphylococci, and Staphylococcus aureus, including the methicillin resistant strains, could thus be introduced; this has provided the basis for useful epidemiological studies. To standardise and automate the procedure an automated electrophoresis system and a two dimensional scanner were developed to scan the dried gels directly. The scanner is operated by a computer which also stores and analyses the scan data. Specific histograms are produced for each bacterial species. Pattern recognition software is used to construct databases and to compare data obtained from different gels: in this way duplicate "unknowns" can be identified. Specific small areas showing differences between various histograms can also be isolated and expanded to maximise the differences, thus providing differentiation between closely related bacterial species and the identification of differences within the species to provide new typing schemes. This system should be widely applied in clinical microbiology laboratories in the near future. PMID:3312300
Crackscope : automatic pavement cracking inspection system.
DOT National Transportation Integrated Search
2008-08-01
The CrackScope system is an automated pavement crack rating system consisting of a digital line scan camera, laser-line illuminator, and proprietary crack detection and classification software. CrackScope is able to perform real-time pavement ins...
Giger, Maryellen L.; Chen, Chin-Tu; Armato, Samuel; Doi, Kunio
1999-10-26
A method and system for the computerized registration of radionuclide images with radiographic images, including generating image data from radiographic and radionuclide images of the thorax. Techniques include contouring the lung regions in each type of chest image, scaling and registration of the contours based on location of lung apices, and superimposition after appropriate shifting of the images. Specific applications are given for the automated registration of radionuclide lungs scans with chest radiographs. The method in the example given yields a system that spatially registers and correlates digitized chest radiographs with V/Q scans in order to correlate V/Q functional information with the greater structural detail of chest radiographs. Final output could be the computer-determined contours from each type of image superimposed on any of the original images, or superimposition of the radionuclide image data, which contains high activity, onto the radiographic chest image.
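The shift-and-superimpose step can be sketched as a whole-pixel translation computed from the lung apex locations; the apex coordinates and images below are toy assumptions, not the patented system's actual processing:

```python
import numpy as np

def apex_shift(apex_radiograph, apex_radionuclide):
    """Row/column shift that aligns the lung apices of two images."""
    return (apex_radiograph[0] - apex_radionuclide[0],
            apex_radiograph[1] - apex_radionuclide[1])

def shift_image(img, dr, dc):
    """Translate an image by whole pixels, zero-filling exposed borders."""
    out = np.zeros_like(img)
    rows, cols = img.shape
    src = img[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    out[max(0, dr):max(0, dr) + src.shape[0],
        max(0, dc):max(0, dc) + src.shape[1]] = src
    return out

# Toy example: align a small "radionuclide" image to a "radiograph"
scan = np.zeros((5, 5)); scan[1, 1] = 1.0    # activity near its lung apex
dr, dc = apex_shift((3, 2), (1, 1))          # apex pixels found in each image
aligned = shift_image(scan, dr, dc)
print(np.argwhere(aligned == 1.0))           # activity now at row 3, col 2
```

A full implementation would also scale the contours before shifting, as the method describes, so that lung outlines of different magnification coincide before superimposition.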
Keane, Pearse A; Grossi, Carlota M; Foster, Paul J; Yang, Qi; Reisman, Charles A; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J
2016-01-01
To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available "spectral domain" OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed, with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging.
Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans
NASA Astrophysics Data System (ADS)
Biancardi, Alberto M.; Reeves, Anthony P.; Jirapatnakul, Artit C.; Apanasovitch, Tatiyana; Yankelevitz, David; Henschke, Claudia I.
2008-03-01
Accurate nodule volume estimation is necessary in order to estimate the clinically relevant growth rate or change in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules that were documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that each scan is assessed by four experienced thoracic radiologists and that boundaries are to be marked around the visible extent of the nodules for nodules 3 mm and larger. Nodules were selected from the LIDC database with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image slices and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection criteria, with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid of each marked nodule was used as the seed point for the automated algorithm. 95 nodules (84.1%) were correctly segmented, although one of these was judged by the automated method not to meet the first selection criterion; of the remaining nodules, eight (7.1%) were structurally too complex or extensively attached and 10 (8.8%) were considered not properly segmented after a simple visual inspection by a radiologist. Since the LIDC specifications instruct radiologists to include both solid and sub-solid parts, the automated method's core capability of segmenting solid tissue was augmented to also take the sub-solid parts of nodules into account. We ranked the distances of the automated estimates and the radiologist-based estimates from the median of the radiologist-based values. In 76.6% of cases the automated estimate was closer to the median than at least one of the values derived from the manual markings, indicating very good agreement with the radiologists' markings.
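The agreement criterion in the final sentence can be written as a small predicate; the function and argument names below are illustrative, not from the paper:

```python
import statistics

def closer_than_a_reader(auto_vol, reader_vols):
    """True if the automated volume estimate lies closer to the median of
    the radiologists' estimates than at least one radiologist's estimate."""
    med = statistics.median(reader_vols)
    auto_dist = abs(auto_vol - med)
    return any(abs(r - med) > auto_dist for r in reader_vols)
```

In the study this criterion held for 76.6% of the nodules.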
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Garg, Shailesh; Hori, Masatoshi; Oto, Aytekin; Baron, Richard L.
2014-01-01
OBJECTIVE The purpose of this study was to evaluate automated CT volumetry in the assessment of living-donor livers for transplant and to compare this technique with software-aided interactive volumetry and manual volumetry. MATERIALS AND METHODS Hepatic CT scans of 18 consecutively registered prospective liver donors were obtained under a liver transplant protocol. Automated liver volumetry was developed on the basis of 3D active-contour segmentation. To establish reference standard liver volumes, a radiologist manually traced the contour of the liver on each CT slice. We compared the results obtained with automated and interactive volumetry with those obtained with the reference standard for this study, manual volumetry. RESULTS The average interactive liver volume was 1553 ± 343 cm3, and the average automated liver volume was 1520 ± 378 cm3. The average manual volume was 1486 ± 343 cm3. Both interactive and automated volumetric results had excellent agreement with manual volumetric results (intraclass correlation coefficients, 0.96 and 0.94). The average user time for automated volumetry was 0.57 ± 0.06 min/case, whereas those for interactive and manual volumetry were 27.3 ± 4.6 and 39.4 ± 5.5 min/case, the difference being statistically significant (p < 0.05). CONCLUSION Both interactive and automated volumetry are accurate for measuring liver volume with CT, but automated volumetry is substantially more efficient. PMID:21940543
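The intraclass correlation coefficients quoted above (0.96 and 0.94) can be computed from the paired volume measurements. Below is a standard ICC(2,1) (two-way random effects, absolute agreement, single measurement, after Shrout and Fleiss) as a sketch; the abstract does not state which ICC variant the authors used:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array of measurements."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    mean_rows = Y.mean(axis=1)            # per-subject means
    mean_cols = Y.mean(axis=0)            # per-rater means
    grand = Y.mean()
    ssr = k * ((mean_rows - grand) ** 2).sum()   # between-subject sum of squares
    ssc = n * ((mean_cols - grand) ** 2).sum()   # between-rater sum of squares
    sse = ((Y - mean_rows[:, None] - mean_cols[None, :] + grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```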
Wang, Jianwei; Qi, Peng; Hou, Jinjun; Shen, Yao; Yang, Min; Bi, Qirui; Deng, Yanping; Shi, Xiaojian; Feng, Ruihong; Feng, Zijin; Wu, Wanying; Guo, Dean
2017-02-05
Identification of drug metabolites and construction of metabolic profiles are important tasks in drug discovery and development. The great challenge in this process is the structural clarification of possible metabolites in a complicated biological matrix, which often produces huge data sets, especially across multiple in vivo samples. Analyzing these complex data manually is time-consuming and laborious. The objective of this study was to develop a practical strategy for efficiently screening and identifying metabolites from multiple biological samples. Hirsutine (HTI), an active component of Uncaria rhynchophylla (Gouteng in Chinese), was used as a model compound, and its plasma, urine, bile, feces and various tissue samples were analyzed with data-processing software (Metwork), a data-mining tool (Progenesis QI), and HR-MSn data acquired by ultra-high performance liquid chromatography/linear ion trap-Orbitrap mass spectrometry (U-HPLC/LTQ-Orbitrap-MS). A total of 67 metabolites of HTI in rat biological samples were tentatively identified with the established library; to our knowledge, most of these are reported for the first time. Possible metabolic pathways were subsequently proposed; according to the metabolic profile, these mainly involved hydroxylation, dehydrogenation, oxidation, N-oxidation, hydrolysis, reduction and glucuronide conjugation. The results showed that this improved strategy is efficient, rapid and reliable for metabolic profiling of components in multiple biological samples and could significantly expand our understanding of the in vivo metabolism of TCM. Copyright © 2016 Elsevier B.V. All rights reserved.
Mulder, Emma R; de Jong, Remko A; Knol, Dirk L; van Schijndel, Ronald A; Cover, Keith S; Visser, Pieter J; Barkhof, Frederik; Vrenken, Hugo
2014-05-15
To measure hippocampal volume change in Alzheimer's disease (AD) or mild cognitive impairment (MCI), expert manual delineation is often used because of its supposed accuracy. It has been suggested that expert outlining yields poorer reproducibility as compared to automated methods, but this has not been investigated. To determine the reproducibilities of expert manual outlining and two common automated methods for measuring hippocampal atrophy rates in healthy aging, MCI and AD. From the Alzheimer's Disease Neuroimaging Initiative (ADNI), 80 subjects were selected: 20 patients with AD, 40 patients with mild cognitive impairment (MCI) and 20 healthy controls (HCs). Left and right hippocampal volume change between baseline and the month-12 visit was assessed by using expert manual delineation, and by the automated software packages FreeSurfer (longitudinal processing stream) and FIRST. To assess reproducibility of the measured hippocampal volume change, both back-to-back (BTB) MPRAGE scans available for each visit were analyzed. Hippocampal volume change was expressed in μL, and as a percentage of baseline volume. Reproducibility of the 1-year hippocampal volume change was estimated from the BTB measurements by using a linear mixed model to calculate the limits of agreement (LoA) of each method, reflecting its measurement uncertainty. Using the delta method, approximate p-values were calculated for the pairwise comparisons between methods. Statistical analyses were performed both with and without inclusion of visibly incorrect segmentations. Visibly incorrect automated segmentation in either one or both scans of a longitudinal scan pair occurred in 7.5% of the hippocampi for FreeSurfer and in 6.9% of the hippocampi for FIRST. After excluding these failed cases, reproducibility analysis for 1-year percentage volume change yielded LoA of ±7.2% for FreeSurfer, ±9.7% for expert manual delineation, and ±10.0% for FIRST.
Methods ranked the same for reproducibility of 1-year μL volume change, with LoA of ±218 μL for FreeSurfer, ±319 μL for expert manual delineation, and ±333 μL for FIRST. Approximate p-values indicated that reproducibility was better for FreeSurfer than for manual or FIRST, and that manual and FIRST did not differ. Inclusion of failed automated segmentations led to worsening of reproducibility of both automated methods for 1-year raw and percentage volume change. Quantitative reproducibility values of 1-year microliter and percentage hippocampal volume change were roughly similar between expert manual outlining, FIRST and FreeSurfer, but FreeSurfer reproducibility was statistically significantly superior to both manual outlining and FIRST after exclusion of failed segmentations. Copyright © 2014 Elsevier Inc. All rights reserved.
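The limits of agreement reported above come from a linear mixed model over back-to-back scan pairs. A simplified Bland-Altman-style version, which ignores the subject-level random effects the paper models, can be sketched as:

```python
import statistics

def limits_of_agreement(changes_scan1, changes_scan2):
    """Simplified limits of agreement between volume changes measured on
    back-to-back scan pairs: mean difference +/- 1.96 standard deviations.
    (The paper's linear mixed model additionally handles subject-level
    random effects; this sketch does not.)"""
    diffs = [a - b for a, b in zip(changes_scan1, changes_scan2)]
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
```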
NASA Astrophysics Data System (ADS)
Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.
2012-12-01
Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes, with mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02, and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm, respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans, and the shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
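The Dice scores used above to compare manual and automated segmentations measure binary overlap; a minimal version:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```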
NASA Astrophysics Data System (ADS)
Schmitt, R.; Niggemann, C.; Mersmann, C.
2008-04-01
Fibre-reinforced plastics (FRP) are particularly suitable for components where light-weight structures with advanced mechanical properties are required, e.g. for aerospace parts. Nevertheless, many manufacturing processes for FRP include manual production steps without an integrated quality control. A vital step in the process chain is the lay-up of the textile preform, as it greatly affects the geometry and the mechanical performance of the final part. In order to automate the FRP production, an inline machine vision system is needed for a closed-loop control of the preform lay-up. This work describes the development of a novel laser light-section sensor for optical inspection of textile preforms and its integration and validation in a machine vision prototype. The proposed method aims at the determination of the contour position of each textile layer through edge scanning. The scanning route is automatically derived by using texture analysis algorithms in a preliminary step. As sensor output, a distinct stage profile is computed from the acquired greyscale image. The contour position is determined with sub-pixel accuracy using a novel algorithm based on a non-linear least-squares fitting to a sigmoid function. The whole contour position is generated through data fusion of the measured edge points. The proposed method provides robust process automation for FRP production, improving process quality and reducing the scrap rate. Hence, the range of economically feasible FRP products can be increased and new market segments with cost-sensitive products can be addressed.
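The sub-pixel edge localisation described above, non-linear least-squares fitting of a sigmoid to a greyscale profile, can be sketched as follows; the sigmoid parameterisation and the use of `scipy.optimize.curve_fit` are assumptions, since the paper does not specify its fitting routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, x0, w):
    """Logistic step: baseline a, height b, centre x0, width w."""
    return a + b / (1.0 + np.exp(-(x - x0) / w))

def subpixel_edge(profile):
    """Fit a sigmoid to a 1-D greyscale profile across an edge and
    return the inflection point x0 as the sub-pixel edge position."""
    x = np.arange(len(profile), dtype=float)
    p = np.asarray(profile, dtype=float)
    # Crude but serviceable initial guess: baseline, range, centre, unit width.
    p0 = [p.min(), p.max() - p.min(), x[len(x) // 2], 1.0]
    popt, _ = curve_fit(sigmoid, x, p, p0=p0, maxfev=5000)
    return popt[2]
```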
Evaluation of a completely robotized neurosurgical operating microscope.
Kantelhardt, Sven R; Finke, Markus; Schweikard, Achim; Giese, Alf
2013-01-01
Operating microscopes are essential for most neurosurgical procedures. Modern robot-assisted controls offer new possibilities, combining the advantages of conventional and automated systems. We evaluated the prototype of a completely robotized operating microscope with an integrated optical coherence tomography module. A standard operating microscope was fitted with motors and control instruments, with the manual control mode and balance preserved. In the robot mode, the microscope was steered by a remote control that could be fixed to a surgical instrument. External encoders and accelerometers tracked microscope movements. The microscope was additionally fitted with an optical coherence tomography-scanning module. The robotized microscope was tested on model systems. It could be freely positioned, without forcing the surgeon to take the hands from the instruments or avert the eyes from the oculars. Positioning error was about 1 mm, and vibration faded in 1 second. Tracking of microscope movements, combined with an autofocus function, allowed determination of the focus position within the 3-dimensional space. This constituted a second loop of navigation independent from conventional infrared reflector-based techniques. In the robot mode, automated optical coherence tomography scanning of large surface areas was feasible. The prototype of a robotized optical coherence tomography-integrated operating microscope combines the advantages of a conventional manually controlled operating microscope with a remote-controlled positioning aid and a self-navigating microscope system that performs automated positioning tasks such as surface scans. This demonstrates that, in the future, operating microscopes may be used to acquire intraoperative spatial data, volume changes, and structural data of brain or brain tumor tissue.
NASA Astrophysics Data System (ADS)
Jagt, Thyrza; Breedveld, Sebastiaan; van de Water, Steven; Heijmen, Ben; Hoogeman, Mischa
2017-06-01
Proton therapy is very sensitive to daily density changes along the pencil beam paths. The purpose of this study is to develop and evaluate an automated method for adaptation of IMPT plans to compensate for these daily tissue density variations. A two-step restoration method for ‘densities-of-the-day’ was created: (1) restoration of spot positions (Bragg peaks) by adapting the energy of each pencil beam to the new water equivalent path length; and (2) re-optimization of pencil beam weights by minimizing the dosimetric difference with the planned dose distribution, using a fast and exact quadratic solver. The method was developed and evaluated using 8-10 repeat CT scans of 10 prostate cancer patients. Experiments demonstrated that giving a high weight to the PTV in the re-optimization resulted in clinically acceptable restorations. For all scans we obtained V95% ⩾ 98% and V107% ⩽ 2%. For the bladder, the differences between the restored and the intended treatment plan were below +2 Gy and +2 percentage points. The rectum differences were below +2 Gy and +2 percentage points for 90% of the scans. In the remaining scans the rectum was filled with air, which partly overlapped with the PTV. The air cavity distorted the Bragg peaks, resulting in less favorable rectum doses.
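Step (2), re-optimising pencil-beam weights so the delivered dose matches the planned dose, is a quadratic problem. A minimal unconstrained least-squares stand-in, with the dose-influence-matrix formulation assumed (the paper uses a fast exact quadratic solver with clinical objective weighting, so this is only a sketch):

```python
import numpy as np

def reoptimize_weights(influence, planned_dose):
    """Find pencil-beam weights w minimising ||influence @ w - planned_dose||^2,
    where `influence` is the (voxels x beams) dose-influence matrix of the
    energy-adapted beams on the densities-of-the-day. Negative weights are
    clipped to zero as a crude stand-in for a properly constrained solver."""
    w, *_ = np.linalg.lstsq(influence, planned_dose, rcond=None)
    return np.clip(w, 0.0, None)
```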
A functional-based segmentation of human body scans in arbitrary postures.
Werghi, Naoufel; Xiao, Yijun; Siebert, Jan Paul
2006-02-01
This paper presents a general framework that aims to address the task of segmenting three-dimensional (3-D) scan data representing the human form into subsets which correspond to functional human body parts. Such a task is challenging due to the articulated and deformable nature of the human body. A salient feature of this framework is that it is able to cope with various body postures and is in addition robust to noise, holes, irregular sampling and rigid transformations. Although whole human body scanners are now capable of routinely capturing the shape of the whole body in machine readable format, they have not yet realized their potential to provide automatic extraction of key body measurements. Automated production of anthropometric databases is a prerequisite to satisfying the needs of certain industrial sectors (e.g., the clothing industry). This implies that in order to extract specific measurements of interest, whole body 3-D scan data must be segmented by machine into subsets corresponding to functional human body parts. However, previously reported attempts at automating the segmentation process suffer from various limitations, such as being restricted to a standard specific posture and being vulnerable to scan data artifacts. Our human body segmentation algorithm advances the state of the art to overcome the above limitations and we present experimental results obtained using both real and synthetic data that confirm the validity, effectiveness, and robustness of our approach.
NASA Astrophysics Data System (ADS)
Lynch, John A.; Zaim, Souhil; Zhao, Jenny; Stork, Alexander; Peterfy, Charles G.; Genant, Harry K.
2000-06-01
A technique for segmentation of articular cartilage from 3D MRI scans of the knee has been developed. It overcomes the limitations of the conventionally used region growing techniques, which are prone to inter- and intra-observer variability, and which can require much manual intervention. We describe a hybrid segmentation method combining expert knowledge with directionally oriented Canny filters, cost functions and cubic splines. After manual initialization, the technique utilized 3 cost functions which aided automated detection of cartilage and its boundaries. Using the sign of the edge strength, and the local direction of the boundary, this technique is more reliable than conventional 'snakes,' and the user had little control over smoothness of boundaries. This means that the automatically detected boundary can conform to the true shape of the real boundary, also allowing reliable detection of subtle local lesions on the normally smooth cartilage surface. Manual corrections, with possible re-optimization were sometimes needed. When compared to the conventionally used region growing techniques, this newly described technique measured local cartilage volume with 3 times better reproducibility, and involved two thirds less human interaction. Combined with the use of 3D image registration, the new technique should also permit unbiased segmentation of followup scans by automated initialization from a baseline segmentation of an earlier scan of the same patient.
Liu, Li; Gao, Simon S; Bailey, Steven T; Huang, David; Li, Dengwang; Jia, Yali
2015-09-01
Optical coherence tomography angiography has recently been used to visualize choroidal neovascularization (CNV) in participants with age-related macular degeneration. Identification and quantification of CNV area is important clinically for disease assessment. An automated algorithm for CNV area detection is presented in this article. It relies on denoising and a saliency detection model to overcome issues such as projection artifacts and the heterogeneity of CNV. Qualitative and quantitative evaluations were performed on scans of 7 participants. Results from the algorithm agreed well with manual delineation of CNV area.
Automated segmentation of intraretinal layers from macular optical coherence tomography images
NASA Astrophysics Data System (ADS)
Haeker, Mona; Sonka, Milan; Kardon, Randy; Shah, Vinay A.; Wu, Xiaodong; Abràmoff, Michael D.
2007-03-01
Commercially-available optical coherence tomography (OCT) systems (e.g., Stratus OCT-3) only segment and provide thickness measurements for the total retina on scans of the macula. Since each intraretinal layer may be affected differently by disease, it is desirable to quantify the properties of each layer separately. Thus, we have developed an automated segmentation approach for the separation of the retina on (anisotropic) 3-D macular OCT scans into five layers. Each macular series consisted of six linear radial scans centered at the fovea. Repeated series (up to six, when available) were acquired for each eye and were first registered and averaged together, resulting in a composite image for each angular location. The six surfaces defining the five layers were then found on each 3-D composite image series by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori-determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients with unilateral anterior ischemic optic neuropathy (corresponding to 24 3-D composite image series). The boundaries were independently defined by two human experts on one raw scan of each eye. Using the average of the experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.7 ± 4.0 μm, with five of the six surfaces showing significantly lower mean errors than those computed between the two observers (p < 0.05, pixel size of 50 × 2 μm).
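The surface search above is posed as a minimum-cost closed set in a 3-D geometric graph with smoothness and interaction constraints. A much-simplified 2-D dynamic-programming analogue (single surface, one smoothness parameter, illustrative only) conveys the idea:

```python
import numpy as np

def detect_surface(cost, max_jump=1):
    """Find the minimum-cost surface (one row per column) through a 2-D
    cost image, with the surface allowed to move at most `max_jump` rows
    between neighbouring columns -- a toy analogue of the paper's 3-D
    minimum-cost closed set formulation."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()       # accumulated cost per (row, col)
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    r = int(np.argmin(acc[:, -1]))        # cheapest endpoint, then backtrack
    surface = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r, c]
        surface.append(r)
    return surface[::-1]
```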
Automated classification of articular cartilage surfaces based on surface texture.
Stachowiak, G P; Stachowiak, G W; Podsiadlo, P
2006-11-01
In this study the automated classification system previously developed by the authors was used to classify articular cartilage surfaces with different degrees of wear. This automated system classifies surfaces based on their texture. Plug samples of sheep cartilage (pins) were run on stainless steel discs under various conditions using a pin-on-disc tribometer. Testing conditions were specifically designed to produce different severities of cartilage damage due to wear. Environmental scanning electron microscope (ESEM) images of the cartilage surfaces, which formed a database for pattern recognition analysis, were acquired. The ESEM images of cartilage were divided into five groups (classes), each class representing different wear conditions or wear severity. Each class was first examined and assessed visually. Next, the automated classification system (pattern recognition) was applied to all classes. The results of the automated surface texture classification were compared to those based on visual assessment of surface morphology. It was shown that the texture-based automated classification system was an efficient and accurate method of distinguishing between various cartilage surfaces generated under different wear conditions. It appears that the texture-based classification method has potential to become a useful tool in medical diagnostics.
NASA Astrophysics Data System (ADS)
Zhou, Shudao; Ma, Zhongliang; Wang, Min; Peng, Shuling
2018-05-01
This paper proposes a novel alignment system based on the measurement of optical path using a light beam scanning mode in a transmissometer. The system controls both the probe beam and the receiving field of view while scanning in two vertical directions. The system then calculates the azimuth angle of the transmitter and the receiver to determine the precise alignment of the optical path. Experiments show that this method can determine the alignment angles in less than 10 min with errors smaller than 66 μrad in the azimuth. This system also features high collimation precision, process automation and simple installation.
Laser scanning cytometry for automation of the micronucleus assay
Darzynkiewicz, Zbigniew; Smolewski, Piotr; Holden, Elena; Luther, Ed; Henriksen, Mel; François, Maxime; Leifert, Wayne; Fenech, Michael
2011-01-01
Laser scanning cytometry (LSC) provides a novel approach for automated scoring of micronuclei (MN) in different types of mammalian cells, serving as a biomarker of genotoxicity and mutagenicity. In this review, we discuss the advances to date in measuring MN in cell lines, buccal cells and erythrocytes, describe the advantages and outline potential challenges of this distinctive approach to the analysis of nuclear anomalies. The use of multiple laser wavelengths in LSC and the high dynamic range of fluorescence and absorption detection allow simultaneous measurement of multiple cellular and nuclear features such as cytoplasmic area, nuclear area, DNA content and density of nuclei and MN, protein content and density of cytoplasm as well as other features using molecular probes. This high-content analysis approach allows the cells of interest to be identified (e.g. binucleated cells in cytokinesis-blocked cultures) and MN scored specifically in them. MN assays in cell lines (e.g. the CHO cell MN assay) using LSC are increasingly used in routine toxicology screening. More high-content MN assays and the expansion of MN analysis by LSC to other models (i.e. exfoliated cells, dermal cell models, etc.) hold great promise for robust and exciting developments in MN assay automation as a high-content high-throughput analysis procedure. PMID:21164197
Asou, Hiroya; Imada, N; Sato, T
2010-06-20
On coronary MR angiography (CMRA), cardiac motion degrades image quality. To improve image quality, detection of cardiac motion, and especially of the motion of individual coronary arteries, is very important. Usually, scan delay and scan duration are determined manually by the operator. We developed a new evaluation method that calculates the static time of each individual coronary artery. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images per R-R interval). Chronological signal changes were evaluated by Fourier transformation of each pixel of the images. Noise reduction by subtraction and extraction processing was performed. To extract strongly moving structures such as the coronary arteries, morphological filtering and labeling were added. With this image processing, individual coronary motion was extracted and the static time of each coronary artery was calculated automatically. We compared the ordinary manual method with the new automated method in 10 healthy volunteers. The coronary static times calculated with our method were shorter than those of the ordinary manual method, and scan time became about 10% longer. Image quality was improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA with simple processing.
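The per-pixel Fourier step can be sketched as follows; treating the summed non-DC spectral magnitude as a motion score is an illustrative reading of the method, and the array layout is assumed:

```python
import numpy as np

def motion_map(cine, dc_cutoff=1):
    """Per-pixel temporal Fourier analysis of a cine series
    (frames x H x W): pixels whose intensity varies strongly over the
    cardiac cycle (e.g. moving coronary arteries) score high, static
    tissue scores near zero. The DC term is discarded; thresholding this
    map would be the extraction step described in the abstract."""
    spectrum = np.abs(np.fft.fft(cine, axis=0))
    return spectrum[dc_cutoff:].sum(axis=0)
```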
Thermal and structural analysis of the GOES scan mirror's on orbit performance
NASA Technical Reports Server (NTRS)
Zurmehly, G. E.; Hookman, R. A.
1991-01-01
The on-orbit performance of the GOES satellite's scan mirror has been predicted by means of thermal, structural, and optical models. A simpler-than-conventional thermal model was used to reduce the time required to obtain orbital predictions, and the structural model was used to predict on-earth gravity sag and on-orbit distortions. The transfer of data from the thermal model to the structural model was automated for a given set of thermal nodes and structural grids.
1991-09-01
back of a paper or plastic card. A decoder reads the flux reversals and translates them into letters and numbers for processing by a computer. The best...read without decoding. In the past 3 or 4 years, OCR technology has been improved significantly, due mostly to the availability of relatively low-cost...transaction readers, and hand-held readers. Page readers scan pages of text either directly from paper or from digitized images of documents stored in the
Membrane Vibration Studies Using a Scanning Laser Vibrometer
NASA Technical Reports Server (NTRS)
Gaspar, James L.; Solter, Micah J.; Pappa, Richard S.
2001-01-01
This paper summarizes on-going experimental work at NASA Langley Research Center to measure the dynamics of a 1.016 m (40 in.) square polyimide film Kapton membrane. A fixed fully automated impact hammer and Polytec PSV-300-H scanning laser vibrometer were used for non-contact modal testing of the membrane with zero-mass-loading. The paper discusses the results obtained by testing the membrane at various tension levels and at various excitation locations. Results obtained by direct shaker excitation to the membrane are also discussed.
Cockpit Adaptive Automation and Pilot Performance
NASA Technical Reports Server (NTRS)
Parasuraman, Raja
2001-01-01
The introduction of high-level automated systems in the aircraft cockpit has provided several benefits, e.g., new capabilities, enhanced operational efficiency, and reduced crew workload. At the same time, conventional 'static' automation has sometimes degraded human operator monitoring performance, increased workload, and reduced situation awareness. Adaptive automation represents an alternative to static automation. In this approach, task allocation between human operators and computer systems is flexible and context-dependent rather than static. Adaptive automation, or adaptive task allocation, is thought to provide for regulation of operator workload and performance, while preserving the benefits of static automation. In previous research we have reported beneficial effects of adaptive automation on the performance of both pilots and non-pilots of flight-related tasks. For adaptive systems to be viable, however, such benefits need to be examined jointly in the context of a single set of tasks. The studies carried out under this project evaluated a systematic method for combining different forms of adaptive automation. A model for effective combination of different forms of adaptive automation, based on matching adaptation to operator workload, was proposed and tested. The model was evaluated in studies using IFR-rated pilots flying a general-aviation simulator. Performance, subjective, and physiological (heart rate variability, eye scan-paths) measures of workload were recorded. The studies compared workload-based adaptation to non-adaptive control conditions and found evidence for systematic benefits of adaptive automation. The research provides an empirical basis for evaluating the effectiveness of adaptive automation in the cockpit. The results contribute to the development of design principles and guidelines for the implementation of adaptive automation in the cockpit, particularly in general aviation, and in other human-machine systems.
Project goals were met or exceeded. The results of the research extended knowledge of automation-related performance decrements in pilots and demonstrated the positive effects of adaptive task allocation. In addition, several practical implications for cockpit automation design were drawn from the research conducted. A total of 12 articles deriving from the project were published.
MARS: bringing the automation of small-molecule bioanalytical sample preparations to a new frontier.
Li, Ming; Chou, Judy; Jing, Jing; Xu, Hui; Costa, Aldo; Caputo, Robin; Mikkilineni, Rajesh; Flannelly-King, Shane; Rohde, Ellen; Gan, Lawrence; Klunk, Lewis; Yang, Liyu
2012-06-01
In recent years, there has been a growing interest in automating small-molecule bioanalytical sample preparations, specifically using the Hamilton MicroLab® STAR liquid-handling platform. In the most extensive work reported thus far, multiple small-molecule sample preparation assay types (protein precipitation extraction, SPE and liquid-liquid extraction) have been integrated into a suite that is composed of graphical user interfaces and Hamilton scripts. Using that suite, bioanalytical scientists have been able to automate various sample preparation methods to a great extent. However, there are still areas that could benefit from further automation: specifically, the full integration of analytical standard and QC sample preparation with study sample extraction in one continuous run, real-time 2D barcode scanning on the Hamilton deck, and direct Laboratory Information Management System database connectivity. We developed a new small-molecule sample-preparation automation system that improves on all of the aforementioned areas. The improved system presented herein further streamlines the bioanalytical workflow, simplifies batch run design, reduces analyst intervention and eliminates sample-handling error.
Digital pathology: elementary, rapid and reliable automated image analysis.
Bouzin, Caroline; Saini, Monika L; Khaing, Kyi-Kyi; Ambroise, Jérôme; Marbaix, Etienne; Grégoire, Vincent; Bol, Vanesa
2016-05-01
Slide digitalization has brought pathology to a new era, including powerful image analysis possibilities. However, despite being a powerful prognostic tool, automated analysis of immunostaining on digital images is still not implemented worldwide in routine clinical practice. Digitalized biopsy sections from two independent cohorts of patients, immunostained for membrane or nuclear markers, were quantified with two automated methods. The first was based on stained cell counting through tissue segmentation, while the second relied upon the proportion of stained area within tissue sections. Different steps of image preparation, such as automated tissue detection, fold exclusion and scanning magnification, were also assessed and validated. Quantification of stained cells and quantification of stained area were found to be highly correlated for all tested markers. Both methods were also correlated with visual scoring performed by a pathologist. For equivalent reliability, quantification of the stained area is, however, faster and easier to fine-tune, and is therefore more compatible with the time constraints of prognosis. This work provides an incentive for the implementation of automated immunostaining analysis with a stained-area method in routine laboratory practice. © 2015 John Wiley & Sons Ltd.
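The stained-area method described above reduces, per image, to the ratio of stain-positive pixels to tissue pixels. A minimal sketch in Python/NumPy; the threshold value and array names are illustrative assumptions, not the authors' calibrated pipeline:

```python
import numpy as np

def stained_area_fraction(stain_channel, tissue_mask, threshold=0.3):
    """Fraction of tissue area whose stain intensity exceeds a threshold.

    stain_channel : 2-D array of per-pixel stain intensities (0..1).
    tissue_mask   : boolean array marking tissue (folds/background excluded).
    The threshold is a placeholder; a real pipeline would calibrate it.
    """
    stained = (stain_channel > threshold) & tissue_mask
    return stained.sum() / tissue_mask.sum()

# Toy example: a 4x4 patch, all pixels are tissue, 4 are strongly stained.
intensity = np.zeros((4, 4))
intensity[:2, :2] = 0.8
mask = np.ones((4, 4), dtype=bool)
fraction = stained_area_fraction(intensity, mask)  # 4 / 16 = 0.25
```

In practice the stain channel would come from color deconvolution of the RGB scan, and the tissue mask from the automated tissue detection and fold exclusion steps the paper validates.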
Variable Temperature Scanning Tunneling Microscopy
1991-07-01
Tomazin, both Electrical Engineering. Build a digital integrator for the STM feedback loop: Kyle Drewry, Electrical Engineering. Write an AutoLISP program to automate the AutoCAD design of UHV-STM chambers: Alfred Pierce (minority), Mechanical Engineering. Design a 32-bit interface board for the EISA
Improved Real-Time Scan Matching Using Corner Features
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.
2016-06-01
The automation of unmanned vehicle operation has gained considerable research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments, where absolute positioning measurements (e.g. GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles localize themselves in indoor environments. Typically, automatic real-time matching of successive scans is performed either explicitly or implicitly by any localization approach that utilizes laser range finders. Many established approaches, such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM), handle the scan matching problem in an iterative fashion, which significantly increases time consumption. Furthermore, convergence of the solution is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm in which the matching process is initialized using detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various data sets offered by different research groups have been tested; the mean numbers of extracted lines per scan for these data sets range from 4.10 to 8.86 lines of more than 7 points. The set of all intersections between extracted lines is detected as corners, regardless of whether these line segments physically intersect in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated from the variances of the extracted lines.
The detected corners are used to estimate the transformation parameters between successive scans using least squares. These estimated transformation parameters are used to calculate an adjusted initialization for the scan matching process. The presented method can be employed on its own to match successive scans, and can also be used to aid other established iterative methods to achieve more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm alone, without initialization, in different scenarios such as static periods, fast straight movement, and sharp maneuvers.
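The least-squares estimation of transformation parameters from matched corners can be sketched as a standard 2-D rigid Procrustes (Kabsch) fit. This Python/NumPy illustration assumes corner correspondences are already established, which the paper's corner-detection step provides:

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src corners onto dst.

    src, dst : (N, 2) arrays of matched corner coordinates from two scans.
    Returns (R, t) such that dst ~ src @ R.T + t (Kabsch solution).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # cross-covariance of the corners
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Corners rotated by 30 degrees and shifted; recover the motion.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
corners = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [0.0, 2.0]])
moved = corners @ R_true.T + np.array([1.0, -2.0])
R, t = estimate_rigid_2d(corners, moved)
```

A weighted variant of the same fit could incorporate the per-corner covariances the paper derives from the line variances; the unweighted form above is the simplest case.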
A review of automated image understanding within 3D baggage computed tomography security screening.
Mouton, Andre; Breckon, Toby P
2015-01-01
Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is however, complicated by poor image resolutions, image clutter and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.
Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan
2018-02-01
To evaluate the effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) contained errors detected by either the barcode scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. The 521 errors detected by pharmacists were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
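The reported percentages follow directly from the counts given in the abstract; a quick arithmetic check:

```python
# Counts reported in the study (oral liquid doses over one year).
total_doses = 178_344
barcode_errors = 3_291        # intercepted by the barcode scanning system
pharmacist_rejections = 521   # rejected by a pharmacist

barcode_rate = 100 * barcode_errors / total_doses
pharmacist_rate = 100 * pharmacist_rejections / total_doses
overall_rate = 100 * (barcode_errors + pharmacist_rejections) / total_doses
```

Rounded to one decimal place these reproduce the 1.8%, 0.3% and 2.1% figures quoted above.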
Friedman, S N; Bambrough, P J; Kotsarini, C; Khandanpour, N; Hoggard, N
2012-12-01
Despite the established role of MRI in the diagnosis of brain tumours, histopathological assessment remains the clinically used technique, especially for the glioma group. Relative cerebral blood volume (rCBV) is a dynamic susceptibility-weighted contrast-enhanced perfusion MRI parameter that has been shown to correlate with tumour grade, but its assessment requires a specialist and is time-consuming. We developed analysis software to determine glioma grades from perfusion rCBV scans in a manner that is quick, easy and does not require a specialist operator. MRI perfusion data from 47 patients with different histopathological grades of glioma were analysed with custom-designed software. Semi-automated analysis was performed with a specialist and a non-specialist operator separately determining the maximum rCBV value corresponding to the tumour. Automated histogram analysis was performed by calculating the mean, standard deviation, median, mode, skewness and kurtosis of the rCBV values. All values were compared with the histopathologically assessed tumour grade. A strong correlation between specialist and non-specialist observer measurements was found. Significantly different values were obtained between tumour grades using both semi-automated and automated techniques, consistent with previous results. The single-pixel maximum rCBV value from semi-automated analysis of the raw (unnormalised) data had the strongest correlation with glioma grade; among the automated measures, the standard deviation of the raw data correlated most strongly. Semi-automated calculation of the raw maximum rCBV value was the best indicator of tumour grade and does not require a specialist operator. Both semi-automated and automated MRI perfusion techniques provide viable non-invasive alternatives to biopsy for glioma tumour grading.
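The automated histogram analysis computes six summary statistics of the rCBV distribution. A hedged NumPy sketch; the histogram bin count used for the mode and the excess-kurtosis convention are illustrative choices, not taken from the paper:

```python
import numpy as np

def rcbv_histogram_stats(rcbv):
    """Mean, std, median, mode, skewness and kurtosis of an rCBV map."""
    x = np.asarray(rcbv, dtype=float).ravel()
    mean, std = x.mean(), x.std()
    median = np.median(x)
    # Mode of a continuous map: centre of the most populated histogram bin.
    counts, edges = np.histogram(x, bins=32)
    mode = 0.5 * (edges[:-1] + edges[1:])[counts.argmax()]
    z = (x - mean) / std
    skewness = (z ** 3).mean()
    kurtosis = (z ** 4).mean() - 3.0   # excess kurtosis (Fisher convention)
    return {"mean": mean, "std": std, "median": median,
            "mode": mode, "skewness": skewness, "kurtosis": kurtosis}

# Toy map: mostly low rCBV with a small high-value tail (positively skewed).
stats = rcbv_histogram_stats(np.concatenate([np.full(90, 1.0),
                                             np.full(10, 5.0)]))
```

For this toy distribution the skewness is strongly positive, mimicking the heavy high-rCBV tail that higher-grade tumours contribute.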
Sun, Hongbin; Pashoutani, Sepehr; Zhu, Jinying
2018-06-16
Delaminations and reinforcement corrosion are two common problems in concrete bridge decks. No single nondestructive testing (NDT) method is able to provide comprehensive characterization of these defects. In this work, two NDT methods, acoustic scanning and Ground Penetrating Radar (GPR), were used to image a straight concrete bridge deck and a curved intersection ramp bridge. An acoustic scanning system has been developed for rapid delamination mapping. The system consists of metal-ball excitation sources, air-coupled sensors, and a GPS positioning system. The acoustic scanning results are presented as a two-dimensional image based on the energy map in the frequency range of 0.5–5 kHz. The GPR scanning results are expressed as a GPR signal attenuation map to characterize concrete deterioration and reinforcement corrosion. Signal processing algorithms for both methods are discussed. Delamination maps from the acoustic scanning are compared with deterioration maps from the GPR scanning on both bridges. The results demonstrate that combining the acoustic and GPR scanning results provides a complementary and comprehensive evaluation of concrete bridge decks.
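The per-point quantity behind the acoustic energy map, signal energy in the 0.5–5 kHz band, can be sketched with an FFT. The sampling rate and signal lengths below are illustrative assumptions, not the system's actual acquisition parameters:

```python
import numpy as np

def band_energy(signal, fs, f_lo=500.0, f_hi=5000.0):
    """Energy of a signal within a frequency band (default 0.5-5 kHz),
    the quantity mapped per test point in the delamination image."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(np.abs(spectrum[band]) ** 2)

# A delaminated area rings at low audio frequencies; sound concrete does not.
fs = 50_000.0
t = np.arange(2048) / fs
in_band = np.sin(2 * np.pi * 2000.0 * t)    # 2 kHz: inside the band
out_band = np.sin(2 * np.pi * 12000.0 * t)  # 12 kHz: outside the band
```

Evaluating `band_energy` at each GPS-tagged impact point and plotting the values on the deck coordinates yields a 2-D energy map of the kind described.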
Sadygov, Rovshan G.; Zhao, Yingxin; Haidacher, Sigmund J.; Starkey, Jonathan M.; Tilton, Ronald G.; Denner, Larry
2010-01-01
We describe a method for ratio estimation in 18O-water labeling experiments acquired from low-resolution isotopically resolved data. The method is implemented in a software package specifically designed for experiments using zoom-scan mode data acquisition. Zoom-scan mode allows commonly used ion trap mass spectrometers to attain isotopic resolution, which makes them amenable to labeling schemes such as 18O-water labeling, but algorithms and software developed for high-resolution instruments may not be appropriate for the lower-resolution data acquired in zoom-scan mode. The use of power spectrum analysis is proposed as a general approach that may be uniquely suited to these data types. The software implementation uses the power spectrum to remove high-frequency noise and to band-filter contributions from co-eluting species of differing charge states. From the elemental composition of a peptide sequence we generate theoretical isotope envelopes of heavy-light peptide pairs in five different ratios; these theoretical envelopes are correlated with the filtered experimental zoom scans. To automate peptide quantification in high-throughput experiments, we have implemented our approach in a computer program, MassXplorer. We demonstrate the application of MassXplorer to two model mixtures of known proteins and to a complex mixture of mouse kidney cortical extract. Comparison with another algorithm for ratio estimation demonstrates the increased precision and automation of MassXplorer. PMID:20568695
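The envelope-correlation step, matching the filtered zoom scan against theoretical isotope envelopes for candidate heavy/light ratios, can be sketched as follows. The envelope values and ratio labels are invented for illustration and are not MassXplorer's actual tables:

```python
import numpy as np

def best_ratio(measured, theoretical_envelopes):
    """Pick the heavy/light ratio whose theoretical isotope envelope
    correlates best with the measured (filtered) zoom-scan envelope."""
    scores = {ratio: np.corrcoef(measured, env)[0, 1]
              for ratio, env in theoretical_envelopes.items()}
    return max(scores, key=scores.get)

# Hypothetical 5-point envelopes for three candidate heavy/light ratios.
envelopes = {
    "1:1": np.array([0.5, 0.2, 0.5, 0.2, 0.1]),
    "1:4": np.array([0.8, 0.4, 0.2, 0.1, 0.05]),
    "4:1": np.array([0.2, 0.1, 0.8, 0.4, 0.2]),
}
measured = np.array([0.78, 0.41, 0.22, 0.09, 0.06])  # noisy "1:4"-like scan
ratio = best_ratio(measured, envelopes)
```

The real method interpolates between a small set of precomputed ratio envelopes and applies power-spectrum filtering first; this sketch shows only the final correlation-and-select step.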
NASA Astrophysics Data System (ADS)
Kromer, Ryan A.; Abellán, Antonio; Hutchinson, D. Jean; Lato, Matt; Chanut, Marie-Aurelie; Dubois, Laurent; Jaboyedoff, Michel
2017-05-01
We present an automated terrestrial laser scanning (ATLS) system with automatic near-real-time change detection processing. The ATLS system was tested on the Séchilienne landslide in France for a 6-week period with data collected at 30 min intervals. The purpose of developing the system was to fill the gap of high-temporal-resolution TLS monitoring studies of earth surface processes and to offer a cost-effective, light, portable alternative to ground-based interferometric synthetic aperture radar (GB-InSAR) deformation monitoring. During the study, we detected the flux of talus, displacement of the landslide and pre-failure deformation of discrete rockfall events. Additionally, we found the ATLS system to be an effective tool in monitoring landslide and rockfall processes despite missing points due to poor atmospheric conditions or rainfall. Furthermore, such a system has the potential to help us better understand a wide variety of slope processes at high levels of temporal detail.
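Near-real-time change detection between successive scans ultimately rests on point-to-cloud distance tests. A brute-force toy version in Python/NumPy; a real TLS pipeline would use a k-d tree, scan registration, and a threshold informed by registration error rather than the arbitrary value here:

```python
import numpy as np

def changed_points(reference, current, threshold=0.05):
    """Flag points in the current scan farther than `threshold` (in scan
    units) from every reference point -- a brute-force stand-in for the
    nearest-neighbour comparison used in TLS change detection."""
    diffs = current[:, None, :] - reference[None, :, :]
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return nearest > threshold

reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
current = np.array([[0.0, 0.0, 0.01],   # within noise: unchanged
                    [1.0, 0.0, 0.00],   # identical: unchanged
                    [2.0, 0.0, 0.50]])  # displaced: changed
flags = changed_points(reference, current)
```

Run at each 30 min epoch against the previous scan, the flagged points would correspond to talus flux, landslide displacement, or pre-failure rockfall deformation.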
Novel SPECT Technologies and Approaches in Cardiac Imaging
Slomka, Piotr; Hung, Guang-Uei; Germano, Guido; Berman, Daniel S.
2017-01-01
Recent novel approaches in myocardial perfusion single photon emission CT (SPECT) have been facilitated by new dedicated high-efficiency hardware with solid-state detectors and optimized collimators. New protocols include very low-dose (1 mSv) stress-only, two-position imaging to mitigate attenuation artifacts, and simultaneous dual-isotope imaging. Attenuation correction can be performed by specialized low-dose systems or by previously obtained CT coronary calcium scans. Hybrid protocols using CT angiography have been proposed. Image quality improvements have been demonstrated by novel reconstructions and motion correction. Fast SPECT acquisition facilitates dynamic flow and early function measurements. Image processing algorithms have become automated with virtually unsupervised extraction of quantitative imaging variables. This automation facilitates integration with clinical variables derived by machine learning to predict patient outcome or diagnosis. In this review, we describe new imaging protocols made possible by the new hardware developments. We also discuss several novel software approaches for the quantification and interpretation of myocardial perfusion SPECT scans. PMID:29034066
Harvey, Craig A.; Kolpin, Dana W.; Battaglin, William A.
1996-01-01
A geographic information system (GIS) procedure was developed to compile low-altitude aerial photography, digitized data, and land-use data from U.S. Department of Agriculture Consolidated Farm Service Agency (CFSA) offices into a high-resolution (approximately 5 meters) land-use GIS data set. The aerial photography consisted of 35-mm slides, which were scanned into tagged image file format (TIFF) images. These TIFF images were then imported into the GIS, where they were registered into a geographically referenced coordinate system. Boundaries between land uses were delineated from these GIS data sets using on-screen digitizing techniques. Crop types were determined using information obtained from the U.S. Department of Agriculture CFSA offices. Crop information not supplied by the CFSA was attributed by manual classification procedures. Automated methods to delineate field boundaries and classify land use were investigated. It was determined that, using these data sources, automated methods were less efficient and accurate than manual methods of delineating field boundaries and classifying land use.
NASA Astrophysics Data System (ADS)
Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong
2018-01-01
With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming larger and more precise, which imposes demanding imaging requirements for automated optical inspection (AOI). Here, we report a high-resolution, clearly focused imaging optomechatronic system for precise LCD module bonding AOI inspection. It achieves high-resolution imaging for LCD module bonding AOI inspection using a line scan camera (LSC) triggered by a linear optical encoder, with self-adaptive focusing across the whole large imaging region using the LSC and a laser displacement sensor, which reduces the requirements on machining, assembly, and motion control of AOI devices. Results show that this system can directly achieve clearly focused imaging for AOI inspection of large LCD module bonding with 0.8 μm image resolution, a 2.65-mm scan imaging width, and a theoretically unlimited imaging width. All of these are significant for AOI inspection in the LCD module industry and other fields that require imaging large regions with high resolution.
Li, Yao; Wan, Liang; Chen, Kai
2015-04-25
An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.
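The core comparison, measured misorientation versus a table of theoretical twin relations, can be sketched for a single twin law. This simplified version uses the classic 60° about ⟨111⟩ twin relation of cubic crystals as a hypothetical look-up entry and omits the point-group symmetry reduction the full method applies:

```python
import numpy as np

def rotation_about_axis(axis, angle_deg):
    """Rotation matrix about a unit axis (Rodrigues formula)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def misorientation_angle(g_parent, g_twin):
    """Misorientation angle in degrees between two orientation matrices."""
    dg = g_twin @ g_parent.T
    cos_theta = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Hypothetical look-up entry: 60 degrees about [111] (cubic Sigma-3 twin).
TWIN_TABLE = {"sigma3": 60.0}
parent = np.eye(3)
twin = rotation_about_axis([1, 1, 1], 60.0) @ parent
angle = misorientation_angle(parent, twin)
is_twin = abs(angle - TWIN_TABLE["sigma3"]) < 0.5  # tolerance is illustrative
```

The full method additionally reduces each misorientation by all point-group symmetry operations before comparing against the table, so that equivalent orientations are not missed.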
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webber, Nels W.
Los Alamos National Laboratory's J-1 DARHT Operations Group uses 6 ft spherical vessels to contain hazardous materials produced in a hydrodynamic experiment. These contaminated vessels must be analyzed by means of a worker entering the vessel to locate, measure, and document every penetration mark on the vessel. If the worker can be replaced by a highly automated robotic system with a high-precision scanner, it will eliminate the risks to the worker and provide management with an accurate 3D model of the vessel presenting the existing damage, with the flexibility to manipulate the model for better and more in-depth assessment. The project was successful in meeting the primary goal of installing an automated system which scanned a 6 ft vessel with an elapsed time of 45 minutes. This robotic system reduces the total time for the original scope of work by 75 minutes and results in excellent data accumulation and transmission to the 3D model imaging program.
Verdaguer, Paula; Gris, Oscar; Casaroli-Marano, Ricardo P; Elies, Daniel; Muñoz-Gutierrez, Gerardo; Güell, Jose L
2015-08-01
To describe a case of hydrophilic intraocular lens (IOL) opacification based on IOL analysis after Descemet stripping automated endothelial keratoplasty. A 60-year-old woman had uneventful phacoemulsification after the implantation of a hydrophilic IOL (Akreos-Adapt; Bausch & Lomb) into both eyes. Because of postoperative corneal decompensation in the right eye, 2 Descemet stripping automated endothelial keratoplasty operations were performed within 1 year. After the second procedure, the graft was not well attached, requiring an intracameral injection of air on day 3. After 1 year, opacification was observed on the superior 2/3 of the anterior surface of the IOL, along with a significant decrease in visual acuity. The IOL was explanted 6 months after the opacification. Environmental scanning electron microscopy followed by x-ray microanalysis revealed an organic biofilm on the surface of the IOL. To our knowledge, this is the first reported case in which the material deposited on the lens is organic rather than calcific.
Computer Vision Malaria Diagnostic Systems-Progress and Prospects.
Pollak, Joseph Joel; Houri-Yafin, Arnon; Salpeter, Seth J
2017-01-01
Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.
NASA Astrophysics Data System (ADS)
Liu, Hongna; Li, Song; Wang, Zhifei; Li, Zhiyang; Deng, Yan; Wang, Hua; Shi, Zhiyang; He, Nongyue
2008-11-01
Single nucleotide polymorphisms (SNPs) comprise the most abundant source of genetic variation in the human genome. Large-scale codominant SNP identification, especially for SNPs associated with complex diseases, has therefore created the need for a completely high-throughput and automated SNP genotyping method. Herein, we present an automated SNP detection system based on two kinds of functional magnetic nanoparticles (MNPs) and dual-color hybridization. The amino-modified MNPs (NH2-MNPs), functionalized with APTES, were used for DNA extraction directly from whole blood by electrostatic interaction, and PCR was then successfully performed. Furthermore, biotinylated PCR products were captured on streptavidin-coated MNPs (SA-MNPs) and interrogated by hybridization with a pair of dual-color probes to determine the SNP; the genotype of each sample can then be identified by scanning the microarray printed with the denatured fluorescent probes. This system provides a rapid, sensitive and highly versatile automated procedure that will greatly facilitate the analysis of different known SNPs in the human genome.
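The dual-color genotype call reduces to comparing the intensities of the two allele-specific probe channels. A deliberately naive sketch; the channel names and the 3:1 ratio cut-off are illustrative assumptions, not values from the paper:

```python
def call_genotype(cy3, cy5, ratio_cut=3.0):
    """Naive genotype call from two-channel hybridization intensities.

    cy3 / cy5 are fluorescence intensities of the two allele-specific
    probes; the ratio cut-off is an arbitrary illustrative choice.
    """
    if cy3 > ratio_cut * cy5:
        return "AA"            # only the allele-A probe hybridized
    if cy5 > ratio_cut * cy3:
        return "BB"            # only the allele-B probe hybridized
    return "AB"                # both probes hybridized: heterozygote

calls = [call_genotype(900, 80),    # strong A channel
         call_genotype(100, 850),   # strong B channel
         call_genotype(500, 450)]   # both channels comparable
```

A production scanner pipeline would first subtract background and normalize channel gains before applying any such ratio rule.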
A Mobile Automated Tomographic Gamma Scanning System - 13231
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkpatrick, J.M.; LeBlanc, P.J.; Nakazawa, D.
2013-07-01
Canberra Industries have recently designed and built a new automated Tomographic Gamma Scanning (TGS) system for mobile deployment. The TGS technique combines high-resolution gamma spectroscopy with low-spatial-resolution 3-dimensional image reconstruction to provide increased accuracy over traditional approaches for the assay of non-uniform source distributions in low- to medium-density, non-heterogeneous matrices. Originally pioneered by R. Estep at Los Alamos National Laboratory (LANL), the TGS method has been further developed and commercialized by Canberra Industries in recent years. The present system advances the state of the art on several fronts: it is designed to be housed in a standard cargo transport container for ease of transport, allowing waste characterization at multiple facilities under the purview of a single operator. Conveyor feed, drum rotator, and detector and collimator positioning mechanisms operated by programmable logic control (PLC) allow automated batch-mode operation. The variable geometry settings can accommodate a wide range of waste packaging, including but not limited to standard 220 liter drums, 380 liter overpack drums, and smaller 20 liter cans. A 20 mCi Eu-152 transmission source provides attenuation corrections for drum matrices up to 1 g/cm³ in TGS mode; the system can be operated in Segmented Gamma Scanning (SGS) mode to measure higher-density drums. To support TGS assays at higher densities, the source shield is sufficient to house an alternate Co-60 transmission source of higher activity, up to 250 mCi. An automated shutter and attenuator assembly is provided for operating the system with a dual-intensity transmission source. The system's 1500 kg capacity rotator turntable can handle heavy containers such as concrete-lined 380 liter overpack drums.
Finally, data acquisition utilizes Canberra's Broad Energy Germanium (BEGe) detector and Lynx MCA, with 32k channels providing better than 0.1 keV/channel resolution to support both isotopic analysis with the MGA/MGAU software and a wide 3 MeV dynamic range. The calibration and verification of the system are discussed, and quantitative results are presented for a variety of drum types and matrices.
Automated inspection of hot steel slabs
Martin, R.J.
1985-12-24
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
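The cross-validation of the two parallel segmentation processes, keeping only regions flagged by both edge detection and intensity thresholding, amounts to an AND of two binary masks. A sketch in Python/NumPy; the thresholds and the simple gradient-magnitude edge test are illustrative stand-ins for the patented hardware processing:

```python
import numpy as np

def validated_defects(image, intensity_cut=0.6, edge_cut=0.3):
    """Keep only pixels flagged by BOTH the intensity threshold and a
    gradient-magnitude edge test, mimicking how each process validates
    the other's segmentation."""
    bright = image > intensity_cut                 # intensity thresholding
    gy, gx = np.gradient(image.astype(float))      # simple edge detection
    edges = np.sqrt(gx ** 2 + gy ** 2) > edge_cut
    return bright & edges                          # only mutual detections

# Toy frame from the scanning camera: one bright defect patch.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0
mask = validated_defects(image)
```

Because the output keeps only segmentation produced by both processes, isolated bright noise (no edge support) and faint edges (no intensity support) are both suppressed.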
A Graphical Operator Interface for a Telerobotic Inspection System
NASA Technical Reports Server (NTRS)
Kim, W. S.; Tso, K. S.; Hayati, S.
1993-01-01
Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability. It supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
An operator interface design for a telerobotic inspection system
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tso, Kam S.; Hayati, Samad
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotics. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability, supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Analytical techniques of pilot scanning behavior and their application
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Glover, B. J.; Spady, A. A., Jr.
1986-01-01
The state of the art of oculometric data analysis techniques and their applications in certain research areas such as pilot workload, information transfer provided by various display formats, crew role in automated systems, and pilot training are documented. These analytical techniques produce the following data: real-time viewing of the pilot's scanning behavior, average dwell times, dwell percentages, instrument transition paths, dwell histograms, and entropy rate measures. These types of data are discussed, and overviews of the experimental setup, data analysis techniques, and software are presented. A glossary of terms frequently used in pilot scanning behavior and a bibliography of reports on related research sponsored by NASA Langley Research Center are also presented.
NASA Astrophysics Data System (ADS)
Maev, R. Gr.; Bakulin, E. Yu.; Maeva, A.; Severin, F.
Biometrics is a rapidly evolving scientific and applied discipline that studies possible ways of personal identification by means of unique biological characteristics. Such identification is important in various situations requiring restricted access to certain areas, information and personal data and for cases of medical emergencies. A number of automated biometric techniques have been developed, including fingerprint, hand shape, eye and facial recognition, thermographic imaging, etc. All these techniques differ in the recognizable parameters, usability, accuracy and cost. Among these, fingerprint recognition stands alone since a very large database of fingerprints has already been acquired. Also, fingerprints are key evidence left at a crime scene and can be used to identify suspects. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. We introduce a new development in ultrasonic fingerprint imaging. The proposed method obtains a scan only once and then varies the C-scan gate position and width to visualize acoustic reflections from any appropriate depth inside the skin. Also, B-scans and A-scans can be recreated from any position using such a data array, which gives control over the visualization options. By setting the C-scan gate deeper inside the skin, the distribution of the sweat pores (which are located along the ridges) can be easily visualized. This distribution should be unique for each individual, so it provides a means of personal identification that is not affected by any changes (accidental or intentional) of the fingers' surface conditions. This paper discusses different setups, acoustic parameters of the system, signal and image processing options and possible ways of 3-dimensional visualization that could be used as a recognizable characteristic in biometric identification.
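The depth-gated C-scan visualization described above can be sketched in a few lines; the following is a minimal illustration on synthetic data (the array layout and values are assumptions, not the authors' acquisition code), assuming the raw A-scans are stored as a (rows, cols, time) array:

```python
import numpy as np

def c_scan(volume, gate_start, gate_width):
    """Collapse a 3D array of A-scans (rows, cols, time) into a C-scan
    image by taking the peak echo amplitude inside a depth gate."""
    gate = volume[:, :, gate_start:gate_start + gate_width]
    return np.abs(gate).max(axis=2)

# Synthetic volume: a single bright reflector at depth sample 30
vol = np.zeros((4, 4, 64))
vol[1, 2, 30] = 5.0

shallow = c_scan(vol, 0, 20)   # gate above the reflector: nothing visible
deep = c_scan(vol, 25, 10)     # gate spanning the reflector: it appears
```

Varying `gate_start` and `gate_width` after a single acquisition, as the abstract describes, amounts to recomputing this projection over different time windows of the stored data.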
Liu, Yu-Ying; Ishikawa, Hiroshi; Chen, Mei; Wollstein, Gadi; Duker, Jay S; Fujimoto, James G; Schuman, Joel S; Rehg, James M
2011-10-21
To develop an automated method to identify the normal macula and three macular pathologies (macular hole [MH], macular edema [ME], and age-related macular degeneration [AMD]) from the fovea-centered cross sections in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images. A sample of SD-OCT macular scans (macular cube 200 × 200 or 512 × 128 scan protocol; Cirrus HD-OCT; Carl Zeiss Meditec, Inc., Dublin, CA) was obtained from healthy subjects and subjects with MH, ME, and/or AMD (dataset for development: 326 scans from 136 subjects [193 eyes], and dataset for testing: 131 scans from 37 subjects [58 eyes]). A fovea-centered cross-sectional slice for each of the SD-OCT images was encoded using spatially distributed multiscale texture and shape features. Three ophthalmologists labeled each fovea-centered slice independently, and the majority opinion for each pathology was used as the ground truth. Machine learning algorithms were used to identify the discriminative features automatically. Two-class support vector machine classifiers were trained to identify the presence of normal macula and each of the three pathologies separately. The area under the receiver operating characteristic curve (AUC) was calculated to assess the performance. The cross-validation AUC result on the development dataset was 0.976, 0.931, 0.939, and 0.938, and the AUC result on the holdout testing set was 0.978, 0.969, 0.941, and 0.975, for identifying normal macula, MH, ME, and AMD, respectively. The proposed automated data-driven method successfully identified various macular pathologies (all AUC > 0.94). This method may effectively identify the discriminative features without relying on a potentially error-prone segmentation module.
Liu, Yu-Ying; Chen, Mei; Wollstein, Gadi; Duker, Jay S.; Fujimoto, James G.; Schuman, Joel S.; Rehg, James M.
2011-01-01
Purpose. To develop an automated method to identify the normal macula and three macular pathologies (macular hole [MH], macular edema [ME], and age-related macular degeneration [AMD]) from the fovea-centered cross sections in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images. Methods. A sample of SD-OCT macular scans (macular cube 200 × 200 or 512 × 128 scan protocol; Cirrus HD-OCT; Carl Zeiss Meditec, Inc., Dublin, CA) was obtained from healthy subjects and subjects with MH, ME, and/or AMD (dataset for development: 326 scans from 136 subjects [193 eyes], and dataset for testing: 131 scans from 37 subjects [58 eyes]). A fovea-centered cross-sectional slice for each of the SD-OCT images was encoded using spatially distributed multiscale texture and shape features. Three ophthalmologists labeled each fovea-centered slice independently, and the majority opinion for each pathology was used as the ground truth. Machine learning algorithms were used to identify the discriminative features automatically. Two-class support vector machine classifiers were trained to identify the presence of normal macula and each of the three pathologies separately. The area under the receiver operating characteristic curve (AUC) was calculated to assess the performance. Results. The cross-validation AUC result on the development dataset was 0.976, 0.931, 0.939, and 0.938, and the AUC result on the holdout testing set was 0.978, 0.969, 0.941, and 0.975, for identifying normal macula, MH, ME, and AMD, respectively. Conclusions. The proposed automated data-driven method successfully identified various macular pathologies (all AUC > 0.94). This method may effectively identify the discriminative features without relying on a potentially error-prone segmentation module. PMID:21911579
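Each per-pathology classifier above is scored by a two-class AUC. As a point of reference, the statistic can be computed directly from score ranks as the probability that a random positive outranks a random negative (an illustrative sketch, not the study's implementation):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability that a randomly chosen positive example receives a
    higher score than a randomly chosen negative one (ties count 0.5)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of positives from negatives gives an AUC of 1.0
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # -> 1.0
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why values such as 0.94 and above indicate strong separation of each pathology from its absence.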
NASA Technical Reports Server (NTRS)
Flanagan, P. M.; Atherton, W. J.
1985-01-01
A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit, and system software. A commercial desktop computer, spectrum analyzer, and two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot and acoustic intensity data are collected, processed, and stored.
NASA Astrophysics Data System (ADS)
Koller, Manfred R.; Hanania, Elie G.; Eisfeld, Timothy; O'Neal, Robert A.; Khovananth, Kevin M.; Palsson, Bernhard O.
2001-04-01
High-dose chemotherapy, followed by autologous hematopoietic stem cell (HSC) transplantation, is widely used for the treatment of cancer. However, contaminating tumor cells within HSC harvests continue to be of major concern since re-infused tumor cells have proven to contribute to disease relapse. Many tumor purging methods have been evaluated, but all leave detectable tumor cells in the transplant and result in significant loss of HSCs. These shortcomings cause engraftment delays and compromise the therapeutic value of purging. A novel approach integrating automated scanning cytometry, image analysis, and selective laser-induced killing of labeled cells within a cell mixture is described here. Non-Hodgkin's lymphoma (NHL) cells were spiked into cell mixtures, and fluorochrome-conjugated antibodies were used to label tumor cells within the mixture. Cells were then allowed to settle on a surface, and as the surface was scanned with a fluorescence excitation source, a laser pulse was fired at every detected tumor cell using high-speed beam steering mirrors. Tumor cells were selectively killed with little effect on adjacent non-target cells, demonstrating the feasibility of this automated cell processing approach. This technology has many potential research and clinical applications, one example of which is tumor cell purging for autologous HSC transplantation.
An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang
2017-03-01
We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT-vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software by a third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
Raith, Stefan; Vogel, Eric Per; Anees, Naeema; Keul, Christine; Güth, Jan-Frederik; Edelhoff, Daniel; Fischer, Horst
2017-01-01
Chairside manufacturing based on digital image acquisition is gaining increasing importance in dentistry. For the standardized application of these methods, it is paramount to have highly automated digital workflows that can process acquired 3D image data of dental surfaces. Artificial Neural Networks (ANNs) are numerical methods primarily used to mimic the complex networks of neural connections in the natural brain. Our hypothesis is that an ANN can be developed that is capable of classifying dental cusps with sufficient accuracy. This bears enormous potential for application in chairside manufacturing workflows in the dental field, as it closes the gap between digital acquisition of dental geometries and modern computer-aided manufacturing techniques. Three-dimensional surface scans of dental casts representing natural full dental arches were transformed to range image data. These data were processed using an automated algorithm to detect candidates for tooth cusps according to salient geometrical features. These candidates were classified following common dental terminology and used as training data for a tailored ANN. For the actual cusp feature description, two different approaches were developed and applied to the available data: the first uses the relative location of the detected cusps as input data, and the second directly takes the image information given in the range images. In addition, a combination of both was implemented and investigated. Both approaches showed high performance, with correct classifications of 93.3% and 93.5%, respectively; improvements from the combination were shown to be minor. This article presents for the first time a fully automated method for the classification of teeth that was confirmed to work with sufficient precision to show potential for use in clinical practice, which is a prerequisite for automated computer-aided planning of prosthetic treatments with subsequent automated chairside manufacturing.
Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Lu; Narayanan, Ramakrishnan; Miller, Steve; Shen, Feimo; Barqawi, Al B.; Crawford, E. David; Suri, Jasjit S.
2008-02-01
Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real time due to the presence of speckle, shadow artifacts, partial volume effects, and patient motion during image scans, all of which are inherent in medical ultrasound imaging. The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, obtained from training data, to delineate the middle slices of volumetric prostate images. A "DDC" algorithm is then applied to the rest of the images with the initial contour obtained. The volume of the prostate is estimated from the segmentation results. Our database consists of 36 prostate volumes acquired on a Philips ultrasound machine using a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with the semi-automated approach. The mean volumes using the semi-automated and completely automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6%, respectively, compared to the volume obtained from the human-estimated boundary (ideal boundary). The overall system, developed using Microsoft Visual C++, is real-time and accurate.
Automated brain computed tomographic densitometry of early ischemic changes in acute stroke
Stoel, Berend C.; Marquering, Henk A.; Staring, Marius; Beenen, Ludo F.; Slump, Cornelis H.; Roos, Yvo B.; Majoie, Charles B.
2015-01-01
Abstract. The Alberta Stroke Program Early CT score (ASPECTS) scoring method is frequently used for quantifying early ischemic changes (EICs) in patients with acute ischemic stroke in clinical studies. However, reported interobserver agreement varies and remains limited. Therefore, our goal was to develop and evaluate an automated brain densitometric method. It divides CT scans of the brain into ASPECTS regions using atlas-based segmentation. EICs are quantified by comparing the brain density between contralateral sides. This method was optimized and validated using CT data from 10 and 63 patients, respectively. The automated method was validated against manual ASPECTS, stroke severity at baseline, and clinical outcome after 7 to 10 days (NIH Stroke Scale, NIHSS) and 3 months (modified Rankin Scale, mRS). Manual and automated ASPECTS showed similar and statistically significant correlations with baseline NIHSS (R=−0.399 and −0.277, respectively) and with follow-up mRS (R=−0.256 and −0.272), except for the follow-up NIHSS. Agreement between automated and consensus ASPECTS reading was similar to the interobserver agreement of manual ASPECTS (differences <1 point in 73% of cases). The automated ASPECTS method could, therefore, be used as a supplementary tool to assist manual scoring. PMID:26158082
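The core densitometric idea, comparing brain density between contralateral sides, can be sketched as follows. This is a deliberately simplified illustration that assumes mirror symmetry about the image's vertical axis; the paper itself obtains the regions by atlas-based segmentation and registration:

```python
import numpy as np

def density_asymmetry(ct, region_mask):
    """Difference between the mean density (in HU) of a region and its
    mirrored contralateral counterpart. A markedly negative value flags
    a hypodense (potentially ischemic) region relative to the other side."""
    mirrored = region_mask[:, ::-1]          # reflect mask across the midline
    return ct[region_mask].mean() - ct[mirrored].mean()

ct = np.full((6, 6), 40.0)                   # healthy tissue around 40 HU
ct[2:4, 0:2] = 30.0                          # hypodense patch on one side
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 0:2] = True
print(density_asymmetry(ct, mask))           # -> -10.0
```

In the automated method, such per-region asymmetries computed over the ten ASPECTS regions would be the quantities scored against the manual reading.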
Validated Automatic Brain Extraction of Head CT Images
Muschelli, John; Ullman, Natalie L.; Mould, W. Andrew; Vespa, Paul; Hanley, Daniel F.; Crainiceanu, Ciprian M.
2015-01-01
Background X-ray Computed Tomography (CT) imaging of the brain is commonly used in diagnostic settings. Although CT scans are primarily used in clinical practice, they are increasingly used in research. A fundamental processing step in brain imaging research is brain extraction – the process of separating the brain tissue from all other tissues. Methods for brain extraction have either been 1) validated but not fully automated, or 2) fully automated and informally proposed, but never formally validated. Aim To systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and data smoothing are studied and reported. Methods All images were thresholded using a 0 – 100 Hounsfield units (HU) range. In one variant of the pipeline, data were smoothed using a 3-dimensional Gaussian kernel (σ = 1 mm³) and re-thresholded to 0 – 100 HU; in the other, data were not smoothed. BET was applied using 1 of 3 fractional intensity (FI) thresholds: 0.01, 0.1, or 0.35, and any holes in the brain mask were filled. For validation against a manual segmentation, 36 images from patients with intracranial hemorrhage were selected from 19 different centers in the MISTIE (Minimally Invasive Surgery plus recombinant-tissue plasminogen activator for Intracerebral Evacuation) stroke trial. Intracranial masks of the brain were manually created by one expert CT reader. The resulting brain tissue masks were quantitatively compared to the manual segmentations using sensitivity, specificity, accuracy, and the Dice Similarity Index (DSI). Brain extraction performance across smoothing and FI thresholds was compared using the Wilcoxon signed-rank test.
The intracranial volume (ICV) of each scan was estimated by multiplying the number of voxels in the brain mask by the dimensions of each voxel for that scan. From this, we calculated the ICV ratio comparing manual and automated segmentation: ICV_automated / ICV_manual. To estimate the performance in a large number of scans, brain masks were generated from the 6 BET pipelines for 1095 longitudinal scans from 129 patients. Failure rates were estimated from visual inspection. The ICV of each scan was estimated, and an intraclass correlation (ICC) was estimated using a one-way ANOVA. Results Smoothing images improves brain extraction results using BET for all measures except specificity (all p < 0.01, uncorrected), irrespective of the FI threshold. Using an FI of 0.01 or 0.1 performed better than 0.35. Thus, all reported results refer only to smoothed data using an FI of 0.01 or 0.1. Using an FI of 0.01 had a higher median sensitivity (0.9901) than an FI of 0.1 (0.9884, median difference: 0.0014, p < 0.001), accuracy (0.9971 vs. 0.9971; median difference: 0.0001, p < 0.001), and DSI (0.9895 vs. 0.9894; median difference: 0.0004, p < 0.001) and lower specificity (0.9981 vs. 0.9982; median difference: −0.0001, p < 0.001). These measures are all very high, indicating that a range of FI values may produce visually indistinguishable brain extractions. Using smoothed data and an FI of 0.01, the mean (SD) ICV ratio was 1.002 (0.008); the mean being close to 1 indicates the ICV estimates are similar for automated and manual segmentation. In the 1095 longitudinal scans, this pipeline had a low failure rate (5.2%) and the ICC estimate was high (0.929, 95% CI: 0.91, 0.945) for successfully extracted brains. Conclusion BET performs well at brain extraction on thresholded, 1 mm³-smoothed CT images with an FI of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature. Analysis code is provided. PMID:25862260
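The two summary measures used throughout this study, the Dice Similarity Index and the voxel-count-based ICV estimate, are straightforward to reproduce. A minimal sketch on synthetic masks (illustrative only, not the paper's released analysis code):

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Index between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def icv_ml(mask, voxel_dims_mm):
    """Intracranial volume in mL: voxel count times per-voxel volume
    (product of the voxel dimensions in mm), divided by 1000."""
    return mask.sum() * float(np.prod(voxel_dims_mm)) / 1000.0

# Two overlapping synthetic "brain" masks, offset by one voxel in z
manual = np.zeros((10, 10, 10), dtype=bool)
manual[2:8, 2:8, 2:8] = True
auto = np.zeros((10, 10, 10), dtype=bool)
auto[2:8, 2:8, 3:9] = True

print(round(dice(manual, auto), 3))      # -> 0.833
print(icv_ml(manual, (0.5, 0.5, 5.0)))   # -> 0.27
```

The ICV ratio reported above is then simply `icv_ml(auto, dims) / icv_ml(manual, dims)` for each scan's own voxel dimensions.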
Automated facial attendance logger for students
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Kshitish, S.; Kishore, M. R.
2017-11-01
Over the past two decades, face recognition has become an essential tool in various spheres of activity. The complete face recognition process is composed of three stages: Face Detection, Feature Extraction, and Recognition. In this paper, we make an effort to put forth a new application of face recognition and detection in education. The proposed system scans the classroom, detects the faces of the students in class, matches each scanned face with the templates available in the database, and updates the attendance of the respective students.
Toward reliable and repeatable automated STEM-EDS metrology with high throughput
NASA Astrophysics Data System (ADS)
Zhong, Zhenxin; Donald, Jason; Dutrow, Gavin; Roller, Justin; Ugurlu, Ozan; Verheijen, Martin; Bidiuk, Oleksii
2018-03-01
New materials and designs in complex 3D architectures in logic and memory devices have raised the complexity of S/TEM metrology. In this paper, we report on a newly developed, automated, scanning transmission electron microscopy (STEM) based, energy dispersive X-ray spectroscopy (STEM-EDS) metrology method that addresses these challenges. Different methodologies toward repeatable and efficient, automated STEM-EDS metrology with high throughput are presented: we introduce the best known auto-EDS acquisition and quantification methods for robust and reliable metrology and present how electron exposure dose impacts EDS metrology reproducibility, either due to poor signal-to-noise ratio (SNR) at low dose or due to sample modifications at high dose conditions. Finally, we discuss the limitations of the STEM-EDS metrology technique and propose strategies to optimize the process both in terms of throughput and metrology reliability.
Automated segmentation of pulmonary structures in thoracic computed tomography scans: a review
NASA Astrophysics Data System (ADS)
van Rikxoort, Eva M.; van Ginneken, Bram
2013-09-01
Computed tomography (CT) is the modality of choice for imaging the lungs in vivo. Sub-millimeter isotropic images of the lungs can be obtained within seconds, allowing the detection of small lesions and detailed analysis of disease processes. The high resolution of thoracic CT and the high prevalence of lung diseases require a high degree of automation in the analysis pipeline. The automated segmentation of pulmonary structures in thoracic CT has been an important research topic for over a decade now. This systematic review provides an overview of current literature. We discuss segmentation methods for the lungs, the pulmonary vasculature, the airways, including airway tree construction and airway wall segmentation, the fissures, the lobes and the pulmonary segments. For each topic, the current state of the art is summarized, and topics for future research are identified.
Mencucci, Rita; Favuzza, Eleonora; Salvatici, Maria Cristina; Spadea, Leopoldo; Allen, David
2018-02-01
To evaluate by environmental scanning electron microscopy (ESEM) the corneal incision architecture after intraocular lens (IOL) implantation in pig eyes, using manual injectors, automated injectors, or preloaded delivery systems. Twenty-four pig eyes underwent IOL implantation in the anterior chamber using three different injectors: manual (Monarch III) (n = 8), automated (AutoSert) (n = 8), or a preloaded system (UltraSert) (n = 8). AcrySof IQ IOLs of 21 dioptres (D) (n = 12) and 27D (n = 12) were implanted through 2.2 mm clear corneal incisions. Incision width was measured using corneal calipers. The endothelial side of the incision was analyzed with ESEM. In each group, the final size of the corneal wound after IOL implantation, measured by calipers, was 2.3-2.4 mm. The incision architecture was more irregular in the Monarch group than with the other injectors. In every group, the 27D IOL-implanted specimens showed more alterations than the 21D IOL-implanted samples, and this was less evident in the UltraSert group. The Descemet tear length was greater in the Monarch group than in the AutoSert and UltraSert groups. The automated and preloaded delivery systems provided a good corneal incision architecture; after high-power IOL implantation, the incisions were more regular and less damaged with the preloaded system than with the other devices.
Phillip, Veit; Zahel, Tina; Danninger, Assiye; Erkan, Mert; Dobritz, Martin; Steiner, Jörg M; Kleeff, Jörg; Schmid, Roland M; Algül, Hana
2015-01-01
Regeneration of the pancreas has been well characterized in animal models. However, there are conflicting data on the regenerative capacity of the human pancreas. The aim of the present study was to assess the regenerative capacity of the human pancreas. In a retrospective study, data from patients undergoing left partial pancreatic resection at a single center were eligible for inclusion (n = 185). Volumetry was performed based on 5 mm CT scans acquired on a 256-slice CT scanner using semi-automated software. Data from 24 patients (15 males/9 females) were included. Mean ± SD age was 68 ± 11 years (range, 40-85 years). Median time between surgery and the 1st postoperative CT was 9 days (range, 0-27 days; IQR, 7-13), 55 days (range, 21-141 days; IQR, 34-105) until the 2nd CT, and 191 days (range, 62-1902; IQR, 156-347) until the 3rd CT. The pancreatic volumes differed significantly between the first and second postoperative CT scans (median volume 25.6 mL and 30.6 mL, respectively; p = 0.008) and had increased significantly further by the 3rd CT scan (median volume 37.9 mL; p = 0.001 for comparison with the 1st CT scan and p = 0.003 for comparison with the 2nd CT scan). The human pancreas shows a measurable and considerable potential for volumetric gain after partial resection. Multidetector-CT-based semi-automated volume analysis is a feasible method for follow-up of the volume of the remaining pancreatic parenchyma after partial pancreatectomy. Effects on exocrine and endocrine pancreatic function have to be evaluated in a prospective manner. Copyright © 2015 IAP and EPC. Published by Elsevier B.V. All rights reserved.
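Slice-based CT volumetry of the kind used here reduces to summing the segmented cross-sectional area of each slice times the slice thickness. A minimal sketch with hypothetical numbers (not patient data; the study used semi-automated vendor software rather than this calculation directly):

```python
def organ_volume_ml(slice_areas_mm2, slice_thickness_mm=5.0):
    """Organ volume from contiguous CT slices: each segmented
    cross-sectional area (mm^2) contributes area * thickness (mm^3);
    divide by 1000 to convert mm^3 to mL."""
    return sum(slice_areas_mm2) * slice_thickness_mm / 1000.0

# Five contiguous 5 mm slices with segmented areas around 1000 mm^2
print(organ_volume_ml([1000, 1200, 1300, 1100, 900]))  # -> 27.5
```

Repeating this on each follow-up scan and comparing the totals is what makes the longitudinal volume-gain comparison in the study possible.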
Ding, Huiyang; Shi, Chaoyang; Ma, Li; Yang, Zhan; Wang, Mingyu; Wang, Yaqiong; Chen, Tao; Sun, Lining; Toshio, Fukuda
2018-04-08
The maneuvering and electrical characterization of nanotubes inside a scanning electron microscope (SEM) has historically been time-consuming and laborious for operators. Before the development of automated nanomanipulation-enabled techniques for the performance of pick-and-place and characterization of nanoobjects, these functions were incomplete and largely operated manually. In this paper, a dual-probe nanomanipulation system with vision-based feedback is demonstrated that automatically performs 3D nanomanipulation tasks to investigate the electrical characterization of nanotubes. The XY-positions of Atomic Force Microscope (AFM) cantilevers and individual carbon nanotubes (CNTs) were precisely recognized via a series of image processing operations. A coarse-to-fine positioning strategy in the Z-direction was applied through the combination of a sharpness-based depth estimation method and a contact-detection method. The use of nanorobotic magnification-regulated speed aided in improving working efficiency and reliability. Additionally, we propose automated alignment of the manipulator axes by visually tracking the movement trajectory of the end effector. The experimental results indicate the system's capability for automated measurement of the electrical characteristics of CNTs. Furthermore, the automated nanomanipulation system has the potential to be extended to other nanomanipulation tasks.
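The sharpness-based depth estimation step can be illustrated with a common focus measure, the variance of a discrete Laplacian: the probe height at which the image is sharpest approximates the in-focus depth. The particular operator is an assumption for illustration; the paper does not specify which sharpness metric it uses:

```python
import numpy as np

def sharpness(img):
    """Focus measure: variance of a discrete (wrap-around) Laplacian.
    A sharper, in-focus image has more high-frequency content and
    therefore a larger value."""
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
           - 4.0 * img)
    return lap.var()

rng = np.random.default_rng(0)
in_focus = rng.random((32, 32))                  # high-frequency detail
defocused = np.full((32, 32), in_focus.mean())   # featureless, no detail
print(sharpness(in_focus) > sharpness(defocused))  # -> True
```

In a coarse Z-positioning loop, one would sweep the stage height, evaluate this measure at each step, and move toward the maximum before switching to the finer contact-detection stage.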
Ding, Huiyang; Shi, Chaoyang; Ma, Li; Yang, Zhan; Wang, Mingyu; Wang, Yaqiong; Chen, Tao; Sun, Lining; Toshio, Fukuda
2018-01-01
The maneuvering and electrical characterization of nanotubes inside a scanning electron microscope (SEM) has historically been time-consuming and laborious for operators. Before the development of automated nanomanipulation-enabled techniques for the performance of pick-and-place and characterization of nanoobjects, these functions were incomplete and largely operated manually. In this paper, a dual-probe nanomanipulation system with vision-based feedback is demonstrated that automatically performs 3D nanomanipulation tasks to investigate the electrical characterization of nanotubes. The XY-positions of Atomic Force Microscope (AFM) cantilevers and individual carbon nanotubes (CNTs) were precisely recognized via a series of image processing operations. A coarse-to-fine positioning strategy in the Z-direction was applied through the combination of a sharpness-based depth estimation method and a contact-detection method. The use of nanorobotic magnification-regulated speed aided in improving working efficiency and reliability. Additionally, we propose automated alignment of the manipulator axes by visually tracking the movement trajectory of the end effector. The experimental results indicate the system's capability for automated measurement of the electrical characteristics of CNTs. Furthermore, the automated nanomanipulation system has the potential to be extended to other nanomanipulation tasks. PMID:29642495
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald
2015-03-01
The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally delineated manually by radiologists or radiation oncologists, which is time-consuming and error prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures that have similar intensities. In this work, the thyroid gland segmentation is initially estimated by a multi-atlas label fusion (MALF) algorithm. The segmentation is refined by supervised statistical learning based voxel labeling with a random forest (RF) algorithm. MALF transfers expert-labeled thyroids from atlases to a target image using deformable registration. Errors produced by label transfer are reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Then, RF employs an ensemble of decision trees that are trained on labeled thyroids to recognize features. The trained forest classifier is then applied to the thyroid estimated from the MALF by voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as positive classes; background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that the MALF achieved an overall 0.75 Dice Similarity Coefficient (DSC) and the RF classification further improved the DSC to 0.81.
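The label-fusion step can be illustrated with its simplest variant, a per-voxel majority vote over binary labels transferred from several registered atlases. This is an illustrative sketch; the study's fusion rule may use weighting rather than a plain vote:

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse binary label maps transferred from registered atlases into a
    consensus segmentation: a voxel is foreground when more than half of
    the atlases label it foreground. Weighted variants replace the plain
    count with per-atlas similarity weights."""
    stack = np.stack(atlas_labels).astype(int)
    return stack.sum(axis=0) > (len(atlas_labels) / 2.0)

# Three toy 2x3 label maps transferred from three atlases
a1 = np.array([[1, 1, 0], [0, 0, 0]])
a2 = np.array([[1, 0, 0], [1, 0, 0]])
a3 = np.array([[1, 1, 1], [0, 0, 0]])
print(majority_vote_fusion([a1, a2, a3]).astype(int))
```

The consensus mask produced this way is what the random-forest classifier then refines voxel by voxel in the pipeline described above.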
Das Neves Borges, Patricia; Vincent, Tonia L; Marenzana, Massimo
2017-01-01
The degradation of articular cartilage, which characterises osteoarthritis (OA), is usually paired with excessive bone remodelling, including subchondral bone sclerosis, cysts, and osteophyte formation. Experimental models of OA are widely used to investigate pathogenesis, yet few validated methodologies for assessing periarticular bone morphology exist, and quantitative measurements are limited by manual segmentation of micro-CT scans. The aim of this work was to chart the temporal changes in periarticular bone in murine OA by novel, automated micro-CT methods. OA was induced by destabilisation of the medial meniscus (DMM) in 10-week-old male mice and disease assessed cross-sectionally from 1 to 20 weeks post-surgery. A novel approach was developed to automatically segment subchondral bone compartments into plate and trabecular bone in micro-CT scans of tibial epiphyses. Osteophyte volume was assessed by shape differences using 3D image registration and by measuring total epiphyseal volume. Significant linear and volumetric structural modifications in subchondral bone compartments and osteophytes were measured from 4 weeks post-surgery and showed progressive changes at all time points; by 20 weeks, medial subchondral bone plate thickness had increased by 160±19.5 μm and the medial osteophyte had grown by 0.124±0.028 μm³. Excellent agreement was found when automated measurements were compared with manual assessments. Our automated methods for assessing bone changes in murine periarticular bone are rapid, quantitative, and highly accurate, and promise to be a useful tool in future preclinical studies of OA progression and treatment. The current approaches were developed specifically for cross-sectional micro-CT studies but could be applied to longitudinal studies.
A fully automated system for quantification of background parenchymal enhancement in breast DCE-MRI
NASA Astrophysics Data System (ADS)
Ufuk Dalmiş, Mehmet; Gubern-Mérida, Albert; Borelli, Cristina; Vreemann, Suzan; Mann, Ritse M.; Karssemeijer, Nico
2016-03-01
Background parenchymal enhancement (BPE) observed in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been identified as an important biomarker associated with risk for developing breast cancer. In this study, we present a fully automated framework for quantification of BPE. We initially segmented fibroglandular tissue (FGT) of the breasts using an improved version of an existing method. Subsequently, we computed BPEabs (volume of the enhancing tissue), BPErf (BPEabs divided by FGT volume), and BPErb (BPEabs divided by breast volume), using different relative enhancement threshold values between 1% and 100%. To evaluate and compare the previous and improved FGT segmentation methods, we used 20 breast DCE-MRI scans and computed Dice similarity coefficient (DSC) values with respect to manual segmentations. For evaluation of the BPE quantification, we used a dataset of 95 breast DCE-MRI scans. Two radiologists, in individual reading sessions, visually analyzed the dataset and categorized each breast as minimal, mild, moderate, or marked BPE. To measure the correlation between automated BPE values and the radiologists' assessments, we converted these values into ordinal categories and used Spearman's rho as a measure of correlation. According to our results, the new segmentation method obtained an average DSC of 0.81 ± 0.09, which was significantly higher (p<0.001) than that of the previous method (0.76 ± 0.10). The highest correlation values between automated BPE categories and radiologists' assessments were obtained with the BPErf measurement (r=0.55 and r=0.49, p<0.001 for both), while the correlation between the scores given by the two radiologists was 0.82 (p<0.001). The presented framework can be used to systematically investigate the correlation between BPE and risk in large screening cohorts.
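The DSC used here to score automated FGT segmentations against manual reference masks has a compact definition, 2|A∩B| / (|A| + |B|); a minimal sketch with toy masks (function name and data are ours):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D masks: automated vs. manual segmentation
auto   = np.array([0, 1, 1, 1, 0])
manual = np.array([0, 1, 1, 0, 0])
print(round(dice(auto, manual), 3))  # -> 0.8
```

The same function applies unchanged to flattened 3-D MRI masks.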
John Weisberg; Jay Beaman
2001-01-01
Progress continues in the options for survey data collection and its effective processing. This paper focuses on the rapidly evolving capabilities of handheld computers and their effective exploitation, including links to data captured from scanned questionnaires (OMR and barcodes). The paper describes events in Parks Canada that led to the creation of survey software...
Edge-following algorithm for tracking geological features
NASA Technical Reports Server (NTRS)
Tietz, J. C.
1977-01-01
Sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from Earth resources satellites. Technique eliminates need for expensive high-resolution cameras. System might also be adaptable for monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or X-ray images.
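A sequential edge tracker of this kind can be sketched by searching a small window around the previous edge position in each new scan line: the edge rarely jumps far between lines, so only a few pixels need examining. This is an illustrative stand-in (our own simplification), not the original NASA algorithm:

```python
import numpy as np

def track_edge(img, start_col, window=2):
    """Sequentially track a near-vertical edge (e.g. a coastline) down an
    image: in each row, search a small window around the previous edge
    column for the largest horizontal intensity step."""
    grad = np.abs(np.diff(img.astype(float), axis=1))
    cols = [start_col]
    for r in range(1, img.shape[0]):
        lo = max(cols[-1] - window, 0)
        hi = min(cols[-1] + window + 1, grad.shape[1])
        cols.append(lo + int(np.argmax(grad[r, lo:hi])))
    return cols

# Toy image: land (1) left of a drifting coastline, water (0) right
img = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
])
print(track_edge(img, start_col=1))  # -> [1, 2, 3, 2]
```

Because only a small window is scanned per row, the cost per frame is linear in image height, which is what makes real-time tracking feasible on modest hardware.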
Automation to improve efficiency of field expedient injury prediction screening.
Teyhen, Deydre S; Shaffer, Scott W; Umlauf, Jon A; Akerman, Raymond J; Canada, John B; Butler, Robert J; Goffar, Stephen L; Walker, Michael J; Kiesel, Kyle B; Plisky, Phillip J
2012-07-01
Musculoskeletal injuries are a primary source of disability in the U.S. Military. Physical training and sports-related activities account for up to 90% of all injuries, and 80% of these injuries are considered overuse in nature. As a result, there is a need to develop an evidence-based musculoskeletal screen that can assist with injury prevention. The purpose of this study was to assess the capability of an automated system to improve the efficiency of field expedient tests that may help predict injury risk and provide corrective strategies for deficits identified. The field expedient tests include survey questions and measures of movement quality, balance, trunk stability, power, mobility, and foot structure and mobility. Data entry for these tests was automated using handheld computers, barcode scanning, and netbook computers. An automated algorithm for injury risk stratification and mitigation techniques was run on a server computer. Without automation support, subjects were assessed in 84.5 ± 9.1 minutes per subject compared with 66.8 ± 6.1 minutes per subject with automation and 47.1 ± 5.2 minutes per subject with automation and process improvement measures (p < 0.001). The average time to manually enter the data was 22.2 ± 7.4 minutes per subject. An additional 11.5 ± 2.5 minutes per subject was required to manually assign an intervention strategy. Automation of this injury prevention screening protocol using handheld devices and netbook computers allowed for real-time data entry and enhanced the efficiency of injury screening, risk stratification, and prescription of a risk mitigation strategy.
Meshram, GK
2010-01-01
ABSTRACT Aim: To assess the cleaning efficacy of manual and automated instrumentation using 4% sodium hypochlorite alone and in combination with Glyde File Prep as root canal irrigant. Methodology: The study utilized 40 extracted human permanent premolars with single, straight, fully formed roots. The teeth were divided into four groups of ten each. Groups I and II were prepared with manual instruments, with 4% sodium hypochlorite used as irrigant alone (Group I) or in combination with Glyde File Prep (Group II). Groups III and IV were prepared with automated instrumentation at 250 rpm, with 4% sodium hypochlorite as irrigant alone (Group III) or in combination with Glyde File Prep (Group IV). After completion of root canal preparation, the teeth were prepared for SEM examination. The photomicrographs were qualitatively evaluated using the following criteria: overall cleanliness, presence or absence of the smear layer, presence or absence of debris, and patency of the openings of the dentinal tubules. Results: Comparing the cleansing efficacy of manual and automated instrumentation using 4% sodium hypochlorite alone, cleansing was better with manual instrumentation. Comparing manual and automated instrumentation using the combination regime, cleansing was better with automated instrumentation. For manual instrumentation, 4% sodium hypochlorite in combination with EDTA led to better cleansing than sodium hypochlorite alone, and the same held for automated instrumentation. Conclusion: Neither instrumentation technique nor irrigating regime was capable of providing a completely clean canal. Automated instrumentation with a combination of sodium hypochlorite and EDTA resulted in the best cleansing efficacy. PMID:27616839
Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.
Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero
2008-09-01
Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the geometry of the scanning and the point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings, and kerbstones were 80.6%, 92.3%, and 79.7%, respectively.
Building a print on demand web service
NASA Astrophysics Data System (ADS)
Reddy, Prakash; Rozario, Benedict; Dudekula, Shariff; V, Anil Dev
2011-03-01
There is considerable effort underway to digitize all books that have ever been printed. There is a need for a service that can take raw book scans and convert them into Print on Demand (POD) books. Such a service augments the digitization effort and enables broader access for a wider audience. To make this service practical, we identified three key challenges that needed to be addressed: a) produce high-quality images by eliminating artifacts that exist due to the age of the document or that are introduced during the scanning process; b) develop an efficient automated system to process book scans with minimum human intervention; and c) build an ecosystem which allows the target audience to discover these books.
NASA Astrophysics Data System (ADS)
Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.
2017-11-01
In order to solve the automation of the 3D indoor mapping task, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The system includes a panorama camera, a laser scanner, an inertial measurement unit, and other sensors, which are calibrated and synchronized to achieve simultaneous collection of 3D indoor data. Experiments were undertaken in a typical indoor scene, and the data generated by the proposed system were compared with ground truth data collected by a TLS scanner, showing an accuracy of 99.2% below 0.25 m, which demonstrates the applicability and precision of the system in indoor mapping applications.
Autonomous Scanning Probe Microscopy in Situ Tip Conditioning through Machine Learning.
Rashidi, Mohammad; Wolkow, Robert A
2018-05-23
Atomic-scale characterization and manipulation with scanning probe microscopy rely upon the use of an atomically sharp probe. Here we present automated methods based on machine learning to detect degradation of, and recondition, the probe of a scanning tunneling microscope. As a model system, we employ these techniques on the technologically relevant hydrogen-terminated silicon surface, training the network to recognize abnormalities in the appearance of surface dangling bonds. Of the machine learning methods tested, a convolutional neural network yielded the greatest accuracy, achieving a positive identification of degraded tips in 97% of the test cases. By using multiple points of comparison and majority voting, the accuracy of the method is improved beyond 99%.
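The reported jump from 97% to beyond 99% is what a binomial model predicts for majority voting, under the idealized assumption that the individual assessments are independent and equally accurate (the real comparison points need not be fully independent):

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority vote over n independent assessments,
    each correct with probability p, is itself correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A 97%-accurate single classification, voted over 5 comparison points
print(round(majority_accuracy(0.97, 5), 4))  # -> 0.9997
```

Voting amplifies accuracy only when the base classifier is already better than chance; below p = 0.5 the same formula shows voting makes things worse.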
NASA Technical Reports Server (NTRS)
Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.
2012-01-01
Eddy current testing (ET) was used to scan bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws that could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw report, and estimation of flaw dimensions was brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical-derivative filtering algorithm was evaluated.
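A numerical-derivative filter of the kind evaluated can be sketched in a few lines: differentiation acts as a high-pass filter, suppressing the slowly varying liner background while accentuating step-like flaw indications. The threshold and data below are toy values of ours, not the project's:

```python
import numpy as np

def derivative_filter(signal):
    """Central-difference numerical derivative of a 1-D scan line
    (np.gradient uses central differences in the interior and
    one-sided differences at the ends)."""
    return np.gradient(np.asarray(signal, dtype=float))

# Toy eddy-current scan line: flat liner signal with a step-like flaw
scan = np.array([0., 0., 0., 0., 0., 2., 2., 2., 2.])
g = derivative_filter(scan)
flaw_idx = np.flatnonzero(np.abs(g) > 0.5)
print(flaw_idx.tolist())  # -> [4, 5]
```

In practice the derivative is usually combined with smoothing (e.g. a Savitzky-Golay filter) so that high-frequency noise is not amplified along with the flaw signal.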
Mittman, Scott A.; Huard, Richard C.; Della-Latta, Phyllis; Whittier, Susan
2009-01-01
The performance of the BD Phoenix Automated Microbiology System (BD Diagnostic Systems) was compared to those of the Vitek 2 (bioMérieux), the MicroScan MICroSTREP plus (Siemens), and Etest (bioMérieux) for antibiotic susceptibility testing (AST) of 311 clinical isolates of Streptococcus pneumoniae. The overall essential agreement (EA) between each test system and the reference broth microdilution method for S. pneumoniae AST results was >95%. For Phoenix, the EAs of individual antimicrobial agents ranged from 90.4% (clindamycin) to 100% (vancomycin and gatifloxacin). The categorical agreements (CA) of Phoenix, Vitek 2, MicroScan, and Etest for penicillin were 95.5%, 94.2%, 98.7%, and 97.7%, respectively. The overall CA for Phoenix was 99.3% (1 very major error [VME] and 29 minor errors [mEs]), that for Vitek 2 was 98.8% (7 VMEs and 28 mEs), and those for MicroScan and Etest were 99.5% each (19 and 13 mEs, respectively). The average times to results for Phoenix, Vitek 2, and the manual methods were 12.1 h, 9.8 h, and 24 h, respectively. From these data, the Phoenix AST results demonstrated a high degree of agreement with all systems evaluated, although fewer VMEs were observed with the Phoenix than with the Vitek 2. Overall, both automated systems provided reliable AST results for the S. pneumoniae-antibiotic combinations in half the time required for the manual methods, rendering them more suitable for the demands of expedited reporting in the clinical setting. PMID:19741088
Multicenter reliability of semiautomatic retinal layer segmentation using OCT
Oberwahrenbrock, Timm; Traber, Ghislaine L.; Lukas, Sebastian; Gabilondo, Iñigo; Nolan, Rachel; Songster, Christopher; Balk, Lisanne; Petzold, Axel; Paul, Friedemann; Villoslada, Pablo; Brandt, Alexander U.; Green, Ari J.
2018-01-01
Objective To evaluate the inter-rater reliability of semiautomated segmentation of spectral domain optical coherence tomography (OCT) macular volume scans. Methods Macular OCT volume scans of left eyes from 17 subjects (8 patients with MS and 9 healthy controls) were automatically segmented by Heidelberg Eye Explorer (v1.9.3.0) beta-software (Spectralis Viewing Module v6.0.0.7), followed by manual correction by 5 experienced operators from 5 different academic centers. The mean thicknesses within a 6-mm area around the fovea were computed for the retinal nerve fiber layer, ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer, outer plexiform layer (OPL), and outer nuclear layer (ONL). Intraclass correlation coefficients (ICCs) were calculated for mean layer thickness values. Spatial distribution of ICC values for the segmented volume scans was investigated using heat maps. Results Agreement between raters was good (ICC > 0.84) for all retinal layers, particularly inner retinal layers showed excellent agreement across raters (ICC > 0.96). Spatial distribution of ICC showed highest values in the perimacular area, whereas the ICCs were poorer for the foveola and the more peripheral macular area. The automated segmentation of the OPL and ONL required the most correction and showed the least agreement, whereas differences were less prominent for the remaining layers. Conclusions Automated segmentation with manual correction of macular OCT scans is highly reliable when performed by experienced raters and can thus be applied in multicenter settings. Reliability can be improved by restricting analysis to the perimacular area and compound segmentation of GCL and IPL. PMID:29552598
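The ICC behind these agreement figures is computed from a subjects-by-raters matrix of thickness values. Shown here is the simple one-way random-effects form as an illustration; multicenter reliability studies such as this one typically use two-way models, and the function name is ours:

```python
import numpy as np

def icc_oneway(X):
    """One-way random-effects intraclass correlation, ICC(1), from the
    between-subject (msb) and within-subject (msw) mean squares.
    X is an (n subjects) x (k raters) matrix of layer-thickness values."""
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((X - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two raters in perfect agreement across three subjects
print(icc_oneway(np.array([[10., 10.], [20., 20.], [30., 30.]])))  # -> 1.0
```

Adding rater-specific bias or noise to the toy matrix drives the value below 1, mirroring the lower agreement seen for the OPL and ONL.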
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qiang; Niu, Sijie; Yuan, Songtao
Purpose: In clinical research, it is important to measure choroidal thickness when eyes are affected by various diseases. The main purpose is to automatically segment the choroid in enhanced depth imaging optical coherence tomography (EDI-OCT) images with five-B-scan averaging. Methods: The authors present an automated choroid segmentation method based on choroidal vasculature characteristics for EDI-OCT images with five-B-scan averaging. Considering that the large vessels of Haller's layer neighbor the choroid-sclera junction (CSJ), the authors measured the intensity ascending distance and a maximum intensity image in the axial direction from a smoothed and normalized EDI-OCT image. Then, based on the generated choroidal vessel image, the authors constructed the CSJ cost and constrained the CSJ search neighborhood. Finally, graph search with smoothness constraints was utilized to obtain the CSJ boundary. Results: Experimental results with 49 images from 10 eyes of 8 normal persons and 270 images from 57 eyes of 44 patients with several stages of diabetic retinopathy and age-related macular degeneration demonstrate that the proposed method can accurately segment the choroid in EDI-OCT images with five-B-scan averaging. The mean choroid thickness difference and overlap ratio between the proposed method and manual segmentation drawn by experts were −11.43 μm and 86.29%, respectively. Conclusions: Good performance was achieved for normal and pathologic eyes, which shows that the method is effective for automated choroid segmentation of EDI-OCT images with five-B-scan averaging.
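Graph search over a per-pixel cost image with a smoothness constraint reduces, in its simplest column-wise form, to dynamic programming: pick one boundary row per column so that the total cost is minimal while the row moves at most a pixel between columns. A toy stand-in for the CSJ boundary search (names and data are ours):

```python
import numpy as np

def smooth_boundary(cost, max_jump=1):
    """One boundary row per column minimising total cost, with the row
    allowed to move at most max_jump pixels between adjacent columns."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost
    back = np.zeros((rows, cols), dtype=int)  # backpointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - max_jump, 0), min(r + max_jump + 1, rows)
            prev = acc[lo:hi, c - 1]
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] += prev.min()
    # backtrack from the cheapest end point
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

# Toy cost image: low cost (1) marks the boundary, high cost (9) elsewhere
cost = np.array([
    [9, 9, 1, 9],
    [1, 1, 9, 1],
    [9, 9, 9, 9],
])
print(smooth_boundary(cost))  # -> [1, 1, 0, 1]
```

The vessel-derived CSJ cost described in the abstract would play the role of `cost` here, with the smoothness constraint preventing the boundary from snapping to isolated dark vessels.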
Wallace, Adam N; Vyhmeister, Ross; Bagade, Swapnil; Chatterjee, Arindam; Hicks, Brandon; Ramirez-Giraldo, Juan Carlos; McKinstry, Robert C
2015-06-01
Cerebrospinal fluid shunts are primarily used for the treatment of hydrocephalus. Shunt complications may necessitate multiple non-contrast head CT scans, resulting in potentially high levels of radiation dose starting at an early age. A new head CT protocol using automatic exposure control and automated tube potential selection has been implemented at our institution to reduce radiation exposure. The purpose of this study was to evaluate the reduction in radiation dose achieved by this protocol compared with a protocol with fixed parameters. A retrospective sample of 60 non-contrast head CT scans assessing for cerebrospinal fluid shunt malfunction was identified, 30 of which were performed with each protocol. The radiation doses of the two protocols were compared using the volume CT dose index and dose-length product. The diagnostic acceptability and quality of each scan were evaluated by three independent readers. The new protocol lowered the average volume CT dose index from 15.2 to 9.2 mGy, a 39% reduction (P < 0.01; 95% CI 35-44%), and lowered the dose-length product from 259.5 to 151.2 mGy·cm, a 42% reduction (P < 0.01; 95% CI 34-50%). The new protocol produced diagnostically acceptable scans with image quality comparable to the fixed-parameter protocol. A pediatric shunt non-contrast head CT protocol using automatic exposure control and automated tube potential selection reduced patient radiation dose compared with a fixed-parameter protocol while producing diagnostic images of comparable quality.
Chen, Yi-Tzai; Trzoss, Lynnie; Yang, Dongfang; Yan, Bingfang
2015-01-01
Human carboxylesterase-2 (CES2) and cytochrome P450 3A4 (CYP3A4) are two major drug-metabolizing enzymes that play critical roles in hydrolytic and oxidative biotransformation, respectively. They share substrates but may have opposite effects on therapeutic potential, as in the metabolism of the anticancer prodrug irinotecan. Both CES2 and CYP3A4 are expressed in the liver and the gastrointestinal tract. This study was conducted to determine whether CES2 and CYP3A4 are expressed under developmental regulation and whether the regulation occurs differentially between the liver and duodenum. A large number of tissues (112) were collected, with the majority from donors at 1-198 days of age. In addition, multi-sampling (liver, duodenum, and jejunum) was performed in some donors. Expression was determined at the mRNA and protein levels. In the liver, CES2 and CYP3A4 mRNA exhibited a postnatal surge (1 versus 2 months of age) of 2.7- and 29-fold, respectively. CYP3A4 but not CES2 mRNA in certain pediatric groups reached or even exceeded the adult level. The duodenal samples, on the other hand, showed a gene-specific expression pattern at the mRNA level: CES2 mRNA increased with age, but the opposite was true of CYP3A4 mRNA. The levels of CES2 and CYP3A4 protein, in contrast, increased with age in both liver and duodenum. The multi-sampling study demonstrated significant correlation of CES2 expression between the duodenum and jejunum; however, neither duodenal nor jejunal expression correlated with hepatic expression of CES2. These findings establish that developmental regulation occurs in a gene- and organ-dependent manner. PMID:25724353
Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.
2015-01-01
Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice and showed coefficients of variation (CV) of below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigmentation epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634
Medical Device for Automated Prick Test Reading.
Justo, Xabier; Diaz, Inaki; Gil, Jorge Juan; Gastaminza, Gabriel
2018-05-01
Allergy tests are routinely performed in most hospitals every day. However, measuring the outcomes of these tests is still a very laborious manual task, and current methods and systems lack precision and repeatability. This paper presents a novel mechatronic system that is able to scan a patient's entire arm and provide allergists with precise measurements of wheals for diagnosis. The device is based on 3-D laser technology, and specific algorithms have been developed to process the information gathered. This system aims to automate the reading of skin prick tests and bring gains in speed, accuracy, and reliability. Several experiments have been performed to evaluate the performance of the system.
Mabe, Jeffrey A.; Moring, J. Bruce
2008-01-01
The U.S. Geological Survey, in cooperation with the Houston-Galveston Area Council and the Galveston Bay Estuary Program under the authority of the Texas Commission on Environmental Quality, did a study in 2007 to assess the variation in biotic assemblages (benthic macroinvertebrate and fish communities) and stream-habitat data with sampling strategy and method in tidal segments of Highland Bayou and Marchand Bayou in Galveston County. Data were collected once in spring and once in summer 2007 from four stream sites (reaches) (short names Hitchcock, Fairwood, Bayou Dr, and Texas City) of Highland Bayou and from one reach (short name Marchand) in Marchand Bayou. Only stream-habitat data from summer 2007 samples were used for this report. Additional samples were collected at the Hitchcock, Fairwood, and Bayou Dr reaches (multisample reaches) during summer 2007 to evaluate variation resulting from sampling intensity and location. Graphical analysis of benthic macroinvertebrate community data using a multidimensional scaling technique indicates there are taxonomic differences between the spring and summer samples. Seasonal differences in communities primarily were related to decreases in the abundance of chironomids and polychaetes in summer samples. Multivariate Analysis of Similarities tests of additional summer 2007 benthic macroinvertebrate samples from Hitchcock, Fairwood, and Bayou Dr indicated significant taxonomic differences between the sampling locations at all three reaches. In general, the deepwater samples had the smallest numbers for benthic macroinvertebrate taxa richness and abundance. Graphical analysis of species-level fish data indicates no consistent seasonal difference in fish taxa across reaches. Increased seining intensity at the multisample reaches did not result in a statistically significant difference in fish communities. Increased seining resulted in some changes in taxa richness and community diversity metrics. 
Diversity increases associated with increased electrofishing intensity were relatively consistent across the two multisample electrofishing reaches (Hitchcock and Fairwood). Differences in the physical characteristics of the Highland and Marchand Bayou reaches are largely the result of the differences in channel gradient and position in the drainage network or watershed of each reach. No trees were observed on the bank adjacent to the five transects at either the Bayou Dr or Texas City reaches. Riparian vegetation at the more downstream Fairwood, Bayou Dr, and Texas City reaches was dominated by less-woody and more-herbaceous shrubs, and grasses and forbs, than at the more upstream Hitchcock and Marchand reaches. The width of the vegetation buffer was variable among all reaches and appeared to be more related to the extent of anthropogenic development in the riparian zone rather than to natural changes in the riparian buffer. Four additional transects per reach were sampled for habitat variables at Hitchcock, Fairwood, and Bayou Dr. Medians of most stream-habitat variables changed with increased sampling intensity (addition of two and four transects to the standard five transects), although none of the differences in medians were statistically significant. All habitat quality index values for the five reaches scored in the intermediate category. Increasing sampling intensity did not change the habitat quality index score for any of the reaches.
Automated branching pattern report generation for laparoscopic surgery assistance
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Matsuzaki, Tetsuro; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku
2015-05-01
This paper presents a method for generating branching pattern reports of abdominal blood vessels for laparoscopic gastrectomy. In gastrectomy, it is very important to understand the branching structure of the abdominal arteries and veins that feed and drain specific abdominal organs, including the stomach, the liver, and the pancreas. In current clinical practice, a surgeon creates a diagnostic report of the patient's anatomy; this report summarizes the branching patterns of the blood vessels related to the stomach, and the surgeon then decides on the actual operative procedure. This paper shows an automated method to generate a branching pattern report for abdominal blood vessels based on automated anatomical labeling. The report contains a 3D rendering showing the important blood vessels and descriptions of the branching patterns of each vessel. We have applied this method to fifty cases of 3D abdominal CT scans and confirmed that the proposed method can automatically generate branching pattern reports of the abdominal arteries.
Tiehuis, A M; Vincken, K L; Mali, W P T M; Kappelle, L J; Anbeek, P; Algra, A; Biessels, G J
2008-01-01
A reliable scoring method for ischemic cerebral white matter hyperintensities (WMH) will help to clarify the causes and consequences of these brain lesions. We compared an automated and two visual WMH scoring methods in their relations with age and cognitive function. MRI of the brain was performed on 154 participants of the Utrecht Diabetic Encephalopathy Study. WMH volumes were obtained with an automated segmentation method. Visual rating of deep and periventricular WMH (DWMH and PWMH) was performed with the Scheltens scale and the Rotterdam Scan Study (RSS) scale, respectively. Cognition was assessed with a battery of 11 tests. Within the whole study group, the association with age was most evident for the automated measured WMH volume (beta = 0.43, 95% CI = 0.29-0.57). With regard to cognition, automated measured WMH volume and Scheltens DWMH were significantly associated with information processing speed (beta = -0.22, 95% CI = -0.40 to -0.06; beta = -0.26, 95% CI = -0.42 to -0.10), whereas RSS PWMH were associated with attention and executive function (beta = -0.19, 95% CI = -0.36 to -0.02). Measurements of WMH with an automated quantitative segmentation method are comparable with visual rating scales and highly suitable for use in future studies to assess the relationship between WMH and subtle impairments in cognitive function. (c) 2007 S. Karger AG, Basel.
Scanning X-ray diffraction on cardiac tissue: automatized data analysis and processing.
Nicolas, Jan David; Bernhardt, Marten; Markus, Andrea; Alves, Frauke; Burghammer, Manfred; Salditt, Tim
2017-11-01
A scanning X-ray diffraction study of cardiac tissue has been performed, covering the entire cross section of a mouse heart slice. To this end, moderate focusing by compound refractive lenses to micrometer spot size, continuous scanning, data acquisition by a fast single-photon-counting pixel detector, and fully automated analysis scripts have been combined. It was shown that a surprising amount of structural data can be harvested from such a scan by evaluating the local scattering intensity, the interfilament spacing of the muscle tissue, the filament orientation, and the degree of anisotropy. The workflow of data analysis is described, and a data analysis toolbox with example data is provided for general use. Since many cardiomyopathies affect the structural integrity of the sarcomere, the contractile unit of cardiac muscle cells, the present study can easily be extended to characterize tissue from a diseased heart.
Multi-modality 3D breast imaging with X-Ray tomosynthesis and automated ultrasound.
Sinha, Sumedha P; Roubidoux, Marilyn A; Helvie, Mark A; Nees, Alexis V; Goodsitt, Mitchell M; LeCarpentier, Gerald L; Fowlkes, J Brian; Chalek, Carl L; Carson, Paul L
2007-01-01
This study evaluated the utility of 3D automated ultrasound in conjunction with 3D digital X-ray tomosynthesis for breast cancer detection and assessment, to better localize and characterize lesions in the breast. Tomosynthesis image volumes and automated ultrasound image volumes were acquired in the same geometry and in the same view for 27 patients. Three MQSA-certified radiologists independently reviewed the image volumes, visually correlating the images from the two modalities with in-house software. More sophisticated software was used on a smaller set of 10 cases, which enabled the radiologist to draw a 3D box around the suspicious lesion in one image set and isolate an anatomically correlated, similarly boxed region in the other modality's image set. In the primary study, correlation was found to be moderately useful to the readers. In the additional study, using the improved software, the median usefulness rating increased and confidence in localizing and identifying the suspicious mass increased in more than half the cases. As automated scanning and reading software techniques advance, superior results are expected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobyshev, A.; Lamore, D.; Demar, P.
2004-12-01
In a large campus network, such as Fermilab's, with tens of thousands of nodes, scanning initiated from either outside or within the campus network raises security concerns. This scanning may have a very serious impact on network performance and even disrupt the normal operation of many services. In this paper we introduce a system for detecting and automatically blocking the excessive traffic produced by various kinds of scanning, DoS attacks, and virus-infected computers. The system, called AutoBlocker, is a distributed computing system based on quasi-real-time analysis of network flow data collected from the border router and core switches. AutoBlocker also has an interface to accept alerts from IDS systems (e.g., BRO, SNORT) that are based on other technologies. The system has multiple configurable alert levels for the detection of anomalous behavior and configurable trigger criteria for automated blocking of scans at the core or border routers. It has been in use at Fermilab for about 2 years and has become a very valuable tool to curtail scan activity within the Fermilab campus network.
Impact of board-marker accuracy on lumber yield
Urs Buehlmann; R. Edward Thomas
2003-01-01
The production of wooden furniture parts, mouldings, and flooring requires the removal of unacceptable character marks (also called "defects") such as holes, rot, knots, etc., from boards. The majority of the wood processing industry manually identifies such unusable areas and marks them with fluorescent crayons. Automated saws scan these marks and computers...
Vatican Library Automates for the 21st Century.
ERIC Educational Resources Information Center
Carter, Thomas L.
1994-01-01
Because of space and staff constraints, the Vatican Library can issue only 2,000 reader cards a year. Describes IBM's Vatican Library Project: converting the library catalog records (prior to 1985) into machine readable form and digitally scanning 20,000 manuscript pages, print pages, and art works in gray scale and color, creating a database…
Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra
2017-05-01
Glaucoma is an ocular disorder caused by increased intraocular fluid pressure. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians, so there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we present a new methodology for the automated diagnosis of glaucoma from digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. The extracted features are ranked with a t-value-based feature selection algorithm and then used to classify normal and glaucoma images with a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross validation, respectively.
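The t-value feature ranking mentioned in this abstract can be read as a two-sample t-statistic computed per feature; the sketch below is an illustrative interpretation under that assumption (a Welch-style statistic with hypothetical data), not the authors' exact implementation:

```python
import math

def t_value(class_a, class_b):
    """Welch-style t-statistic: larger values mean the feature separates
    the two classes (e.g. normal vs. glaucoma) more strongly."""
    na, nb = len(class_a), len(class_b)
    ma, mb = sum(class_a) / na, sum(class_b) / nb
    va = sum((x - ma) ** 2 for x in class_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in class_b) / (nb - 1)
    return abs(ma - mb) / math.sqrt(va / na + vb / nb)

# A well-separated feature should rank above an overlapping one.
separated = t_value([1.0, 1.1, 0.9, 1.0], [5.0, 5.1, 4.9, 5.0])
overlapping = t_value([1.0, 2.0, 0.0, 1.0], [1.5, 2.5, 0.5, 1.5])
print(separated > overlapping)  # True
```

Features would be sorted by this score and the top-ranked ones passed to the classifier.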
An Automated Road Roughness Detection from Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Kumar, P.; Angelats, E.
2017-05-01
Rough roads influence the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located in order to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect the roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.
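The Otsu thresholding step used here picks intensity cut-offs that maximize between-class variance of the histogram. A minimal single-level sketch on synthetic intensity data is shown below (the paper applies the multi-level generalization to an interpolated raster; the data here are hypothetical):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Single-level Otsu: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                # cumulative class-0 probability
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)  # empty-class bins -> 0
    return centers[np.argmax(var_between)]

rng = np.random.default_rng(0)
smooth = rng.normal(40, 5, 5000)     # low-intensity "smooth surface" returns
rough = rng.normal(180, 10, 1000)    # high-intensity candidate roughness
t = otsu_threshold(np.concatenate([smooth, rough]))
```

The resulting threshold falls between the two intensity modes, separating candidate roughness pixels from the background surface.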
Automated In-Situ Laser Scanner for Monitoring Forest Leaf Area Index
Culvenor, Darius S.; Newnham, Glenn J.; Mellor, Andrew; Sims, Neil C.; Haywood, Andrew
2014-01-01
An automated laser rangefinding instrument was developed to characterize overstorey and understorey vegetation dynamics over time. Design criteria were based on information needs within the statewide forest monitoring program in Victoria, Australia. The ground-based monitoring instrument captures the key vegetation structural information needed to overcome ambiguity in the estimation of forest Leaf Area Index (LAI) from satellite sensors. The scanning lidar instrument was developed primarily from low-cost, commercially accessible components. While the 635 nm wavelength lidar is not ideally suited to vegetation studies, there was an acceptable trade-off between cost and performance. Tests demonstrated reliable range estimates to live foliage up to a distance of 60 m during night-time operation. Given the instrument's zenith scan angle of 57.5 degrees, the instrument is an effective tool for monitoring LAI in forest canopies up to a height of 30 m. An 18 month field trial of three co-located instruments showed consistent seasonal trends, a mean LAI of between 1.32 and 1.56, and a temporal LAI variation of 8 to 17% relative to the mean. PMID:25196006
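The 57.5-degree zenith angle is the so-called "hinge" angle, where the foliage projection coefficient G is close to 0.5 for almost any leaf-angle distribution, so canopy gap fraction inverts to LAI via Beer's law. A minimal sketch of that inversion follows; it illustrates the general principle only, since the instrument's actual processing chain is not described in the abstract:

```python
import math

def lai_from_gap_fraction(gap_fraction, zenith_deg=57.5):
    """Beer's-law LAI inversion at the ~57.5 deg 'hinge' zenith angle,
    where the projection coefficient G is approximately 0.5."""
    theta = math.radians(zenith_deg)
    G = 0.5  # near-constant at the hinge angle, independent of leaf angle
    return -math.cos(theta) * math.log(gap_fraction) / G

lai = lai_from_gap_fraction(0.3)  # e.g. 30% of lidar shots pass through the canopy
```

A gap fraction of 0.3 yields an LAI of about 1.3, in the range reported in the field trial.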
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
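Dice's coefficient, used here to compare automated cone detections against an expert grader, can be sketched as follows on sets of matched detections (the coordinates are hypothetical; the paper's matching of detections to grader marks is more involved):

```python
def dice(a, b):
    """Dice's coefficient between two sets of detections, e.g. cone centers
    already matched to a common grid: 2|A & B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

algo   = {(10, 12), (40, 33), (55, 80), (70, 21)}  # algorithm detections
grader = {(10, 12), (40, 33), (55, 80), (90, 90)}  # expert grader marks
score = dice(algo, grader)
print(score)  # 0.75
```

Three of four detections agree, so the coefficient is 2*3 / (4+4) = 0.75; a value of 1.0 would indicate perfect agreement.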
Missert, Nancy; Kotula, Paul G.; Rye, Michael; ...
2017-02-15
We used a focused ion beam to obtain cross-sectional specimens from both magnetic multilayer and Nb/Al-AlOx/Nb Josephson junction devices for characterization by scanning transmission electron microscopy (STEM) and energy dispersive X-ray spectroscopy (EDX). An automated multivariate statistical analysis of the EDX spectral images produced chemically unique component images of individual layers within the multilayer structures. STEM imaging elucidated distinct variations in film morphology, interface quality, and/or etch artifacts that could be correlated to magnetic and/or electrical properties measured on the same devices.
Liberto, Erica; Cagliero, Cecilia; Cordero, Chiara; Rubiolo, Patrizia; Bicchi, Carlo; Sgorbini, Barbara
2017-03-17
Recent technological advances in dynamic headspace sampling (D-HS) and the possibility of automating this sampling method have led to a marked improvement in its performance, a strong renewal of interest in it, and an extension of its fields of application. The introduction of in-parallel and in-series automatic multi-sampling and of new trapping materials, plus the possibility of designing an effective sampling process by correctly applying breakthrough volume theory, has made profiling more representative and has enhanced selectivity and flexibility, also offering the possibility of fractionated enrichment, in particular for high-volatility compounds. This study deals with the ability of fractionated D-HS to produce a sample representative of the volatile fraction of solid or liquid matrices. Experiments were carried out on a model equimolar (0.5 mM) EtOH/water solution comprising 16 compounds with different polarities and volatilities, structures ranging from C5 to C15, and vapor pressures from 4.15 kPa (2,3-pentanedione) to 0.004 kPa (t-β-caryophyllene), and on an Arabica roasted coffee powder. Three trapping materials were considered: Tenax TA™ (TX), polydimethylsiloxane foam (PDMS), and a three-carbon cartridge Carbopack B/Carbopack C/Carbosieve S-III™ (CBS). The influence of several parameters on the design of successful fractionated D-HS sampling, including the physical and chemical characteristics of the analytes and matrix, trapping material, analyte breakthrough, purge gas volumes, and sampling temperature, was investigated. The results show that, by appropriately choosing sampling conditions, fractionated D-HS sampling based on component volatility can produce a fast and representative profile of the matrix volatile fraction, with total recoveries comparable to those obtained by full-evaporation D-HS for liquid samples, and very high concentration factors for solid samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Electrochemical Biosensors for Rapid Detection of Foodborne Salmonella: A Critical Overview
Cinti, Stefano; Volpe, Giulia; Piermarini, Silvia; Delibato, Elisabetta; Palleschi, Giuseppe
2017-01-01
Salmonella has been the most common and primary cause of food poisoning in many countries for over 100 years. Its detection is still primarily based on traditional microbiological culture methods, which are labor-intensive, extremely time consuming, and not suitable for testing large numbers of samples. Accordingly, great efforts to develop rapid, sensitive, and specific methods that are easy to use and suitable for multi-sample analysis have been made and continue to be made. Biosensor-based technology has all the potential to meet these requirements. In this paper, we review the features of the electrochemical immunosensors, genosensors, aptasensors, and phagosensors developed in the last five years for Salmonella detection, focusing on the critical aspects of their application in food analysis. PMID:28820458
A robust variable sampling time BLDC motor control design based upon μ-synthesis.
Hung, Chung-Wen; Yen, Jia-Yush
2013-01-01
Variable sampling rate systems are encountered in many applications. When the speed information is derived from position marks along the trajectory, one has a speed-dependent sampling rate system. Conventional fixed or multisampling rate system theory may not work in these cases, because the system dynamics include uncertainties that result from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust-performance controller. The implementation on a BLDC motor demonstrates the effectiveness of the design approach.
A Robust Variable Sampling Time BLDC Motor Control Design Based upon μ-Synthesis
Yen, Jia-Yush
2013-01-01
Variable sampling rate systems are encountered in many applications. When the speed information is derived from position marks along the trajectory, one has a speed-dependent sampling rate system. Conventional fixed or multisampling rate system theory may not work in these cases, because the system dynamics include uncertainties that result from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust-performance controller. The implementation on a BLDC motor demonstrates the effectiveness of the design approach. PMID:24327804
NASA Astrophysics Data System (ADS)
Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay
2018-03-01
Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve a high dynamic range video with stop motion. This technology improves low light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company's 65 nm 1.1 μm pixel technology 1 megapixel test chip array and is compared with a traditional 4 × oversampling technique using full charge transfer to show low light SNR superiority of the presented technology.
Myers, Taryn A; Crowther, Janis H
2007-09-01
Theory and research suggest that sociocultural pressures, thin-ideal internalization, and self-objectification are associated with body dissatisfaction, while feminist beliefs may serve a protective function. This research examined thin-ideal internalization and self-objectification as mediators and feminist beliefs as a moderator in the relationship between sociocultural pressures to meet the thin-ideal and body dissatisfaction. Female undergraduate volunteers (N=195) completed self-report measures assessing sociocultural influences, feminist beliefs, thin-ideal internalization, self-objectification, and body dissatisfaction. Multisample structural equation modeling showed that feminist beliefs moderate the relationship between media awareness and thin-ideal internalization, but not the relationship between social influence and thin-ideal internalization. Research and clinical implications of these findings are discussed.
Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurugol, Sila, E-mail: sila.kurugol@childrens.harvard.edu; Come, Carolyn E.; Diaz, Alejandro A.
Purpose: The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. Methods: The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. Results: The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging.
Conclusions: The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.
Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions.
Kurugol, Sila; Come, Carolyn E; Diaz, Alejandro A; Ross, James C; Kinney, Greg L; Black-Shinn, Jennifer L; Hokanson, John E; Budoff, Matthew J; Washko, George R; San Jose Estepar, Raul
2015-09-01
The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.
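The closest-point mean error reported as a segmentation agreement measure can be sketched as below; this is a brute-force, one-directional variant suitable only for small point sets, and the coordinates are hypothetical (the paper does not specify whether a symmetric variant was used):

```python
import numpy as np

def closest_point_mean_error(surface_a, surface_b):
    """Mean distance from each boundary point of segmentation A to the
    nearest boundary point of segmentation B (one-directional variant)."""
    a = np.asarray(surface_a, float)
    b = np.asarray(surface_b, float)
    # Pairwise distances between every point of A and every point of B.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Two parallel "boundaries" 0.5 mm apart -> mean error 0.5 mm.
err = closest_point_mean_error([[0, 0], [1, 0], [2, 0]],
                               [[0, 0.5], [1, 0.5], [2, 0.5]])
print(err)  # 0.5
```

For full CT surfaces, a KD-tree nearest-neighbor query would replace the quadratic pairwise-distance matrix.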
Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions
Kurugol, Sila; Come, Carolyn E.; Diaz, Alejandro A.; Ross, James C.; Kinney, Greg L.; Black-Shinn, Jennifer L.; Hokanson, John E.; Budoff, Matthew J.; Washko, George R.; San Jose Estepar, Raul
2015-01-01
Purpose: The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. Methods: The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. Results: The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. 
Conclusions: The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers. PMID:26328995
Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo
2008-01-01
Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634
Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo
2008-07-16
Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.
Processing of CT images for analysis of diffuse lung disease in the lung tissue research consortium
NASA Astrophysics Data System (ADS)
Karwoski, Ronald A.; Bartholmai, Brian; Zavaletta, Vanessa A.; Holmes, David; Robb, Richard A.
2008-03-01
The goal of the Lung Tissue Resource Consortium (LTRC) is to improve the management of diffuse lung diseases through a better understanding of the biology of Chronic Obstructive Pulmonary Disease (COPD) and fibrotic interstitial lung disease (ILD) including Idiopathic Pulmonary Fibrosis (IPF). Participants are subjected to a battery of tests including tissue biopsies, physiologic testing, clinical history reporting, and CT scanning of the chest. The LTRC is a repository from which investigators can request tissue specimens and test results as well as semi-quantitative radiology reports, pathology reports, and automated quantitative image analysis results from the CT scan data performed by the LTRC core laboratories. The LTRC Radiology Core Laboratory (RCL), in conjunction with the Biomedical Imaging Resource (BIR), has developed novel processing methods for comprehensive characterization of pulmonary processes on volumetric high-resolution CT scans to quantify how these diseases manifest in radiographic images. Specifically, the RCL has implemented a semi-automated method for segmenting the anatomical regions of the lungs and airways. In these anatomic regions, automated quantification of pathologic features of disease including emphysema volumes and tissue classification are performed using both threshold techniques and advanced texture measures to determine the extent and location of emphysema, ground glass opacities, "honeycombing" (HC) and "irregular linear" or "reticular" pulmonary infiltrates and normal lung. Wall thickness measurements of the trachea and its branches to the 3rd and limited 4th order are also computed. The methods for processing, segmentation and quantification are described. The results are reviewed and verified by an expert radiologist following processing and stored in the public LTRC database for use by pulmonary researchers.
To date, over 1200 CT scans have been processed by the RCL and the LTRC project is on target for recruitment of the 2200 patients with 1800 CT scans in the repository for the 5-year effort. Ongoing analysis of the results in the LTRC database by the LTRC participating institutions and outside investigators are underway to look at the clinical and physiological significance of the imaging features of these diseases and correlate these findings with quality of life and other important prognostic indicators of severity. In the future, the quantitative measures of disease may have greater utility by showing correlation with prognosis, disease severity and other physiological parameters. These imaging features may provide non-invasive alternative endpoints or surrogate markers to alleviate the need for tissue biopsy or provide an accurate means to monitor rate of disease progression or response to therapy.
Cardiac imaging: working towards fully-automated machine analysis & interpretation.
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-03-01
Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.
Automated Microflow NMR: Routine Analysis of Five-Microliter Samples
Jansma, Ariane; Chuan, Tiffany; Geierstanger, Bernhard H.; Albrecht, Robert W.; Olson, Dean L.; Peck, Timothy L.
2006-01-01
A microflow CapNMR probe double-tuned for 1H and 13C was installed on a 400-MHz NMR spectrometer and interfaced to an automated liquid handler. Individual samples dissolved in DMSO-d6 are submitted for NMR analysis in vials containing as little as 10 μL of sample. Sets of samples are submitted in a low-volume 384-well plate. Of the 10 μL of sample per well, as with vials, 5 μL is injected into the microflow NMR probe for analysis. For quality control of chemical libraries, 1D NMR spectra are acquired under full automation from 384-well plates on as many as 130 compounds within 24 h using 128 scans per spectrum and a sample-to-sample cycle time of ∼11 min. Because of the low volume requirements and high mass sensitivity of the microflow NMR system, 30 nmol of a typical small molecule is sufficient to obtain high-quality, well-resolved, 1D proton or 2D COSY NMR spectra in ∼6 or 20 min of data acquisition time per experiment, respectively. Implementation of pulse programs with automated solvent peak identification and suppression allow for reliable data collection, even for samples submitted in fully protonated DMSO. The automated microflow NMR system is controlled and monitored using web-based software. PMID:16194121
Boers, A M; Marquering, H A; Jochem, J J; Besselink, N J; Berkhemer, O A; van der Lugt, A; Beenen, L F; Majoie, C B
2013-08-01
Cerebral infarct volume (CIV) as observed on follow-up CT is an important radiologic outcome measure of the effectiveness of treatment in patients with acute ischemic stroke. However, manual measurement of CIV is time-consuming and operator-dependent. The purpose of this study was to develop and evaluate a robust automated measurement of the CIV. The CIV in early follow-up CT images of 34 consecutive patients with acute ischemic stroke was segmented with an automated intensity-based region-growing algorithm, which includes partial volume effect correction near the skull, midline determination, and ventricle and hemorrhage exclusion. Two observers manually delineated the CIV. Interobserver variability of the manual assessments and the accuracy of the automated method were evaluated by using the Pearson correlation, Bland-Altman analysis, and Dice coefficients. Accuracy was defined as the correlation with the manual assessment as the reference standard. The Pearson correlation for the automated method compared with the reference standard was similar to the interobserver correlation (R = 0.98). The accuracy of the automated method was excellent, with a mean difference of 0.5 mL and limits of agreement of -38.0 to 39.1 mL, which were more consistent than the interobserver variability of the 2 observers (-40.9 to 44.1 mL). However, the Dice coefficients were higher for the manual delineation. The automated method showed strong correlation and accuracy with the manual reference measurement. This approach has the potential to become the standard for assessing infarct volume as a secondary outcome measure in evaluating the effectiveness of treatment.
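The agreement statistics used in this and several of the following studies (Dice overlap, Bland-Altman mean difference with 95% limits of agreement) are compact enough to sketch directly. The following is a minimal NumPy illustration of the two statistics, not code from the study itself:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / total

def bland_altman(auto_ml, manual_ml):
    """Mean difference and 95% limits of agreement between paired volumes,
    using the usual Bland-Altman convention (mean difference +/- 1.96 SD)."""
    d = np.asarray(auto_ml, dtype=float) - np.asarray(manual_ml, dtype=float)
    mean_diff = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # sample SD of the paired differences
    return mean_diff, mean_diff - half_width, mean_diff + half_width
```

The limits-of-agreement interval is the convention behind intervals such as the -38.0 to 39.1 mL range reported above.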
Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P
2014-11-25
In the prognosis and treatment of adrenal cortical carcinoma (ACC), selection of the most proliferatively active areas (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 labelling index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e. levels of Ki67 expression within a given ACC, lack of uniformity and reproducibility in the method of quantification of Ki67 LI may confound an accurate assessment. We have implemented an open source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of Ki67 LI. ASH utilizes the NanoZoomer Digital Pathology Image (NDPI) splitter to convert the NDPI-format digital slide scanned from the Hamamatsu instrument into a conventional tiff or jpeg image, followed by automated segmentation and an adaptive step-finding hotspot detection algorithm. Quantitative hotspot ranking is provided by functionality from the open source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with concomitant quantitative values based on whole-slide ranking. We have implemented open source automated detection and quantitative ranking of hotspots to support histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To provide the wider community with easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH, which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.
High throughput optical scanner
Basiji, David A.; van den Engh, Gerrit J.
2001-01-01
A scanning apparatus is provided to obtain automated, rapid and sensitive scanning of substrate fluorescence, optical density or phosphorescence. The scanner uses a constant path length optical train, which enables the combination of a moving beam for high speed scanning with phase-sensitive detection for noise reduction, comprising a light source, a scanning mirror to receive light from the light source and sweep it across a steering mirror, a steering mirror to receive light from the scanning mirror and reflect it to the substrate, whereby it is swept across the substrate along a scan arc, and a photodetector to receive emitted or scattered light from the substrate, wherein the optical path length from the light source to the photodetector is substantially constant throughout the sweep across the substrate. The optical train can further include a waveguide or mirror to collect emitted or scattered light from the substrate and direct it to the photodetector. For phase-sensitive detection the light source is intensity modulated and the detector is connected to phase-sensitive detection electronics. A scanner using a substrate translator is also provided. For two dimensional imaging the substrate is translated in one dimension while the scanning mirror scans the beam in a second dimension. For a high throughput scanner, stacks of substrates are loaded onto a conveyor belt from a tray feeder.
Smith, Travis B.; Parker, Maria; Steinkamp, Peter N.; Weleber, Richard G.; Smith, Ning; Wilson, David J.
2016-01-01
Purpose To assess relationships between structural and functional biomarkers, including new topographic measures of visual field sensitivity, in patients with autosomal dominant retinitis pigmentosa. Methods Spectral domain optical coherence tomography line scans and hill of vision (HOV) sensitivity surfaces from full-field standard automated perimetry were semi-automatically aligned for 60 eyes of 35 patients. Structural biomarkers were extracted from outer retina b-scans along horizontal and vertical midlines. Functional biomarkers were extracted from local sensitivity profiles along the b-scans and from the full visual field. These included topographic measures of functional transition such as the contour of most rapid sensitivity decline around the HOV, herein called HOV slope for convenience. Biomarker relationships were assessed pairwise by coefficients of determination (R2) from mixed-effects analysis with automatic model selection. Results Structure-function relationships were accurately modeled (conditional R2>0.8 in most cases). The best-fit relationship models and correlation patterns for horizontally oriented biomarkers were different than vertically oriented ones. The structural biomarker with the largest number of significant functional correlates was the ellipsoid zone (EZ) width, followed by the total photoreceptor layer thickness. The strongest correlation observed was between EZ width and HOV slope distance (marginal R2 = 0.85, p<10−10). The mean sensitivity defect at the EZ edge was 7.6 dB. Among all functional biomarkers, the HOV slope mean value, HOV slope mean distance, and maximum sensitivity along the b-scan had the largest number of significant structural correlates. Conclusions Topographic slope metrics show promise as functional biomarkers relevant to the transition zone. EZ width is strongly associated with the location of most rapid HOV decline. PMID:26845445
Lee, Hyungwoo; Kang, Kyung Eun; Chung, Hyewon; Kim, Hyung Chan
2018-04-12
To evaluate an automated segmentation algorithm with a convolutional neural network (CNN) to quantify and detect intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), and subretinal hyperreflective material (SHRM) through analyses of spectral domain optical coherence tomography (SD-OCT) images from patients with neovascular age-related macular degeneration (nAMD). Reliability and validity analysis of a diagnostic tool. We constructed a dataset including 930 B-scans from 93 eyes of 93 patients with nAMD. A CNN-based deep neural network was trained using 11550 augmented images derived from 550 B-scans. The performance of the trained network was evaluated using a validation set including 140 B-scans and a test set of 240 B-scans. The Dice coefficient, positive predictive value (PPV), sensitivity, relative area difference (RAD), and intraclass correlation coefficient (ICC) were used to evaluate segmentation and detection performance. Good agreement was observed for both segmentation and detection of lesions between the trained network and clinicians. The Dice coefficients for segmentation of IRF, SRF, SHRM, and PED were 0.78, 0.82, 0.75, and 0.80, respectively; the PPVs were 0.79, 0.80, 0.75, and 0.80, respectively; and the sensitivities were 0.77, 0.84, 0.73, and 0.81, respectively. The RADs were -4.32%, -10.29%, 4.13%, and 0.34%, respectively, and the ICCs were 0.98, 0.98, 0.97, and 0.98, respectively. All lesions were detected with high PPVs (range 0.94-0.99) and sensitivities (range 0.97-0.99). A CNN-based network provides clinicians with quantitative data regarding nAMD through automatic segmentation and detection of pathological lesions, including IRF, SRF, PED, and SHRM. Copyright © 2018 Elsevier Inc. All rights reserved.
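The per-lesion metrics reported here (Dice, PPV, sensitivity, relative area difference) all derive from the same confusion counts between predicted and reference masks. A minimal sketch, assuming the common sign convention that a positive RAD means over-segmentation (the exact convention is not stated in the abstract):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Overlap metrics between a predicted and a reference binary mask."""
    p = np.asarray(pred, dtype=bool)
    t = np.asarray(truth, dtype=bool)
    tp = np.logical_and(p, t).sum()   # lesion voxels found by both
    fp = np.logical_and(p, ~t).sum()  # predicted but not in reference
    fn = np.logical_and(~p, t).sum()  # in reference but missed
    dice = 2.0 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    # Relative area difference in percent; sign convention assumed here:
    # positive when the prediction over-segments the reference.
    rad = 100.0 * (p.sum() - t.sum()) / t.sum() if t.sum() else 0.0
    return {"dice": dice, "ppv": ppv, "sensitivity": sensitivity, "rad": rad}
```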
Future Automated Rough Mills Hinge on Vision Systems
Philip A. Araman
1996-01-01
The backbone behind major changes to present and future rough mills in dimension, furniture, cabinet or millwork facilities will be computer vision systems. Because of the wide variety of products and the quality of parts produced, the scanning systems and rough mills will vary greatly. The scanners will vary in type. For many complicated applications, multiple scanner...
Impact of Scanning Density on Measurements from Spectral Domain Optical Coherence Tomography
Keane, Pearse A.; Ouyang, Yanling; Updike, Jared F.; Walsh, Alexander C.
2010-01-01
Purpose. To investigate the relationship between B-scan density and retinal thickness measurements obtained by spectral domain optical coherence tomography (SDOCT) in eyes with retinal disease. Methods. Data were collected from 115 patients who underwent volume OCT imaging with Cirrus HD-OCT using the 512 × 128 horizontal raster protocol. Raw OCT data, including the location of the automated retinal boundaries, were exported from the Cirrus HD-OCT instrument and imported into the Doheny Image Reading Center (DIRC) OCT viewing and grading software, termed “3D-OCTOR.” For each case, retinal thickness maps similar to those produced by Cirrus HD-OCT were generated using all 128 B-scans, as well as using less dense subsets of scans, ranging from every other scan to every 16th scan. Retinal thickness measurements derived using only a subset of scans were compared to measurements using all 128 B-scans, and differences for the foveal central subfield (FCS) and total macular volume were computed. Results. The mean error in FCS retinal thickness measurement increased as the density of B-scans decreased, but the error was small (<2 μm), except at the sparsest densities evaluated. The maximum error at a density of every fourth scan (32 scans spaced 188 μm apart) was <1%. Conclusions. B-scan density in volume SDOCT acquisitions can be reduced to 32 horizontal B-scans (spaced 188 μm apart) with minimal change in calculated retinal thickness measurements. This information may be of value in design of scanning protocols for SDOCT for use in future clinical trials. PMID:19797199
Automating spectral measurements
NASA Astrophysics Data System (ADS)
Goldstein, Fred T.
2008-09-01
This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
Trahearn, Nicholas; Tsang, Yee Wah; Cree, Ian A; Snead, David; Epstein, David; Rajpoot, Nasir
2017-06-01
Automation of downstream analysis may offer many potential benefits to routine histopathology. One area of interest for automation is the scoring of multiple immunohistochemical markers to predict a patient's response to targeted therapies. Automated serial-slide analysis of this kind requires robust registration to identify common tissue regions across sections. We present an automated method for co-localized scoring of estrogen receptor and progesterone receptor (ER/PR) in breast cancer core biopsies using whole slide images. Regions of tumor in a series of fifty consecutive breast core biopsies were identified by annotation on H&E whole slide images. Sequentially cut immunohistochemically stained sections were scored manually before being digitally scanned and exported into JPEG 2000 format. A two-stage registration process was performed to identify the annotated regions of interest in the immunohistochemistry sections, which were then scored using the Allred system. Overall correlation between manual and automated scoring for ER and PR was 0.944 and 0.883, respectively, with 90% of ER and 80% of PR scores within one point of agreement. This proof-of-principle study indicates that slide registration can be used as a basis for automating downstream analysis of clinically relevant biomarkers in the majority of cases. The approach is likely to be improved by implementation of safeguarding analysis steps post-registration. © 2016 International Society for Advancement of Cytometry.
Fortune, Brad; Reynaud, Juan; Cull, Grant; Burgoyne, Claude F.; Wang, Lin
2014-01-01
Purpose To evaluate the effect of age on optic nerve axon counts, spectral-domain optical coherence tomography (SDOCT) scan quality, and peripapillary retinal nerve fiber layer thickness (RNFLT) measurements in healthy monkey eyes. Methods In total, 83 healthy rhesus monkeys were included in this study (age range: 1.2–26.7 years). Peripapillary RNFLT was measured by SDOCT. An automated algorithm was used to count 100% of the axons and measure their cross-sectional areas in postmortem optic nerve tissue samples (N = 46). Simulation experiments were done to determine the effects of optical changes on measurements of RNFLT. An objective, fully-automated method was used to measure the diameter of the major blood vessel profiles within each SDOCT B-scan. Results Peripapillary RNFLT was negatively correlated with age in cross-sectional analysis (P < 0.01). The best-fitting linear model was RNFLT(μm) = −0.40 × age(years) + 104.5 μm (R2 = 0.1, P < 0.01). Age had very little influence on optic nerve axon count; the best-fit linear model was axon count = −1364 × age(years) + 1,210,284 (R2 < 0.01, P = 0.74). Older eyes had lost the smallest-diameter axons, and/or axon diameter was increased in the optic nerves of older animals. There was an inverse correlation between age and SDOCT scan quality (R = −0.65, P < 0.0001). Simulation experiments revealed that approximately 17% of the apparent cross-sectional rate of RNFLT loss is due to reduced scan quality associated with optical changes of the aging eye; another 12% was due to thinning of the major blood vessels. Conclusions RNFLT declines by 4 μm per decade in healthy rhesus monkey eyes, approximately three times faster than the loss of optic nerve axons. Approximately one-half of this difference is explained by optical degradation of the aging eye, which reduces SDOCT scan quality, and by thinning of the major blood vessels. 
Translational Relevance Current models used to predict retinal ganglion cell losses should be reconsidered. PMID:24932430
Chen, Yasheng; Dhar, Rajat; Heitsch, Laura; Ford, Andria; Fernandez-Cadenas, Israel; Carrera, Caty; Montaner, Joan; Lin, Weili; Shen, Dinggang; An, Hongyu; Lee, Jin-Moo
2016-01-01
Although cerebral edema is a major cause of death and deterioration following hemispheric stroke, there remains no validated biomarker that captures the full spectrum of this critical complication. We recently demonstrated that reduction in intracranial cerebrospinal fluid (CSF) volume (ΔCSF) on serial computed tomography (CT) scans provides an accurate measure of cerebral edema severity, which may aid in early triaging of stroke patients for craniectomy. However, such a volumetric approach would be too cumbersome to perform manually on serial scans in a real-world setting. We developed and validated an automated technique for CSF segmentation via integration of random forest (RF) based machine learning with geodesic active contour (GAC) segmentation. The proposed RF + GAC approach was compared to conventional Hounsfield Unit (HU) thresholding and RF segmentation methods using the Dice similarity coefficient (DSC) and the correlation of volumetric measurements, with manual delineation serving as the ground truth. CSF spaces were outlined on scans performed at baseline (<6 h after stroke onset) and early follow-up (FU) (closest to 24 h) in 38 acute ischemic stroke patients. RF performed significantly better than optimized HU thresholding (p < 10−4 at baseline and p < 10−5 at FU), and RF + GAC performed significantly better than RF (p < 10−3 at baseline and p < 10−5 at FU). Pearson correlation coefficients between the automatically detected ΔCSF and the ground truth were r = 0.178 (p = 0.285), r = 0.876 (p < 10−6), and r = 0.879 (p < 10−6) for thresholding, RF, and RF + GAC, respectively, with a slope closer to the line of identity for RF + GAC. When we applied the algorithm trained on images from one stroke center to segment CTs from another center, similar findings held. In conclusion, we have developed and validated an accurate automated approach to segment CSF and calculate its shifts on serial CT scans. 
This algorithm will allow us to efficiently and accurately measure the evolution of cerebral edema in future studies including large multi-site patient populations.
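The validation step reduces to computing each patient's ΔCSF (baseline minus follow-up CSF volume, following the abstract's definition of "reduction") and correlating the automated values against the manual ground truth with Pearson's r. A minimal sketch of that comparison:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def delta_csf(baseline_ml, followup_ml):
    """Per-patient reduction in CSF volume from baseline to follow-up CT."""
    return [b - f for b, f in zip(baseline_ml, followup_ml)]
```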
Howat, William J; Blows, Fiona M; Provenzano, Elena; Brook, Mark N; Morris, Lorna; Gazinska, Patrycja; Johnson, Nicola; McDuffus, Leigh‐Anne; Miller, Jodi; Sawyer, Elinor J; Pinder, Sarah; van Deurzen, Carolien H M; Jones, Louise; Sironen, Reijo; Visscher, Daniel; Caldas, Carlos; Daley, Frances; Coulson, Penny; Broeks, Annegien; Sanders, Joyce; Wesseling, Jelle; Nevanlinna, Heli; Fagerholm, Rainer; Blomqvist, Carl; Heikkilä, Päivi; Ali, H Raza; Dawson, Sarah‐Jane; Figueroa, Jonine; Lissowska, Jolanta; Brinton, Louise; Mannermaa, Arto; Kataja, Vesa; Kosma, Veli‐Matti; Cox, Angela; Brock, Ian W; Cross, Simon S; Reed, Malcolm W; Couch, Fergus J; Olson, Janet E; Devillee, Peter; Mesker, Wilma E; Seyaneve, Caroline M; Hollestelle, Antoinette; Benitez, Javier; Perez, Jose Ignacio Arias; Menéndez, Primitiva; Bolla, Manjeet K; Easton, Douglas F; Schmidt, Marjanka K; Pharoah, Paul D; Sherman, Mark E
2014-01-01
Abstract Breast cancer risk factors and clinical outcomes vary by tumour marker expression. However, individual studies often lack the power required to assess these relationships, and large‐scale analyses are limited by the need for high throughput, standardized scoring methods. To address these limitations, we assessed whether automated image analysis of immunohistochemically stained tissue microarrays can permit rapid, standardized scoring of tumour markers from multiple studies. Tissue microarray sections prepared in nine studies containing 20 263 cores from 8267 breast cancers stained for two nuclear (oestrogen receptor, progesterone receptor), two membranous (human epidermal growth factor receptor 2 and epidermal growth factor receptor) and one cytoplasmic (cytokeratin 5/6) marker were scanned as digital images. Automated algorithms were used to score markers in tumour cells using the Ariol system. We compared automated scores against visual reads, and their associations with breast cancer survival. Approximately 65–70% of tissue microarray cores were satisfactory for scoring. Among satisfactory cores, agreement between dichotomous automated and visual scores was highest for oestrogen receptor (Kappa = 0.76), followed by human epidermal growth factor receptor 2 (Kappa = 0.69) and progesterone receptor (Kappa = 0.67). Automated quantitative scores for these markers were associated with hazard ratios for breast cancer mortality in a dose‐response manner. Considering visual scores of epidermal growth factor receptor or cytokeratin 5/6 as the reference, automated scoring achieved excellent negative predictive value (96–98%), but yielded many false positives (positive predictive value = 30–32%). For all markers, we observed substantial heterogeneity in automated scoring performance across tissue microarrays. 
Automated analysis is a potentially useful tool for large-scale, quantitative scoring of immunohistochemically stained tissue microarrays available in consortia. However, continued optimization, rigorous marker-specific quality control measures, and standardization of tissue microarray designs, staining, and scoring protocols are needed to enhance results. PMID:27499890
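Agreement between dichotomous automated and visual scores is summarized above with Cohen's kappa; a minimal implementation of that statistic, for illustration only:

```python
def cohens_kappa(scores_a, scores_b):
    """Chance-corrected agreement (Cohen's kappa) between two dichotomous
    scorings of the same cores, e.g. automated vs. visual marker status."""
    assert len(scores_a) == len(scores_b) and scores_a
    n = len(scores_a)
    # observed agreement: fraction of cores scored identically
    observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    # expected agreement under independent scoring with the same marginals
    categories = set(scores_a) | set(scores_b)
    expected = sum((scores_a.count(c) / n) * (scores_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```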
Haslbeck, Andreas; Zhang, Bo
2017-09-01
The aim of this study was to analyze pilots' visual scanning in a manual approach and landing scenario. Manual flying skills decline with the increasing use of automation, and this decline particularly affects long-haul pilots, who have only few opportunities to practice these skills. Airline pilots representing different levels of practice (short-haul vs. long-haul) had to perform a manual raw-data precision approach while their visual scanning was recorded by an eye-tracking device. The analysis of gaze patterns, which are based on predominant saccades, revealed one main group of saccades among long-haul pilots. In contrast, short-haul pilots showed more balanced scanning using two different groups of saccades. Short-haul pilots generally demonstrated better manual flight performance, and within this group one type of scan pattern was found to facilitate the manual landing task more. Long-haul pilots tend to utilize visual scanning behaviors that are inappropriate for the manual ILS landing task. This lack of skills needs to be addressed by specific training and more practice. Copyright © 2017 Elsevier Ltd. All rights reserved.
Online fully automated three-dimensional surface reconstruction of unknown objects
NASA Astrophysics Data System (ADS)
Khalfaoui, Souhaiel; Aigueperse, Antoine; Fougerolle, Yohan; Seulin, Ralph; Fofi, David
2015-04-01
This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. The advantage of our procedure is that it is generic: it is not tied to a specific scanning technology, nor does it depend on the methods used to perform the tasks associated with each elementary process. The comparison of results between manual and automatic scanning of complex objects shows that our digitization strategy is very efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.
Lommen, Arjen
2009-04-15
Hyphenated full-scan MS technology creates large amounts of data. A versatile, easy-to-handle automation tool that aids in the data analysis is very important in handling such a data stream. MetAlign software, as described in this manuscript, handles a broad range of accurate-mass and nominal-mass GC/MS and LC/MS data. It is capable of automatic format conversions, accurate mass calculations, baseline corrections, peak-picking, saturation and mass-peak artifact filtering, as well as alignment of up to 1000 data sets. A 100- to 1000-fold data reduction is achieved. MetAlign software output is compatible with most multivariate statistics programs.
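MetAlign's actual alignment algorithm is considerably more elaborate, but the core idea of matching peaks across data sets within mass and retention-time tolerances can be sketched as a toy greedy matcher (the tolerances below are illustrative, not MetAlign defaults):

```python
def align_peaks(run_a, run_b, mz_tol=0.01, rt_tol=0.2):
    """Greedily match (m/z, retention-time) peaks between two runs.
    A toy stand-in for the alignment step: each peak in run_a is paired
    with the closest unused peak in run_b that falls within tolerance."""
    matches, used = [], set()
    for mz_a, rt_a in run_a:
        best_j, best_d = None, None
        for j, (mz_b, rt_b) in enumerate(run_b):
            if j in used:
                continue
            if abs(mz_a - mz_b) <= mz_tol and abs(rt_a - rt_b) <= rt_tol:
                d = abs(mz_a - mz_b) + abs(rt_a - rt_b)  # simple closeness score
                if best_d is None or d < best_d:
                    best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            matches.append(((mz_a, rt_a), run_b[best_j]))
    return matches
```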
NASA Technical Reports Server (NTRS)
Panek, Joseph W.
2001-01-01
The proper operation of the Electronically Scanned Pressure (ESP) System is critical to accomplishing the following goals: acquisition of highly accurate pressure data for the development of aerospace and commercial aviation systems, and continuous confirmation of data quality to avoid costly, unplanned repeat wind tunnel or turbine testing. Standard automated setup and checkout routines are necessary to accomplish these goals. Data verification and integrity checks occur at three distinct stages: pretest pressure tubing and system checkouts, daily system validation, and in-test confirmation of critical system parameters. This paper gives an overview of the existing hardware, software, and methods used to validate data integrity.
Lian, Yanyun; Song, Zhijian
2014-01-01
Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, manual tumor segmentation, as commonly used in the clinic, is time-consuming and challenging, and none of the existing automated methods is highly robust, reliable, and efficient in clinical application. We have developed an accurate, automated tumor segmentation method that provides reproducible, objective results close to those of manual segmentation. Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, vertical and then horizontal sliding windows were applied: two windows in the left and right parts of the brain image move simultaneously, pixel by pixel, while the correlation coefficient between the two windows is calculated. The pair of windows with the minimal correlation coefficient is selected; the window with the larger average gray value gives the location of the tumor, and its pixel with the largest gray value is taken as the tumor locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square of side length 10 pixels centered at the locating point, and threshold segmentation and morphological operations were used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average rate of correct location was 93.4% for the 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time per scan was 40 seconds. This fully automated, simple, and efficient segmentation method for brain tumors is promising for future clinical use, and the correlation coefficient proves a new and effective feature for tumor location.
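The symmetry-based locating step described above can be sketched as follows. This is a simplified illustration (fixed window size and step, np.corrcoef for the correlation), not the authors' implementation:

```python
import numpy as np

def locate_asymmetry(image, win=16, step=4):
    """Slide mirrored windows over the two hemispheres of a (rotated,
    midline-aligned) brain image and return the window pair with the
    lowest left-right correlation.  `win` and `step` are illustrative
    parameters, not values from the paper."""
    h, w = image.shape
    half = w // 2
    left = image[:, :half]
    right = image[:, half:2 * half][:, ::-1]  # mirror the right hemisphere
    best_corr, best_loc, best_side = 1.0, None, None
    for r in range(0, h - win + 1, step):
        for c in range(0, half - win + 1, step):
            a = left[r:r + win, c:c + win].ravel()
            b = right[r:r + win, c:c + win].ravel()
            if a.std() == 0 or b.std() == 0:
                continue  # flat windows carry no correlation signal
            corr = float(np.corrcoef(a, b)[0, 1])
            if corr < best_corr:
                best_corr = corr
                best_loc = (r + win // 2, c + win // 2)  # window centre, left-half coords
                # the brighter window of the pair is taken as the tumor side
                best_side = "left" if a.mean() > b.mean() else "right"
    return best_corr, best_loc, best_side
```

On a symmetric background, mirrored windows correlate perfectly; a one-sided bright region breaks the symmetry and pulls the minimum correlation toward its location.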
Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images.
Lee, Kyungmoo; Buitendijk, Gabriëlle H S; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R; Klaver, Caroline C W; Abràmoff, Michael D
2016-03-01
To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm3) were obtained from one eye of each of 690 subjects (74.6 ± 9.7 [mean ± SD] years, 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we developed a new metric, the segmentability index (SI), which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with the well-known quality indices quality index (QI) and maximum tissue contrast index (mTCI) using receiver operating characteristic (ROC) analysis. The 95% confidence interval (CI) of the area under the curve (AUC) was 0.621 to 0.805 for the QI (AUC 0.713), 0.673 to 0.838 for the mTCI (AUC 0.756), and 0.784 to 0.920 for the SI (AUC 0.852). The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). The segmentability index SI is well suited to identifying SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies.
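The ROC comparison among SI, QI, and mTCI rests on the AUC, which for a binary success/failure label equals the probability that a randomly chosen successful segmentation receives a higher index value than a randomly chosen failed one (ties counted half). A minimal rank-based sketch of that statistic:

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the probability that a randomly drawn positive case scores higher than
    a randomly drawn negative case, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```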
FDG and Amyloid PET in Cognitively Normal Individuals at Risk for Late-Onset Alzheimer’s Disease
Murray, John; Tsui, Wai H.; Li, Yi; McHugh, Pauline; Williams, Schantel; Cummings, Megan; Pirraglia, Elizabeth; Solnes, Lilja; Osorio, Ricardo; Glodzik, Lidia; Vallabhajosula, Shankar; Drzezga, Alexander; Minoshima, Satoshi; de Leon, Mony J.; Mosconi, Lisa
2014-01-01
Having a parent affected by late-onset Alzheimer’s disease (AD) is a major risk factor for cognitively normal (NL) individuals. This study explores the potential of PET with 18F-FDG and the amyloid-β (Aβ) tracer 11C-Pittsburgh Compound B (PiB) for detection of individual risk in NL adults with AD-parents. Methods FDG and PiB PET were performed in 119 young to late-middle-aged NL individuals, including 80 NL with a positive family history of AD (FH+) and 39 NL with a negative family history of any dementia (FH−). The FH+ group included 50 subjects with maternal (FHm) and 30 with paternal family history (FHp). Individual FDG and PiB scans were Z scored on a voxel-wise basis relative to modality-specific reference databases using automated procedures and rated as positive or negative (+/−) for AD-typical abnormalities using predefined criteria. To determine the effect of age, the cohort was separated into younger (49 ± 9 y) and older (68 ± 5 y) groups relative to the median age (60 y). Results Among individuals of age >60 y, as compared to controls, NL FH+ showed a higher frequency of FDG+ scans vs. FH− (53% vs. 6%, p < 0.003), and a trend for PiB+ scans (27% vs. 11%; p = 0.19). This effect was observed for both FHm and FHp groups. Among individuals of age ≤60 y, NL FHm showed a higher frequency of FDG+ scans (29%) compared to FH− (5%, p = 0.04) and a trend compared to FHp (11%) (p = 0.07), while the distribution of PiB+ scans was not different between groups. In both age cohorts, FDG+ scans were more frequent than PiB+ scans among NL FH+, especially FHm (p < 0.03). FDG-PET was a significant predictor of FH+ status. Classification according to PiB status was significantly less successful. Conclusions Automated analysis of FDG and PiB PET demonstrates higher rates of abnormalities in at-risk FH+ vs. FH− subjects, indicating potentially ongoing early AD-pathology in this population.
The frequency of metabolic abnormalities was higher than that of Aβ pathology in the younger cohort, suggesting that neuronal dysfunction may precede major aggregated Aβ burden in young NL FH+. Longitudinal follow-up is required to determine if the observed abnormalities predict future AD. PMID:25530915
NASA Astrophysics Data System (ADS)
Wu, Jing; Waldstein, Sebastian M.; Gerendas, Bianca S.; Langs, Georg; Simader, Christian; Schmidt-Erfurth, Ursula
2015-03-01
Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high-resolution, three-dimensional (3D) cross-sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows the detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight-degrading diseases such as age-related macular degeneration (AMD), glaucoma and retinal vein occlusion (RVO). Disease diagnosis, assessment, and treatment will require a patient to undergo multiple OCT scans, possibly using multiple scanners, to accurately and precisely gauge disease activity, progression and treatment success. However, cross-vendor imaging and patient movement may result in poor scan spatial correlation, potentially leading to incorrect diagnosis or treatment analysis. The retinal fovea is the location of the highest visual acuity and is present in all patients; it is therefore critical to vision and highly suitable for use as a primary landmark for cross-vendor/cross-patient registration for precise comparison of disease states. However, the fovea in diseased eyes is extremely challenging to locate due to varying appearance and the presence of retinal-layer-destroying pathology. Thus categorising and detecting the fovea type is an important prior stage to automatically computing the fovea position. Presented here is an automated cross-vendor method for fovea distinction in 3D SD-OCT scans of patients suffering from RVO, categorising scans into three distinct types. OCT scans are preprocessed by motion correction and noise filling, followed by segmentation using a kernel graph-cut approach. A statistically derived mask is applied to the resulting scan, creating an ROI around the probable fovea location from which the uppermost retinal surface is delineated. For a normal-appearance retina, minimisation to zero thickness is computed using the top two retinal surfaces. 
3D local minima detection and layer thickness analysis are used to differentiate between the remaining two fovea types. Validation employs ground truth fovea types identified by clinical experts at the Vienna Reading Center (VRC). The results presented here are intended to show the feasibility of this method for the accurate and reproducible distinction of retinal fovea types from multiple vendor 3D SD-OCT scans of patients suffering from RVO, and for use in fovea position detection systems as a landmark for intra- and cross-vendor 3D OCT registration.
Real time automated inspection
Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.
1985-01-01
A method and apparatus relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections.
Four-probe measurements with a three-probe scanning tunneling microscope.
Salomons, Mark; Martins, Bruno V C; Zikovsky, Janik; Wolkow, Robert A
2014-04-01
We present an ultrahigh vacuum (UHV) three-probe scanning tunneling microscope in which each probe is capable of atomic resolution. A UHV JEOL scanning electron microscope aids in the placement of the probes on the sample. The machine also has a field ion microscope to clean, atomically image, and shape the probe tips. The machine uses bare conductive samples and tips with a homebuilt set of pliers for heating and loading. Automated feedback controlled tip-surface contacts allow for electrical stability and reproducibility while also greatly reducing tip and surface damage due to contact formation. The ability to register inter-tip position by imaging of a single surface feature by multiple tips is demonstrated. Four-probe material characterization is achieved by deploying two tips as fixed current probes and the third tip as a movable voltage probe.
Development of a High Performance Acousto-ultrasonic Scan System
NASA Technical Reports Server (NTRS)
Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.
2002-01-01
Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus
2011-11-01
The preventive application of automated latent fingerprint acquisition devices can enhance Homeland Defence, e.g., by improving border security. Here, contact-less optical acquisition techniques for the capture of traces are a subject of research; chromatic white light sensors allow for multi-mode operation using coarse or detailed scans. The presence of potential fingerprints can be detected using fast coarse scans. These Regions-of-Interest can then be acquired with high-resolution detailed scans to allow for verification or identification of individuals. Acquisition and analysis of fingerprint traces on objects that are imported or pass borders might greatly enhance security. Additionally, if suspicious objects require further investigation, an initial securing of potential fingerprints could be very useful. In this paper we show current research results for the coarse detection of fingerprints, in preparation for detailed acquisition from various surface materials that are relevant for preventive applications.
Automated detection of retinal layers from OCT spectral-domain images of healthy eyes
NASA Astrophysics Data System (ADS)
Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello
2015-06-01
Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional views of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral-domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.
Brama, Elisabeth; Peddie, Christopher J; Wilkes, Gary; Gu, Yan; Collinson, Lucy M; Jones, Martin L
2016-12-13
In-resin fluorescence (IRF) protocols preserve fluorescent proteins in resin-embedded cells and tissues for correlative light and electron microscopy, aiding interpretation of macromolecular function within the complex cellular landscape. Dual-contrast IRF samples can be imaged in separate fluorescence and electron microscopes, or in dual-modality integrated microscopes for high resolution correlation of fluorophore to organelle. IRF samples also offer a unique opportunity to automate correlative imaging workflows. Here we present two new locator tools for finding and following fluorescent cells in IRF blocks, enabling future automation of correlative imaging. The ultraLM is a fluorescence microscope that integrates with an ultramicrotome, which enables 'smart collection' of ultrathin sections containing fluorescent cells or tissues for subsequent transmission electron microscopy or array tomography. The miniLM is a fluorescence microscope that integrates with serial block face scanning electron microscopes, which enables 'smart tracking' of fluorescent structures during automated serial electron image acquisition from large cell and tissue volumes.
A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data
NASA Technical Reports Server (NTRS)
Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.
2011-01-01
A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Interested readers may inquire with the presenting author. This software differentiates itself from other re-slicing software solutions through its complete automation and its advanced processing and analysis capabilities.
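The unwrapping step described above, viewing a cylindrical object as flat 2-D sheets, amounts to resampling each CT slice along circles of increasing radius. The following nearest-neighbour sketch is illustrative only (the `unwrap_slice` helper and its parameters are assumptions, not the NASA implementation, which also performs surface edge detection):

```python
import numpy as np

def unwrap_slice(img, center, radius, n_theta=360):
    """Sample one CT slice along a circle of the given radius, producing
    one row of the unwrapped 'sheet' (nearest-neighbour for brevity)."""
    cy, cx = center
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = np.clip(np.round(cy + radius * np.sin(theta)).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + radius * np.cos(theta)).astype(int),
                 0, img.shape[1] - 1)
    return img[ys, xs]
```

Stacking such rows over all slices at a fixed radius yields one of the vertical 2-D sheets; repeating over radii between the detected interior and exterior surfaces yields the full unwrapped view. A production version would use bilinear or higher-order interpolation rather than rounding.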
Programmed Multiphasic Health Testing
NASA Technical Reports Server (NTRS)
Hershberg, P. I.
1970-01-01
Multiphasic health screening procedures are advocated for detection and prevention of disease at an early stage through risk factor analysis. The use of an automated medical history questionnaire together with scheduled physical examination data provides scanning input for computer printout. This system makes it possible to process laboratory results for biochemical determinations from 1,000 to 2,000 patients on an economically feasible basis.
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
Developing a Multi Sensor Scanning System for Hardwood Inspection and Processing
Richard W. Conners; D.Earl Kline; Philip A. Araman
1995-01-01
For the last few years the authors, as part of the Center for Automated Processing of Hardwoods, have been attempting to develop a multiple-sensor hardwood defect detection system. This development activity has been ongoing for approximately 6 years, a very long time in the commercial development world. This paper will report the progress that has been made and will...
DOT National Transportation Integrated Search
1997-11-01
This report presents the findings of the study team on a Federal Highway Administration (FHWA) International Scanning Tour to the countries of Finland, Sweden, the Netherlands, and England. The tour was unique in that it represented the first time th...
ERIC Educational Resources Information Center
Colom, Roberto; Stein, Jason L.; Rajagopalan, Priya; Martinez, Kenia; Hermel, David; Wang, Yalin; Alvarez-Linera, Juan; Burgaleta, Miguel; Quiroga, Ma. Angeles; Shih, Pei Chun; Thompson, Paul M.
2013-01-01
Here we apply a method for automated segmentation of the hippocampus in 3D high-resolution structural brain MRI scans. One hundred and four healthy young adults completed twenty one tasks measuring abstract, verbal, and spatial intelligence, along with working memory, executive control, attention, and processing speed. After permutation tests…
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
Gaytán, Paul; Yáñez, Jorge; Sánchez, Filiberto; Soberón, Xavier
2001-01-01
We describe here a method to generate combinatorial libraries of oligonucleotides mutated at the codon-level, with control of the mutagenesis rate so as to create predictable binomial distributions of mutants. The method allows enrichment of the libraries with single, double or larger multiplicity of amino acid replacements by appropriate choice of the mutagenesis rate, depending on the concentration of synthetic precursors. The method makes use of two sets of deoxynucleoside-phosphoramidites bearing orthogonal protecting groups [4,4′-dimethoxytrityl (DMT) and 9-fluorenylmethoxycarbonyl (Fmoc)] in the 5′ hydroxyl. These phosphoramidites are divergently combined during automated synthesis in such a way that wild-type codons are assembled with commercial DMT-deoxynucleoside-methyl-phosphoramidites while mutant codons are assembled with Fmoc-deoxynucleoside-methyl-phosphoramidites in an NNG/C fashion in a single synthesis column. This method is easily automated and suitable for low mutagenesis rates and large windows, such as those required for directed evolution and alanine scanning. Through the assembly of three oligonucleotide libraries at different mutagenesis rates, followed by cloning at the polylinker region of plasmid pUC18 and sequencing of 129 clones, we concluded that the method performs essentially as intended. PMID:11160911
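The "predictable binomial distributions of mutants" above follow directly from treating each codon in the mutagenesis window as independently mutated with probability equal to the mutagenesis rate. A small sketch of that model (not the authors' software; function and parameter names are illustrative):

```python
from math import comb

def mutant_distribution(n_codons, rate):
    """P(k codons mutated) for a window of n_codons codons, each mutated
    independently with probability `rate` -- the binomial model that lets
    the mutagenesis rate be chosen to enrich single or double mutants."""
    return [comb(n_codons, k) * rate**k * (1 - rate)**(n_codons - k)
            for k in range(n_codons + 1)]
```

For example, a 10-codon window at a 10% per-codon rate places its largest probability mass on single mutants, which is the kind of enrichment the choice of mutagenesis rate controls.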
Sannomiya, Takumi; Sawada, Hidetaka; Nakamichi, Tomohiro; Hosokawa, Fumio; Nakamura, Yoshio; Tanishiro, Yasumasa; Takayanagi, Kunio
2013-12-01
A generic method to determine the aberration center is established, which can be utilized for aberration calculation and axis alignment in aberration-corrected electron microscopes. In this method, decentering-induced secondary aberrations from inherent primary aberrations are minimized to find the appropriate axis center. The fitness function to find the optimal decentering vector for the axis was defined as a sum of decentering-induced secondary aberrations with properly distributed weight values according to the aberration order. Since the appropriate decentering vector is determined from the aberration values calculated at an arbitrary center axis, only one aberration measurement is in principle required to find the center, resulting in a very fast center search. This approach was tested for the Ronchigram-based aberration calculation method for aberration-corrected scanning transmission electron microscopy. Both in simulation and in experiments, the center search was confirmed to work well, although the convergence to find the best axis becomes slower with larger primary aberrations. Such aberration center determination is expected to fully automate the aberration correction procedures, which used to require pre-alignment by experienced users. This approach is also applicable to automated aperture positioning. Copyright © 2013 Elsevier B.V. All rights reserved.
Automated human skull landmarking with 2D Gabor wavelets
NASA Astrophysics Data System (ADS)
de Jong, Markus A.; Gül, Atilla; de Gijt, Jan Pieter; Koudstaal, Maarten J.; Kayser, Manfred; Wolvius, Eppo B.; Böhringer, Stefan
2018-05-01
Landmarking of CT scans is an important step in the alignment of skulls that is key in surgery planning, pre-/post-surgery comparisons, and morphometric studies. We present a novel method for automatically locating anatomical landmarks on the surface of cone-beam CT-based image models of human skulls using 2D Gabor wavelets and ensemble learning. The algorithm is validated via human inter- and intra-rater comparisons on a set of 39 scans and a skull superimposition experiment with an established surgery planning software (Maxilim). Compared to a gold standard derived from human raters, automatic landmarking achieves an accuracy of 1–2 mm for a subset of landmarks around the nose area, the eye sockets, and the lower jaw, which is competitive with or surpasses inter-rater variability. The well-performing landmark subsets allow for the automation of skull superimposition in clinical applications. Our approach delivers accurate results, has modest training requirements (training set size of 30–40 items), and is generic, so that landmark sets can be easily expanded or modified to accommodate shifting landmark interests, which are important requirements for the landmarking of larger cohorts.
Automated hierarchical time gain compensation for in-vivo ultrasound imaging
NASA Astrophysics Data System (ADS)
Moshavegh, Ramin; Hemmsen, Martin C.; Martins, Bo; Brandt, Andreas H.; Hansen, Kristoffer L.; Nielsen, Michael B.; Jensen, Jørgen A.
2015-03-01
Time gain compensation (TGC) is essential to ensure optimal image quality in clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution changes drastically and TGC compensation becomes challenging. This paper presents an automated hierarchical TGC (AHTGC) algorithm that accurately adapts to the large attenuation variation between different types of tissues and structures. The algorithm relies on estimates of tissue attenuation, scattering strength, and noise level to gain a more quantitative understanding of the underlying tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences, each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC, were visualized side by side and evaluated by two radiologists in terms of image quality. A Wilcoxon signed-rank test was used to evaluate whether the radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) score is positive (p-value: 2.34 × 10⁻¹³) and estimated to be 1.01 (95% CI: 0.85; 1.16), favoring the data processed with the proposed AHTGC algorithm.
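As background to the hierarchical scheme, conventional TGC applies a fixed depth-dependent gain derived from an assumed tissue attenuation coefficient; the AHTGC algorithm instead estimates attenuation per region. A sketch of the fixed-attenuation baseline that such adaptive methods improve on (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def tgc_gain_db(depths_cm, freq_mhz, alpha_db=0.5):
    """Two-way attenuation compensation in dB at each depth, assuming a
    uniform attenuation of alpha_db dB/(cm*MHz): gain = 2 * alpha * f * z."""
    return 2.0 * alpha_db * freq_mhz * np.asarray(depths_cm, dtype=float)

def apply_tgc(rf_lines, depths_cm, freq_mhz):
    """Scale beamformed lines (depth x lines) by the linear gain factor."""
    g = 10.0 ** (tgc_gain_db(depths_cm, freq_mhz) / 20.0)
    return rf_lines * g[:, None]
```

A fluid collection (negligible attenuation) makes the uniform-alpha assumption fail, over-amplifying tissue beneath it, which is exactly the situation the abstract highlights.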
Zhang, Fumin; Qu, Xinghua; Ouyang, Jianfei
2012-01-01
A novel measurement prototype based on a mobile vehicle that carries a laser scanning sensor is proposed. The prototype is intended for the automated measurement of the interior 3D geometry of large-diameter long-stepped pipes. The laser displacement sensor, which has a small measurement range, is mounted on an extended arm of known length. It is scanned to improve the measurement accuracy for large-sized pipes. A fixing mechanism based on two sections is designed to ensure that the stepped pipe is concentric with the axis of rotation of the system. Data are acquired in a cylindrical coordinate system and fitted to a circle to determine the diameter. Systematic errors, including arm-length, tilt, and offset errors, are analyzed and calibrated. The proposed system is applied to sample parts and the results are discussed to verify its effectiveness. This technique measures a diameter of 600 mm with an uncertainty of 0.02 mm at a 95% confidence probability. A repeatability test is performed to examine precision, which is 1.1 μm. A laser tracker is used to verify the measurement accuracy of the system, which is evaluated as 9 μm within a diameter of 600 mm.
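The diameter-determination step, fitting acquired points to a circle, is commonly done with an algebraic least-squares (Kåsa) fit; the abstract does not specify the fitting method used, so the following is a generic sketch:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic least-squares circle fit: solve the linear system
    x^2 + y^2 = 2a*x + 2b*y + c for centre (a, b); radius = sqrt(c + a^2 + b^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)
```

The fit is linear, so it is fast and has no initialization issues; for strongly non-uniform noise a geometric (Levenberg-Marquardt) refinement is often added, but for dense, low-noise laser scan rings the algebraic solution is typically sufficient.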
Particle Accelerator Focus Automation
NASA Astrophysics Data System (ADS)
Lopes, José; Rocha, Jorge; Redondo, Luís; Cruz, João
2017-08-01
The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams of a few μA at energies up to 2 MeV/q. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start-up and shut-down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.
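The scanning procedure described, stepping the lens bias and keeping the voltage that maximizes beam-stopper current, can be sketched as a simple argmax loop; `set_lens_voltage` and `read_beam_current` below stand in for the LabVIEW hardware I/O hooks and are hypothetical:

```python
def focus_scan(set_lens_voltage, read_beam_current, v_min, v_max, steps=51):
    """Coarse scan of the Einzel-lens bias: step the voltage across the
    range, record the beam-stopper current at each setting, and leave the
    lens at the voltage that maximized the current."""
    best_v, best_i = v_min, float("-inf")
    step = (v_max - v_min) / (steps - 1)
    for k in range(steps):
        v = v_min + k * step
        set_lens_voltage(v)          # command the lens supply
        i = read_beam_current()      # feedback from the beam stopper
        if i > best_i:
            best_v, best_i = v, i
    set_lens_voltage(best_v)         # settle on the optimum
    return best_v, best_i
```

In practice a coarse scan like this would be followed by a finer scan around `best_v`, and each reading would be averaged over several samples to suppress beam-current noise.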
Developing a short measure of organizational justice: a multisample health professionals study.
Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Sinervo, Timo; Hintsa, Taina; Aalto, Anna-Mari
2010-11-01
To develop and test the validity of a short version of the original questionnaire measuring organizational justice. The study samples comprised working physicians (n = 2792) and registered nurses (n = 2137) from the Finnish Health Professionals study. Structural equation modelling was applied to test structural validity, using the justice scales. Furthermore, criterion validity was explored with well-being (sleeping problems) and health indicators (psychological distress/self-rated health). The short version of the organizational justice questionnaire (eight items) provides satisfactory psychometric properties (internal consistency and a good model fit to the data). All scales were associated with an increased risk of sleeping problems and psychological distress, indicating satisfactory criterion validity. This short version of the organizational justice questionnaire provides a useful tool for epidemiological studies focused on health-adverse effects of the work environment.
Development and validation of the Work Conflict Appraisal Scale (WCAS).
González-Navarro, Pilar; Llinares-Insa, Lucía; Zurriaga-Llorens, Rosario; Lloret-Segura, Susana
2017-05-01
In the context of cognitive appraisal, the Work Conflict Appraisal Scale (WCAS) was developed to assess work conflict in terms of threat and challenge. In the first study, the factorial structure of the scale was tested using confirmatory factor analysis with a Spanish multi-occupational employee sample (N = 296). In the second study, we used multi-sampling confirmatory factor analysis (N = 815) to cross-validate the results. The analyses confirm the validity of the scale and are consistent with the tri-dimensional conflict classification. The findings support the distinction between the challenge and threat appraisals of work conflict, highlighting the importance of measuring these two types of appraisal separately. This scale is a valid and reliable instrument to measure conflict appraisal in organizations.
Fully automated breast density assessment from low-dose chest CT
NASA Astrophysics Data System (ADS)
Liu, Shuang; Margolies, Laurie R.; Xie, Yiting; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2017-03-01
Breast cancer is the most common cancer diagnosed among US women and the second leading cause of cancer death [1]. Breast density is an independent risk factor for breast cancer, and more than 25 states mandate its reporting to patients as part of the lay mammogram report [2]. Recent publications have demonstrated that breast density measured from low-dose chest CT (LDCT) correlates well with that measured from mammograms and MRIs [3,4], thereby providing valuable information for many women who have undergone LDCT but not recent mammograms. A fully automated framework for breast density assessment from LDCT is presented in this paper. The whole breast region is first segmented using an anatomy-orientated novel approach based on the propagation of muscle fronts for separating the fibroglandular tissue from the underlying muscles. The fibroglandular tissue regions are then identified from the segmented whole breast and the percentage density is calculated based on the volume ratio of the fibroglandular tissue to the local whole breast region. The breast region segmentation framework was validated with 1270 LDCT scans, with 96.1% satisfactory outcomes based on visual inspection. The density assessment was evaluated by comparing with BI-RADS density grades established by an experienced radiologist in 100 randomly selected LDCT scans of female subjects. The continuous breast density measurement was shown to be consistent with the reference subjective grading, with a Spearman's rank correlation of 0.91 (p-value < 0.001). After converting the continuous density to categorical grades, the automated density assessment was congruous with the radiologist's reading in 91% of cases.
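The percentage-density step described above reduces to a ratio of voxel counts between the fibroglandular-tissue mask and the whole-breast mask. A minimal sketch (mask names are illustrative; the segmentation itself is the hard part the paper addresses):

```python
import numpy as np

def percent_density(breast_mask, fibroglandular_mask):
    """Percentage density = volume of fibroglandular tissue within the
    segmented breast region, over the whole breast volume. Both inputs
    are boolean voxel masks of the same shape."""
    fg = np.count_nonzero(fibroglandular_mask & breast_mask)
    total = np.count_nonzero(breast_mask)
    return 100.0 * fg / total
```

Because the measure is a ratio, the voxel volume cancels, so no scanner-specific spacing information is needed once the two masks are in the same voxel grid.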
Sinha, Sumedha P; Goodsitt, Mitchell M; Roubidoux, Marilyn A; Booi, Rebecca C; LeCarpentier, Gerald L; Lashbrook, Christine R; Thomenius, Kai E; Chalek, Carl L; Carson, Paul L
2007-05-01
We are developing an automated ultrasound imaging-mammography system wherein a digital mammography unit has been augmented with a motorized ultrasound transducer carriage above a special compression paddle. Challenges of this system are acquiring complete coverage of the breast and minimizing motion. We assessed these problems and investigated methods to increase coverage and stabilize the compressed breast. Visual tracings of the breast-to-paddle contact area and breast periphery were made for 10 patients to estimate coverage area. Various motion artifacts were evaluated in 6 patients. Nine materials were tested for coupling the paddle to the breast. Fourteen substances were tested for coupling the transducer to the paddle in lateral-to-medial and medial-to-lateral views and filling the gap between the peripheral breast and paddle. In-house image registration software was used to register adjacent ultrasound sweeps. The average breast contact area was 56%. The average percentage of the peripheral air gap filled with ultrasound gel was 61%. Shallow patient breathing proved equivalent to breath holding, whereas speech and sudden breathing caused unacceptable artifacts. An adhesive spray that preserves image quality was found to be best for coupling the breast to the paddle and minimizing motion. A highly viscous ultrasound gel proved most effective for coupling the transducer to the paddle for lateral-to-medial and medial-to-lateral views and for edge fill-in. The challenges of automated ultrasound scanning in a multimodality breast imaging system have been addressed by developing methods to fill in peripheral gaps, minimize patient motion, and register and reconstruct multisweep ultrasound image volumes.
Coronary artery calcification identification and labeling in low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Liu, Shuang; Miller, Albert; Miller, Jeffrey A.; Markowitz, Steven; Akhund, Ali; Reeves, Anthony P.
2017-03-01
A fully automated computer algorithm has been developed to evaluate coronary artery calcification (CAC) from low-dose CT scans. CAC is identified and evaluated in three main coronary artery groups: Left Main and Left Anterior Descending Artery (LM + LAD) CAC, Left Circumflex Artery (LCX) CAC, and Right Coronary Artery (RCA) CAC. The artery labeling is achieved by segmenting all CAC candidates in the heart region and applying geometric constraints on the candidates using locally pre-identified anatomy regions. This algorithm was evaluated on 1,359 low-dose ungated CT scans, in which the CAC content of each artery was visually scored by a radiologist as none, mild, moderate, or extensive. The Spearman correlation coefficient R was used to assess the agreement between three automated CAC scores (Agatston-weighted, volume, and mass) and categorical visual scores. For Agatston-weighted automated scores, R was 0.87 for total CAC, 0.82 for LM + LAD CAC, 0.66 for LCX CAC and 0.72 for RCA CAC; results using volume and mass scores were similar. CAC detection sensitivities were: 0.87 for total, 0.82 for LM + LAD, 0.65 for LCX and 0.74 for RCA. To assess the impact of image noise, the dataset was further partitioned into three subsets based on heart region noise level (low: <= 80 HU; medium: 80-110 HU; high: > 110 HU). The low and medium noise subsets had higher sensitivities and correlations than the high noise subset. These results indicate that location-specific heart risk assessment is possible from low-dose chest CT images.
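The Agatston-weighted scores mentioned above follow the standard Agatston scheme, which weights each calcified lesion's area by a factor determined by its peak attenuation. A minimal sketch of that weighting (illustrative only; the paper's exact implementation on ungated low-dose scans may differ):

```python
def agatston_weight(peak_hu):
    # standard density weighting by peak attenuation:
    # <130 HU -> 0, 130-199 -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions, pixel_area_mm2):
    # lesions: one list of pixel HU values per segmented CAC candidate
    score = 0.0
    for pixels in lesions:
        w = agatston_weight(max(pixels))
        if w:
            score += len(pixels) * pixel_area_mm2 * w
    return score
```

Summing the per-lesion scores over each labeled artery group would yield the artery-specific scores that the abstract correlates with visual grades.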
The Objective Identification and Quantification of Interstitial Lung Abnormalities in Smokers.
Ash, Samuel Y; Harmouche, Rola; Ross, James C; Diaz, Alejandro A; Hunninghake, Gary M; Putman, Rachel K; Onieva, Jorge; Martinez, Fernando J; Choi, Augustine M; Lynch, David A; Hatabu, Hiroto; Rosas, Ivan O; Estepar, Raul San Jose; Washko, George R
2017-08-01
Previous investigation suggests that visually detected interstitial changes in the lung parenchyma of smokers are highly clinically relevant and predict outcomes, including death. Visual subjective analysis to detect these changes is time-consuming, insensitive to subtle changes, and requires training to enhance reproducibility. Objective detection of such changes could provide a method of disease identification without these limitations. The goal of this study was to develop and test a fully automated image processing tool to objectively identify radiographic features associated with interstitial abnormalities in the computed tomography scans of a large cohort of smokers. An automated tool that uses local histogram analysis combined with distance from the pleural surface was used to detect radiographic features consistent with interstitial lung abnormalities in computed tomography scans from 2257 individuals from the Genetic Epidemiology of COPD study, a longitudinal observational study of smokers. The sensitivity and specificity of this tool was determined based on its ability to detect the visually identified presence of these abnormalities. The tool had a sensitivity of 87.8% and a specificity of 57.5% for the detection of interstitial lung abnormalities, with a c-statistic of 0.82, and was 100% sensitive and 56.7% specific for the detection of the visual subtype of interstitial abnormalities called fibrotic parenchymal abnormalities, with a c-statistic of 0.89. In smokers, a fully automated image processing tool is able to identify those individuals who have interstitial lung abnormalities with moderate sensitivity and specificity. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
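The performance figures reported above (sensitivity, specificity, and the c-statistic) can be computed from binary visual labels and the tool's outputs as follows; this is a generic illustration, not the study's code:

```python
def sens_spec(labels, preds):
    # labels: 1 if visually identified abnormality present; preds: tool's binary call
    tp = sum(1 for l, p in zip(labels, preds) if l and p)
    tn = sum(1 for l, p in zip(labels, preds) if not l and not p)
    fp = sum(1 for l, p in zip(labels, preds) if not l and p)
    fn = sum(1 for l, p in zip(labels, preds) if l and not p)
    return tp / (tp + fn), tn / (tn + fp)

def c_statistic(labels, scores):
    # probability that a random positive case outscores a random negative (ties = 1/2);
    # equivalent to the area under the ROC curve
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```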
Fully automated bone mineral density assessment from low-dose chest CT
NASA Astrophysics Data System (ADS)
Liu, Shuang; Gonzalez, Jessica; Zulueta, Javier; de-Torres, Juan P.; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2018-02-01
A fully automated system is presented for bone mineral density (BMD) assessment from low-dose chest CT (LDCT). BMD assessment is central to the diagnosis and follow-up therapy monitoring of osteoporosis, which is characterized by low bone density and is estimated to affect 12.3 million US adults aged 50 years or older, creating tremendous social and economic burdens. BMD assessment from DXA scans (BMDDXA) is currently the most widely used and gold-standard technique for the diagnosis of osteoporosis and bone fracture risk estimation. With the recent large-scale implementation of annual lung cancer screening using LDCT, great potential emerges for concurrent opportunistic osteoporosis screening. In the presented BMDCT assessment system, each vertebral body is first segmented and labeled with its anatomical name. Various 3D regions of interest (ROIs) inside the vertebral body are then explored for BMDCT measurements at different vertebral levels. The system was validated using 76 pairs of DXA and LDCT scans of the same subject. Average BMDDXA of L1-L4 was used as the reference standard. A statistically significant (p-value < 0.001), strong correlation was obtained between BMDDXA and BMDCT at all vertebral levels (T1 - L2). A Pearson correlation of 0.857 was achieved between BMDDXA and average BMDCT of T9-T11 by using a 3D ROI taking into account both trabecular and cortical bone tissue. These encouraging results demonstrate the feasibility of fully automated quantitative BMD assessment and the potential of opportunistic osteoporosis screening concurrent with lung cancer screening using LDCT.
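A BMDCT measurement of the kind described reduces to averaging attenuation over a 3D ROI inside the vertebral body and correlating the result against BMDDXA. A minimal sketch under assumed inputs (a voxel array indexed `[z][y][x]` and a cubic ROI; the paper's actual ROI shapes and calibration are not reproduced here):

```python
def box_roi_mean(volume, center, half):
    # mean HU inside a cubic ROI of half-width `half` voxels centered at (z, y, x)
    cz, cy, cx = center
    vals = [volume[z][y][x]
            for z in range(cz - half, cz + half + 1)
            for y in range(cy - half, cy + half + 1)
            for x in range(cx - half, cx + half + 1)]
    return sum(vals) / len(vals)

def pearson_r(x, y):
    # linear correlation between paired measurements, e.g. BMD_CT vs BMD_DXA
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```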
Automated posterior cranial fossa volumetry by MRI: applications to Chiari malformation type I.
Bagci, A M; Lee, S H; Nagornaya, N; Green, B A; Alperin, N
2013-09-01
Quantification of PCF volume and the degree of PCF crowdedness were found beneficial for differential diagnosis of tonsillar herniation and prediction of surgical outcome in CMI. However, lack of automated methods limits the clinical use of PCF volumetry. An atlas-based method for automated PCF segmentation tailored for CMI is presented. The method performance is assessed in terms of accuracy and spatial overlap with manual segmentation. The degree of association between PCF volumes and the lengths of previously proposed linear landmarks is reported. T1-weighted volumetric MR imaging data with 1-mm isotropic resolution obtained with the use of a 3T scanner from 14 patients with CMI and 3 healthy subjects were used for the study. Manually delineated PCF from 9 patients was used to establish a CMI-specific reference for an atlas-based automated PCF parcellation approach. Agreement between manual and automated segmentation of 5 different CMI datasets was verified by means of the t test. Measurement reproducibility was established through the use of 2 repeated scans from 3 healthy subjects. Degree of linear association between PCF volume and 6 linear landmarks was determined by means of Pearson correlation. PCF volumes measured by use of the automated method and with manual delineation were similar, 196.2 ± 8.7 mL versus 196.9 ± 11.0 mL, respectively. The mean relative difference of -0.3 ± 1.9% was not statistically significant. Low measurement variability, with a mean absolute percentage value of 0.6 ± 0.2%, was achieved. None of the PCF linear landmarks were significantly associated with PCF volume. PCF and tissue content volumes can be reliably measured in patients with CMI by use of an atlas-based automated segmentation method.
Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J
2016-09-06
The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 N·m with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cardiac imaging: working towards fully-automated machine analysis & interpretation
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-01-01
Introduction: Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation. PMID:28277804
Fully automated adipose tissue measurement on abdominal CT
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Sussman, Daniel L.; Summers, Ronald M.
2011-03-01
Obesity has become widespread in America and is a risk factor for many illnesses. Adipose tissue (AT) content, especially visceral AT (VAT), is an important indicator of risk for many disorders, including heart disease and diabetes. Measuring AT with traditional means is often unreliable and inaccurate. CT provides a means to measure AT accurately and consistently. We present a fully automated method to segment and measure abdominal AT in CT. Our method integrates image preprocessing to correct for image artifacts and inhomogeneities. We use fuzzy c-means to cluster AT regions and active contour models to separate subcutaneous and visceral AT. We tested our method on 50 abdominal CT scans and evaluated the correlations between several measurements.
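Fuzzy c-means, named above for clustering AT regions, assigns each voxel a soft membership in every cluster rather than a hard label, which tolerates the intensity inhomogeneities the preprocessing targets. A 1-D sketch on scalar intensities (illustrative only; the paper's implementation operates on full images):

```python
def fuzzy_cmeans_1d(xs, c=2, m=2.0, iters=100):
    # soft clustering of scalar intensities (e.g. HU values);
    # m > 1 is the fuzzifier; centers start evenly spread over the data range
    lo, hi = min(xs), max(xs)
    centers = [lo + (k + 0.5) * (hi - lo) / c for k in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - ck) + 1e-12 for ck in centers]  # guard zero distance
            row = [1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                   for k in range(c)]
            u.append(row)  # memberships of x in each cluster; row sums to 1
        # center update: membership-weighted mean with weights u^m
        centers = [sum(u[i][k] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][k] ** m for i in range(len(xs)))
                   for k in range(c)]
    return centers, u
```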
Srinivasan, Pratul P.; Kim, Leo A.; Mettu, Priyatham S.; Cousins, Scott W.; Comer, Grant M.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
We present a novel fully automated algorithm for the detection of retinal diseases via optical coherence tomography (OCT) imaging. Our algorithm utilizes multiscale histograms of oriented gradient descriptors as feature vectors of a support vector machine based classifier. The spectral domain OCT data sets used for cross-validation consisted of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 patients with dry age-related macular degeneration (AMD), and 15 patients with diabetic macular edema (DME). Our classifier correctly identified 100% of cases with AMD, 100% cases with DME, and 86.67% cases of normal subjects. This algorithm is a potentially impactful tool for the remote diagnosis of ophthalmic diseases. PMID:25360373
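The classifier above uses multiscale histograms of oriented gradients as feature vectors. The core ingredient, a magnitude-weighted histogram of unsigned gradient orientations, can be sketched as follows (a toy single-cell version, not the authors' multiscale descriptor):

```python
import math

def orientation_histogram(img, bins=8):
    # img: 2D list of floats; magnitude-weighted histogram of unsigned
    # gradient orientations over all interior pixels
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi   # fold to [0, pi): unsigned
            b = min(int(ang / math.pi * bins), bins - 1)
            h[b] += mag
    total = sum(h) or 1.0
    return [v / total for v in h]                # L1-normalised descriptor
```

In a full HOG pipeline such histograms are computed per cell at multiple scales and concatenated before being fed to the support vector machine.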
reCAPTCHA: human-based character recognition via Web security measures.
von Ahn, Luis; Maurer, Benjamin; McMillen, Colin; Abraham, David; Blum, Manuel
2008-09-12
CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are widespread security measures on the World Wide Web that prevent automated programs from abusing online services. They do so by asking humans to perform a task that computers cannot yet perform, such as deciphering distorted characters. Our research explored whether such human effort can be channeled into a useful purpose: helping to digitize old printed material by asking users to decipher scanned words from books that computerized optical character recognition failed to recognize. We showed that this method can transcribe text with a word accuracy exceeding 99%, matching the guarantee of professional human transcribers. Our apparatus is deployed in more than 40,000 Web sites and has transcribed over 440 million words.
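The scheme pairs a word with a known answer (the control) with an unrecognized scanned word: a submission is trusted only if the control is typed correctly, and the unknown word's transcription is settled by agreement among trusted answers. A simplified sketch of that logic (quorum size and normalization policy are illustrative assumptions, not the deployed system's rules):

```python
from collections import Counter

def transcribe(control_word, submissions, quorum=3):
    # submissions: list of (control_answer, unknown_answer) pairs from users;
    # only submissions that pass the control word are trusted
    trusted = [u.strip().lower() for c, u in submissions
               if c.strip().lower() == control_word.lower()]
    if len(trusted) < quorum:
        return None  # not enough trusted answers yet
    word, count = Counter(trusted).most_common(1)[0]
    return word if count >= quorum else None
```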
Automated Guided-Wave Scanning Developed to Characterize Materials and Detect Defects
NASA Technical Reports Server (NTRS)
Martin, Richard E.; Gyekenyeski, Andrew L.; Roth, Don J.
2004-01-01
The Nondestructive Evaluation (NDE) Group of the Optical Instrumentation Technology Branch at the NASA Glenn Research Center has developed a scanning system that uses guided waves to characterize materials and detect defects. The technique uses two ultrasonic transducers to interrogate the condition of a material. The sending transducer introduces an ultrasonic pulse at a point on the surface of the specimen, and the receiving transducer detects the signal after it has passed through the material. The aim of the method is to correlate certain parameters in both the time and frequency domains of the detected waveform to characteristics of the material between the two transducers. The scanning system is shown. The waveform parameters of interest include the attenuation due to internal damping, waveform shape parameters, and frequency shifts due to material changes. For the most part, guided waves are used to gauge the damage state and defect growth of materials subjected to various mechanical or environmental loads. The technique has been applied to polymer matrix composites, ceramic matrix composites, and metal matrix composites as well as metallic alloys. Historically, guided wave analysis has been a point-by-point, manual technique with waveforms collected at discrete locations and postprocessed. Data collection and analysis of this type limits the amount of detail that can be obtained. Also, the manual movement of the sensors is prone to user error and is time consuming. The development of an automated guided-wave scanning system has allowed the method to be applied to a wide variety of materials in a consistent, repeatable manner. Experimental studies have been conducted to determine the repeatability of the system as well as compare the results obtained using more traditional NDE methods. The following screen capture shows guided-wave scan results for a ceramic matrix composite plate, including images for each of nine calculated parameters. 
The system can display up to 18 different wave parameters. Multiple scans of the test specimen demonstrated excellent repeatability in the measurement of all the guided-wave parameters, far exceeding the traditional point-by-point technique. In addition, the scan was able to detect a subsurface defect that was confirmed using flash thermography. This technology is being further refined to provide a more robust and efficient software environment. Future hardware upgrades will allow for multiple receiving transducers and the ability to scan more complex surfaces. This work supports composite materials development and testing under the Ultra-Efficient Engine Technology (UEET) Project, but it will also be applied to other material systems under development for a wide range of applications.
NASA Astrophysics Data System (ADS)
Wu, Jing; Gerendas, Bianca S.; Waldstein, Sebastian M.; Simader, Christian; Schmidt-Erfurth, Ursula
2014-03-01
Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high resolution, three-dimensional (3D) cross sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows the detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight degrading diseases such as age-related macular degeneration (AMD) and glaucoma [1]. Disease diagnosis, assessment, and treatment requires a patient to undergo multiple OCT scans, possibly using different scanning devices, to accurately and precisely gauge disease activity, progression and treatment success. However, the use of OCT imaging devices from different vendors, combined with patient movement may result in poor scan spatial correlation, potentially leading to incorrect patient diagnosis or treatment analysis. Image registration can be used to precisely compare disease states by registering differing 3D scans to one another. In order to align 3D scans from different time-points and vendors using registration, landmarks are required, the most obvious being the retinal vasculature. Presented here is a fully automated cross-vendor method to acquire retina vessel locations for OCT registration from fovea centred 3D SD-OCT scans based on vessel shadows. Noise filtered OCT scans are flattened based on vendor retinal layer segmentation, to extract the retinal pigment epithelium (RPE) layer of the retina. Voxel based layer profile analysis and k-means clustering is used to extract candidate vessel shadow regions from the RPE layer. In conjunction, the extracted RPE layers are combined to generate a projection image featuring all candidate vessel shadows. Image processing methods for vessel segmentation of the OCT constructed projection image are then applied to optimize the accuracy of OCT vessel shadow segmentation through the removal of false positive shadow regions such as those caused by exudates and cysts.
Validation of segmented vessel shadows uses ground truth vessel shadow regions identified by expert graders at the Vienna Reading Center (VRC). The results presented here are intended to show the feasibility of this method for the accurate and precise extraction of suitable retinal vessel shadows from multiple vendor 3D SD-OCT scans for use in intra-vendor and cross-vendor 3D OCT registration, 2D fundus registration and actual retinal vessel segmentation. The resulting percentage of true vessel shadow segments to false positive segments identified by the proposed system compared to mean grader ground truth is 95%.
Kong, Lingyan; Liang, Jixiang; Xue, Huadan; Wang, Yining; Wang, Yun; Jin, Zhengyu; Zhang, Daming; Chen, Jin
2017-02-20
Objective: To evaluate the application of an automated tube potential selection technique in high-pitch dual-source CT aortic angiography on a third-generation dual-source CT scanner. Methods: Whole-aorta angiography was indicated in 59 patients, who were divided into 2 groups using a simple random method: in group 1, 31 patients underwent the examination with automated tube potential selection using a vascular setting with a preferred image quality of 288 mA/100 kV; in group 2, 28 patients underwent the examination with a tube voltage of 100 kV and automated tube current modulation using a reference tube current of 288 mA. Both groups were scanned on a third-generation dual-source CT device operated in dual-source high-pitch ECG-gating mode with a pitch of 3.0, collimation of 2×192×0.6 mm, and a rotation time of 0.25 s. An iterative reconstruction algorithm was used. For group 1, the volume and flow of contrast medium and chasing saline were adapted to the tube voltage. For group 2, a contrast material bolus of 45 ml with a flow of 4.5 ml/s followed by a 50 ml saline chaser at 5 ml/s was used. The CTA scan was automatically started using a bolus-tracking technique at the level of the original part of the aorta after a trigger threshold of 100 HU was reached. The start delay was set to 6 s in both groups. Effective dose (ED), signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and subjective diagnostic quality of both groups were evaluated. Results: The mean ED was 21.3% lower (t = -3.099, P = 0.000) in group 1 [(2.48±0.80) mSv] than in group 2 [(3.15±0.86) mSv]. The two groups showed no significant difference in attenuation, SD, SNR, or CNR at any of the evaluated parts of the aorta (ascending aorta, aortic arch, diaphragmatic aorta, or iliac bifurcation) (all P>0.05). There was no significant difference in subjective diagnostic quality scores between the two groups [(1.41±0.50) vs. (1.39±0.50); W = 828.5, P = 0.837].
Conclusion: Compared with automated tube current modulation, the automated tube potential selection technique in aortic CT angiography on a third-generation dual-source CT can dramatically reduce radiation dose without affecting image quality.
Howat, William J; Daley, Frances; Zabaglo, Lila; McDuffus, Leigh‐Anne; Blows, Fiona; Coulson, Penny; Raza Ali, H; Benitez, Javier; Milne, Roger; Brenner, Herman; Stegmaier, Christa; Mannermaa, Arto; Chang‐Claude, Jenny; Rudolph, Anja; Sinn, Peter; Couch, Fergus J; Tollenaar, Rob A.E.M.; Devilee, Peter; Figueroa, Jonine; Sherman, Mark E; Lissowska, Jolanta; Hewitt, Stephen; Eccles, Diana; Hooning, Maartje J; Hollestelle, Antoinette; WM Martens, John; HM van Deurzen, Carolien; Investigators, kConFab; Bolla, Manjeet K; Wang, Qin; Jones, Michael; Schoemaker, Minouk; Broeks, Annegien; van Leeuwen, Flora E; Van't Veer, Laura; Swerdlow, Anthony J; Orr, Nick; Dowsett, Mitch; Easton, Douglas; Schmidt, Marjanka K; Pharoah, Paul D; Garcia‐Closas, Montserrat
2016-01-01
Abstract Automated methods are needed to facilitate high‐throughput and reproducible scoring of Ki67 and other markers in breast cancer tissue microarrays (TMAs) in large‐scale studies. To address this need, we developed an automated protocol for Ki67 scoring and evaluated its performance in studies from the Breast Cancer Association Consortium. We utilized 166 TMAs containing 16,953 tumour cores representing 9,059 breast cancer cases, from 13 studies, with information on other clinical and pathological characteristics. TMAs were stained for Ki67 using standard immunohistochemical procedures, and scanned and digitized using the Ariol system. An automated algorithm was developed for the scoring of Ki67, and scores were compared to computer assisted visual (CAV) scores in a subset of 15 TMAs in a training set. We also assessed the correlation between automated Ki67 scores and other clinical and pathological characteristics. Overall, we observed good discriminatory accuracy (AUC = 85%) and good agreement (kappa = 0.64) between the automated and CAV scoring methods in the training set. The performance of the automated method varied by TMA (kappa range = 0.37–0.87) and study (kappa range = 0.39–0.69). The automated method performed better in satisfactory cores (kappa = 0.68) than suboptimal (kappa = 0.51) cores (p‐value for comparison = 0.005); and among cores with higher total nuclei counted by the machine (4,000–4,500 cells: kappa = 0.78) than those with lower counts (50–500 cells: kappa = 0.41; p‐value = 0.010). Among the 9,059 cases in this study, the correlations between automated Ki67 and clinical and pathological characteristics were found to be in the expected directions. Our findings indicate that automated scoring of Ki67 can be an efficient method to obtain good quality data across large numbers of TMAs from multicentre studies.
However, robust algorithm development and rigorous pre‐ and post‐analytical quality control procedures are necessary in order to ensure satisfactory performance. PMID:27499923
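Agreement between the automated and CAV scores is summarized above with kappa. Cohen's kappa corrects the observed agreement between two raters for the agreement expected by chance; a minimal two-rater implementation (illustrative, not the consortium's analysis code):

```python
def cohens_kappa(a, b):
    # a, b: categorical ratings of the same items by two raters
    n = len(a)
    cats = set(a) | set(b)
    po = sum(1 for x, y in zip(a, b) if x == y) / n        # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa of 1 means perfect agreement; 0 means no better than chance, which is why it is preferred over raw percent agreement for scoring comparisons like the ones reported.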
Breakthrough Technologies Developed by the Air Force Research Laboratory and Its Predecessors
2005-12-21
quickly plan its most economical fabrication within the constraints of schedule, availability of raw materials, and variability of materials and...intensive—more efficient and economical. ManTech introduced automation and inspection technologies, including the use of scanning electron...the novel use of asphalt mixed with ammonium nitrate as a solid propellant, a mixture first devised at the Jet Propulsion Laboratory. That line of
2010-12-01
recommend [13]. 2.2 Commercial content scanning technology In [16], a companion piece to this paper, Magar completed a thorough review of commercially...
Electronic Fingerprinting for Industry
NASA Technical Reports Server (NTRS)
1995-01-01
Veritec's VeriSystem is a complete identification and tracking system for component traceability, improved manufacturing and processing, and automated shop floor applications. The system includes the Vericode Symbol, a more accurate and versatile alternative to the traditional bar code, that is scanned by charge coupled device (CCD) cameras. The system was developed by Veritec, Rockwell International and Marshall Space Flight Center to identify and track Space Shuttle parts.
JPRS report: Science and technology. Central Eurasia
NASA Astrophysics Data System (ADS)
1995-02-01
Translated articles cover the following topics: laser-controlled rotary microwave waveguide junction; optical pulse-phase modulation of a semiconductor laser; amplitude-phase distortions of a light beam obliquely propagating through the ground layer of the troposphere; antenna arrays with ultrafast beam scanning; materials for a walk on the moon; the textile-wood-coal briquette path to capitalism; and development of an automated system for scientific research and design of heat and mass transfer processes.
A next generation processing system for edging and trimming
A. Lynn Abbott; Daniel L. Schmoldt; Philip A. Araman
2000-01-01
This paper describes a prototype scanning system that is being developed for the processing of rough hardwood lumber. The overall goal of the system is to automate the selection of cutting positions for the edges and ends of rough, green lumber. Such edge and trim cuts are typically performed at sawmills in an effort to increase board value prior to sale, and this...
ERIC Educational Resources Information Center
Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.
2015-01-01
Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…
Automated Slicing for a Multi-Axis Metal Deposition System (Preprint)
2006-09-01
experimented with different materials like H13 tool steel to build the part. Following the same slicing and scanning toolpath result, there is a geometric...and analysis tool -centroidal axis. Similar to medial axis, it contains geometry and topological information but is significantly computationally...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piot, P.; Halavanau, A.
This paper discusses the implementation of a Python-based high-level interface to the Fermilab ACNET control system. The interface has been successfully employed during the commissioning of the Fermilab Accelerator Science & Technology (FAST) facility. Specifically, we present examples of applications at FAST which include the interfacing of the elegant program to assist lattice matching, an automated emittance measurement via the quadrupole-scan method, and transverse transport matrix measurement of a superconducting RF cavity.
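A quadrupole-scan emittance measurement of the kind mentioned fits the measured beam size squared at a downstream screen as a quadratic in quadrupole strength and unfolds the beam matrix from the fit coefficients. A thin-lens, noise-free, 1-D sketch of that analysis (not the FAST interface code; `L` is the assumed quad-to-screen drift length and `K` the integrated quad strength):

```python
def solve3(A, b):
    # Cramer's rule for a 3x3 linear system; fine at this size
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det(A)
    out = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        out.append(det(Ai) / D)
    return out

def quad_scan_emittance(Ks, sizes2, L):
    # Least-squares fit of sigma_screen^2 = a*K^2 + b*K + c, then unfold the
    # beam matrix at the quad (thin-lens optics, drift L to the screen):
    #   a = L^2*s11,  b = -2*L*s11 - 2*L^2*s12,  c = s11 + 2*L*s12 + L^2*s22
    S = [sum(K ** p for K in Ks) for p in range(5)]
    T = [sum(s2 * K ** p for K, s2 in zip(Ks, sizes2)) for p in range(3)]
    a, b, c = solve3([[S[4], S[3], S[2]],
                      [S[3], S[2], S[1]],
                      [S[2], S[1], S[0]]],
                     [T[2], T[1], T[0]])
    s11 = a / L ** 2
    s12 = -(b + 2 * L * s11) / (2 * L ** 2)
    s22 = (c - s11 - 2 * L * s12) / L ** 2
    return (s11 * s22 - s12 ** 2) ** 0.5   # geometric emittance
```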
Four-probe measurements with a three-probe scanning tunneling microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salomons, Mark; Martins, Bruno V. C.; Zikovsky, Janik
2014-04-15
We present an ultrahigh vacuum (UHV) three-probe scanning tunneling microscope in which each probe is capable of atomic resolution. A UHV JEOL scanning electron microscope aids in the placement of the probes on the sample. The machine also has a field ion microscope to clean, atomically image, and shape the probe tips. The machine uses bare conductive samples and tips with a homebuilt set of pliers for heating and loading. Automated feedback controlled tip-surface contacts allow for electrical stability and reproducibility while also greatly reducing tip and surface damage due to contact formation. The ability to register inter-tip position by imaging of a single surface feature by multiple tips is demonstrated. Four-probe material characterization is achieved by deploying two tips as fixed current probes and the third tip as a movable voltage probe.
Three-dimensional Imaging and Scanning: Current and Future Applications for Pathology
Farahani, Navid; Braun, Alex; Jutt, Dylan; Huffman, Todd; Reder, Nick; Liu, Zheng; Yagi, Yukako; Pantanowitz, Liron
2017-01-01
Imaging is vital for the assessment of physiologic and phenotypic details. In the past, biomedical imaging was heavily reliant on analog, low-throughput methods, which would produce two-dimensional images. However, newer, digital, and high-throughput three-dimensional (3D) imaging methods, which rely on computer vision and computer graphics, are transforming the way biomedical professionals practice. 3D imaging has been useful in diagnostic, prognostic, and therapeutic decision-making for the medical and biomedical professions. Herein, we summarize current imaging methods that enable optimal 3D histopathologic reconstruction: Scanning, 3D scanning, and whole slide imaging. Briefly mentioned are emerging platforms, which combine robotics, sectioning, and imaging in their pursuit to digitize and automate the entire microscopy workflow. Finally, both current and emerging 3D imaging methods are discussed in relation to current and future applications within the context of pathology. PMID:28966836
NASA Technical Reports Server (NTRS)
Schwartzberg, F. R.; Toth, C., Jr.; King, R. G.; Todd, P. H., Jr.
1979-01-01
The technique for inspection of railroad rails containing transverse fissure defects was discussed. Both pulse-echo and pitch-catch inspection techniques were used. The pulse-echo technique results suggest that a multiple-scan approach using varying angles of inclination, three-surface scanning, and dual-direction traversing may offer promise of characterization of transverse defects. Because each scan is likely to produce a reflection indicating only a portion of the defect, summing of the individual reflections must be used to obtain a reasonably complete characterization of the defect. The ability of the collimated pitch-catch technique to detect relatively small amounts of flaw growth was shown. The method has a problem in characterizing the portions of the defect near the top surface or web intersection. The work performed was a preliminary evaluation of the prospects for automated mapping of rail flaws.
Real time automated inspection
Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.
1985-05-21
A method and apparatus are described relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections. 43 figs.
Schlegel, R; Hänke, T; Baumann, D; Kaiser, M; Nag, P K; Voigtländer, R; Lindackers, D; Büchner, B; Hess, C
2014-01-01
We present the design, setup, and operation of a new dip-stick scanning tunneling microscope. Its special design allows measurements in the temperature range from 4.7 K up to room temperature, where cryogenic vacuum conditions are maintained during the measurement. The system fits into every ⁴He vessel with a bore of 50 mm, e.g., a transport dewar or a magnet bath cryostat. The microscope is equipped with a cleaving mechanism for cleaving single crystals in the whole temperature range and under cryogenic vacuum conditions. For the tip approach, a capacitive automated coarse approach is implemented. We present test measurements on the charge density wave system 2H-NbSe₂ and the superconductor LiFeAs which demonstrate scanning tunneling microscopy and spectroscopy data acquisition with high stability and high spatial resolution at variable temperatures and in high magnetic fields.
Exploring Cognition Using Software Defined Radios for NASA Missions
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Reinhart, Richard C.
2016-01-01
NASA missions typically operate using a communication infrastructure that requires significant schedule planning and offers limited flexibility when the needs of the mission change. Parameters such as modulation, coding scheme, frequency, and data rate are fixed for the life of the mission. This is due to antiquated hardware and software for both the space and ground assets and a very complex set of mission profiles. NASA is exploring automated techniques already in place at commercial telecommunication companies to determine their usability for reducing cost and increasing science return. Adding cognition, the ability to learn from past decisions and adjust behavior, is also being investigated. Software Defined Radios are an ideal way to implement cognitive concepts. Cognition can be considered in many different aspects of the communication system. Radio functions, such as frequency, modulation, data rate, coding, and filters, can be adjusted based on measurements of signal degradation. Data delivery mechanisms and route changes based on past successes and failures can be made to deliver the data to the end user more efficiently. Automated antenna pointing can be added to improve gain, coverage, or adjust the target. Scheduling improvements and automation to reduce the dependence on humans provide more flexible capabilities. The Cognitive Communications project, funded by the Space Communication and Navigation Program, is exploring these concepts and using the SCaN Testbed on board the International Space Station to implement them as they evolve. The SCaN Testbed contains three Software Defined Radios and a flight computer. These four computing platforms, along with a tracking antenna system and the supporting ground infrastructure, will be used to implement various concepts in a system similar to those used by missions. Multiple universities and SBIR companies are supporting this investigation.
This paper will describe the cognitive system ideas under consideration and the plan for implementing them on platforms, including the SCaN Testbed. Discussions in the paper will include how these concepts might be used to reduce cost and improve the science return for NASA missions.
Ding, Jie; Stopeck, Alison T; Gao, Yi; Marron, Marilyn T; Wertheim, Betsy C; Altbach, Maria I; Galons, Jean-Philippe; Roe, Denise J; Wang, Fang; Maskarinec, Gertraud; Thomson, Cynthia A; Thompson, Patricia A; Huang, Chuan
2018-04-06
Increased breast density is a significant independent risk factor for breast cancer, and recent studies show that this risk is modifiable. Hence, breast density measures sensitive to small changes are desired. Utilizing fat-water decomposition MRI, we propose an automated, reproducible breast density measurement, which is nonionizing and directly comparable to mammographic density (MD). Retrospective study. The study included two sample sets of breast cancer patients enrolled in a clinical trial, for concordance analysis with MD (40 patients) and reproducibility analysis (10 patients). The majority of MRI scans (59 scans) were performed with a 1.5T GE Signa scanner using a radial IDEAL-GRASE sequence, while the remaining (seven scans) were performed with a 3T Siemens Skyra using a 3D Cartesian 6-echo GRE sequence with a similar fat-water separation technique. After automated breast segmentation, breast density was calculated using FraGW, a new measure developed to reliably reflect the amount of fibroglandular tissue and total water content in the entire breast. Based on its concordance with MD, FraGW was calibrated to MR-based breast density (MRD) to be comparable to MD. A previous breast density measurement, Fra80, the ratio of breast voxels with <80% fat fraction, was also calculated for comparison with FraGW. Pearson correlation was performed between MD (reference standard) and FraGW (and Fra80). Test-retest reproducibility of MRD was evaluated using the difference between test-retest measures (Δ₁₋₂) and the intraclass correlation coefficient (ICC). Both FraGW and Fra80 were strongly correlated with MD (Pearson ρ: 0.96 vs. 0.90, both P < 0.0001). MRD converted from FraGW showed higher test-retest reproducibility (Δ₁₋₂ variation: 1.1% ± 1.2%; ICC: 0.99) compared to MD itself (literature intrareader ICC ≤0.96) and Fra80.
The proposed MRD is directly comparable with MD and highly reproducible, which enables the early detection of small breast density changes and treatment response. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
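The Fra80 measure defined in the abstract above (the ratio of breast voxels with a fat fraction below 80%) is simple enough to sketch. The snippet below is our illustrative Python, not the authors' code; the more involved FraGW measure is not reproduced here because its exact formula is not given in the abstract.

```python
# Illustrative sketch of the Fra80 breast density measure: the fraction of
# voxels inside the segmented breast whose fat fraction is below 80%.
# Function and variable names are ours (hypothetical), not the authors'.

def fra80(fat_fractions):
    """fat_fractions: iterable of per-voxel fat fractions in [0, 1],
    restricted to voxels inside the automated breast segmentation."""
    voxels = list(fat_fractions)
    if not voxels:
        raise ValueError("empty breast mask")
    return sum(1 for f in voxels if f < 0.80) / len(voxels)
```

For example, a breast mask with voxel fat fractions [0.9, 0.5, 0.7, 0.95] yields Fra80 = 0.5, since two of the four voxels fall below the 80% cutoff.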
Kuo, Phillip Hsin; Avery, Ryan; Krupinski, Elizabeth; Lei, Hong; Bauer, Adam; Sherman, Scott; McMillan, Natalie; Seibyl, John; Zubal, George
2013-03-01
A fully automated objective striatal analysis (OSA) program that quantitates dopamine transporter uptake in subjects with suspected Parkinson's disease was applied to images from clinical ¹²³I-ioflupane studies. The striatal binding ratios, or alternatively the specific binding ratio (SBR) of the lowest putamen uptake, were computed, and receiver-operating-characteristic (ROC) analysis was applied to 94 subjects to determine the best discriminator using this quantitative method. Ninety-four ¹²³I-ioflupane SPECT scans were analyzed from patients referred to our clinical imaging department and were reconstructed using the manufacturer-supplied reconstruction and filtering parameters for the radiotracer. Three trained readers conducted independent visual interpretations and reported each case as either normal or showing dopaminergic deficit (abnormal). The same images were analyzed using the OSA software, which locates the striatal and occipital structures and places regions of interest on the caudate and putamen. Additionally, the OSA places a region of interest on the occipital region that is used to calculate the background-subtracted SBR. The lower SBR of the 2 putamen regions was taken as the quantitative report. The 33 normal (bilateral comma-shaped striata) and 61 abnormal (unilateral or bilateral dopaminergic deficit) studies were analyzed to generate ROC curves. Twenty-nine of the scans were interpreted as normal and 59 as abnormal by all 3 readers. For 12 scans, the 3 readers did not unanimously agree in their interpretations (discordant). The ROC analysis, which used the visual-majority-consensus interpretation from the readers as the gold standard, yielded an area under the curve of 0.958 when using 1.08 as the threshold SBR for the lowest putamen. The sensitivity and specificity of the automated quantitative analysis were 95% and 89%, respectively.
The OSA program delivers SBR quantitative values that have a high sensitivity and specificity, compared with visual interpretations by trained nuclear medicine readers. Such a program could be a helpful aid for readers not yet experienced with ¹²³I-ioflupane SPECT images and, if further adapted and validated, may be useful to assess disease progression during pharmaceutical testing of therapies.
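The background-subtracted SBR pipeline described in the abstract above can be sketched in a few lines. This is a minimal illustration assuming mean counts per region have already been extracted; the function names are ours, not the OSA software's, and the 1.08 cutoff is the ROC-derived threshold reported in the abstract.

```python
# Hedged sketch of the specific binding ratio (SBR) computation: the
# occipital region serves as the nondisplaceable background, and the lower
# of the two putamen SBRs is taken as the per-patient quantitative report.

def specific_binding_ratio(region_counts, occipital_counts):
    """Background-subtracted SBR: (region - occipital) / occipital."""
    return (region_counts - occipital_counts) / occipital_counts

def lowest_putamen_sbr(left_putamen, right_putamen, occipital):
    """Per-patient score: the lower SBR of the two putamen regions."""
    return min(specific_binding_ratio(left_putamen, occipital),
               specific_binding_ratio(right_putamen, occipital))

def classify(sbr, threshold=1.08):
    """Apply the ROC-derived threshold reported in the abstract."""
    return "normal" if sbr >= threshold else "dopaminergic deficit"
```

As a usage example, mean counts of 300 (left putamen), 250 (right putamen), and 100 (occipital) give a lowest-putamen SBR of 1.5, which falls on the normal side of the 1.08 threshold.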
Automated microdensitometer for digitizing astronomical plates
NASA Technical Reports Server (NTRS)
Angilello, J.; Chiang, W. H.; Elmegreen, D. M.; Segmueller, A.
1984-01-01
A precision microdensitometer was built under the control of an IBM S/1 time-sharing computer system. The instrument's spatial resolution is better than 20 microns. A raster scan of a 10 × 10 mm area (500 × 500 raster points) takes 255 minutes. The reproducibility is excellent and the stability is good over a period of 30 hours, which is significantly longer than the time required for most scans. The intrinsic accuracy of the instrument was tested using Kodak standard filters and found to be better than 3%. Comparative accuracy was tested by measuring astronomical plates of galaxies for which absolute photoelectric photometry data were available. The results showed an accuracy that is excellent for astronomical applications.
3D Textured Modelling of both Exterior and Interior of Korean Styled Architectures
NASA Astrophysics Data System (ADS)
Lee, J.-D.; Bhang, K.-J.; Schuhr, W.
2017-08-01
This paper describes the 3D modelling procedure for two Korean styled architectures, performed through a series of processing steps on data acquired with a terrestrial laser scanner. These two case projects illustrate the use of the terrestrial laser scanner as a digital documentation tool for management, conservation and restoration of cultural assets. We showed an approach to automate reconstruction of both the outside and inside models of a building from laser scanning data. Laser scanning technology is much more efficient than existing photogrammetry in measuring shape and constructing spatial databases for preservation and restoration of cultural assets as well as for deformation monitoring and safety diagnosis of structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, S; Gulam, M; Song, K
2014-06-01
Purpose: The Varian EDGE machine is a new stereotactic platform, combining Calypso and VisionRT localization systems with a stereotactic linac. The system includes TrueBeam DeveloperMode, making possible the use of XML scripting for automation of linac-related tasks. This study details the use of DeveloperMode to automate commissioning tasks for the Varian EDGE, thereby improving efficiency and measurement consistency. Methods: XML scripting was used for various commissioning tasks, including couch model verification, beam scanning, and isocenter verification. For couch measurements, point measurements were acquired for several field sizes (2×2, 4×4, 10×10 cm²) at 42 gantry angles for two couch models. Measurements were acquired with variations in couch position (rails in/out, couch shifted in each of its motion axes) and compared with treatment planning system (TPS)-calculated values, which were logged automatically through advanced planning interface (API) scripting functionality. For beam scanning, XML scripts were used to create custom MLC apertures. For isocenter verification, XML scripts were used to automate various Winston-Lutz-type tests. Results: For couch measurements, the time required for each set of angles was approximately 9 minutes. Without scripting, each set required approximately 12 minutes. Automated measurements required only one physicist, while manual measurements required at least two physicists to handle linac positions/beams and data recording. MLC apertures were generated outside of the TPS, and with the .xml file format, double-checking without use of the TPS/operator console was possible. Similar time efficiency gains were found for isocenter verification measurements. Conclusion: The use of XML scripting in TrueBeam DeveloperMode allows for efficient and accurate data acquisition during commissioning.
The efficiency improvement is most pronounced for iterative measurements, exemplified by the time savings for couch modeling measurements (approximately 10 hours). The scripting also allowed for creation of the files in advance without requiring access to the TPS. The API scripting functionality enabled efficient creation/mining of TPS data. Finally, automation reduces the potential for human error in entering linac values at the machine console, and the script provides a log of measurements acquired for each session. This research was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.
Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe
2011-03-01
This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas in 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided similar results to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods were able to provide realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.
Fetal brain volumetry through MRI volumetric reconstruction and segmentation
Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.
2013-01-01
Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains for a cohort of twenty-five clinically acquired fetal MRI scans was done. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparing to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848
Optical Coherence Tomography Evaluation in the Multicenter Uveitis Steroid Treatment (MUST) Trial
Domalpally, Amitha; Altaweel, Michael M.; Kempen, John H.; Myers, Dawn; Davis, Janet L; Foster, C Stephen; Latkany, Paul; Srivastava, Sunil K.; Stawell, Richard J.; Holbrook, Janet T.
2013-01-01
Purpose To describe the evaluation of optical coherence tomography (OCT) scans in the Multicenter Uveitis Steroid Treatment (MUST) trial and report baseline OCT features of enrolled participants. Methods Time domain OCTs acquired by certified photographers using a standardized scan protocol were evaluated at a Reading Center. The accuracy of retinal thickness data was confirmed with quality evaluation, and caliper measurement of centerpoint thickness (CPT) was performed when the automated value was unreliable. Morphological evaluation included cysts, subretinal fluid, epiretinal membranes (ERMs), and vitreomacular traction. Results Of the 453 OCTs evaluated, automated retinal thickness was accurate in 69.5% of scans, caliper measurement was performed in 26%, and 4% were ungradable. The intraclass correlation was 0.98 for reproducibility of caliper measurement. Macular edema (centerpoint thickness ≥240 μm) was present in 36%. Cysts were present in 36.6% of scans and ERMs in 27.8%, predominantly central. Intergrader agreement ranged from 78% to 82% for morphological features. Conclusion Retinal thickness data can be retrieved in a majority of OCT scans in clinical trial submissions for uveitis studies. Small cysts and ERMs involving the center are common in intermediate and posterior/panuveitis requiring systemic corticosteroid therapy. PMID:23163490
NASA Astrophysics Data System (ADS)
Hausmann, Michael; Doelle, Juergen; Arnold, Armin; Stepanow, Boris; Wickert, Burkhard; Boscher, Jeannine; Popescu, Paul C.; Cremer, Christoph
1992-07-01
Laser fluorescence activated slit-scan flow cytometry offers an approach to a fast, quantitative characterization of chromosomes due to morphological features. It can be applied for screening of chromosomal abnormalities. We give a preliminary report on the development of the Heidelberg slit-scan flow cytometer. Time-resolved measurement of the fluorescence intensity along the chromosome axis can be registered simultaneously for two parameters when the chromosome passes perpendicularly through a narrowly focused laser beam combined with a detection slit in the image plane. So far automated data analysis has been performed off-line on a PC. In its final configuration, the Heidelberg slit-scan flow cytometer will achieve on-line data analysis that allows electro-acoustical sorting of chromosomes of interest. Interest is high in the agricultural field in studying chromosome aberrations that influence litter size in pig (Sus scrofa domestica) breeding. Slit-scan measurements have been performed to characterize chromosomes of pigs; we present results for chromosome 1 and a translocation chromosome 6/15.
Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario
2017-06-01
The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT dataset of 9 severely resorbed extraction sockets was analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. They were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with Micro-CT to test the accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P<0.0001). The automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and by the automated method using ImageJ. The currently proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.
Bayesian automated cortical segmentation for neonatal MRI
NASA Astrophysics Data System (ADS)
Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha
2017-11-01
Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, further improving the reliability and processing time of neonatal MR images. Further improvements will include a larger dataset of training images acquired from different manufacturers.
Summers, Ronald M; Baecher, Nicolai; Yao, Jianhua; Liu, Jiamin; Pickhardt, Perry J; Choi, J Richard; Hill, Suvimol
2011-01-01
To show the feasibility of calculating the bone mineral density (BMD) from computed tomographic colonography (CTC) scans using fully automated software. Automated BMD measurement software was developed that measures the BMD of the first and second lumbar vertebrae on computed tomography and calculates the mean of the 2 values to provide a per patient BMD estimate. The software was validated in a reference population of 17 consecutive women who underwent quantitative computed tomography and in a population of 475 women from a consecutive series of asymptomatic patients enrolled in a CTC screening trial conducted at 3 medical centers. The mean (SD) BMD was 133.6 (34.6) mg/mL (95% confidence interval, 130.5-136.7; n = 475). In women aged 42 to 60 years (n = 316) and 61 to 79 years (n = 159), the mean (SD) BMDs were 143.1 (33.5) and 114.7 (28.3) mg/mL, respectively (P < 0.0001). Fully automated BMD measurements were reproducible for a given patient with 95% limits of agreement of -9.79 to 8.46 mg/mL for the mean difference between paired assessments on supine and prone CTC. Osteoporosis screening can be performed simultaneously with screening for colorectal polyps.
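The per-patient BMD estimate described above is simply the mean of the L1 and L2 vertebral values, and the quoted supine/prone reproducibility is a limits-of-agreement analysis. The sketch below is illustrative Python with hypothetical names; the mean ± 1.96 SD formula for the 95% limits of agreement is the standard Bland-Altman convention, assumed rather than stated in the abstract.

```python
# Sketch of the per-patient BMD estimate: the mean of the automated BMD
# measurements (mg/mL) of the first and second lumbar vertebrae.

def patient_bmd(bmd_l1, bmd_l2):
    return (bmd_l1 + bmd_l2) / 2.0

# Bland-Altman style 95% limits of agreement for paired supine/prone
# estimates (assumed formula: mean difference +/- 1.96 * SD of differences).
def limits_of_agreement(diffs):
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean - 1.96 * sd, mean + 1.96 * sd
```

For example, L1 and L2 values of 140 and 120 mg/mL give a per-patient estimate of 130 mg/mL.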
Morales, Juan; Alonso-Nanclares, Lidia; Rodríguez, José-Rodrigo; DeFelipe, Javier; Rodríguez, Ángel; Merchán-Pérez, Ángel
2011-01-01
The synapses in the cerebral cortex can be classified into two main types, Gray's type I and type II, which correspond to asymmetric (mostly glutamatergic excitatory) and symmetric (inhibitory GABAergic) synapses, respectively. Hence, the quantification and identification of their different types and the proportions in which they are found is extraordinarily important in terms of brain function. The ideal approach to calculate the number of synapses per unit volume is to analyze 3D samples reconstructed from serial sections. However, obtaining serial sections by transmission electron microscopy is an extremely time consuming and technically demanding task. Using focused ion beam/scanning electron microscopy, we recently showed that virtually all synapses can be accurately identified as asymmetric or symmetric synapses when they are visualized, reconstructed, and quantified from large 3D tissue samples obtained in an automated manner. Nevertheless, the analysis, segmentation, and quantification of synapses is still a labor intensive procedure. Thus, novel solutions are currently necessary to deal with the large volume of data that is being generated by automated 3D electron microscopy. Accordingly, we have developed ESPINA, a software tool that performs the automated segmentation and counting of synapses in a reconstructed 3D volume of the cerebral cortex, and that greatly facilitates and accelerates these processes. PMID:21633491
van 't Klooster, Ronald; de Koning, Patrick J H; Dehnavi, Reza Alizadeh; Tamsma, Jouke T; de Roos, Albert; Reiber, Johan H C; van der Geest, Rob J
2012-01-01
To develop and validate an automated segmentation technique for the detection of the lumen and outer wall boundaries in MR vessel wall studies of the common carotid artery. A new segmentation method was developed using a three-dimensional (3D) deformable vessel model requiring only one single user interaction by combining 3D MR angiography (MRA) and 2D vessel wall images. This vessel model is a 3D cylindrical Non-Uniform Rational B-Spline (NURBS) surface which can be deformed to fit the underlying image data. Image data of 45 subjects was used to validate the method by comparing manual and automatic segmentations. Vessel wall thickness and volume measurements obtained by both methods were compared. Substantial agreement was observed between manual and automatic segmentation; over 85% of the vessel wall contours were segmented successfully. The intraclass correlation was 0.690 for the vessel wall thickness and 0.793 for the vessel wall volume. Compared with manual image analysis, the automated method demonstrated improved interobserver agreement and inter-scan reproducibility. Additionally, the proposed automated image analysis approach was substantially faster. This new automated method can reduce analysis time and enhance reproducibility of the quantification of vessel wall dimensions in clinical studies. Copyright © 2011 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leinders, Suzanne M.; Delft University of Technology, Delft; Breedveld, Sebastiaan
Purpose: To investigate how dose distributions for liver stereotactic body radiation therapy (SBRT) can be improved by using automated, daily plan reoptimization to account for anatomy deformations, compared with setup corrections only. Methods and Materials: For 12 tumors, 3 strategies for dose delivery were simulated. In the first strategy, computed tomography scans made before each treatment fraction were used only for patient repositioning before dose delivery for correction of detected tumor setup errors. In the adaptive second and third strategies, in addition to the isocenter shift, intensity modulated radiation therapy beam profiles were reoptimized, or both intensity profiles and beam orientations were reoptimized, respectively. All optimizations were performed with a recently published algorithm for automated, multicriteria optimization of both beam profiles and beam angles. Results: In 6 of 12 cases, violations of organ-at-risk (ie, heart, stomach, kidney) constraints of 1 to 6 Gy in single fractions occurred in cases of tumor repositioning only. By using the adaptive strategies, these could be avoided (<1 Gy). For 1 case, this needed adaptation by slightly underdosing the planning target volume. For 2 cases with restricted tumor dose in the planning phase to avoid organ-at-risk constraint violations, fraction doses could be increased by 1 and 2 Gy because of more favorable anatomy. Daily reoptimization of both beam profiles and beam angles (third strategy) performed slightly better than reoptimization of profiles only, but the latter required only a few minutes of computation time, whereas full reoptimization took several hours. Conclusions: This simulation study demonstrated that replanning based on daily acquired computed tomography scans can improve liver stereotactic body radiation therapy dose delivery.
NASA Astrophysics Data System (ADS)
Wahi-Anwar, M. Wasil; Emaminejad, Nastaran; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.
2018-02-01
Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on the segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully-automated nodule segmentation method, to avoid reader-influenced inconsistencies, to explore the effects of varied dose levels and reconstruction parameters on segmentation. Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0 mm, 0.6 mm), and a medium kernel. Fully-automated nodule detection and segmentation was then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions. Nodules at 1.0 mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6 mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to reducing the slice thickness from 1.0 mm to 0.6 mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there exists some stability in segmentation overlap: even at 1.0 mm, an image at 25% of the low-dose scan's exposure still results in segmentations similar to those seen in a full-dose scan.
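The Dice similarity coefficient used above to compare segmentations of the same nodule across dose and reconstruction conditions is DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on voxel-index sets (illustrative code, not the study's pipeline):

```python
# Dice similarity coefficient between two segmentation masks, represented
# here as collections of voxel indices (e.g., (z, y, x) tuples).

def dice(mask_a, mask_b):
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # both empty: treated as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))
```

Two four-voxel masks sharing two voxels give DSC = 2·2/8 = 0.5; identical masks give 1.0, matching the abstract's use of DSC > 0.85 as a high-similarity criterion.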
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brink, Carsten, E-mail: carsten.brink@rsyd.dk; Laboratory of Radiation Physics, Odense University Hospital; Bernchou, Uffe
2014-07-15
Purpose: Large interindividual variations in volume regression of non-small cell lung cancer (NSCLC) are observable on standard cone beam computed tomography (CBCT) during fractionated radiation therapy. Here, a method for automated assessment of tumor volume regression is presented, and its potential use in response-adapted personalized radiation therapy is evaluated empirically. Methods and Materials: Automated deformable registration with calculation of the Jacobian determinant was applied to serial CBCT scans in a series of 99 patients with NSCLC. Tumor volume at the end of treatment was estimated on the basis of the first one third and two thirds of the scans. The concordance between estimated and actual relative volume at the end of radiation therapy was quantified by Pearson's correlation coefficient. On the basis of the estimated relative volume, the patients were stratified into 2 groups having volume regressions below or above the population median value. Kaplan-Meier plots of locoregional disease-free rate and overall survival in the 2 groups were used to evaluate the predictive value of tumor regression during treatment. A Cox proportional hazards model was used to adjust for other clinical characteristics. Results: Automatic measurement of the tumor regression from standard CBCT images was feasible. Pearson's correlation coefficient between manual and automatic measurement was 0.86 in a sample of 9 patients. Most patients experienced tumor volume regression, and this could be quantified early into the treatment course. Interestingly, patients with pronounced volume regression had worse locoregional tumor control and overall survival. This was significant in patients with non-adenocarcinoma histology. Conclusions: Evaluation of routinely acquired CBCT images during radiation therapy provides biological information on the specific tumor. This could potentially form the basis for personalized response-adaptive therapy.
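The volume-regression principle above rests on a standard identity: the Jacobian determinant of a deformable transform gives the local volume change, so averaging it over the baseline tumor mask yields the relative tumor volume. The sketch below illustrates this with a 3×3 linear transform (where the relative volume is simply det A) and a per-voxel average; it is our illustration, not the authors' registration software.

```python
# For an affine deformation x -> A x + t, the Jacobian determinant det(A)
# is the (spatially constant) local volume scaling factor.
def det3(a):
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

# For a general deformation, per-voxel Jacobian determinants are averaged
# over the baseline tumor mask to estimate the relative tumor volume
# (1.0 = no change, <1.0 = regression). Names here are illustrative.
def relative_volume(jacobian_dets, mask):
    vals = [j for j, inside in zip(jacobian_dets, mask) if inside]
    return sum(vals) / len(vals)
```

For example, a uniform shrinkage with diagonal scales 0.8, 0.9, and 1.0 has det(A) = 0.72, i.e., the tumor retains 72% of its baseline volume.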
Khosroshahi, M E; Nourbakhsh, M S; Saremi, S; Hooshyar, A; Rabbani, Sh; Tabatabai, F; Anvari, M Sotudeh
2010-12-01
We sought to examine the impact of different laser soldering parameters on the thermophysical properties of the skin and to optimize these parameters for sealing a full-thickness incision in rat skin under closed feedback control in vivo. Laser tissue soldering based on proteins as biologic glues and other compounds can provide greater bond strength and less collateral damage. Endogenous and exogenous materials such as indocyanine green (ICG) are often added to solders to enhance light absorption. In the ex vivo study, the temperature increase, number of scans (Ns), and scan velocity (Vs) were investigated. In the in vivo studies, four skin incisions were made over rat dorsa and were closed by using two different methods: (a) wound closure by suture and (b) closure by using an automated temperature-controlled system. An automated soldering system was developed based on a diode laser, IR detector, photodiode, digital thermocouple, and camera. The true temperature of the heated tissue was determined by using a calibration software method. The results showed that at each laser irradiance (I), the tensile strength (σ) of incisions repaired in the static mode is higher than in the dynamic mode. The tensile strength of the repaired skin wound also increased with increasing irradiance in both static and dynamic modes; in parallel, however, an increase in the corresponding temperature was observed. The tensile strength was measured for sutured and laser-soldered tissue after 2 to 10 postoperative days. Histopathologic studies showed better healing and fewer inflammatory reactions than those caused by standard sutures after day 7. It is demonstrated that the automated laser soldering technique can be practical provided the optothermal properties of the tissue are carefully optimized.
Automated image analysis of alpha-particle autoradiographs of human bone
NASA Astrophysics Data System (ADS)
Hatzialekou, Urania; Henshaw, Denis L.; Fews, A. Peter
1988-01-01
Further techniques [4,5] for the analysis of CR-39 α-particle autoradiographs have been developed for application to α-autoradiography of autopsy bone at natural levels of exposure. The most significant new approach is the use of fully automated image analysis using a system developed in this laboratory. A 5 cm × 5 cm autoradiograph of tissue in which the activity is below 1 Bq kg⁻¹ is scanned to both locate and measure the recorded α-particle tracks at a rate of 5 cm²/h. Improved methods of calibration have also been developed. The techniques are described and, to illustrate their application, a bone sample contaminated with ²³⁹Pu is analysed. Results from natural levels are the subject of a separate publication.
Simões, Rodrigo Almeida; Bonato, Pierina Sueli; Mirnaghi, Fatemeh S; Bojko, Barbara; Pawliszyn, Janusz
2015-01-01
A high-throughput bioanalytical method using 96-blade thin film microextraction (TFME) and LC-MS/MS for the analysis of repaglinide (RPG) and two of its main metabolites was developed and used for an in vitro metabolism study. The target analytes were extracted from human microsomal medium by a 96-blade-TFME system employing the low-cost prototype 'SPME multi-sampler' using C18 coating. Method validation showed recoveries around 90% for all analytes and was linear over the concentration range of 2-1000 ng ml(-1) for RPG and of 2-500 ng ml(-1) for each RPG metabolite. The method was applied to an in vitro metabolism study of RPG employing human liver microsomes and proved to be very useful for this purpose.
Kuwajima, Masaaki; Mendenhall, John M.; Lindsey, Laurence F.; Harris, Kristen M.
2013-01-01
Transmission-mode scanning electron microscopy (tSEM) on a field emission SEM platform was developed for efficient and cost-effective imaging of circuit-scale volumes from brain at nanoscale resolution. Image area was maximized while optimizing the resolution and dynamic range necessary for discriminating key subcellular structures, such as small axonal, dendritic and glial processes, synapses, smooth endoplasmic reticulum, vesicles, microtubules, polyribosomes, and endosomes which are critical for neuronal function. Individual image fields from the tSEM system were up to 4,295 µm² (65.54 µm per side) at 2 nm pixel size, contrasting with image fields from a modern transmission electron microscope (TEM) system, which were only 66.59 µm² (8.160 µm per side) at the same pixel size. The tSEM produced outstanding images and had reduced distortion and drift relative to TEM. Automated stage and scan control in tSEM easily provided unattended serial section imaging and montaging. Lens and scan properties on both TEM and SEM platforms revealed no significant nonlinear distortions within a central field of ∼100 µm² and produced near-perfect image registration across serial sections using the computational elastic alignment tool in Fiji/TrakEM2 software, and reliable geometric measurements from RECONSTRUCT™ or Fiji/TrakEM2 software. Axial resolution limits the analysis of small structures contained within a section (∼45 nm). Since this new tSEM is non-destructive, objects within a section can be explored at finer axial resolution in TEM tomography with current methods. Future development of tSEM tomography promises thinner axial resolution producing nearly isotropic voxels and should provide within-section analyses of structures without changing platforms. Brain was the test system given our interest in synaptic connectivity and plasticity; however, the new tSEM system is readily applicable to other biological systems. PMID:23555711
Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline
Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin
2014-01-01
Automated segmentation and labeling of individual brain anatomical regions in MRI are challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, which is an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification-based skull stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. AutoSeg achieved mean Dice coefficients of 81.73% for the subcortical structures.
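The weighted-majority-voting label fusion step can be sketched as follows. This is a minimal NumPy illustration; the function name and inputs are hypothetical, and the per-atlas similarity weights are assumed to be given:

```python
import numpy as np

def weighted_majority_vote(label_maps, weights):
    """Fuse candidate segmentations (list of integer label arrays of the
    same shape) into one label map: each atlas votes for its label at
    every voxel with its similarity weight; the highest-weighted label
    wins."""
    labels = np.unique(np.concatenate([lm.ravel() for lm in label_maps]))
    votes = np.zeros((len(labels),) + label_maps[0].shape)
    for lm, w in zip(label_maps, weights):
        for k, lab in enumerate(labels):
            votes[k] += w * (lm == lab)
    return labels[np.argmax(votes, axis=0)]
```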
PMID:24567717
Montgomery, Sandra; Roman, Kiana; Ngyuen, Lan; Cardenas, Ana Maria; Knox, James; Tomaras, Andrew P.; Graf, Erin H.
2017-01-01
Urinary tract infections are one of the most common reasons for health care visits. Diagnosis and optimal treatment often require a urine culture, which takes an average of 1.5 to 2 days from urine collection to results, delaying optimal therapy. Faster, but accurate, alternatives are needed. Light scatter technology has been proposed for several years as a rapid screening tool, whereby negative specimens are excluded from culture. A commercially available light scatter device, BacterioScan 216Dx (BacterioScan, Inc.), has recently been advertised for this application. Paired use of mass spectrometry (MS) for bacterial identification and automated-system-based susceptibility testing straight from the light scatter suspension might provide dramatic improvement in times to a result. The present study prospectively evaluated the BacterioScan device, with culture as the reference standard. Positive light scatter specimens were used for downstream rapid matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) MS organism identification and automated-system-based antimicrobial susceptibility testing. Prospective evaluation of 439 urine samples showed a sensitivity of 96.5%, a specificity of 71.4%, and positive and negative predictive values of 45.1% and 98.8%, respectively. MALDI-TOF MS analysis of the suspension after density-based selection yielded a sensitivity of 72.1% and a specificity of 96.9%. Antimicrobial susceptibility testing of the samples identified by MALDI-TOF MS produced an overall categorical agreement of 99.2%. Given the high sensitivity and negative predictive value of results obtained, BacterioScan 216Dx is a reasonable approach for urine screening and might produce negative results in as few as 3 h, with no downstream workup. Paired rapid identification and susceptibility testing might be useful when MALDI-TOF MS results in an organism identification, and it might decrease the time to a result by more than 24 h. PMID:28356414
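The reported screening metrics follow directly from confusion-matrix counts; a minimal sketch (the function name is illustrative, not from the study):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix
    counts: true/false positives (tp, fp) and true/false negatives
    (tn, fn) against the culture reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of positives detected
        "specificity": tn / (tn + fp),  # fraction of negatives cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

A high NPV is what justifies using the device as a rule-out screen: a negative result makes true infection unlikely.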
Journy, Neige M; McHugh, Kieran; Harbron, Richard W; Pearce, Mark S; Berrington De Gonzalez, Amy
2016-01-01
Objective: To describe the medical conditions associated with the use of CT in children or young adults with no previous cancer diagnosis. Methods: Radiologist reports for scans performed in 1995–2008 in non-cancer patients less than 22 years of age were collected from the radiology information system in 44 hospitals of Great Britain. By semantic search, an automated procedure identified 185 medical conditions within the radiologist reports. Manual validation of a subsample by a paediatric radiologist showed a satisfactory performance of the automatic coding procedure. Results: Medical information was extracted for 37,807 scans; 19.5% of scans were performed in children less than 5 years old; 52.0% of scans were performed in 2000 or after. Trauma and diseases of the nervous system (mainly hydrocephalus) or the circulatory system were each mentioned in 25–30% of scans. Hydrocephalus was mentioned in 19% of all scans, 59% of scans repeated ≥5 times in a year, and was the most frequent condition in children less than 5 years of age. Congenital diseases/malformations, disorders of the musculoskeletal system/connective tissues and infectious or respiratory diseases were each mentioned in 5–10% of scans. Suspicion or diagnosis of benign or malignant tumour was identified in 5% of scans. Conclusion: This study describes the medical conditions that likely underlie the use of CT in children in Great Britain. It shows that patients with hydrocephalus may receive high cumulative radiation exposures from CT in early life, i.e. at ages when they are most sensitive to radiation. Advances in knowledge: The majority of scans were unrelated to cancer suspicion. Repeated scans over time were mainly associated with the management of hydrocephalus. PMID:27767331
Image Processing Diagnostics: Emphysema
NASA Astrophysics Data System (ADS)
McKenzie, Alex
2009-10-01
Currently, computerized tomography (CT) scans can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open-source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of disease severity. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
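The skewness statistic mentioned above is the third standardized moment of the attenuation distribution. A minimal sketch, assuming the input is a flat array of Hounsfield-unit voxel values (not the plug-in's actual code):

```python
import numpy as np

def hu_skewness(hu_values):
    """Sample skewness of a CT attenuation (Hounsfield unit)
    distribution: the third standardized moment, summarizing how far
    the voxel histogram departs from a normal curve."""
    x = np.asarray(hu_values, dtype=float)
    m = x.mean()
    s = x.std()  # population standard deviation
    return ((x - m) ** 3).mean() / s ** 3
```

A symmetric distribution has skewness 0; an excess of very dark (emphysematous) voxels shifts the statistic away from that of a healthy lung.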
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung
2013-05-01
This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
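The two-waveband ratio rule can be sketched as a simple thresholding step. This is an illustrative NumPy sketch; the threshold value and function name are assumptions, not calibrated values from the study:

```python
import numpy as np

def band_ratio_mask(f666, f688, threshold=1.0):
    """Flag pixels whose fluorescence-intensity ratio I(666 nm)/I(688 nm)
    exceeds a threshold; fecal spots and clean leaf tissue fluoresce
    differently at the two wavebands, so the ratio separates them.
    The threshold here is illustrative only."""
    ratio = np.divide(f666, f688, out=np.zeros_like(f666, dtype=float),
                      where=f688 > 0)  # avoid division by zero off-leaf
    return ratio > threshold
```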
Synthesis of Hydrophobic, Crosslinkable Resins.
1985-12-01
product by methanol precipitation the majority of the first oligomer was lost. 4.14 DIFFERENTIAL SCANNING CALORIMETRY. The DSC trace of a typical...polymer from the DSC traces obtained to date. Preliminary studies using an automated torsional pendulum indicate that the Tg of the crosslinked polymer is...enabling water to be used in the purification steps. The diethyl phosphonates are readily prepared by heating triethyl phosphite with the chloromethyl
Prehospital Use of Plasma for Traumatic Hemorrhage
2014-06-01
combination of automated scanning and careful checking by eye, Bush, Gentry, and Glass converted the marks and free text on the surveys into an electronic ...available for later review during transfusion reaction investigations. • All study subjects’ blood typing results will be placed in their electronic ...from the study coordinator. We converted answers marked by hand, on paper, into an electronic format that was accessible to statistical methods. Then
Open Architecture as an Enabler for FORCEnet Cruise Missile Defense
2007-09-01
2007). Step 4 introduces another tool called the Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis. Once the TRO has been identified...the SWOT analysis can be used to help in the pursuit of that objective or mission objective. SWOT is defined as Strengths: attributes of the...over time. In addition to the SCAN and SWOT analysis processes, also needed are Automated Battle Management Aids (ABMA) tools that are required to
NASA Astrophysics Data System (ADS)
Tan, Ou; Liu, Gangjun; Liang, Liu; Gao, Simon S.; Pechauer, Alex D.; Jia, Yali; Huang, David
2015-06-01
An automated algorithm was developed for total retinal blood flow (TRBF) using 70-kHz spectral optical coherence tomography (OCT). The OCT was calibrated for the transformation from Doppler shift to speed based on a flow phantom. The TRBF scan pattern contained five repeated volume scans (2×2 mm) obtained in 3 s and centered on central retinal vessels in the optic disc. The TRBF was calculated using an en face Doppler technique. For each retinal vein, blood flow was measured at an optimal plane where the calculated flow was maximized. The TRBF was calculated by summing flow in all veins. The algorithm tracked vascular branching so that either root or branch veins are summed, but never both. The TRBF in five repeated volumes were averaged to reduce variation due to cardiac cycle pulsation. Finally, the TRBF was corrected for eye length variation. Twelve healthy eyes and 12 glaucomatous eyes were enrolled to test the algorithm. The TRBF was 45.4±6.7 μl/min for healthy control and 34.7±7.6 μl/min for glaucomatous participants (p-value=0.01). The intravisit repeatability was 8.6% for healthy controls and 8.4% for glaucoma participants. The proposed automated method provided repeatable TRBF measurement.
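The en face Doppler summation at the heart of the TRBF calculation can be sketched as follows. This is a minimal NumPy illustration under assumed inputs; the actual algorithm additionally selects the optimal plane per vein, tracks vascular branching, and corrects for eye length:

```python
import numpy as np

def vein_flow(vz_plane, vessel_mask, pixel_area):
    """En face Doppler flow through one vein: sum the axial velocity
    over the vessel cross-section in an en face plane times pixel area.
    With an en face plane, no Doppler-angle correction is needed."""
    return np.sum(vz_plane[vessel_mask]) * pixel_area

def total_retinal_blood_flow(vein_flows_per_volume):
    """TRBF: sum all vein flows within each repeated volume scan, then
    average across volumes to damp cardiac-cycle pulsation."""
    return np.mean([sum(v) for v in vein_flows_per_volume])
```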
NASA Astrophysics Data System (ADS)
Sun, Yankui; Li, Shan; Sun, Zhongyang
2017-01-01
We propose a framework for automated detection of dry age-related macular degeneration (AMD) and diabetic macular edema (DME) from retina optical coherence tomography (OCT) images, based on sparse coding and dictionary learning. The study aims to improve the classification performance of state-of-the-art methods. First, our method presents a general approach to automatically align and crop retina regions; then it obtains global representations of images by using sparse coding and a spatial pyramid; finally, a multiclass linear support vector machine classifier is employed for classification. We apply two datasets for validating our algorithm: the Duke spectral domain OCT (SD-OCT) dataset, consisting of volumetric scans acquired from 45 subjects (15 normal subjects, 15 AMD patients, and 15 DME patients); and a clinical SD-OCT dataset, consisting of 678 OCT retina scans acquired from clinics in Beijing (168, 297, and 213 OCT images for AMD, DME, and normal retinas, respectively). For the former dataset, our classifier correctly identifies 100%, 100%, and 93.33% of the volumes with DME, AMD, and normal subjects, respectively, and thus performs much better than the conventional method; for the latter dataset, our classifier leads to a correct classification rate of 99.67%, 99.67%, and 100.00% for DME, AMD, and normal images, respectively.
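The spatial-pyramid step used to build global image representations can be sketched as follows. This is an illustrative NumPy sketch assuming a grid of sparse-code vectors; the pooling levels and the use of max pooling are assumptions, not necessarily the authors' exact choices:

```python
import numpy as np

def spatial_pyramid_pool(codes, levels=(1, 2, 4)):
    """Max-pool a grid of sparse codes (H x W x K) over a spatial
    pyramid and concatenate the pooled vectors, giving a fixed-length
    global descriptor for a linear SVM."""
    H, W, K = codes.shape
    feats = []
    for L in levels:
        rows = np.array_split(np.arange(H), L)
        cols = np.array_split(np.arange(W), L)
        for r in rows:
            for c in cols:
                # max over each spatial cell, per dictionary atom
                feats.append(codes[np.ix_(r, c)].max(axis=(0, 1)))
    return np.concatenate(feats)
```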
NASA Astrophysics Data System (ADS)
Paiè, Petra; Bassi, Andrea; Bragheri, Francesca; Osellame, Roberto
2017-02-01
Selective plane illumination microscopy (SPIM) is an optical sectioning technique that allows imaging of biological samples at high spatio-temporal resolution. Standard SPIM devices require dedicated set-ups, complex sample preparation and accurate system alignment, thus limiting the automation of the technique, its accessibility and throughput. We present a millimeter-scaled optofluidic device that incorporates selective plane illumination and fully automatic sample delivery and scanning. To this end an integrated cylindrical lens and a three-dimensional fluidic network were fabricated by femtosecond laser micromachining into a single glass chip. This device can upgrade any standard fluorescence microscope to a SPIM system. We used SPIM on a CHIP to automatically scan biological samples under a conventional microscope, without the need of any motorized stage: tissue spheroids expressing fluorescent proteins were flowed in the microchannel at constant speed and their sections were acquired while passing through the light sheet. We demonstrate high-throughput imaging of the entire sample volume (with a rate of 30 samples/min), segmentation and quantification in thick (100-300 μm diameter) cellular spheroids. This optofluidic device gives access to SPIM analyses to non-expert end-users, opening the way to automatic and fast screening of a high number of samples at subcellular resolution.
Zhang, Fumin; Qu, Xinghua; Ouyang, Jianfei
2012-01-01
A novel measurement prototype based on a mobile vehicle that carries a laser scanning sensor is proposed. The prototype is intended for the automated measurement of the interior 3D geometry of large-diameter long-stepped pipes. The laser displacement sensor, which has a small measurement range, is mounted on an extended arm of known length. It is scanned to improve the measurement accuracy for large-sized pipes. A fixing mechanism based on two sections is designed to ensure that the stepped pipe is concentric with the axis of rotation of the system. Data are acquired in a cylindrical coordinate system and fitted in a circle to determine diameter. Systematic errors covering arm length, tilt, and offset errors are analyzed and calibrated. The proposed system is applied to sample parts and the results are discussed to verify its effectiveness. This technique measures a diameter of 600 mm with an uncertainty of 0.02 mm at a 95% confidence probability. A repeatability test is performed to examine precision, which is 1.1 μm. A laser tracker is used to verify the measurement accuracy of the system, which is evaluated as 9 μm within a diameter of 600 mm. PMID:22778615
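Fitting the scanned points to a circle to recover the pipe diameter can be done with an algebraic least-squares (Kasa) fit; a minimal sketch, not the prototype's implementation:

```python
import numpy as np

def fit_circle_diameter(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve the linear
    system x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b), then
    diameter = 2*sqrt(c + a^2 + b^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    return 2.0 * np.sqrt(c + a ** 2 + b ** 2)
```

Points acquired in cylindrical coordinates would first be converted to (x, y) in the cross-sectional plane before fitting.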
Validation of a Parametric Approach for 3d Fortification Modelling: Application to Scale Models
NASA Astrophysics Data System (ADS)
Jacquot, K.; Chevrier, C.; Halin, G.
2013-02-01
The parametric modelling approach applied to cultural heritage virtual representation is a field of research explored for years, since it can address many limitations of digitising tools. For example, essential historical sources for fortification virtual reconstructions like plans-reliefs have several shortcomings when they are scanned. To overcome those problems, knowledge-based modelling can be used: knowledge models based on the analysis of the theoretical literature of a specific domain, such as bastioned fortification treatises, can be the cornerstone of the creation of a parametric library of fortification components. Implemented in Grasshopper, these components are manually adjusted on the data available (i.e. 3D surveys of plans-reliefs or scanned maps). Most of the fortification area is now modelled, and the question of accuracy assessment is raised. A specific method is used to evaluate the accuracy of the parametric components. The results of the assessment process will allow us to validate the parametric approach. The automation of the adjustment process can then be planned. The virtual model of fortification is part of a larger project aimed at valorising and diffusing a very unique cultural heritage item: the collection of plans-reliefs. As such, knowledge models are precious assets when automation and semantic enhancements are considered.
Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren
2015-12-01
To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.
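The Dice similarity coefficient used to evaluate the segmentation is defined as 2|A∩B| / (|A| + |B|) for two binary masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 1.0 for
    perfect overlap, 0.0 for disjoint masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks count as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```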
Shrestha, Sachin L; Breen, Andrew J; Trimby, Patrick; Proust, Gwénaëlle; Ringer, Simon P; Cairney, Julie M
2014-02-01
The identification and quantification of the different ferrite microconstituents in steels has long been a major challenge for metallurgists. Manual point counting from images obtained by optical and scanning electron microscopy (SEM) is commonly used for this purpose. While classification systems exist, the complexity of steel microstructures means that identifying and quantifying these phases is still a great challenge. Moreover, point counting is extremely tedious, time consuming, and subject to operator bias. This paper presents a new automated identification and quantification technique for the characterisation of complex ferrite microstructures by electron backscatter diffraction (EBSD). This technique takes advantage of the fact that different classes of ferrite exhibit preferential grain boundary misorientations, aspect ratios and mean misorientation, all of which can be detected using current EBSD software. These characteristics are set as criteria for identification and linked to grain size to determine the area fractions. The results of this method were evaluated by comparing the new automated technique with point counting results. The technique could easily be applied to a range of other steel microstructures. © 2013 Published by Elsevier B.V.
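The criteria-based classification can be sketched as threshold rules on per-grain EBSD statistics, with area fractions computed by weighting grains by size. This is a toy illustration; the class names, thresholds, and grain attributes below are placeholders, not the paper's calibrated criteria:

```python
def classify_ferrite(grains, ar_threshold=3.0, mis_threshold=1.5):
    """Rule-based grain classification in the spirit of the EBSD
    criteria: thresholds on aspect ratio and mean in-grain
    misorientation (degrees) are illustrative only."""
    classes = []
    for g in grains:
        if g["aspect_ratio"] >= ar_threshold:
            classes.append("acicular")
        elif g["mean_misorientation"] >= mis_threshold:
            classes.append("bainitic")
        else:
            classes.append("polygonal")
    return classes

def area_fractions(grains, classes):
    """Area fraction of each class, weighting each grain by its area
    rather than counting grains."""
    total = sum(g["area"] for g in grains)
    fracs = {}
    for g, c in zip(grains, classes):
        fracs[c] = fracs.get(c, 0.0) + g["area"] / total
    return fracs
```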
NASA Astrophysics Data System (ADS)
Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar
2018-04-01
Manual segmentation and analysis of lesions in medical images is time consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities, combining a median filter, k-means clustering, Sobel edge detection, and morphological operations. The median filter is an essential pre-processing step used to remove impulsive noise from the acquired brain images, followed by k-means segmentation, Sobel edge detection, and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical significance tests between lesions segmented using the automated approach and expert delineation, using ANOVA and the correlation coefficient, achieved high significance values of 0.986 and 1, respectively. The experimental results obtained are discussed in light of some recently reported studies.
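The clustering step of such a pipeline is standard k-means on pixel intensities; a minimal sketch of Lloyd's algorithm in NumPy (the initialization and relabeling choices are illustrative, not necessarily the article's):

```python
import numpy as np

def kmeans_intensity(img, k=2, iters=20):
    """1D k-means (Lloyd's algorithm) on pixel intensities, the
    clustering step of a lesion-segmentation pipeline; returns a label
    image where cluster 0 is the darkest class."""
    x = np.asarray(img, float).ravel()
    centers = np.linspace(x.min(), x.max(), k)  # spread initial centres
    for _ in range(iters):
        # assign each pixel to its nearest centre, then update centres
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    order = np.argsort(centers)  # relabel clusters dark -> bright
    remap = np.empty(k, int)
    remap[order] = np.arange(k)
    return remap[labels].reshape(np.shape(img))
```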
Automated Classification and Analysis of Non-metallic Inclusion Data Sets
NASA Astrophysics Data System (ADS)
Abdulsalam, Mohammad; Zhang, Tongsheng; Tan, Jia; Webler, Bryan A.
2018-05-01
The aim of this study is to utilize principal component analysis (PCA), clustering methods, and correlation analysis to condense and examine large, multivariate data sets produced from automated analysis of non-metallic inclusions. Non-metallic inclusions play a major role in defining the properties of steel and their examination has been greatly aided by automated analysis in scanning electron microscopes equipped with energy dispersive X-ray spectroscopy. The methods were applied to analyze inclusions on two sets of samples: two laboratory-scale samples and four industrial samples from near-finished 4140 alloy steel components with varying machinability. The laboratory samples had well-defined inclusion chemistries, composed of MgO-Al2O3-CaO, spinel (MgO-Al2O3), and calcium aluminate inclusions. The industrial samples contained MnS inclusions as well as (Ca,Mn)S + calcium aluminate oxide inclusions. PCA could be used to reduce inclusion chemistry variables to a 2D plot, which revealed inclusion chemistry groupings in the samples. Clustering methods were used to automatically classify inclusion chemistry measurements into groups, i.e., no user-defined rules were required.
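The PCA-to-2D projection followed by rule-free clustering can be sketched with NumPy alone. This is an illustrative sketch under stated assumptions (standardized composition vectors, a minimal k-means with deterministic initialization); the study's own tooling is not specified at this level of detail.

```python
import numpy as np

def pca_2d(X):
    """Project standardized composition vectors (rows = inclusions,
    columns = element fractions) onto the first two principal components."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def kmeans(X, k, iters=50):
    """Minimal k-means: no user-defined classification rules, only a
    chosen number of groups. Initial centers are spread along axis 0."""
    order = np.argsort(X[:, 0])
    C = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels, C
```

In practice the same two steps are one-liners with scikit-learn's `PCA` and `KMeans`; the point here is that inclusion chemistry groupings emerge from the data without hand-written composition thresholds.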
NASA Astrophysics Data System (ADS)
Dubey, Kavita; Srivastava, Vishal; Singh Mehta, Dalip
2018-04-01
Early identification of fungal infection on the human scalp is crucial for avoiding hair loss. The diagnosis of fungal infection on the human scalp is based on a visual assessment by trained experts or doctors. Optical coherence tomography (OCT) has the ability to capture fungal infection information from the human scalp with high resolution. In this study, we present a fully automated, non-contact, non-invasive optical method for rapid detection of fungal infections based on features extracted from A-line and B-scan OCT images. A multilevel ensemble machine model is designed to perform automated classification, outperforming the best single classifier based on the features extracted from OCT images. In this study, 60 samples (30 fungal, 30 normal) were imaged by OCT and eight features were extracted. The classification algorithm had an average sensitivity, specificity and accuracy of 92.30, 90.90 and 91.66%, respectively, for identifying fungal and normal human scalps. This remarkable classifying ability makes the proposed model readily applicable to classifying human scalp images.
Worker, Amanda; Dima, Danai; Combes, Anna; Crum, William R; Streffer, Johannes; Einstein, Steven; Mehta, Mitul A; Barker, Gareth J; C R Williams, Steve; O'daly, Owen
2018-04-01
The hippocampal formation is a complex brain structure that is important in cognitive processes such as memory, mood, reward processing and other executive functions. Histological and neuroimaging studies have implicated the hippocampal region in neuropsychiatric disorders as well as in neurodegenerative diseases. This highly plastic limbic region is made up of several subregions that are believed to have different functional roles. Therefore, there is a growing interest in imaging the subregions of the hippocampal formation rather than modelling the hippocampus as a homogenous structure, driving the development of new automated analysis tools. Consequently, there is a pressing need to understand the stability of the measures derived from these new techniques. In this study, an automated hippocampal subregion segmentation pipeline, released as a developmental version of Freesurfer (v6.0), was applied to T1-weighted magnetic resonance imaging (MRI) scans of 22 healthy older participants, each scanned on three separate occasions, and to a separate longitudinal dataset of 40 Alzheimer's disease (AD) patients. Test-retest reliability of hippocampal subregion volumes was assessed using the intra-class correlation coefficient (ICC), percentage volume difference and percentage volume overlap (Dice). Sensitivity of the regional estimates to longitudinal change was estimated using linear mixed effects (LME) modelling. The results show that out of the 24 hippocampal subregions, 20 had ICC scores of 0.9 or higher in both samples; these regions include the molecular layer, granule cell layer of the dentate gyrus, CA1, CA3 and the subiculum (ICC > 0.9), whilst the hippocampal fissure and fimbria had lower ICC scores (0.73-0.88). Furthermore, LME analysis of the independent AD dataset demonstrated sensitivity to group and individual differences in the rate of volume change over time in several hippocampal subregions (CA1, molecular layer, CA3, hippocampal tail, fissure and presubiculum).
These results indicate that this automated segmentation method provides a robust method with which to measure hippocampal subregions, and may be useful in tracking disease progression and measuring the effects of pharmacological intervention. © 2018 Wiley Periodicals, Inc.
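Two of the test-retest agreement measures used above, percentage volume difference and percentage volume overlap (Dice), are simple to compute from repeated segmentations. A minimal sketch follows (the ICC, which requires an ANOVA or mixed-model variance decomposition, is omitted here):

```python
import numpy as np

def pct_volume_difference(v1, v2):
    """Percentage volume difference between two repeated measurements,
    normalized by the mean volume."""
    return 200.0 * abs(v1 - v2) / (v1 + v2)

def dice_overlap(mask1, mask2):
    """Percentage volume overlap (Dice coefficient) between two binary
    segmentation masks of the same subregion."""
    m1 = np.asarray(mask1, bool)
    m2 = np.asarray(mask2, bool)
    return 2.0 * np.logical_and(m1, m2).sum() / (m1.sum() + m2.sum())
```

A Dice value near 1 and a volume difference near 0 across the three scan occasions is what drives the high ICC scores reported for most subregions.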
[Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].
Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T
2003-10-01
Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance and time needed for fusing MRI and SPECT images using a semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, that for manual realignment after automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the remaining 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use.
Sungjun Lim; Nowak, Michael R; Yoonsuck Choe
2016-08-01
We present a novel, parallelizable algorithm capable of automatically reconstructing and calculating anatomical statistics of cerebral vascular networks embedded in large volumes of rat Nissl-stained data. In this paper, we report the results of our method on Rattus somatosensory cortical data acquired using Knife-Edge Scanning Microscopy. Our algorithm performs the reconstruction task with average precision, recall, and F2-score of 0.978, 0.892, and 0.902, respectively. Calculated anatomical statistics show some conformance to values previously reported. The results obtained from our method are expected to help explicate the relationship between the structural organization of the microcirculation and normal (and abnormal) cerebral functioning.
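The evaluation metrics quoted above are standard; for reference, a short sketch of how precision, recall, and the F2-score relate to true/false positive and false negative counts (the choice of beta = 2 weights recall above precision, appropriate when missed vessel segments are costlier than spurious traces):

```python
def precision_recall_fbeta(tp, fp, fn, beta=2.0):
    """Precision, recall, and F-beta from confusion counts.
    beta > 1 emphasizes recall; beta = 2 gives the F2-score."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    fbeta = (1.0 + b2) * precision * recall / (b2 * precision + recall)
    return precision, recall, fbeta
```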
MS/MS Automated Selected Ion Chromatograms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monroe, Matthew
2005-12-12
This program can be used to read a LC-MS/MS data file from either a Finnigan ion trap mass spectrometer (.Raw file) or an Agilent ion trap mass spectrometer (.MGF and .CDF files) and create a selected ion chromatogram (SIC) for each of the parent ion masses chosen for fragmentation. The largest peak in each SIC is also identified, with reported statistics including peak elution time, height, area, and signal-to-noise ratio. It creates several output files, including a base peak intensity (BPI) chromatogram for the survey scan, a BPI for the fragmentation scans, an XML file containing the SIC data for each parent ion, and a "flat file" (ready for import into a database) containing summaries of the SIC data statistics.
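The core SIC construction described above can be sketched in a few lines. This is an illustrative simplification, not the program's code: the scan data structure, the m/z tolerance, and the median-as-noise-proxy are all assumptions made for the sketch.

```python
def selected_ion_chromatogram(scans, parent_mz, tol=0.5):
    """Build a SIC: for each survey scan, sum the intensity of peaks
    within tol of the chosen parent ion m/z.
    scans: iterable of (elution_time, [(mz, intensity), ...])."""
    return [(t, sum(i for mz, i in peaks if abs(mz - parent_mz) <= tol))
            for t, peaks in scans]

def largest_peak_stats(sic, noise_floor=1.0):
    """Apex elution time, height, and a crude signal-to-noise estimate
    (apex height over the median SIC intensity)."""
    time, height = max(sic, key=lambda p: p[1])
    baseline = sorted(h for _, h in sic)[len(sic) // 2]
    return time, height, height / max(baseline, noise_floor)
```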
Automated fiducial labeling on human body data
NASA Astrophysics Data System (ADS)
Lewark, Erick A.; Nurre, Joseph H.
1998-03-01
The Cyberware WB4 whole body scanner generates a high-resolution data set of the outer surface of the human body. The acquisition of anthropometric data from this data set is important for the development of custom sizing for the apparel industry. Software for locating anthropometric landmarks from a cloud of more than 200,000 three-dimensional data points, captured from a human subject, is presented. The first phase of identification is to locate externally placed fiducials on the human body using luminance information captured at scan time. The fiducials are then autonomously labeled and categorized according to their general position and anthropometric significance in the scan. Once registration of the landmarks is complete, body measurements may be extracted for apparel sizing.
Martin-Gonzalez, Teresa; Penney, Graeme; Chong, Debra; Davis, Meryl; Mastracci, Tara M
2018-05-01
Fusion imaging is standard for the endovascular treatment of complex aortic aneurysms, but its role in follow up has not been explored. A critical issue is renal function deterioration over time. Renal volume has been used as a marker of renal impairment; however, its measurement is not easily reproducible and remains a complex and resource-intensive procedure. The aim of this study is to determine the accuracy of a fusion-based software to automatically calculate renal volume changes during follow up. In this study, computerized tomography (CT) scans of 16 patients who underwent complex aortic endovascular repair were analysed. Preoperative, 1-month and 1-year follow-up CT scans were analysed using a conventional approach of semi-automatic segmentation, and a second approach with automatic segmentation. For each kidney and at each time point the percentage of change in renal volume was calculated using both techniques. After review, volume assessment was feasible for all CT scans. For the left kidney, the intraclass correlation coefficient (ICC) was 0.794 and 0.877 at 1 month and 1 year, respectively. For the right side, the ICC was 0.817 at 1 month and 0.966 at 1 year. The automated technique reliably detected a decrease in renal volume for the eight patients with occluded renal arteries during follow up. This is the first report of a fusion-based algorithm to detect changes in renal volume during postoperative surveillance using an automated process. Using this technique, the standardized assessment of renal volume could be implemented with greater ease and reproducibility and serve as a warning of potential renal impairment.
NASA Astrophysics Data System (ADS)
Johri, Ansh; Schimel, Daniel; Noguchi, Audrey; Hsu, Lewis L.
2010-03-01
Imaging is a crucial clinical tool for diagnosis and assessment of pneumonia, but quantitative methods are lacking. Micro-computed tomography (micro CT), designed for lab animals, provides opportunities for non-invasive radiographic endpoints for pneumonia studies. HYPOTHESIS: In vivo micro CT scans of mice with early bacterial pneumonia can be scored quantitatively by semiautomated imaging methods, with good reproducibility and correlation with bacterial dose inoculated, pneumonia survival outcome, and radiologists' scores. METHODS: Healthy mice had intratracheal inoculation of E. coli bacteria (n=24) or saline control (n=11). In vivo micro CT scans were performed 24 hours later with microCAT II (Siemens). Two independent radiologists scored the extent of airspace abnormality, on a scale of 0 (normal) to 24 (completely abnormal). Using the Amira 5.2 software (Mercury Computer Systems), a histogram distribution of voxel counts within the Hounsfield range of -510 to 0 was created and analyzed, and a segmentation procedure was devised. RESULTS: A t-test was performed to determine whether there was a significant difference in the mean voxel value of each mouse in the three experimental groups: Saline Survivors, Pneumonia Survivors, and Pneumonia Non-survivors. The voxel count method was able to statistically distinguish the Saline Survivors from the Pneumonia Survivors and from the Pneumonia Non-survivors, but not the Pneumonia Survivors from the Pneumonia Non-survivors. The segmentation method, however, was successfully able to distinguish the two Pneumonia groups. CONCLUSION: We have pilot-tested an evaluation of early pneumonia in mice using micro CT and a semi-automated method for lung segmentation and scoring. Statistical analysis indicates that the system is reliable and merits further evaluation.
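The two quantitative ingredients above, a voxel count within a fixed Hounsfield window and a two-sample t statistic across groups, are easy to sketch. This is an illustrative sketch (Welch's unequal-variance form is assumed; the study does not state which t-test variant was used):

```python
import numpy as np

def airspace_abnormality_fraction(volume_hu, lo=-510, hi=0):
    """Fraction of voxels falling in the intermediate-density HU window
    ([-510, 0]) associated with airspace abnormality."""
    v = np.asarray(volume_hu)
    return float(np.logical_and(v >= lo, v <= hi).mean())

def welch_t(a, b):
    """Welch's t statistic for comparing per-mouse scores between groups."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)
```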
Automatic thoracic body region localization
NASA Astrophysics Data System (ADS)
Bai, PeiRui; Udupa, Jayaram K.; Tong, YuBing; Xie, ShiPeng; Torigian, Drew A.
2017-03-01
Radiological imaging and image interpretation for clinical decision making are mostly specific to each body region such as head & neck, thorax, abdomen, pelvis, and extremities. For automating image analysis and consistency of results, standardizing definitions of body regions and the various anatomic objects, tissue regions, and zones in them becomes essential. Assuming that a standardized definition of body regions is available, a fundamental early step needed in automated image and object analytics is to automatically trim the given image stack into image volumes exactly satisfying the body region definition. This paper presents a solution to this problem based on the concept of virtual landmarks and evaluates it on whole-body positron emission tomography/computed tomography (PET/CT) scans. The method first selects a (set of) reference object(s), segments it (them) roughly, and identifies virtual landmarks for the object(s). The geometric relationship between these landmarks and the boundary locations of body regions in the craniocaudal direction is then learned through a neural network regressor, and the locations are predicted. Based on low-dose unenhanced CT images of 180 near whole-body PET/CT scans (which include 34 whole-body PET/CT scans), the mean localization error for the boundaries of the superior of thorax (TS) and the inferior of thorax (TI), expressed as a number of slices (slice spacing ≈ 4 mm), is found to be 3 and 2 slices using the skeleton and 3 and 5 slices using the pleural spaces as reference objects, or about 13 and 10 mm (skeleton) and 10.5 and 20 mm (pleural spaces), respectively. Improvements of this performance via optimal selection of objects and virtual landmarks and other object analytics applications are currently being pursued.
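The landmark-to-boundary mapping described above can be illustrated with a linear least-squares regressor standing in for the paper's neural network (a deliberate simplification: the interface, flattened landmark coordinates in and craniocaudal boundary locations out, is the same, but the model class is not the authors'):

```python
import numpy as np

def fit_boundary_regressor(landmarks, boundaries):
    """Least-squares map from flattened virtual-landmark coordinates
    (rows = scans) to craniocaudal boundary locations (e.g. TS, TI).
    A linear stand-in for the neural-network regressor."""
    X = np.hstack([landmarks, np.ones((len(landmarks), 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, boundaries, rcond=None)
    return W

def predict_boundaries(W, landmarks):
    """Predict boundary locations for new scans from their landmarks."""
    X = np.hstack([landmarks, np.ones((len(landmarks), 1))])
    return X @ W
```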
Sci-Thur AM: YIS – 08: Automated Imaging Quality Assurance for Image-Guided Small Animal Irradiators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstone, Chris; Bazalova-Carter, Magdalena
Purpose: To develop quality assurance (QA) standards and tolerance levels for image quality of small animal irradiators. Methods: A fully automated in-house QA software for image analysis of a commercial microCT phantom was created. Quantitative analyses of CT linearity, signal-to-noise ratio (SNR), uniformity and noise, geometric accuracy, modulation transfer function (MTF), and CT number evaluation were performed. Phantom microCT scans from seven institutions acquired with varying parameters (kVp, mA, time, voxel size, and frame rate) and five irradiator units (Xstrahl SARRP, PXI X-RAD 225Cx, PXI X-RAD SmART, GE eXplore CT/RT 140, and GE eXplore CT 120) were analyzed. Multi-institutional data sets were compared using our in-house software to establish pass/fail criteria for each QA test. Results: CT linearity (R2>0.996) was excellent at all but Institution 2. Acceptable SNR (>35) and noise levels (<55 HU) were obtained at four of the seven institutions, where failing scans were acquired with less than 120 mAs. Acceptable MTF (>1.5 lp/mm at MTF=0.2) was obtained at all but Institution 6, due to the largest scan voxel size (0.35 mm). The geometric accuracy passed (<1.5%) at five of the seven institutions. Conclusion: Our QA software can be used to rapidly perform quantitative imaging QA for small animal irradiators, accumulate results over time, and display possible changes in imaging functionality from its original performance and/or from the recommended tolerance levels. This tool will aid researchers in maintaining high image quality, enabling precise conformal dose delivery to small animals.
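Three of the QA metrics and their pass/fail thresholds quoted above map directly onto short computations. A minimal sketch, assuming uniform-region ROIs and paired density/HU measurements as inputs (the phantom layout and ROI placement are not specified here):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a uniform-phantom ROI (pass if > 35)."""
    roi = np.asarray(roi, float)
    return roi.mean() / roi.std(ddof=1)

def noise_hu(roi):
    """Noise as one standard deviation of HU in a uniform ROI
    (pass if < 55 HU)."""
    return float(np.asarray(roi, float).std(ddof=1))

def linearity_r2(density, hu):
    """R^2 of the linear fit of measured HU against insert density
    (pass if > 0.996)."""
    density = np.asarray(density, float)
    hu = np.asarray(hu, float)
    A = np.vstack([density, np.ones_like(density)]).T
    coef, *_ = np.linalg.lstsq(A, hu, rcond=None)
    resid = hu - A @ coef
    return 1.0 - (resid ** 2).sum() / ((hu - hu.mean()) ** 2).sum()
```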
Ross, Douglas H.; Clark, Mark E.; Godara, Pooja; Huisingh, Carrie; McGwin, Gerald; Owsley, Cynthia; Litts, Katie M.; Spaide, Richard F.; Sloan, Kenneth R.; Curcio, Christine A.
2015-01-01
Purpose. To validate a model-driven method (RefMoB) of automatically describing the four outer retinal hyperreflective bands revealed by spectral-domain optical coherence tomography (SDOCT), for comparison with histology of normal macula; to report thickness and position of bands, particularly band 2 (ellipsoid zone [EZ], commonly called IS/OS). Methods. Foveal and superior perifoveal scans of seven SDOCT volumes of five individuals aged 28 to 69 years with healthy maculas were used (seven eyes for validation, five eyes for measurement). RefMoB determines band thickness and position by a multistage procedure that models reflectivities as a summation of Gaussians. Band thickness and positions were compared with those obtained by manual evaluators for the same scans, and compared with an independent published histological dataset. Results. Agreement among manual evaluators was moderate. Relative to manual evaluation, RefMoB reported reduced thickness and vertical shifts in band positions in a band-specific manner for both simulated and empirical data. In foveal and perifoveal scans, band 1 was thick relative to the anatomical external limiting membrane, band 2 aligned with the outer one-third of the anatomical IS ellipsoid, and band 3 (IZ, interdigitation of retinal pigment epithelium and photoreceptors) was cleanly delineated. Conclusions. RefMoB is suitable for automatic description of the location and thickness of the four outer retinal hyperreflective bands. Initial results suggest that band 2 aligns with the outer ellipsoid, thus supporting its recent designation as EZ. Automated and objective delineation of band 3 will help investigations of structural biomarkers of dark-adaptation changes in aging. PMID:26132776
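The multistage RefMoB procedure models axial reflectivity profiles as a summation of Gaussians; the forward model and the usual thickness readout can be sketched as follows (an illustrative sketch: band parameters, units, and the FWHM convention are assumptions, and the actual fitting stages are omitted):

```python
import numpy as np

def band_profile(z, bands):
    """Axial reflectivity modeled as a sum of Gaussians, one per
    hyperreflective band. bands: list of (amplitude, center, sigma)."""
    z = np.asarray(z, float)
    prof = np.zeros_like(z)
    for amp, mu, sigma in bands:
        prof += amp * np.exp(-0.5 * ((z - mu) / sigma) ** 2)
    return prof

def fwhm(sigma):
    """Band thickness reported as full width at half maximum of the
    fitted Gaussian."""
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
```

Fitting `bands` to a measured A-scan (e.g. with `scipy.optimize.curve_fit`) yields per-band center positions and FWHM thicknesses of the kind compared against manual evaluation and histology above.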
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelic, S.K., E-mail: susanne.michelic@unileoben.ac.at; Loder, D.; Reip, T.
2015-02-15
Titanium-alloyed ferritic chromium steels are a competitive option to classical austenitic stainless steels owing to their similar corrosion resistance. The addition of titanium significantly influences their final steel cleanliness. The present contribution focuses on the detailed metallographic characterization of titanium nitrides, titanium carbides and titanium carbonitrides with regard to their size, morphology and composition. The methods used are manual and automated Scanning Electron Microscopy with Energy Dispersive X-ray Spectroscopy as well as optical microscopy. Additional thermodynamic calculations are performed to explain the precipitation procedure of the analyzed titanium nitrides. The analyses showed that homogeneous nucleation is decisive at an early process stage after the addition of titanium. Heterogeneous nucleation becomes crucial with ongoing process time and essentially influences the final inclusion size of titanium nitrides. A detailed investigation of the nuclei for heterogeneous nucleation with automated Scanning Electron Microscopy proved to be difficult due to their small size; manual Scanning Electron Microscopy and optical microscopy have to be applied. Furthermore, it was found that during solidification an additional layer can form around an existing titanium nitride, which changes the final inclusion morphology significantly. These layers are also characterized in detail. Based on these different inclusion morphologies, in combination with thermodynamic results, tendencies regarding the formation and modification time of titanium-containing inclusions in ferritic chromium steels are derived. - Highlights: • The formation and modification of TiN in the steel 1.4520 was examined. • Heterogeneous nucleation essentially influences the final steel cleanliness. • In most cases heterogeneous nuclei in TiN inclusions are magnesium based. • Particle morphology provides important information on inclusion formation.
Assessment of DNA replication in central nervous system by Laser Scanning Cytometry
NASA Astrophysics Data System (ADS)
Lenz, Dominik; Mosch, Birgit; Bocsi, Jozsef; Arendt, Thomas; Tárnok, Attila
2004-07-01
In neurons of patients with Alzheimer's disease (AD), signs of cell cycle re-entry as well as polyploidy have been reported [1, 2], indicating that all or part of the genome of these neurons is duplicated before death, but mitosis is not initiated, so that the cellular DNA content remains tetraploid. It was concluded that this imbalance is the direct cause of the neuronal loss in AD [3]. Manual counting of polyploid cells is possible but time-consuming and possibly statistically insufficient. The aim of this study was to develop an automated method that detects neuronal DNA content abnormalities with Laser Scanning Cytometry (LSC). Frozen sections of formalin-fixed brain tissue of AD patients and control subjects were labelled with anti-cyclin B and anti-NeuN antibodies. Immunolabelling was performed using Cy5- and Cy2-conjugated secondary antibodies and biotin streptavidin or tyramide signal amplification. Finally, sections of 20 μm thickness were incubated with propidium iodide (PI) (50 μg/ml) and covered on slides. For analysis by the LSC, PI was used as trigger. Cells identified as neurons by NeuN expression were analyzed for cyclin B expression. Per specimen, data of at least 10,000 neurons were acquired. In the frozen brain sections an automated quantification of the amount of nuclear DNA is possible with LSC. The DNA ploidy as well as the cell cycle distribution can be analyzed. A high number of neurons can be scanned, and the duration of measurement is shorter than that of a manual examination. The amount of DNA is sufficiently represented by the PI fluorescence to be able to distinguish between eu- and polyploid neurons.
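The final classification step, distinguishing euploid from polyploid neurons by integrated PI fluorescence, reduces to a ratio against the diploid reference population. A minimal sketch; the 1.5x cutoff is an illustrative threshold, not the study's calibrated gate:

```python
def classify_ploidy(pi_integral, diploid_mean, cutoff=1.5):
    """Label a NeuN-positive nucleus from its integrated PI fluorescence,
    using the diploid (2N) population mean as reference. A nucleus well
    above the 2N level (here >= cutoff x the reference) is flagged as
    polyploid; the cutoff value is an assumption of this sketch."""
    ratio = pi_integral / diploid_mean
    return "polyploid" if ratio >= cutoff else "euploid"
```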
NASA Astrophysics Data System (ADS)
Girolamo, D.; Girolamo, L.; Yuan, F. G.
2015-03-01
Nondestructive evaluation (NDE) for detection and quantification of damage in composite materials is fundamental in the assessment of the overall structural integrity of modern aerospace systems. Conventional NDE systems have been extensively used to detect the location and size of damages by propagating ultrasonic waves normal to the surface. However, they usually require physical contact with the structure and are time consuming and labor intensive. An automated, contactless laser ultrasonic imaging system for barely visible impact damage (BVID) detection in advanced composite structures has been developed to overcome these limitations. Lamb waves are generated by a Q-switched Nd:YAG laser, raster scanned by a set of galvano-mirrors over the damaged area. The out-of-plane vibrations are measured through a laser Doppler vibrometer (LDV) that is stationary at a point on the corner of the grid. The ultrasonic wave field of the scanned area is reconstructed in polar coordinates and analyzed for high resolution characterization of impact damage in the composite honeycomb panel. Two methodologies are used for ultrasonic wave-field analysis: scattered wave field analysis (SWA) and standing wave energy analysis (SWEA) in the frequency domain. The SWA is employed for processing the wave field and estimating spatially dependent wavenumber values, related to discontinuities in the structural domain. The SWEA algorithm extracts standing waves trapped within damaged areas and, by studying the spectrum of the standing wave field, returns high fidelity damage imaging. While the SWA can be used to locate the impact damage in the honeycomb panel, the SWEA produces damage images in good agreement with X-ray computed tomographic (X-ray CT) scans. The results obtained prove that the laser-based nondestructive system is an effective alternative to overcome limitations of conventional NDE technologies.
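The wavenumber estimation at the heart of the SWA step can be illustrated on a single scan line: a spatial FFT of the reconstructed wave field reveals the dominant wavenumber, and local deviations from it flag discontinuities. A one-dimensional sketch (the full method operates on the 2D field and in polar coordinates; this is only the core idea):

```python
import numpy as np

def dominant_wavenumber(scan_line, dx):
    """Spatial FFT of one line of the reconstructed wave field; returns
    the dominant wavenumber in rad per unit length. The DC bin is skipped
    so a constant offset does not masquerade as a wave."""
    w = np.asarray(scan_line, float)
    spectrum = np.abs(np.fft.rfft(w - w.mean()))
    k = 2.0 * np.pi * np.fft.rfftfreq(len(w), d=dx)
    return k[np.argmax(spectrum[1:]) + 1]
```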
Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.
Schreibmann, Eduard; Marcus, David M; Fox, Tim
2014-07-08
Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved with the use of a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
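The compaction of warped atlas segmentations into a probability map, and the per-organ threshold adjustment applied to it, can be sketched as follows. Note the sketch substitutes a plain voxelwise average for STAPLE (which additionally weights each atlas by its estimated performance); the thresholding step is the same in spirit:

```python
import numpy as np

def probability_map(segmentations):
    """Voxelwise agreement across warped atlas segmentations. A simple
    average is used here as a stand-in for the STAPLE estimate."""
    stack = np.stack([np.asarray(s, float) for s in segmentations])
    return stack.mean(axis=0)

def adjust_and_threshold(pmap, level):
    """Final binary mask after adjusting the per-organ probability
    threshold (the level would differ per organ type)."""
    return pmap >= level
```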
Ebert, Lars Christian; Ptacek, Wolfgang; Naether, Silvio; Fürst, Martin; Ross, Steffen; Buck, Ursula; Weber, Stefan; Thali, Michael
2010-03-01
The Virtopsy project, a multi-disciplinary project that involves forensic science, diagnostic imaging, computer science, automation technology, telematics and biomechanics, aims to develop new techniques to improve the outcome of forensic investigations. This paper presents a new approach in the field of minimally invasive virtual autopsy for a versatile robotic system that is able to perform three-dimensional (3D) surface scans as well as post mortem image-guided soft tissue biopsies. The system consists of an industrial six-axis robot with additional extensions (i.e. a linear axis to increase working space, a tool-changing system and a dedicated safety system), a multi-slice CT scanner with equipment for angiography, a digital photogrammetry and 3D optical surface-scanning system, a 3D tracking system, and a biopsy end effector for automatic needle placement. A wax phantom was developed for biopsy accuracy tests. Surface scanning times were significantly reduced (scanning times cut in half, calibration three times faster). The biopsy module worked with an accuracy of 3.2 mm. Using the Virtobot, the surface-scanning procedure could be standardized and accelerated. The biopsy module is accurate enough for use in biopsies in a forensic setting. The Virtobot can be utilized for several independent tasks in the field of forensic medicine, and is sufficiently versatile to be adapted to different tasks in the future. (c) 2009 John Wiley & Sons, Ltd.
AlaScan: A Graphical User Interface for Alanine Scanning Free-Energy Calculations.
Ramadoss, Vijayaraj; Dehez, François; Chipot, Christophe
2016-06-27
Computation of the free-energy changes that underlie molecular recognition and association has gained significant importance due to its considerable potential in drug discovery. The massive increase of computational power in recent years substantiates the application of more accurate theoretical methods for the calculation of binding free energies. The impact of such advances is the application of parent approaches, like computational alanine scanning, to investigate in silico the effect of amino-acid replacement in protein-ligand and protein-protein complexes, or probe the thermostability of individual proteins. Because human effort represents a significant cost that precludes the routine use of this form of free-energy calculations, minimizing manual intervention constitutes a stringent prerequisite for any such systematic computation. With this objective in mind, we propose a new plug-in, referred to as AlaScan, developed within the popular visualization program VMD to automate the major steps in alanine-scanning calculations, employing free-energy perturbation as implemented in the widely used molecular dynamics code NAMD. The AlaScan plug-in can be utilized upstream, to prepare input files for selected alanine mutations. It can also be utilized downstream to perform the analysis of different alanine-scanning calculations and to report the free-energy estimates in a user-friendly graphical user interface, allowing favorable mutations to be identified at a glance. The plug-in also assists the end-user in assessing the reliability of the calculation through rapid visual inspection.
Spotting L3 slice in CT scans using deep convolutional network and transfer learning.
Belharbi, Soufiane; Chatelain, Clément; Hérault, Romain; Adam, Sébastien; Thureau, Sébastien; Chastan, Mathieu; Modzelewski, Romain
2017-08-01
In this article, we present a complete automated system for spotting a particular slice in a complete 3D Computed Tomography exam (CT scan). Our approach does not require any assumptions on which part of the patient's body is covered by the scan. It relies on an original machine learning regression approach. Our models are learned using the transfer learning trick by exploiting deep architectures that have been pre-trained on the ImageNet database, and therefore it requires very little annotation for its training. The whole pipeline consists of three steps: i) conversion of the CT scans into Maximum Intensity Projection (MIP) images, ii) prediction from a Convolutional Neural Network (CNN) applied in a sliding window fashion over the MIP image, and iii) robust analysis of the prediction sequence to predict the height of the desired slice within the whole CT scan. Our approach is applied to the detection of the third lumbar vertebra (L3) slice that has been found to be representative of whole body composition. Our system is evaluated on a database collected in our clinical center, containing 642 CT scans from different patients. We obtained an average localization error of 1.91±2.69 slices (less than 5 mm) in an average time of less than 2.5 s/CT scan, allowing integration of the proposed system into daily clinical routines. Copyright © 2017 Elsevier Ltd. All rights reserved.
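Step i) of the pipeline, collapsing the 3D CT volume into a 2D MIP image, is a single reduction along one axis. A minimal sketch; the (slice, row, column) axis ordering and the choice of projecting along the anteroposterior axis to obtain a coronal view are assumptions of this sketch:

```python
import numpy as np

def coronal_mip(volume):
    """Maximum Intensity Projection of a CT volume, assuming axes are
    ordered (slice z, row y, column x), so collapsing axis 1 yields a
    coronal 2D image in which each vertebral level stays at its own
    craniocaudal height (the property the L3 detector relies on)."""
    return np.asarray(volume).max(axis=1)
```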
Automated Visibility Measurements with a Horizon Scanning Imager. Volume 1. Technical Discussion
1990-12-01
Automated Discovery of Long Intergenic RNAs Associated with Breast Cancer Progression
2012-02-01
Accomplishments include (1) work described in a manuscript in preparation, (2) development and publication of an algorithm for detecting gene fusions in RNA-Seq data [1], and (3) discovery of outlier long intergenic RNAs. RNA-Seq data were subjected to de novo assembly algorithms to discover novel transcripts representing either unannotated genes or novel somatic mutations such as gene fusions. To this end, the P.I. developed and published a novel algorithm called ChimeraScan to facilitate the discovery and validation of gene fusions.
Using Machine Learning to Enable Big Data Analysis within Human Review Time Budgets
NASA Astrophysics Data System (ADS)
Bue, B.; Rebbapragada, U.; Wagstaff, K.; Thompson, D. R.
2014-12-01
The quantity of astronomical observations collected by today's instruments far exceeds the capability of manual inspection by domain experts. Scientists often have a fixed time budget of a few hours to spend on the monotonous task of scanning through a live stream or data dump of candidates that must be prioritized for follow-up analysis. Today's and next-generation astronomical instruments produce millions of candidate detections per day and necessitate the use of automated classifiers that serve as "data triage" to filter out spurious signals. Automated data triage enables increased science return by prioritizing interesting or anomalous observations for follow-up inspection, while also expediting analysis by filtering out noisy or redundant observations. We describe three specific astronomical investigations that are currently benefiting from data triage techniques in their respective processing pipelines.
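At its simplest, data triage under a fixed review budget amounts to ranking candidates by a classifier score and keeping only as many as the humans can inspect. A minimal sketch (the scoring function and event records are hypothetical, not any survey's actual pipeline):

```python
def triage(candidates, score_fn, budget):
    """Rank candidates by classifier score (descending) and keep only
    as many as the human review-time budget allows."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:budget]

# Hypothetical detections scored by signal-to-noise ratio; a budget of
# two keeps the two strongest candidates for expert follow-up.
events = [{"id": 1, "snr": 3.0}, {"id": 2, "snr": 12.5}, {"id": 3, "snr": 7.1}]
top = triage(events, lambda e: e["snr"], budget=2)
print([e["id"] for e in top])  # → [2, 3]
```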
CloudSat Reflectivity Data Visualization Inside Hurricanes
NASA Technical Reports Server (NTRS)
Suzuki, Shigeru; Wright, John R.; Falcon, Pedro C.
2011-01-01
Animations and other outreach products have been created and released to the public quickly after the CloudSat spacecraft flew over hurricanes. The automated script scans through the CloudSat quicklook data to find significant atmospheric moisture content. Once such a region is found, data from multiple sources is combined to produce the data products and the animations. KMZ products are quickly generated from the quicklook data for viewing in Google Earth and other tools. Animations are also generated to show the atmospheric moisture data in context with the storm cloud imagery. Global images from GOES satellites are shown to give context. The visualization provides better understanding of the interior of the hurricane storm clouds, which is difficult to observe directly. The automated process creates the finished animation in the High Definition (HD) video format for quick release to the media and public.
Expert systems in clinical microbiology.
Winstanley, Trevor; Courvalin, Patrice
2011-07-01
This review aims to discuss expert systems in general and how they may be used in medicine as a whole and clinical microbiology in particular (with the aid of interpretive reading). It considers rule-based systems, pattern-based systems, and data mining and introduces neural nets. A variety of noncommercial systems is described, and the central role played by the EUCAST is stressed. The need for expert rules in the environment of reset EUCAST breakpoints is also questioned. Commercial automated systems with on-board expert systems are considered, with emphasis being placed on the "big three": Vitek 2, BD Phoenix, and MicroScan. By necessity and in places, the review becomes a general review of automated system performances for the detection of specific resistance mechanisms rather than focusing solely on expert systems. Published performance evaluations of each system are drawn together and commented on critically.
Automated Registration of Multimodal Optic Disc Images: Clinical Assessment of Alignment Accuracy.
Ng, Wai Siene; Legg, Phil; Avadhanam, Venkat; Aye, Kyaw; Evans, Steffan H P; North, Rachel V; Marshall, Andrew D; Rosin, Paul; Morgan, James E
2016-04-01
To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by 2 different modalities: fundus photography and scanning laser tomography. Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques were used to align these image pairs: Regional Mutual Information (RMI), rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI). Alignment of each composite picture was assessed on a 5-point grading scale: "Fail" (no alignment of vessels with no vessel contact), "Weak" (vessels have slight contact), "Good" (vessels with <50% contact), "Very Good" (vessels with >50% contact), and "Excellent" (complete alignment). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternate 5×5-pixel blocks. These were graded independently by 3 clinically experienced observers. A total of 810 image pairs were assessed. All 3 registration techniques achieved a score of "Good" or better in >95% of the image sets. NRFNMI had the highest percentage of "Excellent" (mean: 99.6%; range, 95.2% to 99.6%), followed by RMI (mean: 81.6%; range, 78.5% to 86.3%) and FNMI (mean: 73.1%; range, 54.4% to 85.2%). Automated registration of optic disc images from different modalities is a feasible option for clinical application. All 3 methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.
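The grading mosaic described above, alternating 5×5-pixel blocks from the two modalities in a checkerboard pattern, is straightforward to sketch. This is an illustrative reimplementation of the idea, not the authors' custom software; images are represented as plain lists of lists:

```python
def interleave(img_a, img_b, block=5):
    """Build a composite in which two registered images alternate in
    block x block tiles (checkerboard), so misalignment shows up as
    broken vessels at tile boundaries."""
    h, w = len(img_a), len(img_a[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Parity of the tile coordinates selects the source image.
            src = img_a if ((y // block) + (x // block)) % 2 == 0 else img_b
            out[y][x] = src[y][x]
    return out

a = [[1] * 10 for _ in range(10)]   # stand-in for the fundus photograph
b = [[2] * 10 for _ in range(10)]   # stand-in for the HRT image
m = interleave(a, b)
print(m[0][0], m[0][7], m[7][0], m[7][7])  # → 1 2 2 1
```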
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous-phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on the automated scheme agreed excellently with "gold-standard" manual volumetry (intra-class correlation coefficient 0.95), with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
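The final step, computing a volume from the refined segmentation, reduces to counting labeled voxels and scaling by the voxel spacing. A minimal sketch (assumed function name; the level-set machinery itself is not reproduced):

```python
def volume_cc(mask, spacing_mm):
    """Volume of a binary 3D segmentation in cubic centimetres, given
    the voxel spacing (dz, dy, dx) in millimetres: count the labeled
    voxels and multiply by the volume of one voxel."""
    dz, dy, dx = spacing_mm
    voxels = sum(v for sl in mask for row in sl for v in row)
    return voxels * dz * dy * dx / 1000.0  # mm^3 -> cc

# 10 x 10 x 10 labeled voxels at 5 x 1 x 1 mm spacing -> 5000 mm^3 = 5 cc
mask = [[[1] * 10 for _ in range(10)] for _ in range(10)]
print(volume_cc(mask, (5.0, 1.0, 1.0)))  # → 5.0
```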
A flexible system to capture sample vials in a storage box - the box vial scanner.
Nowakowski, Steven E; Kressin, Kenneth R; Deick, Steven D
2009-01-01
Tracking sample vials in a research environment is a critical task, and doing so efficiently can have a large impact on productivity, especially in high-volume laboratories. There are several challenges to automating the capture process, including the variety of containers used to store samples. We developed a fast and robust system to capture the location of sample vials being placed in storage that allows laboratories the flexibility to use sample containers of varying dimensions. With a single scan, this device captures the box identifier, the vial identifier, and the location of each vial within a freezer storage box. The sample vials are tracked through a barcode label affixed to the cap, while the boxes are tracked by a barcode label on the side of the box. Scanning units are placed at the point of use and forward data to a server application for processing the scanned data. Scanning units consist of an industrial barcode reader mounted in a fixture that positions the box for scanning and provides lighting during the scan. The server application transforms the scan data into a list of storage locations holding vial identifiers. The list is then transferred to the laboratory database. The box vial scanner captures the IDs and location information for an entire box of sample vials into the laboratory database in a single scan. The system accommodates a wide variety of vial sizes by inserting risers under the sample box, and a variety of storage box layouts are supported via the processing algorithm on the server.
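The server-side transformation, turning one box scan into a list of storage locations with vial identifiers, can be sketched as follows (data model and function name are assumptions, not the actual server application):

```python
def parse_scan(box_id, grid):
    """Transform one scan of a storage box into (box, row, column, vial)
    records; empty wells (None) are skipped. `grid` is the decoded
    barcode matrix, one entry per well."""
    records = []
    for r, row in enumerate(grid):
        for c, vial in enumerate(row):
            if vial is not None:
                records.append({"box": box_id, "row": r, "col": c, "vial": vial})
    return records

# Hypothetical 2 x 2 box with two occupied wells.
grid = [["V001", None], [None, "V002"]]
print([rec["vial"] for rec in parse_scan("BX-17", grid)])  # → ['V001', 'V002']
```

The resulting record list is what would be handed to the laboratory database.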
Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic
2017-03-01
Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile, multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables, and examination couches. The VirtoScan is based on a moveable frame that carries 7 digital single-lens reflex cameras. A remote control is attached to each camera and allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedures took 3:34 min for the 3D surface documentation of one side of the body, compared with 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output from our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentations in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.
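A mean surface deviation like the 0.36 mm reported above is typically computed as the average distance from each point of one model to the nearest point of the other. The brute-force sketch below illustrates that metric on tiny point clouds (real comparisons use meshes and spatial indexing; this is not the authors' tooling):

```python
import math

def mean_deviation(points_a, points_b):
    """Mean distance from each point of surface A to its nearest
    point on surface B, a simple stand-in for mesh-to-mesh comparison."""
    def nearest(p, cloud):
        return min(math.dist(p, q) for q in cloud)
    return sum(nearest(p, points_b) for p in points_a) / len(points_a)

# Two hypothetical scans of the same surface, offset by 0.3 mm and 0.1 mm.
scan_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
scan_b = [(0.0, 0.0, 0.3), (1.0, 0.0, 0.1)]
print(round(mean_deviation(scan_a, scan_b), 2))  # → 0.2
```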
Inspection of float glass using a novel retroreflective laser scanning system
NASA Astrophysics Data System (ADS)
Holmes, Jonathan D.
1997-07-01
Since 1988, Image Automation has marketed a float glass inspection system using a novel retro-reflective laser scanning system. The (patented) instrument scans a laser beam by use of a polygon through the glass onto a retro-reflective screen, and collects the retro-reflected light off the polygon, such that a stationary image of the moving spot on the screen is produced. The spot image is then analyzed for optical effects introduced by defects within the glass, which typically distort and attenuate the scanned laser beam, by use of suitable detectors. The inspection system processing provides output of defect size, shape and severity, to the factory network for use in rejection or sorting of glass plates to the end customer. This paper briefly describes the principles of operation, the system architecture, and limitations to sensitivity and measurement repeatability. New instruments based on the retro-reflective scanning method have recently been developed. The principles and implementation are described. They include: (1) Simultaneous detection of defects within the glass and defects in a mirror coating on the glass surface using polarized light. (2) A novel distortion detector for very dark glass. (3) Measurement of optical quality (flatness/refractive homogeneity) of the glass using a position sensitive detector.
NASA Astrophysics Data System (ADS)
Meaney, Paul M.; Raynolds, Timothy; Geimer, Shireen D.; Potwin, Lincoln; Paulsen, Keith D.
2004-07-01
We are developing a scanned focused ultrasound system for hyperthermia treatment of breast cancer. Focused ultrasound has significant potential as a therapy delivery device because it can focus sufficient heating energy below the skin surface with minimal damage to intervening tissue. However, as a practical therapy system, the focal zone is generally quite small and requires either electronic (in the case of a phased array system) or mechanical steering (for a fixed bowl transducer) to cover a therapeutically useful area. We have devised a simple automated steering system consisting of a focused bowl transducer supported by three vertically movable rods which are connected to computer controlled linear actuators. This scheme is particularly attractive for breast cancer hyperthermia where the support rods can be fed through the base of a liquid coupling tank to treat tumors within the breast while coupled to our noninvasive microwave thermal imaging system. A MATLAB routine has been developed for controlling the rod motion such that the beam focal point scans a horizontal spiral and the subsequent heating zone is cylindrical. In coordination with this effort, a 3D finite element thermal model has been developed to evaluate the temperature distributions from the scanned focused heating. In this way, scanning protocols can be optimized to deliver the most uniform temperature rise to the desired location.
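The spiral scanning pattern driven by the rod actuators can be sketched as a list of focal-point targets whose radius grows linearly with angle. This Python sketch only illustrates the geometry (the authors' MATLAB control routine is not reproduced; parameter names are assumptions):

```python
import math

def spiral_points(turns, r_max, pts_per_turn=8):
    """Horizontal spiral of focal-point targets with linearly growing
    radius, approximating a scan that sweeps out a cylindrical
    heating zone when repeated at depth."""
    pts = []
    n = turns * pts_per_turn
    for k in range(n + 1):
        theta = 2 * math.pi * k / pts_per_turn
        r = r_max * k / n  # radius grows linearly from 0 to r_max
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

path = spiral_points(turns=2, r_max=10.0)
# Two full turns of 8 steps each, plus the start point; the last
# target sits on the outer radius.
print(len(path), round(math.hypot(*path[-1]), 1))  # → 17 10.0
```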
Smart align -- A new tool for robust non-rigid registration of scanning microscope data
Jones, Lewys; Yang, Hao; Pennycook, Timothy J.; ...
2015-07-10
Many microscopic investigations of materials may benefit from the recording of multiple successive images. This can include techniques common to several types of microscopy, such as frame averaging to improve signal-to-noise ratios (SNR), time series to study dynamic processes, or more specific applications. In the scanning transmission electron microscope, this might include focal series for optical sectioning or aberration measurement, beam damage studies, or camera-length series to study the effects of strain; whilst in the scanning tunnelling microscope, this might include bias voltage series to probe local electronic structure. Whatever the application, such investigations must begin with the careful alignment of these data stacks, an operation that is not always trivial. In addition, the presence of low-frequency scanning distortions can introduce intra-image shifts to the data. Here, we describe an improved automated method of performing non-rigid registration customised for the challenges unique to scanned microscope data, specifically addressing the issues of low-SNR data, images containing a large proportion of crystalline material, and/or local features of interest such as dislocations or edges. Careful attention has been paid to artefact testing of the non-rigid registration method used, and the importance of this registration for the quantitative interpretation of feature intensities and positions is evaluated.
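Stack alignment typically begins with a rigid step: finding the integer shift that maximizes the correlation between a frame and a reference. The 1-D toy below illustrates only that idea (real data are 2-D images, and the paper's method continues with non-rigid refinement, which is not sketched here):

```python
def best_shift(ref, frame, max_shift=5):
    """Integer shift of `frame` that best matches `ref` by dot-product
    correlation; a positive result means the feature in `frame` lags
    the one in `ref` by that many samples."""
    def score(s):
        return sum(ref[i - s] * frame[i]
                   for i in range(len(frame)) if 0 <= i - s < len(ref))
    return max(range(-max_shift, max_shift + 1), key=score)

ref = [0, 0, 1, 5, 1, 0, 0, 0]
frame = [0, 0, 0, 0, 1, 5, 1, 0]   # same feature, shifted right by 2
print(best_shift(ref, frame))  # → 2
```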
Automated aerial image based CD metrology initiated by pattern marking with photomask layout data
NASA Astrophysics Data System (ADS)
Davis, Grant; Choi, Sun Young; Jung, Eui Hee; Seyfarth, Arne; van Doornmalen, Hans; Poortinga, Eric
2007-05-01
The photomask is a critical element in the lithographic image transfer process from the drawn layout to the final structures on the wafer. The non-linearity of the imaging process and the related MEEF impose a tight control requirement on the photomask critical dimensions. Critical dimensions can be measured in aerial images with hardware emulation. This is a more recent complement to the standard scanning electron microscope measurement of wafers and photomasks. Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging that cannot be observed directly by SEM measurement of the mask. Aerial image measurement excludes the processing effects of printing and etching on the wafer. This presents a unique contribution to the difficult process control and modeling tasks in mask making. In the past, aerial image measurements have been used mainly to characterize the printability of mask repair sites. Development of photomask CD characterization with the AIMS™ tool was motivated by the benefit of MEEF sensitivity and the shorter feedback loop compared to wafer exposures. This paper describes a new application that includes: an improved interface for the selection of meaningful locations using the photomask and design layout data with the Calibre™ Metrology Interface, an automated recipe generation process, an automated measurement process, and automated analysis and result reporting on a Carl Zeiss AIMS™ system.
Chest wall segmentation in automated 3D breast ultrasound scans.
Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico
2013-12-01
In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall, and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices on the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.
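The evaluation metric above, the distance from an annotated point to the fitted cylinder's surface, has a closed form once the cylinder is parameterized by an axis point, a unit axis direction, and a radius. A minimal sketch of that error measure (not the authors' fitting code):

```python
import math

def cylinder_distance(p, axis_point, axis_dir, radius):
    """Unsigned distance from point p to the surface of a cylinder
    given by a point on its axis, a unit axis direction, and a radius:
    project p onto the axis, take the radial component, and compare
    its length with the radius."""
    v = [p[i] - axis_point[i] for i in range(3)]
    t = sum(v[i] * axis_dir[i] for i in range(3))      # axial coordinate
    radial = [v[i] - t * axis_dir[i] for i in range(3)]
    return abs(math.hypot(*radial) - radius)

# Axis along z through the origin, radius 2; a point at radius 3 lies
# 1 unit off the surface, regardless of its height.
print(cylinder_distance((3.0, 0.0, 5.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 2.0))
```

Averaging this over all annotated rib points gives a figure comparable to the reported 5.59 mm.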
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three-dimensional city models are necessary to support numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based either on photogrammetry, on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
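At the core of any planar-face segmentation is a point-to-plane test: which points lie within tolerance of a candidate plane. The paper's algorithm is more elaborate, but that core test can be sketched as follows (plane in the form a·x + b·y + c·z = d; names and data are illustrative):

```python
def plane_inliers(points, plane, tol=0.1):
    """Points within `tol` of the plane a*x + b*y + c*z = d, using the
    normalized point-to-plane distance; this is the membership test at
    the heart of planar segmentation of a building point cloud."""
    a, b, c, d = plane
    norm = (a * a + b * b + c * c) ** 0.5
    return [p for p in points
            if abs(a * p[0] + b * p[1] + c * p[2] - d) / norm <= tol]

# Hypothetical roof points near the plane z = 5, plus two wall points.
roof = [(0, 0, 5.0), (1, 0, 5.02), (0, 1, 4.98)]
wall = [(0, 0, 1.0), (0, 0, 2.0)]
print(len(plane_inliers(roof + wall, (0, 0, 1, 5.0))))  # → 3
```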
Compact Microscope Imaging System Developed
NASA Technical Reports Server (NTRS)
McDowell, Mark
2001-01-01
The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. The CMIS can be used in situ with a minimum amount of user intervention. This system, which was developed at the NASA Glenn Research Center, can scan, find areas of interest, focus, and acquire images automatically. Large numbers of multiple cell experiments require microscopy for in situ observations; this is only feasible with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control capabilities. The software also has a user-friendly interface that can be used independently of the hardware for post-experiment analysis. CMIS has potential commercial uses in the automated online inspection of precision parts, medical imaging, security industry (examination of currency in automated teller machines and fingerprint identification in secure entry locks), environmental industry (automated examination of soil/water samples), biomedical field (automated blood/cell analysis), and microscopy community. CMIS will improve research in several ways: It will expand the capabilities of MSD experiments utilizing microscope technology. It may be used in lunar and Martian experiments (Rover Robot). Because of its reduced size, it will enable experiments that were not feasible previously. It may be incorporated into existing shuttle orbiter and space station experiments, including glove-box-sized experiments as well as ground-based experiments.
Race, Caitlin M.; Kwon, Lydia E.; Foreman, Myles T.; ...
2017-11-24
Here, we report on the implementation of an automated platform for detecting the presence of an antibody biomarker for human papillomavirus-associated oropharyngeal cancer from a single droplet of serum, in which a nanostructured photonic crystal surface is used to amplify the output of a fluorescence-linked immunosorbent assay. The platform is comprised of a microfluidic cartridge with integrated photonic crystal chips that interfaces with an assay instrument that automates the introduction of reagents, wash steps, and surface drying. Upon assay completion, the cartridge interfaces with a custom laser-scanning instrument that couples light into the photonic crystal at the optimal resonance condition for fluorescence enhancement. The instrument is used to measure the fluorescence intensity values of microarray spots corresponding to the biomarkers of interest, in addition to several experimental controls that verify correct functioning of the assay protocol. In this work, we report both dose-response characterization of the system using anti-E7 antibody introduced at known concentrations into serum and characterization of a set of clinical samples from which results were compared with a conventional enzyme-linked immunosorbent assay (ELISA) performed in microplate format. Finally, the demonstrated capability represents a simple, rapid, automated, and high-sensitivity method for multiplexed detection of protein biomarkers from a low-volume test sample.
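Comparing one assay's readout against a reference such as ELISA is usually summarized by the slope, intercept, and R² of an ordinary least-squares fit. A self-contained sketch of those statistics (not the authors' analysis code; the data are invented):

```python
def linfit(x, y):
    """Least-squares slope, intercept, and R^2 for paired measurements,
    e.g. one platform's readout (y) against a reference assay (x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Perfectly proportional toy data: slope 2, intercept 0, R^2 = 1.
print(linfit([1, 2, 3, 4], [2, 4, 6, 8]))  # → (2.0, 0.0, 1.0)
```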
Vupparaboina, Kiran Kumar; Nizampatnam, Srinath; Chhablani, Jay; Richhariya, Ashutosh; Jana, Soumya
2015-12-01
A variety of vision ailments are indicated by anomalies in the choroid layer of the posterior visual section. Consequently, choroidal thickness and volume measurements, usually performed by experts based on optical coherence tomography (OCT) images, have assumed diagnostic significance. Now, to save precious expert time, it has become imperative to develop automated methods. To this end, one requires choroid outer boundary (COB) detection as a crucial step, where difficulty arises as the COB divides the choroidal granularity and the scleral uniformity only notionally, without marked brightness variation. In this backdrop, we measure the structural dissimilarity between choroid and sclera by structural similarity (SSIM) index, and hence estimate the COB by thresholding. Subsequently, smooth COB estimates, mimicking manual delineation, are obtained using tensor voting. On five datasets, each consisting of 97 adult OCT B-scans, automated and manual segmentation results agree visually. We also demonstrate close statistical match (greater than 99.6% correlation) between choroidal thickness distributions obtained algorithmically and manually. Further, quantitative superiority of our method is established over existing results by respective factors of 27.67% and 76.04% in two quotient measures defined relative to observer repeatability. Finally, automated choroidal volume estimation, being attempted for the first time, also yields results in close agreement with that of manual methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
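The SSIM score that is thresholded to locate the choroid outer boundary can be illustrated with a global (single-window) variant over two flattened patches. The paper's window size and tensor-voting refinement are not given in the abstract; the constants below follow the common default choice for 8-bit images, so this is only a sketch of the similarity measure itself:

```python
from statistics import mean, pvariance

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global structural similarity between two equal-sized patches
    (flattened intensity lists). Constants c1 = (0.01*255)^2 and
    c2 = (0.03*255)^2 are the usual defaults for 8-bit data.
    Identical patches score 1; structurally different ones score less."""
    mx, my = mean(x), mean(y)
    vx, vy = pvariance(x, mx), pvariance(y, my)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

a = [10, 12, 11, 13]
print(round(ssim(a, a), 3))  # identical patches → 1.0
```

Low scores between a choroid patch and a sclera patch would mark the boundary candidates that are then smoothed.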
Khan, Ali R; Wang, Lei; Beg, Mirza Faisal
2008-07-01
Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic-based FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the automatically generated segmentations generated are inherently smooth, thus subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of Schizophrenia subjects.
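The Dice overlap figures quoted above compare two voxel label sets. The measure itself is simple to state and compute (illustrative sketch with toy voxel sets, not the study's evaluation code):

```python
def dice(a, b):
    """Dice overlap between two voxel label sets: twice the
    intersection size over the sum of the set sizes (1 = identical)."""
    inter = len(a & b)
    return 2 * inter / (len(a) + len(b))

# Toy automated vs. manual segmentations sharing 2 of 3 voxels each.
auto = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
manual = {(0, 0, 1), (0, 1, 0), (1, 0, 0)}
print(round(dice(auto, manual), 2))  # → 0.67
```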
NASA Astrophysics Data System (ADS)
Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian
2014-03-01
Despite the current availability of advanced scanning and 3-D imaging technologies in ophthalmology practice in resource-rich regions, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
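Once the cup and disc boundaries are segmented, the two screening parameters follow directly from their diameters. The sketch below treats both boundaries as circles, a simplification under which CAR is exactly CDR squared (real boundaries are irregular, so clinical tools integrate the segmented areas instead):

```python
import math

def cdr_car(cup_diam, disc_diam):
    """Cup-to-disc diameter ratio (CDR) and area ratio (CAR) from the
    two diameters, treating both boundaries as circles."""
    cdr = cup_diam / disc_diam
    car = (math.pi * (cup_diam / 2) ** 2) / (math.pi * (disc_diam / 2) ** 2)
    return cdr, car

# A cup half the disc diameter: CDR 0.5, CAR 0.25 (= CDR squared).
print(cdr_car(3.0, 6.0))
```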
Multiple diffraction in an icosahedral Al-Cu-Fe quasicrystal
NASA Astrophysics Data System (ADS)
Fan, C. Z.; Weber, Th.; Deloudi, S.; Steurer, W.
2011-07-01
In order to reveal its influence on quasicrystal structure analysis, multiple diffraction (MD) effects in an icosahedral Al-Cu-Fe quasicrystal have been investigated in-house on an Oxford Diffraction four-circle diffractometer equipped with an Onyx™ CCD area detector and MoKα radiation. For that purpose, an automated approach for Renninger scans (ψ-scans) has been developed. Two weak reflections were chosen as the main reflections (called P) in the present measurements. As is well known for periodic crystals, it is also observed for this quasicrystal that the intensity of the main reflection may significantly increase if the simultaneous (H) and the coupling (P-H) reflections are both strong, while there is no obvious MD effect if one of them is weak. The occurrence of MD events during ψ-scans has been studied based on an ideal structure model and the kinematical MD theory. The reliability of the approach is revealed by the good agreement between simulation and experiment. It shows that the multiple diffraction effect is quite significant.
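In the kinematical picture used above, a simultaneous reflection H is excited during a ψ-scan when its reciprocal-space vector lies on the Ewald sphere while the main reflection P stays in diffraction condition; a minimal predicate for that geometric test (illustrative only, in scaled units with |k| = 1):

```python
import numpy as np

def on_ewald_sphere(k_in, H, tol=1e-4):
    """True if reflection H is in diffraction condition.

    H lies on the Ewald sphere when the diffracted wavevector k_in + H
    has the same magnitude as the incident wavevector k_in (elastic
    scattering). During a psi-scan, an MD event occurs when some H and
    its coupling reflection P - H satisfy this simultaneously with P.
    """
    return abs(np.linalg.norm(k_in + H) - np.linalg.norm(k_in)) < tol
```

Scanning ψ then amounts to rotating the candidate H vectors about the scattering vector of P and flagging the angles at which this predicate becomes true.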
A high-resolution full-field range imaging system
NASA Astrophysics Data System (ADS)
Carnegie, D. A.; Cree, M. J.; Dorrington, A. A.
2005-08-01
There exist a number of applications where the range to all objects in a field of view needs to be obtained. Specific examples include obstacle avoidance for autonomous mobile robots, process automation in assembly factories, surface profiling for shape analysis, and surveying. Ranging systems can typically be characterized as either laser scanning systems, where a laser point is sequentially scanned over a scene, or full-field acquisition systems, where the range to every point in the image is simultaneously obtained. The former offer advantages in terms of range resolution, while the latter tend to be faster and involve no moving parts. We present a system for determining the range to any object within a camera's field of view, at the speed of a full-field system and with the range resolution of a point laser scanner. Initial results show centimeter range resolution for a 10-second acquisition time. Modifications to the existing system are discussed that should provide faster results with submillimeter resolution.
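Full-field rangers of this kind commonly recover distance from the phase shift of an amplitude-modulated light envelope; a sketch of that standard relation (the modulation frequency below is an assumed example, not a parameter from this system):

```python
import math

C = 299792458.0  # speed of light, m/s

def range_from_phase(phase_rad, f_mod_hz, c=C):
    """Range for amplitude-modulated continuous-wave (AMCW) ranging.

    A round trip of distance d delays the modulation envelope by
    phase = 4*pi*f*d/c, so d = c * phase / (4*pi*f).
    """
    return c * phase_rad / (4.0 * math.pi * f_mod_hz)

def ambiguity_range(f_mod_hz, c=C):
    """Maximum unambiguous range: phase wraps at 2*pi, i.e. d_max = c / (2f)."""
    return c / (2.0 * f_mod_hz)
```

At an assumed 10 MHz modulation, the unambiguous range is about 15 m; raising the modulation frequency improves range resolution at the cost of a shorter unambiguous interval, which is the usual trade-off in such systems.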
Optical toolkits for in vivo deep tissue laser scanning microscopy: a primer
NASA Astrophysics Data System (ADS)
Lee, Woei Ming; McMenamin, Thomas; Li, Yongxiao
2018-06-01
Life at the microscale is animated and multifaceted. The impact of dynamic in vivo microscopy in small animals has opened up opportunities to peer into a multitude of biological processes at the cellular scale in their native microenvironments. Laser scanning microscopy (LSM) coupled with targeted fluorescent proteins has become an indispensable tool to enable dynamic imaging in vivo at high temporal and spatial resolutions. In the last few decades, the technique has been translated from imaging cells in thin samples to mapping cells in the thick biological tissue of living organisms. Here, we sought to provide a concise overview of the design considerations of a LSM that enables cellular and subcellular imaging in deep tissue. Individual components under review include: long working distance microscope objectives, laser scanning technologies, adaptive optics devices, beam shaping technologies and photon detectors, with an emphasis on more recent advances. The review will conclude with the latest innovations in automated optical microscopy, which would impact tracking and quantification of heterogeneous populations of cells in vivo.
Angiography with a multifunctional line scanning ophthalmoscope
Ferguson, R. Daniel; Patel, Ankit H.; Vazquez, Vanessa; Husain, Deeba
2012-01-01
A multifunctional line scanning ophthalmoscope (mLSO) was designed, constructed, and tested on human subjects. The mLSO could sequentially acquire wide-field, confocal, near-infrared reflectance, fluorescein angiography (FA), and indocyanine green angiography (ICGA) retinal images. The system also included a retinal tracker (RT) and a photodynamic therapy laser treatment port. The mLSO was tested in a pilot clinical study on human subjects with and without retinal disease. The instrument exhibited robust retinal tracking and high-contrast line scanning imaging. The FA and ICGA angiograms showed a similar appearance of hyper- and hypo-pigmented disease features and a nearly equivalent resolution of fine capillaries compared to a commercial flood-illumination fundus imager. An mLSO-based platform will enable researchers and clinicians to image human and animal eyes with a variety of modalities and deliver therapeutic beams from a single automated interface. This approach has the potential to improve patient comfort and reduce imaging session times, allowing clinicians to better diagnose, plan, and conduct patient procedures with improved outcomes. PMID:22463040
NASA Astrophysics Data System (ADS)
Bejaoui, A.; Alonso, M. I.; Garriga, M.; Campoy-Quiles, M.; Goñi, A. R.; Hetsch, F.; Kershaw, S. V.; Rogach, A. L.; To, C. H.; Foo, Y.; Zapien, J. A.
2017-11-01
We report on the investigation by spectroscopic ellipsometry of films containing Cd1-xHgxTe alloy quantum dots (QDs). The alloy QDs were fabricated from colloidal CdTe QDs grown by an aqueous synthesis process followed by an ion-exchange step in which Hg2+ ions progressively replace Cd2+. For ellipsometric studies, several films were prepared on glass substrates using layer-by-layer (LBL) deposition. The contribution of the QDs to the measured ellipsometric spectra is extracted from a multi-sample, transmission and multi-angle-of-incidence ellipsometric data analysis fitted using standard multilayer and effective medium models that include surface roughness effects, modeled by an effective medium approximation. The relationship of the dielectric function of the QDs retrieved from these studies to that of the corresponding II-VI bulk material counterparts is presented and discussed.
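For a two-component mixture with spherical inclusions, the Bruggeman effective medium approximation referred to above has a closed-form solution for the effective permittivity; a sketch of that standard relation (the abstract does not specify which EMA variant was used, so the spherical-inclusion Bruggeman form here is an assumption):

```python
import cmath

def bruggeman_eps(eps_a, eps_b, f_a):
    """Effective permittivity of a two-phase Bruggeman effective medium.

    Solves  f_a*(eps_a - e)/(eps_a + 2e) + (1 - f_a)*(eps_b - e)/(eps_b + 2e) = 0
    for e, which reduces to the quadratic 2e^2 - b*e - eps_a*eps_b = 0 with
    b = eps_a*(3*f_a - 1) + eps_b*(2 - 3*f_a). The root with positive real
    part is the physical one.
    """
    b = eps_a * (3.0 * f_a - 1.0) + eps_b * (2.0 - 3.0 * f_a)
    root = cmath.sqrt(b * b + 8.0 * eps_a * eps_b)
    e = (b + root) / 4.0
    if e.real < 0:
        e = (b - root) / 4.0
    return e
```

The same expression with complex permittivities handles absorbing phases, which is how surface roughness is typically modeled in multilayer ellipsometric fits (e.g. a 50/50 mix of film and void).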
Raab, Phillip Andrew; Claypoole, Keith Harvey; Hayashi, Kentaro; Baker, Charlene
2012-10-01
Based on the concept of allostatic load, this study proposed and evaluated a model for the relationship between childhood trauma, chronic medical conditions, and intervening variables affecting this relationship in individuals with severe mental illness. Childhood trauma, adult trauma, major depressive disorder symptoms, posttraumatic stress disorder symptoms, health risk factors, and chronic medical conditions were retrospectively assessed using a cross-sectional survey design in a sample of 117 individuals with severe mental illness receiving public mental health services. Path analyses produced a good-fitting model, with significant pathways from childhood to adult trauma and from adult trauma to chronic medical conditions. Multisample path analyses revealed the equivalence of the model across sex. The results support a model for the relationship between childhood and adult trauma and chronic medical conditions, which highlights the pathophysiological toll of cumulative trauma experienced across the life span and the pressing need to prevent retraumatization in this population.
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of equipment. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
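Once degradation states are divided, the expected remaining life in a Markov framework follows from the standard absorbing-chain relation (I - Q)t = 1, where Q is the transient-to-transient block of the transition matrix; a minimal sketch of that step (a generic textbook computation, not the paper's full dynamic multi-scale model):

```python
import numpy as np

def expected_remaining_life(P, failure_state):
    """Expected number of steps to absorption in the failure state.

    P is a row-stochastic transition matrix over degradation states, with
    failure_state absorbing. Solving (I - Q) t = 1 over the transient
    states gives the mean time to failure from each of them.
    """
    n = P.shape[0]
    transient = [i for i in range(n) if i != failure_state]
    Q = P[np.ix_(transient, transient)]
    t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    return dict(zip(transient, t))
```

For a three-state chain (healthy, degraded, failed) this yields one expected-life value per non-failed state; the dynamic weighting and multi-scale state division described above refine how P itself is estimated from data.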