Progress in Fully Automated Abdominal CT Interpretation
Summers, Ronald M.
2016-01-01
OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessments of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors are some examples where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews this progress and provides insights into what is in store in the near future for automated analysis of abdominal CT, ultimately leading to fully automated interpretation. PMID:27101207
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. Endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, which has a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The enhanced images of the corneal endothelium were then transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium, based on well-known diffraction theory. Estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to that obtained from classical, semi-automated analysis, and a strong correlation was found.
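The diffraction-based estimate described above can be sketched as follows. This is a minimal Python illustration, not the authors' (unpublished) Matlab implementation; the square-image assumption, the radial-peak search, and the hexagonal-packing density formula are ours:

```python
import numpy as np

def estimate_cell_density(img, pixel_size_um):
    """Estimate cell density (cells/mm^2) from the dominant ring in the
    image's Fourier power spectrum. Sketch only: assumes a square image
    and a roughly hexagonal cell mosaic."""
    h, w = img.shape
    # 2D power spectrum with the DC term shifted to the center
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int).ravel()
    # Radial average: a quasi-regular mosaic yields a bright ring whose
    # radius k (in frequency bins) corresponds to a spacing of w/k pixels
    counts = np.bincount(r)
    sums = np.bincount(r, spec.ravel())
    radial = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    radial[:3] = 0.0                      # suppress residual DC leakage
    k = int(np.argmax(radial[: min(h, w) // 2]))
    spacing_mm = (w / k) * pixel_size_um / 1000.0
    # Hexagonal packing: density ~ 2 / (sqrt(3) * spacing^2)
    return 2.0 / (np.sqrt(3) * spacing_mm ** 2)
```

On a synthetic periodic pattern with a known spacing, the recovered density matches the value implied by that spacing, which is the essence of the validation against semi-automated counts reported above.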
Huang, Jianyan; Maram, Jyotsna; Tepelus, Tudor C; Modak, Cristina; Marion, Ken; Sadda, SriniVas R; Chopra, Vikas; Lee, Olivia L
2017-08-07
To determine the reliability of corneal endothelial cell density (ECD) obtained by automated specular microscopy versus that of validated manual methods, and the factors that predict such reliability. Sharp central images from 94 control and 106 glaucomatous eyes were captured with a Konan NSP-9900 specular microscope. All images were analyzed by trained graders using Konan CellChek software, employing the fully automated and semi-automated methods as well as the Center Method. Images with a low cell count (input cell number <100) and/or guttata were compared with the Center and Flex-Center Methods. ECDs were compared, and absolute error was used to assess variation. The effects of age, cell count, cell size, and cell size variation on ECD were evaluated. No significant difference was observed between the Center and Flex-Center Methods in corneas with guttata (p=0.48) or low ECD (p=0.11). No difference (p=0.32) was observed in the ECD of normal controls <40 years old between the fully automated method and the manual Center Method. However, in older controls and glaucomatous eyes, ECD was overestimated by the fully automated method (p=0.034) and the semi-automated method (p=0.025) compared to the manual method. Our findings show that automated analysis significantly overestimates ECD in eyes with high polymegathism and/or large cell size, compared to the manual method. Therefore, we discourage reliance on the fully automated method alone to perform specular microscopy analysis, particularly if an accurate ECD value is imperative. Copyright © 2017. Published by Elsevier España, S.L.U.
Cardiac imaging: working towards fully-automated machine analysis & interpretation.
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-03-01
Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.
DOT National Transportation Integrated Search
2009-02-01
The Office of Special Investigations at Iowa Department of Transportation (DOT) collects FWD data on regular basis to evaluate pavement structural conditions. The primary objective of this study was to develop a fully-automated software system for ra...
Automated frame selection process for high-resolution microendoscopy
NASA Astrophysics Data System (ADS)
Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2015-04-01
We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
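The core of such a frame selector can be sketched in a few lines: score each frame by how much it differs from its temporal neighbors and keep the stillest one. This is a simplified illustration; the published algorithm's exact scoring is not given in the abstract, so the neighbor-difference metric here is an assumption:

```python
import numpy as np

def select_frame(video):
    """Pick the index of the frame with the least motion from a
    (T, H, W) grayscale clip. Motion score = mean absolute difference
    to adjacent frames; sharpness weighting is omitted in this sketch."""
    video = np.asarray(video, dtype=float)
    diffs = np.abs(np.diff(video, axis=0)).mean(axis=(1, 2))
    score = np.empty(len(video))
    score[0], score[-1] = diffs[0], diffs[-1]
    score[1:-1] = 0.5 * (diffs[:-1] + diffs[1:])  # average of both neighbors
    return int(np.argmin(score))
```

Because the score only needs pairwise frame differences, it runs in a single pass over the clip, which is what makes point-of-care use practical.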
Kim, Jungkyu; Jensen, Erik C; Stockton, Amanda M; Mathies, Richard A
2013-08-20
A fully integrated multilayer microfluidic chemical analyzer for automated sample processing and labeling, as well as analysis using capillary zone electrophoresis is developed and characterized. Using lifting gate microfluidic control valve technology, a microfluidic automaton consisting of a two-dimensional microvalve cellular array is fabricated with soft lithography in a format that enables facile integration with a microfluidic capillary electrophoresis device. The programmable sample processor performs precise mixing, metering, and routing operations that can be combined to achieve automation of complex and diverse assay protocols. Sample labeling protocols for amino acid, aldehyde/ketone and carboxylic acid analysis are performed automatically followed by automated transfer and analysis by the integrated microfluidic capillary electrophoresis chip. Equivalent performance to off-chip sample processing is demonstrated for each compound class; the automated analysis resulted in a limit of detection of ~16 nM for amino acids. Our microfluidic automaton provides a fully automated, portable microfluidic analysis system capable of autonomous analysis of diverse compound classes in challenging environments.
Ercan, Ertuğrul; Kırılmaz, Bahadır; Kahraman, İsmail; Bayram, Vildan; Doğan, Hüseyin
2012-11-01
Flow-mediated dilation (FMD) is used to evaluate endothelial function. Computer-assisted analysis utilizing edge detection permits continuous measurements along the vessel wall. We have developed a new fully automated software program to allow accurate and reproducible measurement. FMD was measured and analyzed in 18 coronary artery disease (CAD) patients and 17 controls, both manually and by the newly developed (computer-supported) software. The agreement between methods was assessed by Bland-Altman analysis. Mean age, body mass index, and cardiovascular risk factors were higher in the CAD group. Automated FMD% was 18.3±8.5 for the control subjects and 6.8±6.5 for the CAD group (p=0.0001). The intraobserver and interobserver correlations for automated measurement were high (r=0.974, r=0.981, r=0.937, and r=0.918, respectively). Manual FMD% at the 60th second was correlated with automated FMD% (r=0.471, p=0.004). The new fully automated software can be used for precise measurement of FMD, with lower intra- and interobserver variability than manual assessment.
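The two computations involved, edge-detected lumen diameter and the FMD percentage, can be sketched as follows. This is an illustrative simplification, assuming a dark lumen between bright walls in a one-dimensional brightness profile; the study's actual edge model is not described in the abstract:

```python
import numpy as np

def lumen_diameter(profile, mm_per_px):
    """Estimate lumen diameter from a brightness profile taken
    perpendicular to the vessel: the near wall is the strongest falling
    gradient, the far wall the strongest rising gradient after it."""
    g = np.gradient(np.asarray(profile, dtype=float))
    near = int(np.argmin(g))                    # bright wall -> dark lumen
    far = near + int(np.argmax(g[near:]))       # dark lumen -> bright wall
    return (far - near) * mm_per_px

def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated dilation as percent change from the baseline diameter."""
    return 100.0 * (peak_mm - baseline_mm) / baseline_mm
```

Tracking `lumen_diameter` continuously over the recording and feeding the baseline and peak values to `fmd_percent` reproduces the quantity compared between the manual and automated methods above.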
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2010-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. Endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may reduce the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, which has a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images, normalizing lighting and contrast. The enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium, based on well-known diffraction theory. Estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to that obtained from classical, semi-automated analysis, and a strong correlation was found.
Cardiac imaging: working towards fully-automated machine analysis & interpretation
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-01-01
Introduction Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation. PMID:28277804
NASA Astrophysics Data System (ADS)
Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa
2015-10-01
A fully automated optimization process is presented for the design of ducted propellers under open-water conditions, including 3D geometry modeling, meshing, optimization algorithms, and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open-water tests were performed and confirmed that the optimized ducted propeller improves hydrodynamic performance as predicted.
Lou, Junyang; Obuchowski, Nancy A; Krishnaswamy, Amar; Popovic, Zoran; Flamm, Scott D; Kapadia, Samir R; Svensson, Lars G; Bolen, Michael A; Desai, Milind Y; Halliburton, Sandra S; Tuzcu, E Murat; Schoenhagen, Paul
2015-01-01
Preprocedural 3-dimensional CT imaging of the aortic annular plane plays a critical role for transcatheter aortic valve replacement (TAVR) planning; however, manual reconstructions are complex. Automated analysis software may improve reproducibility and agreement between readers but is incompletely validated. In 110 TAVR patients (mean age, 81 years; 37% female) undergoing preprocedural multidetector CT, automated reconstruction of the aortic annular plane and planimetry of the annulus was performed with a prototype of now commercially available software (syngo.CT Cardiac Function-Valve Pilot; Siemens Healthcare, Erlangen, Germany). Fully automated, semiautomated, and manual annulus measurements were compared. Intrareader and inter-reader agreement, intermodality agreement, and interchangeability were analyzed. Finally, the impact of these measurements on recommended valve size was evaluated. Semiautomated analysis required major correction in 5 patients (4.5%). In the remaining 95.5%, only minor correction was performed. Mean manual annulus area was significantly smaller than fully automated results (P < .001 for both readers) but similar to semiautomated measurements (5.0 vs 5.4 vs 4.9 cm², respectively). The frequency of concordant recommendations for valve size increased if manual analysis was replaced with the semiautomated method (60% agreement was improved to 82.4%; 95% confidence interval for the difference [69.1%-83.4%]). Semiautomated aortic annulus analysis, with minor correction by the user, provides reliable results in the context of TAVR annulus evaluation. Copyright © 2015 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
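The inter-method agreement statistics used above follow the standard Bland-Altman construction: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias ± 1.96 sample standard deviations. A minimal sketch (generic, not the study's statistical code):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods on the
    same subjects: returns (bias, (lower, upper)) where the limits of
    agreement are bias +/- 1.96 * SD of the paired differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Applied to paired annulus areas (e.g., manual vs. semiautomated, in cm²), a bias near zero with narrow limits is what justifies interchangeability claims like the one above.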
A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay.
Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham
2017-08-01
Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have built a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and imaging system consisting of network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
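Once each fish's orientation has been detected, rheotaxis can be summarized as the fraction of fish facing into the current. The metric below is an illustrative assumption (the published assay's exact score is not given in the abstract), but it shows the kind of population-level quantity such a pipeline produces:

```python
import numpy as np

def rheotaxis_index(orientations_deg, flow_deg=0.0, tol_deg=45.0):
    """Fraction of detected fish oriented head-into-current, i.e. within
    tol_deg of the upstream heading (flow direction + 180 degrees).
    The 45-degree tolerance is an illustrative choice."""
    o = np.asarray(orientations_deg, float)
    # smallest angular distance to the upstream heading, wrapped to [0, 180]
    d = np.abs((o - (flow_deg + 180.0) + 180.0) % 360.0 - 180.0)
    return float(np.mean(d <= tol_deg))
```

A dose-dependent drop in this index after cisplatin exposure is the behavioral readout the assay is built around.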
Fully automated analysis of multi-resolution four-channel micro-array genotyping data
NASA Astrophysics Data System (ADS)
Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.
2006-03-01
We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron resolution, with sizes up to 80 MB per channel). The system was developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward personalized genetic medicine, i.e., individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around time compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.
ROBOCAL: An automated NDA (nondestructive analysis) calorimetry and gamma isotopic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.R.; Powell, W.D.; Ostenak, C.A.
1989-11-01
ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototype robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multidrawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface is provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices.
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2010 CFR
2010-04-01
21 Food and Drugs, Vol. 8, 2010-04-01. Diagnostic Devices § 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system. (a) Identification. A fully automated short-term incubation cycle antimicrobial susceptibility system...
Hatz, F; Hardmeier, M; Bousleiman, H; Rüegg, S; Schindler, C; Fuhr, P
2015-02-01
To compare the reliability of a newly developed Matlab® toolbox for the fully automated pre- and post-processing of resting-state EEG (automated analysis, AA) with the reliability of analysis involving visually controlled pre- and post-processing (VA). 34 healthy volunteers (age: median 38.2 years, range 20-49; 82% female) underwent three consecutive 256-channel resting-state EEG recordings at one-year intervals. Results of the frequency analysis of AA and VA were compared with Pearson correlation coefficients, and reliability over time was assessed with intraclass correlation coefficients (ICCs). The mean correlation coefficient between AA and VA was 0.94±0.07; the mean ICC was 0.83±0.05 for AA and 0.84±0.07 for VA. AA and VA yield very similar results for spectral EEG analysis and are equally reliable. AA is less time-consuming, completely standardized, and independent of raters and their training. Automated processing of EEG facilitates workflow in quantitative EEG analysis. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
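The test-retest reliability statistic used above can be computed from an (n subjects × k sessions) matrix. The sketch below implements ICC(3,1), the two-way mixed, consistency, single-measure form; which exact ICC variant the study used is not stated in the abstract, so this choice is an assumption:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed, consistency, single measure, for an
    (n subjects x k sessions) matrix. Computed from the ANOVA mean
    squares: (MS_rows - MS_err) / (MS_rows + (k-1) * MS_err)."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

A value near 1 (as reported, ~0.83-0.84) means subjects keep their rank order across sessions, even if a systematic session offset exists.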
Cost-Effectiveness Analysis of the Automation of a Circulation System.
ERIC Educational Resources Information Center
Mosley, Isobel
A general methodology for cost effectiveness analysis was developed and applied to the Colorado State University library loan desk. The cost effectiveness of the existing semi-automated circulation system was compared with that of a fully manual one, based on the existing manual subsystem. Faculty users' time and computer operating costs were…
Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly
2013-01-01
High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions, due to background and sample variations as well as low signal-to-noise ratios. The lack of automated image analysis tools that generalize across varying acquisition conditions is one of the main challenges in biomedical image analysis. In particular, segmentation of axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy with similarly high specificity and sensitivity. Moreover, the algorithm maintains high performance under a wide range of image acquisition conditions, indicating that it is largely condition-invariant. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. This textural-analysis-based machine-learning approach thus offers a high-performance, condition-invariant tool for automated neurite segmentation. PMID:23261652
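The intuition behind texture-based segmentation is that neurites are locally "busy" while background is locally smooth, regardless of absolute brightness. The toy sketch below uses a single texture feature (local standard deviation) with a fixed threshold; the published method trains a classifier on many texture features, so this is an illustration of the principle, not the algorithm:

```python
import numpy as np

def local_std(img, win=5):
    """Local standard deviation in a win x win neighborhood: a simple
    texture feature that is insensitive to the local mean brightness."""
    h, w = img.shape
    pad = win // 2
    p = np.pad(np.asarray(img, float), pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + win, j:j + win].std()
    return out

def texture_segment(img, win=5, thresh=1.0):
    """Label high-texture pixels as foreground. The threshold here is a
    stand-in for the trained classifier of the published method."""
    return local_std(img, win) > thresh
```

Because the feature is computed relative to each neighborhood, a global brightness shift in the image leaves the segmentation unchanged, which is the condition-invariance property discussed above.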
Slide Set: Reproducible image analysis and batch processing with ImageJ.
Nanes, Benjamin A
2015-11-01
Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
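The core idea, a data table whose rows drive a chain of analysis commands, with every parameter recorded alongside the result, can be sketched in plain Python. This is a conceptual sketch of the workflow, not the plugin's actual ImageJ API:

```python
def run_batch(rows, commands):
    """Apply a chain of (name, function, params) commands to every row
    of a data table, keeping inputs, parameters, and results together
    in one record for reproducibility."""
    results = []
    for row in rows:
        rec = dict(row)                       # copy; input table untouched
        for name, fn, params in commands:
            rec[name] = fn(rec, **params)     # later commands see earlier results
        results.append(rec)
    return results
```

Chaining works because each command reads from the accumulated record, so a measurement command can consume the output of an earlier region-selection command, mirroring how Slide Set pipes one analysis step into the next.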
Peng, Chen; Frommlet, Alexandra; Perez, Manuel; Cobas, Carlos; Blechschmidt, Anke; Dominguez, Santiago; Lingel, Andreas
2016-04-14
NMR binding assays are routinely applied in hit finding and validation during early stages of drug discovery, particularly for fragment-based lead generation. To this end, compound libraries are screened by ligand-observed NMR experiments such as STD, T1ρ, and CPMG to identify molecules interacting with a target. The analysis of a high number of complex spectra is performed largely manually and therefore represents a limiting step in hit generation campaigns. Here we report a novel integrated computational procedure that processes and analyzes ligand-observed proton and fluorine NMR binding data in a fully automated fashion. A performance evaluation comparing automated and manual analysis results on 19F- and 1H-detected data sets shows that the program delivers robust, high-confidence hit lists in a fraction of the time needed for manual analysis and greatly facilitates visual inspection of the associated NMR spectra. These features enable considerably higher throughput, the assessment of larger libraries, and shorter turn-around times.
Maruoka, Sachiko; Nakakura, Shunsuke; Matsuo, Naoko; Yoshitomi, Kayo; Katakami, Chikako; Tabuchi, Hitoshi; Chikama, Taiichiro; Kiuchi, Yoshiaki
2017-10-30
To evaluate two specular microscopy analysis methods across different endothelial cell densities (ECDs). Endothelial images of one eye from each of 45 patients were taken using three different specular microscopes (three replicates each). To determine the consistency of the center-dot method, we compared SP-6000 and SP-2000P images. CME-530 and SP-6000 images were compared to assess the consistency of the fully automated method. The SP-6000 images from the two methods were also compared. Intraclass correlation coefficients (ICCs) for the three measurements were calculated, and parametric multiple-comparisons tests and Bland-Altman analysis were performed. The mean ECD was 2425 ± 883 (range 516-3707) cells/mm². ICC values were >0.9 for all three microscopes for ECD, but the coefficients of variation (CVs) were 0.3-0.6. For ECD measurements, Bland-Altman analysis revealed mean differences of 42 cells/mm² between the SP-2000P and SP-6000 for the center-dot method; 57 cells/mm² between the SP-6000 measurements from both methods; and -5 cells/mm² between the SP-6000 and CME-530 for the fully automated method (95% limits of agreement: -201 to 284 cells/mm², -410 to 522 cells/mm², and -327 to 318 cells/mm², respectively). For CV measurements, the mean differences were -3%, -12%, and 13% (95% limits of agreement: -18 to 11%, -26 to 2%, and -5 to 32%, respectively). Despite using three replicate measurements, the precision of the center-dot method with the SP-2000P and SP-6000 software was only ±10% for ECD data and was even worse for the fully automated method. Japan Clinical Trials Register (http://www.umin.ac.jp/ctr/index/htm9) number UMIN 000015236.
Automated Detection and Analysis of Interplanetary Shocks with Real-Time Application
NASA Astrophysics Data System (ADS)
Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.
2006-12-01
The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. Our goal is to provide an automated code that finds and analyzes interplanetary shocks as they occur, for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. Although these codes can be automated in a reasonable manner to yield solutions not far from those obtained by user-directed interactive analysis, event detection presents an added obstacle and is the first step in a fully automated analysis. We present a fully automated Rankine-Hugoniot analysis code that can scan the ACE science data, find shock candidates, analyze the events, obtain solutions in good agreement with those derived from interactive applications, and dismiss false-positive shock candidates on the basis of the conservation equations. The intent is to make this code available to NOAA for use in real-time space weather applications. The code has the added advantage of being able to scan spacecraft data sets to provide shock solutions for use outside real-time applications, and it can easily be applied to science-quality data sets from other missions. Use of the code for this purpose will also be explored.
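A stripped-down version of the detection-plus-conservation-check logic can be sketched for a single jump condition. The operational code fits the full Rankine-Hugoniot system (mass, momentum, energy, and magnetic field); the toy below uses only mass conservation, n1(v1 - vs) = n2(v2 - vs), to recover the shock speed vs from upstream/downstream density and speed, and the compression threshold is an illustrative choice:

```python
def shock_candidate(n_up, v_up, n_dn, v_dn, min_ratio=1.2):
    """Screen a fast-forward-shock candidate from upstream (n_up, v_up)
    and downstream (n_dn, v_dn) solar-wind density and speed, then
    solve the mass-flux jump condition for the shock speed vs.
    Returns vs, or None if the jump is not compressive and accelerated.
    One-dimensional sketch; not the full Rankine-Hugoniot fit."""
    if n_dn / n_up < min_ratio or v_dn <= v_up:
        return None   # no density compression or no speed increase
    # Mass conservation across the front: n1*(v1 - vs) = n2*(v2 - vs)
    return (n_dn * v_dn - n_up * v_up) / (n_dn - n_up)
```

In the full analysis, the same conservation equations that yield vs are re-used to reject false positives: a candidate whose fitted jump badly violates any conserved flux is dismissed, exactly as described above.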
Barbesi, Donato; Vicente Vilas, Víctor; Millet, Sylvain; Sandow, Miguel; Colle, Jean-Yves; Aldave de Las Heras, Laura
2017-01-01
A LabVIEW ® -based software for the control of the fully automated multi-sequential flow injection analysis Lab-on-Valve (MSFIA-LOV) platform AutoRAD performing radiochemical analysis is described. The analytical platform interfaces an Arduino ® -based device triggering multiple detectors providing a flexible and fit for purpose choice of detection systems. The different analytical devices are interfaced to the PC running LabVIEW ® VI software using USB and RS232 interfaces, both for sending commands and receiving confirmation or error responses. The AUTORAD platform has been successfully applied for the chemical separation and determination of Sr, an important fission product pertinent to nuclear waste.
Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching
2016-01-01
Background. Fish species may be identified based on their unique otolith shape or contour. Several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, no fully automated species identification model has achieved a classification accuracy higher than 80%. The purpose of the current study was to develop a fully automated model, based on the otolith contours, that identifies fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. Short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant analysis (DA), as a classification technique, was used to train and test the model based on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% in all cases. In addition, the effects of STFT variables on the performance of the identification model were explored in this study. Conclusions. The short-time Fourier transform could determine important features of the otolith outlines. The fully automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model code can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The current model has the flexibility to be used for more species and families in future studies.
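The STFT-plus-discriminant-analysis idea can be sketched on synthetic contours. Everything below (the mock otolith generator, lobe counts, STFT window size) is a hypothetical stand-in, not the published STFT-DA model:

```python
import numpy as np
from scipy.signal import stft
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def contour_features(radii, nperseg=32):
    # STFT of the radius-vs-angle signal; the magnitudes summarise where
    # along the outline each frequency (shape detail) occurs
    _, _, Z = stft(radii, nperseg=nperseg)
    return np.abs(Z).ravel()

def synthetic_otolith(n_lobes, n_points=256):
    # Mock outline: unit circle modulated by n_lobes bumps plus noise
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    return 1.0 + 0.2 * np.cos(n_lobes * theta) + 0.01 * rng.standard_normal(n_points)

# Two mock "species" distinguished by the number of outline lobes
X = np.array([contour_features(synthetic_otolith(k)) for k in (3, 5) for _ in range(10)])
y = np.array([0] * 10 + [1] * 10)

clf = LinearDiscriminantAnalysis().fit(X, y)
acc = clf.score(X, y)
```

On strongly separated synthetic shapes the classifier fits the training set essentially perfectly; the real model was of course validated on held-out specimens.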
Grimmer, Timo; Wutz, Carolin; Alexopoulos, Panagiotis; Drzezga, Alexander; Förster, Stefan; Förstl, Hans; Goldhardt, Oliver; Ortner, Marion; Sorg, Christian; Kurz, Alexander
2016-02-01
Biomarkers of Alzheimer disease (AD) can be imaged in vivo and can be used for diagnostic and prognostic purposes in people with cognitive decline and dementia. Indicators of amyloid deposition such as ¹¹C-Pittsburgh compound B (¹¹C-PiB) PET are primarily used to identify or rule out brain diseases that are associated with amyloid pathology but have also been deployed to forecast the clinical course. Indicators of neuronal metabolism including ¹⁸F-FDG PET demonstrate the localization and severity of neuronal dysfunction and are valuable for differential diagnosis and for predicting the progression from mild cognitive impairment (MCI) to dementia. It is a matter of debate whether to analyze these images visually or using automated techniques. Therefore, we compared the usefulness of both imaging methods and both analysis strategies to predict dementia due to AD. In MCI participants, a baseline examination, including clinical and imaging assessments, and a clinical follow-up examination after a planned interval of 24 mo were performed. Of 28 MCI patients, 9 developed dementia due to AD, 2 developed frontotemporal dementia, and 1 developed moderate dementia of unknown etiology. The positive predictive value, negative predictive value, and accuracy for the prediction of progression to dementia due to AD with ¹¹C-PiB were 0.50, 1.00, and 0.68, respectively, for the visual analysis and 0.53, 1.00, and 0.71, respectively, for the fully automated analysis. The positive predictive value, negative predictive value, and accuracy of fully automated analyses of ¹⁸F-FDG PET were 0.37, 0.78, and 0.50, respectively. Results of visual analyses were highly variable between raters but were superior to automated analyses. Both ¹⁸F-FDG and ¹¹C-PiB imaging appear to be of limited use for predicting the progression from MCI to dementia due to AD in short-term follow-up, irrespective of the strategy of analysis. On the other hand, amyloid PET is extremely useful to rule out underlying AD. The findings of the present study favor a fully automated method of analysis for ¹¹C-PiB assessments and a visual analysis by experts for ¹⁸F-FDG assessments. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
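The predictive values quoted above follow directly from a 2x2 contingency table. A quick check, using counts that are consistent with the reported visual PiB figures (the exact table is not given in the abstract, so these counts are inferred, not quoted):

```python
def predictive_values(tp, fp, tn, fn):
    """Positive/negative predictive value and accuracy from a 2x2 table."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return ppv, npv, acc

# Counts consistent with the visual PiB reading reported above:
# all 9 AD progressors flagged (no false negatives) plus 9 false positives,
# out of 28 MCI patients in total.
ppv, npv, acc = predictive_values(tp=9, fp=9, tn=10, fn=0)
```

With these counts, PPV = 0.50, NPV = 1.00 and accuracy = 19/28 ≈ 0.68, matching the reported values.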
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Giancardo, Luca; Li, Yaquin
2013-01-01
Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is beginning to grow in importance. One application of retina image analysis is in telemedicine, where an automated system could enable the automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, both using data from the telemedicine network and other public databases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.R.; Bonner, C.A.; Ostenak, C.A.
1989-01-01
ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototypical robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multi-drawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface and database system are provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices. 10 refs., 10 figs., 4 tabs.
ROBOCAL: Gamma-ray isotopic hardware/software interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.R.; Bonner, C.A.; Ostenak, C.A.
1989-01-01
ROBOCAL, presently being developed at the Los Alamos National Laboratory, is a full-scale prototypical robotic system for remotely performing calorimetric and gamma-ray isotopic measurements of nuclear materials. It features a fully automated vertical stacker-retriever for storing and retrieving packaged nuclear materials from a multi-drawer system, and a fully automated, uniquely integrated gantry robot for programmable selection and transfer of nuclear materials to calorimetric and gamma-ray isotopic measurement stations. Since ROBOCAL is to require almost no operator intervention, a mechanical control system is required in addition to a totally automated assay system. The assay system must be a completely integrated data acquisition and isotopic analysis package fully capable of performing state-of-the-art homogeneous and heterogeneous analyses on many varied matrices. The TRIFID assay system, discussed at this conference by J. G. Fleissner of the Rocky Flats Plant, has been adopted because of its many automated features. These include: MCA/ADC setup and acquisition; spectral storage and analysis utilizing an expert-system formalism; report generation with internal measurement-control printout; and user-friendly screens and menus. The mechanical control portion consists primarily of two detector platforms and a sample platform, each with independent movement. Some minor modifications and additions to TRIFID are needed to interface the assay and mechanical portions with the CimRoc 4000 software controlling the robot. 6 refs., 5 figs., 3 tabs.
An Intelligent Automation Platform for Rapid Bioprocess Design.
Wu, Tianyi; Zhou, Yuhong
2014-08-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.
NMR-based automated protein structure determination.
Würz, Julia M; Kazemi, Sina; Schmidt, Elena; Bagaria, Anurag; Güntert, Peter
2017-08-15
NMR spectra analysis for protein structure determination can now in many cases be performed by automated computational methods. This overview of the computational methods for NMR protein structure analysis presents recent automated methods for signal identification in multidimensional NMR spectra, sequence-specific resonance assignment, collection of conformational restraints, and structure calculation, as implemented in the CYANA software package. These algorithms are sufficiently reliable and integrated into one software package to enable the fully automated structure determination of proteins starting from NMR spectra without manual interventions or corrections at intermediate steps, with an accuracy of 1-2 Å backbone RMSD in comparison with manually solved reference structures. Copyright © 2017 Elsevier Inc. All rights reserved.
Fully automated contour detection of the ascending aorta in cardiac 2D phase-contrast MRI.
Codari, Marina; Scarabello, Marco; Secchi, Francesco; Sforza, Chiarella; Baselli, Giuseppe; Sardanelli, Francesco
2018-04-01
In this study we propose a fully automated method for localizing and segmenting the ascending aortic lumen in phase-contrast magnetic resonance imaging (PC-MRI). Twenty-five phase-contrast series were randomly selected from a large population dataset of patients whose cardiac MRI examination, performed from September 2008 to October 2013, was unremarkable. The local Ethical Committee approved this retrospective study. The ascending aorta was automatically identified on each phase of the cardiac cycle using a priori knowledge of aortic geometry. The frame that maximized the area, eccentricity, and solidity parameters was chosen for unsupervised initialization. Aortic segmentation was performed on each frame using the active contours without edges technique. The entire algorithm was developed using Matlab R2016b. To validate the proposed method, manual segmentation performed by a highly experienced operator was used as the reference. The Dice similarity coefficient, Bland-Altman analysis, and Pearson's correlation coefficient were used as performance metrics. Comparing automated and manual segmentation of the aortic lumen on 714 images, Bland-Altman analysis showed a bias of -6.68 mm², a coefficient of repeatability of 91.22 mm², a mean area measurement of 581.40 mm², and a reproducibility of 85%. Automated and manual segmentations were highly correlated (R = 0.98). The Dice similarity coefficient versus the manual reference standard was 94.6 ± 2.1% (mean ± standard deviation). A fully automated and robust method for identification and segmentation of the ascending aorta on PC-MRI was developed. Its application to patients with a variety of pathologic conditions is advisable. Copyright © 2017 Elsevier Inc. All rights reserved.
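The two validation metrics used here are easy to state in code. A minimal sketch with toy masks and measurements (an illustration of the metrics, not the authors' Matlab implementation):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def bland_altman(auto, manual):
    """Bland-Altman bias and coefficient of repeatability
    (1.96 * SD of the paired differences)."""
    d = np.asarray(auto, float) - np.asarray(manual, float)
    return d.mean(), 1.96 * d.std(ddof=1)

# Toy masks: a 16-pixel square vs the same square shifted by one column
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[2:6, 3:7] = True
```

Here `dice(a, b)` is 2·12/(16+16) = 0.75, showing how a one-pixel shift of a small region already costs a quarter of the overlap.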
Choe, Leila H; Lee, Kelvin H
2003-10-01
We investigate one approach to assess the quantitative variability in two-dimensional gel electrophoresis (2-DE) separations based on gel-to-gel variability, sample preparation variability, sample load differences, and the effect of automation on image analysis. We observe that 95% of spots present in three out of four replicate gels exhibit less than a 0.52 coefficient of variation (CV) in fluorescent stain intensity (% volume) for a single sample run on multiple gels. When four parallel sample preparations are performed, this value increases to 0.57. We do not observe any significant change in quantitative value for an increase or decrease in sample load of 30% when using appropriate image analysis variables. Increasing use of automation, while necessary in modern 2-DE experiments, does change the observed level of quantitative and qualitative variability among replicate gels. The number of spots that change qualitatively for a single sample run in parallel varies from a CV = 0.03 for fully manual analysis to CV = 0.20 for a fully automated analysis. We present a systematic method by which a single laboratory can measure gel-to-gel variability using only three gel runs.
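The spot-level coefficient of variation used throughout this analysis is simply SD/mean across replicate gels. A toy example with invented %-volume values:

```python
import numpy as np

def spot_cv(volumes):
    """Coefficient of variation (sample SD / mean) of one spot's
    %-volume across replicate gels."""
    v = np.asarray(volumes, float)
    return v.std(ddof=1) / v.mean()

# One spot measured on four replicate gels (hypothetical %-volumes);
# this much scatter gives CV ~ 0.17, well under the 0.52 bound
# reported for 95% of spots above.
cv = spot_cv([1.0, 1.2, 0.9, 1.3])
```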
A fully-automated neural network analysis of AFM force-distance curves for cancer tissue diagnosis
NASA Astrophysics Data System (ADS)
Minelli, Eleonora; Ciasca, Gabriele; Sassun, Tanya Enny; Antonelli, Manila; Palmieri, Valentina; Papi, Massimiliano; Maulucci, Giuseppe; Santoro, Antonio; Giangaspero, Felice; Delfini, Roberto; Campi, Gaetano; De Spirito, Marco
2017-10-01
Atomic Force Microscopy (AFM) has the unique capability of probing the nanoscale mechanical properties of biological systems that affect and are affected by the occurrence of many pathologies, including cancer. This capability has triggered growing interest in the translational process of AFM from physics laboratories to clinical practice. A factor still hindering the current use of AFM in diagnostics is related to the complexity of AFM data analysis, which is time-consuming and needs highly specialized personnel with a strong physical and mathematical background. In this work, we demonstrate an operator-independent neural-network approach for the analysis of surgically removed brain cancer tissues. This approach allowed us to distinguish—in a fully automated fashion—cancer from healthy tissues with high accuracy, also highlighting the presence and the location of infiltrating tumor cells.
Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene
2010-11-29
The aim of this study was to describe the behaviour, kinetics, time courses and limitations of six different fully automated spectrometric methods: DPPH, TEAC, FRAP, DMPD, Free Radicals, and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviation was within the range of 1.5 to 2.5%). The obtained characteristics were compared and discussed. Moreover, the data obtained were used to optimize and automate all of the mentioned protocols. The automatic analyzer allowed us to analyse a larger set of samples simultaneously, to decrease the measurement time, to eliminate errors and to provide data of higher quality in comparison to manual analysis. The total time of analysis for one sample was decreased to 10 min for all six methods. By contrast, the total time of manual spectrometric determination was approximately 120 min. The obtained data showed good correlations between the studied methods (R = 0.97-0.99).
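The calibration and between-method correlation steps can be sketched as follows; the absorbance values and concentrations are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical Trolox calibration: response vs concentration (uM)
conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
dpph = np.array([0.00, 0.11, 0.21, 0.40, 0.79])   # mock DPPH responses
teac = np.array([0.00, 0.10, 0.22, 0.41, 0.80])   # mock TEAC responses

# Linear calibration curve for one method (least-squares fit)
slope, intercept = np.polyfit(conc, dpph, 1)

# Pearson correlation between two methods on the same standards,
# analogous to the R = 0.97-0.99 reported above
r = np.corrcoef(dpph, teac)[0, 1]
```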
Lee, Hyunkwang; Troschel, Fabian M; Tajmir, Shahein; Fuchs, Georg; Mario, Julia; Fintelmann, Florian J; Do, Synho
2017-08-01
Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an "eyeball test" to assess whether patients will tolerate major surgery or chemotherapy, "eyeballing" is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of the morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization of an ImageNet pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis. This experiment was conducted by varying window level (WL), window width (WW), and bit resolutions in order to better understand the effects of the parameters on the model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves 0.93 ± 0.02 Dice similarity coefficient (DSC) and 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and expanded to volume analysis of 3D datasets.
ATALARS Operational Requirements: Automated Tactical Aircraft Launch and Recovery System
DOT National Transportation Integrated Search
1988-04-01
The Automated Tactical Aircraft Launch and Recovery System (ATALARS) is a fully automated air traffic management system intended for the military service but is also fully compatible with civil air traffic control systems. This report documents a fir...
van der Logt, Elise M. J.; Kuperus, Deborah A. J.; van Setten, Jan W.; van den Heuvel, Marius C.; Boers, James. E.; Schuuring, Ed; Kibbelaar, Robby E.
2015-01-01
HER2 assessment is routinely used to select patients with invasive breast cancer that might benefit from HER2-targeted therapy. The aim of this study was to validate a fully automated in situ hybridization (ISH) procedure that combines the automated Leica HER2 fluorescent ISH system for Bond with supervised automated analysis with the Visia imaging D-Sight digital imaging platform. HER2 assessment was performed on 328 formalin-fixed/paraffin-embedded invasive breast cancer tumors on tissue microarrays (TMA) and 100 (50 selected IHC 2+ and 50 random IHC scores) full-sized slides of resections/biopsies obtained for diagnostic purposes previously. For digital analysis slides were pre-screened at 20x and 100x magnification for all fluorescent signals and supervised-automated scoring was performed on at least two pictures (in total at least 20 nuclei were counted) with the D-Sight HER2 FISH analysis module by two observers independently. Results were compared to data obtained previously with the manual Abbott FISH test. The overall agreement with Abbott FISH data among TMA samples and 50 selected IHC 2+ cases was 98.8% (κ = 0.94) and 93.8% (κ = 0.88), respectively. The results of 50 additionally tested unselected IHC cases were concordant with previously obtained IHC and/or FISH data. The combination of the Leica FISH system with the D-Sight digital imaging platform is a feasible method for HER2 assessment in routine clinical practice for patients with invasive breast cancer. PMID:25844540
Design and Prototype of an Automated Column-Switching HPLC System for Radiometabolite Analysis.
Vasdev, Neil; Collier, Thomas Lee
2016-08-17
Column-switching high performance liquid chromatography (HPLC) is extensively used for the critical analysis of radiolabeled ligands and their metabolites in plasma. However, the lack of streamlined apparatus and consequently varying protocols remain as a challenge among positron emission tomography laboratories. We report here the prototype apparatus and implementation of a fully automated and simplified column-switching procedure to allow for the easy and automated determination of radioligands and their metabolites in up to 5 mL of plasma. The system has been used with conventional UV and coincidence radiation detectors, as well as with a single quadrupole mass spectrometer.
Ihlow, Alexander; Schweizer, Patrick; Seiffert, Udo
2008-01-23
To find candidate genes that potentially influence the susceptibility or resistance of crop plants to powdery mildew fungi, an assay system based on transient-induced gene silencing (TIGS) as well as transient over-expression in single epidermal cells of barley has been developed. However, this system relies on quantitative microscopic analysis of the barley/powdery mildew interaction and will only become a high-throughput tool of phenomics upon automation of the most time-consuming steps. We have developed a high-throughput screening system based on a motorized microscope which evaluates the specimens fully automatically. A large-scale double-blind verification of the system showed an excellent agreement of manual and automated analysis and proved the system to work dependably. Furthermore, in a series of bombardment experiments an RNAi construct targeting the Mlo gene was included, which is expected to phenocopy resistance mediated by recessive loss-of-function alleles such as mlo5. In most cases, the automated analysis system recorded a shift towards resistance upon RNAi of Mlo, thus providing proof of concept for its usefulness in detecting gene-target effects. Besides saving labor and enabling a screening of thousands of candidate genes, this system offers continuous operation of expensive laboratory equipment and provides a less subjective analysis as well as a complete and enduring documentation of the experimental raw data in terms of digital images. In general, it proves the concept of enabling available microscope hardware to handle challenging screening tasks fully automatically.
Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven
2017-01-01
Motivation. Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi-capillary column ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention of human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. Method. We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios. Results. The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high-throughput use of the technology. PMID:28910313
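One of the winning pipelines (local-maxima peak detection, DBSCAN clustering of peaks across experiments, random-forest classification with cross-validated AUC) can be sketched on synthetic one-dimensional traces. All data, thresholds and parameters below are hypothetical; real MCC-IMS data is two-dimensional and far noisier:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 1000)

def trace(peak_at):
    """Mock 1-D measurement: a shared peak at 7.0 plus a class-dependent one."""
    s = np.exp(-(x - peak_at) ** 2 / 0.02) + 0.8 * np.exp(-(x - 7.0) ** 2 / 0.02)
    return s + 0.01 * rng.standard_normal(x.size)

samples = [trace(3.0) for _ in range(15)] + [trace(3.6) for _ in range(15)]
labels = np.array([0] * 15 + [1] * 15)

# (i) peak identification: local maxima above a height threshold
peak_pos = [x[find_peaks(s, height=0.4)[0]] for s in samples]

# (ii) cluster peak positions across experiments into consensus locations
all_pos = np.concatenate(peak_pos)
clusters = DBSCAN(eps=0.2, min_samples=5).fit_predict(all_pos.reshape(-1, 1))
centers = np.sort([all_pos[clusters == c].mean() for c in set(clusters) if c != -1])

# binary features: does each sample have a peak near each consensus location?
X = np.array([[float(np.any(np.abs(p - c) < 0.2)) for c in centers] for p in peak_pos])

# (iii) multivariate classification, scored by cross-validated AUC
auc = cross_val_score(RandomForestClassifier(random_state=0), X, labels,
                      cv=5, scoring='roc_auc').mean()
```

On this cleanly separable toy data the AUC is essentially perfect; the point is only to show how the three stages named in the abstract chain together.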
Frapid: achieving full automation of FRAP for chemical probe validation
Yapp, Clarence; Rogers, Catherine; Savitsky, Pavel; Philpott, Martin; Müller, Susanne
2016-01-01
Fluorescence Recovery After Photobleaching (FRAP) is an established method for validating chemical probes against the chromatin reading bromodomains, but so far requires constant human supervision. Here, we present Frapid, an automated open source code implementation of FRAP that fully handles cell identification through fuzzy logic analysis, drug dispensing with a custom-built fluid handler, image acquisition & analysis, and reporting. We successfully tested Frapid on 3 bromodomains as well as on spindlin1 (SPIN1), a methyl lysine binder, for the first time. PMID:26977352
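FRAP analysis of this kind ultimately reduces to fitting a recovery curve. Below is a generic single-exponential recovery fit on synthetic data — a common FRAP model, not necessarily the exact one used in Frapid, and all numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, i0, a, tau):
    """Single-exponential FRAP recovery: post-bleach intensity i0 climbing
    towards a plateau i0 + a with time constant tau."""
    return i0 + a * (1.0 - np.exp(-t / tau))

# Synthetic recovery trace: bleach to 0.2, recover to 0.8 with tau = 4 s
t = np.linspace(0.0, 30.0, 120)
rng = np.random.default_rng(2)
data = recovery(t, 0.2, 0.6, 4.0) + 0.01 * rng.standard_normal(t.size)

(i0, a, tau), _ = curve_fit(recovery, t, data, p0=(0.1, 0.5, 2.0))
mobile_fraction = a / (1.0 - i0)   # recovered fraction of the bleach depth
t_half = tau * np.log(2.0)         # half-time of recovery
```

The fitted mobile fraction and half-time are the readouts typically compared across probe concentrations when validating a chemical probe.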
Fully Automated Sunspot Detection and Classification Using SDO HMI Imagery in MATLAB
2014-03-27
Thesis presented to the Faculty, Department of Engineering Physics, Graduate School of Engineering and Management, Air Force Institute of Technology (AFIT-ENP-14-M-34). Gordon M. Spahr, BS, Second Lieutenant, USAF. Distribution unlimited.
NASA Astrophysics Data System (ADS)
Gorlach, Igor; Wessel, Oliver
2008-09-01
In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.
ATLAS, an integrated structural analysis and design system. Volume 6: Design module theory
NASA Technical Reports Server (NTRS)
Backman, B. F.
1979-01-01
The automated design theory underlying the operation of the ATLAS Design Module is described. The methods, applications and limitations associated with the fully stressed design, the thermal fully stressed design and a regional optimization algorithm are presented. A discussion of the convergence characteristics of the fully stressed design is also included. Derivations and concepts specific to the ATLAS design theory are shown, while conventional terminology and established methods are identified by references.
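Fully stressed design in its simplest form is stress-ratio resizing: each member's area is scaled by the ratio of its working stress to the allowable stress until every member is fully stressed. A sketch for a statically determinate set of members (illustrative loads and units, not ATLAS code):

```python
def fully_stressed_design(loads, areas, sigma_allow, tol=1e-6, max_iter=50):
    """Stress-ratio resizing. For a statically determinate structure the
    member forces do not depend on the areas, so the iteration converges
    in one pass; indeterminate structures need the loop."""
    for _ in range(max_iter):
        stresses = [f / a for f, a in zip(loads, areas)]
        new_areas = [a * s / sigma_allow for a, s in zip(areas, stresses)]
        if all(abs(n - a) <= tol * a for n, a in zip(new_areas, areas)):
            return new_areas
        areas = new_areas
    return areas

# Three members carrying 100, 250 and 400 kN against a 200 MPa allowable;
# the converged areas are simply force / allowable stress (m^2)
areas = fully_stressed_design([100e3, 250e3, 400e3], [1e-3, 1e-3, 1e-3], 200e6)
```

The convergence behaviour discussed in the abstract concerns exactly this loop when member forces redistribute as areas change.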
Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina
2016-04-01
To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.
Yoon, Nara; Do, In-Gu; Cho, Eun Yoon
2014-09-01
Easy and accurate HER2 testing is essential when considering the prognostic and predictive significance of HER2 in breast cancer. The use of a fully automated, quantitative FISH assay would be helpful to detect HER2 amplification in breast cancer tissue specimens with reduced inter-laboratory variability. We compared the concordance of HER2 status as assessed by an automated FISH staining system to manual FISH testing. Using 60 formalin-fixed paraffin-embedded breast carcinoma specimens, we assessed HER2 immunoexpression with two antibodies (DAKO HercepTest and CB11). In addition, HER2 status was evaluated with automated FISH using the Leica FISH System for BOND and a manual FISH using the Abbott PathVysion DNA Probe Kit. All but one specimen were successfully stained using both FISH methods. When the data were divided into two groups according to HER2/CEP17 ratio, positive and negative, the results from both the automated and manual FISH techniques were identical for all 59 evaluable specimens. The HER2 and CEP17 copy numbers and HER2/CEP17 ratio showed great agreement between both FISH methods. The automated FISH technique was interpretable with signal intensity similar to those of the manual FISH technique. In contrast with manual FISH, the automated FISH technique showed well-preserved architecture due to low membrane digestion. HER2 immunohistochemistry and FISH results showed substantial significant agreement (κ = 1.0, p < 0.001). HER2 status can be reliably determined using a fully automated HER2 FISH system with high concordance to the well-established manual FISH method. Because of stable signal intensity and high staining quality, the automated FISH technique may be more appropriate than manual FISH for routine applications. © 2013 APMIS. Published by John Wiley & Sons Ltd.
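Dichotomising specimens by the HER2/CEP17 ratio and comparing the two assays with Cohen's kappa can be sketched as follows; the copy numbers and the 2.0 cutoff are illustrative, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

def her2_status(her2_copies, cep17_copies, cutoff=2.0):
    """Dichotomise a HER2 FISH result by the HER2/CEP17 ratio
    (amplified, i.e. positive, if the ratio reaches the cutoff)."""
    return 'positive' if her2_copies / cep17_copies >= cutoff else 'negative'

# Mock per-specimen (HER2, CEP17) copy numbers from the two assays
automated = [her2_status(h, c) for h, c in [(8.1, 2.0), (2.2, 1.9), (4.5, 2.1)]]
manual    = [her2_status(h, c) for h, c in [(7.9, 2.1), (2.0, 2.0), (4.4, 2.0)]]

# Agreement beyond chance between the automated and manual calls
kappa = cohen_kappa_score(automated, manual)
```

Perfect agreement on the dichotomised calls yields κ = 1.0, as the study reported for its 59 evaluable specimens.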
Nielsen, Patricia Switten; Riber-Hansen, Rikke; Schmidt, Henrik; Steiniche, Torben
2016-04-09
Staging of melanoma includes quantification of a proliferation index, i.e., presumed melanocytic mitoses are counted manually in hot spots on H&E-stained sections. Yet its reproducibility and prognostic impact increase with immunohistochemical dual staining for phosphohistone H3 (PHH3) and MART1, which may also enable fully automated quantification by image analysis. To ensure manageable workloads and repeatable measurements in modern pathology, the study aimed to present an automated quantification of proliferation with automated hot-spot selection in PHH3/MART1-stained melanomas. Formalin-fixed, paraffin-embedded tissue from 153 consecutive stage I/II melanoma patients was immunohistochemically dual-stained for PHH3 and MART1. Whole slide images were captured, and the number of PHH3/MART1-positive cells was manually and automatically counted in the global tumor area and in a manually and automatically selected hot spot, i.e., a fixed 1-mm² square. Bland-Altman plots and hypothesis tests compared manual and automated procedures, and the Cox proportional hazards model established their prognostic impact. The mean difference between manual and automated global counts was 2.9 cells/mm² (P = 0.0071) and 0.23 cells per hot spot (P = 0.96) for automated counts in manually and automatically selected hot spots. In 77% of cases, manual and automated hot spots overlapped. Fully manual hot-spot counts yielded the highest prognostic performance, with an adjusted hazard ratio of 5.5 (95% CI, 1.3-24; P = 0.024) as opposed to 1.3 (95% CI, 0.61-2.9; P = 0.47) for automated counts with automated hot spots. The automated index and automated hot-spot selection were highly correlated with their manual counterparts, but altogether their prognostic impact was noticeably reduced. Because correct recognition of only one PHH3/MART1-positive cell seems important, extremely high sensitivity and specificity of the algorithm are required for prognostic purposes.
Thus, automated analysis may still aid and improve the pathologists' detection of mitoses in melanoma and possibly other malignancies.
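The manual-versus-automated count comparisons above rest on Bland-Altman analysis. A minimal sketch of the bias and 95% limits of agreement (generic helper, not the study's code):

```python
import statistics

def bland_altman(manual, automated):
    """Mean difference (bias) and 95% limits of agreement between two
    paired measurement series, as used in Bland-Altman plots."""
    diffs = [m - a for m, a in zip(manual, automated)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A near-zero bias with narrow limits, as for the hot-spot counts above (0.23 cells, P = 0.96), indicates the two methods are interchangeable on average.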
DOT National Transportation Integrated Search
2014-08-01
Fully automated or autonomous vehicles (AVs) hold great promise for the future of transportation. By 2020, Google, auto manufacturers and other technology providers intend to introduce self-driving cars to the public with either limited or fully a...
Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L
2018-06-07
We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time-consuming and operator-dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground truth and almost identical clinical results for the prediction of SGA.
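Predictive performance above is judged with receiver operating characteristic curves. The area under such a curve equals the probability that a randomly chosen positive case outscores a randomly chosen negative one; a minimal sketch of that Mann-Whitney formulation (generic helper, not OxNNet code):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outranks a negative one
    (Mann-Whitney U formulation; ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

"Almost identical" ROC curves for OxNNet and the ground truth mean this statistic barely changes when automated volumes replace manual ones.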
Zhang, Zhongqi; Zhang, Aming; Xiao, Gang
2012-06-05
Protein hydrogen/deuterium exchange (HDX) followed by protease digestion and mass spectrometric (MS) analysis is accepted as a standard method for studying protein conformation and conformational dynamics. In this article, an improved HDX MS platform with fully automated data processing is described. The platform significantly reduces systematic and random errors in the measurement by introducing two types of corrections in HDX data analysis. First, a mixture of short peptides with fast HDX rates is introduced as internal standards to adjust the variations in the extent of back exchange from run to run. Second, a designed unique peptide (PPPI) with a slow intrinsic HDX rate is employed as another internal standard to reflect the possible differences in protein intrinsic HDX rates when protein conformations at different solution conditions are compared. HDX data processing is achieved with a comprehensive HDX model to simulate the deuterium labeling and back exchange process. The HDX model is implemented in the in-house developed software MassAnalyzer and enables fully unattended analysis of the entire protein HDX MS data set, from ion detection and peptide identification to final processed HDX output, typically within 1 day. The final output of the automated data processing is a set (or the average) of the most probable protection factors for each backbone amide hydrogen. The utility of the HDX MS platform is demonstrated by exploring the conformational transition of a monoclonal antibody induced by increasing concentrations of guanidine.
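The run-to-run back-exchange correction described above can be illustrated with a toy model: an internal-standard peptide of known (here, assumed fully deuterated) labeling calibrates the deuterium loss in a given run, and that loss factor rescales the observed uptake of analyte peptides. The helper names and numbers below are hypothetical; the actual platform fits a full kinetic model in MassAnalyzer rather than a single scale factor:

```python
def back_exchange_fraction(std_observed, std_theoretical):
    """Estimate run-specific back exchange from an internal-standard
    peptide whose theoretical deuterium content is known."""
    return 1.0 - std_observed / std_theoretical

def correct_uptake(observed, b):
    """Rescale observed deuterium uptake to compensate for back exchange b."""
    return observed / (1.0 - b)
```

For example, if a standard with 9 exchangeable deuteriums is measured at 7.2, the run lost 20%, so an analyte peptide observed at 4.0 deuteriums is corrected to 5.0.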
NASA Astrophysics Data System (ADS)
Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.
2006-03-01
Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
NASA Astrophysics Data System (ADS)
Xie, Dengling; Xie, Yanjun; Liu, Peng; Tong, Lieshu; Chu, Kaiqin; Smith, Zachary J.
2017-02-01
Current flow-based blood counting devices require expensive and centralized medical infrastructure and are not appropriate for field use. In this paper we report a method to count red blood cells, white blood cells as well as platelets through a low-cost and fully automated blood counting system. The approach consists of using a compact, custom-built microscope with a large field of view to record bright-field and fluorescence images of samples that are diluted with a single, stable reagent mixture and counted using automatic algorithms. Sample collection is performed manually using a spring-loaded lancet and volume-metering capillary tubes. The capillaries are then dropped into a tube of pre-measured reagents and gently shaken for 10-30 seconds. The sample is loaded into a measurement chamber and placed on a custom 3D-printed platform. Sample translation and focusing is fully automated, and a user has only to press a button for the measurement and analysis to commence. Cost of the system is minimized through the use of custom-designed motorized components. We performed a series of comparative experiments by trained and untrained users on blood from adults and children. We compare the performance of our system, as operated by trained and untrained users, to the clinical gold standard using a Bland-Altman analysis, demonstrating good agreement with the clinical standard. The system's low cost, complete automation, and good field performance indicate that it can be successfully translated for use in low-resource settings where central hematology laboratories are not accessible.
Fully automated processing of fMRI data in SPM: from MRI scanner to PACS.
Maldjian, Joseph A; Baer, Aaron H; Kraft, Robert A; Laurienti, Paul J; Burdette, Jonathan H
2009-01-01
Here we describe the Wake Forest University Pipeline, a fully automated method for the processing of fMRI data using SPM. The method includes fully automated data transfer and archiving from the point of acquisition, real-time batch script generation, distributed grid processing, interface to SPM in MATLAB, error recovery and data provenance, DICOM conversion and PACS insertion. It has been used for automated processing of fMRI experiments, as well as for the clinical implementation of fMRI and spin-tag perfusion imaging. The pipeline requires no manual intervention, and can be extended to any studies requiring offline processing.
Data-Driven Surface Traversability Analysis for Mars 2020 Landing Site Selection
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Rothrock, Brandon; Almeida, Eduardo; Ansar, Adnan; Otero, Richard; Huertas, Andres; Heverly, Matthew
2015-01-01
The objective of this paper is three-fold: 1) to describe the engineering challenges in the surface mobility of the Mars 2020 Rover mission that are considered in the landing site selection process, 2) to introduce new automated traversability analysis capabilities, and 3) to present the preliminary analysis results for top candidate landing sites. The analysis capabilities presented in this paper include automated terrain classification, automated rock detection, digital elevation model (DEM) generation, and multi-ROI (region of interest) route planning. These analysis capabilities make it possible to fully utilize the vast volume of high-resolution orbiter imagery, to quantitatively evaluate surface mobility requirements for each candidate site, and to remove subjectivity from the comparison between sites in terms of engineering considerations. The analysis results supported the discussion in the Second Landing Site Workshop held in August 2015, which resulted in selecting eight candidate sites that will be considered in the third workshop.
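Route planning over a traversability map of the kind described above reduces, at its core, to least-cost pathfinding on a grid whose cells carry terrain costs and whose impassable cells (e.g., large rocks) are blocked. A minimal Dijkstra sketch on a toy cost grid (illustrative only, not the mission's planner):

```python
import heapq

def min_cost_route(grid, start, goal):
    """Dijkstra over a 4-connected grid of per-cell traversal costs.
    Impassable cells are None. Returns the total cost (including the
    start cell) of the cheapest route, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    best = {start: grid[start[0]][start[1]]}
    pq = [(best[start], start)]
    while pq:
        cost, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return cost
        if cost > best.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nxt = cost + grid[nr][nc]
                if nxt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nxt
                    heapq.heappush(pq, (nxt, (nr, nc)))
    return None
```

Running such a planner between multiple ROIs gives the quantitative drive-cost comparisons between candidate sites that the paper describes.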
Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J
2018-03-23
Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). Of the studies graded visually as good to excellent (n = 163), fair (n = 6), and poor (n = 3), 155, 5, and 2, respectively, received an automated IQ score > 50%. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardisation of clinical trial results across different datasets.
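Agreement between automated and visual IQ grades is reported above as Cohen's kappa, which discounts the agreement expected by chance. A minimal sketch of the statistic (generic helper, not the study's code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two paired categorical rating sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

A kappa of 0.67, as reported, is conventionally read as substantial agreement, well above the 0 expected by chance but short of the 1.0 of perfect concordance.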
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches based on visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to facilitate fully automated measurement of several important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
Sahore, Vishal; Sonker, Mukul; Nielsen, Anna V; Knob, Radim; Kumar, Suresh; Woolley, Adam T
2018-01-01
We have developed multichannel integrated microfluidic devices for automated preconcentration, labeling, purification, and separation of preterm birth (PTB) biomarkers. We fabricated multilayer poly(dimethylsiloxane)-cyclic olefin copolymer (PDMS-COC) devices that perform solid-phase extraction (SPE) and microchip electrophoresis (μCE) for automated PTB biomarker analysis. The PDMS control layer had a peristaltic pump and pneumatic valves for flow control, while the PDMS fluidic layer had five input reservoirs connected to microchannels and a μCE system. The COC layers had a reversed-phase octyl methacrylate porous polymer monolith for SPE and fluorescent labeling of PTB biomarkers. We determined μCE conditions for two PTB biomarkers, ferritin (Fer) and corticotropin-releasing factor (CRF). We used these integrated microfluidic devices to preconcentrate and purify off-chip-labeled Fer and CRF in an automated fashion. Finally, we performed a fully automated on-chip analysis of unlabeled PTB biomarkers, involving SPE, labeling, and μCE separation with 1 h total analysis time. These integrated systems have strong potential to be combined with upstream immunoaffinity extraction, offering a compact sample-to-answer biomarker analysis platform. Graphical abstract Pressure-actuated integrated microfluidic devices have been developed for automated solid-phase extraction, fluorescent labeling, and microchip electrophoresis of preterm birth biomarkers.
Khan, Ali R; Wang, Lei; Beg, Mirza Faisal
2008-07-01
Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic-based FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation, resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the segmentations generated are inherently smooth, so subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of Schizophrenia subjects.
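Segmentation accuracy above is reported as Dice overlap, twice the intersection of two masks divided by the sum of their sizes. A minimal sketch with masks represented as sets of voxel indices (generic helper, not the FS+LDDMM code):

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks given as sets of voxel
    indices: 2|A ∩ B| / (|A| + |B|), ranging from 0 to 1."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))
```

Values such as the 0.86 reported for the thalamus mean the automated and manual labels share the large majority of their voxels.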
Solvepol: A Reduction Pipeline for Imaging Polarimetry Data
NASA Astrophysics Data System (ADS)
Ramírez, Edgar A.; Magalhães, Antônio M.; Davidson, James W., Jr.; Pereyra, Antonio; Rubinho, Marcelo
2017-05-01
We present Solvepol, a new, fully automated data pipeline designed to reduce and analyze polarimetric data. It has been optimized for imaging data from IAGPOL, a calcite Savart prism plate-based polarimeter of the Instituto de Astronomía, Geofísica e Ciências Atmosféricas (IAG) of the University of São Paulo (USP). Solvepol is also the basis of a reduction pipeline for the wide-field optical polarimeter that will execute SOUTH POL, a survey of the polarized southern sky. Solvepol was written in the Interactive Data Language (IDL) and is based on the Image Reduction and Analysis Facility (IRAF) task PCCDPACK, developed by our polarimetry group. We present and discuss reduced data from standard stars and other fields and compare these results with those obtained in the IRAF environment. Our analysis shows that Solvepol, in addition to being a fully automated pipeline, produces results consistent with those reduced by PCCDPACK and reported in the literature.
Merlos Rodrigo, Miguel Angel; Krejcova, Ludmila; Kudr, Jiri; Cernei, Natalia; Kopel, Pavel; Richtera, Lukas; Moulick, Amitava; Hynek, David; Adam, Vojtech; Stiborova, Marie; Eckschlager, Tomas; Heger, Zbynek; Zitka, Ondrej
2016-12-15
Metallothioneins (MTs) are involved in heavy metal detoxification in a wide range of living organisms. Currently, it is well known that MTs play a substantial role in many pathophysiological processes, including carcinogenesis, and they can serve as diagnostic biomarkers. In order to increase the applicability of MT in cancer diagnostics, an easy-to-use and rapid method for its detection is required. Hence, the aim of this study was to develop a fully automated and high-throughput assay for the estimation of MT levels. Here, we report the optimal conditions for the isolation of MTs from rabbit liver and their characterization using MALDI-TOF MS. In addition, we described a two-step assay, which started with an isolation of the protein using functionalized paramagnetic particles and finished with their electrochemical analysis. The designed easy-to-use, cost-effective, error-free and fully automated procedure for the isolation of MT coupled with a simple analytical detection method can provide a prototype for the construction of a diagnostic instrument, which would be appropriate for the monitoring of carcinogenesis or MT-related chemoresistance of tumors. Copyright © 2016 Elsevier B.V. All rights reserved.
Peters, Sonja; Kaal, Erwin; Horsting, Iwan; Janssen, Hans-Gerd
2012-02-24
A new method is presented for the analysis of phenolic acids in plasma based on ion-pairing 'Micro-extraction in packed sorbent' (MEPS) coupled on-line to in-liner derivatisation-gas chromatography-mass spectrometry (GC-MS). The ion-pairing reagent served a dual purpose. It was used both to improve extraction yields of the more polar analytes and as the methyl donor in the automated in-liner derivatisation method. In this way, a fully automated procedure for the extraction, derivatisation and injection of a wide range of phenolic acids in plasma samples has been obtained. An extensive optimisation of the extraction and derivatisation procedure has been performed. The entire method showed excellent repeatabilities of under 10% and linearities of 0.99 or better for all phenolic acids. The limits of detection of the optimised method for the majority of phenolic acids were 10ng/mL or lower with three phenolic acids having less-favourable detection limits of around 100 ng/mL. Finally, the newly developed method has been applied in a human intervention trial in which the bioavailability of polyphenols from wine and tea was studied. Forty plasma samples could be analysed within 24h in a fully automated method including sample extraction, derivatisation and gas chromatographic analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
Effects of Automation Types on Air Traffic Controller Situation Awareness and Performance
NASA Technical Reports Server (NTRS)
Sethumadhavan, A.
2009-01-01
The Joint Planning and Development Office has proposed the introduction of automated systems to help air traffic controllers handle the increasing volume of air traffic in the next two decades (JPDO, 2007). Because fully automated systems leave operators out of the decision-making loop (e.g., Billings, 1991), it is important to determine the right level and type of automation that will keep air traffic controllers in the loop. This study examined the differences in the situation awareness (SA) and collision detection performance of individuals when they worked with information acquisition, information analysis, decision and action selection and action implementation automation to control air traffic (Parasuraman, Sheridan, & Wickens, 2000). When the automation was unreliable, the time taken to detect an upcoming collision was significantly longer for all the automation types compared with the information acquisition automation. This poor performance following automation failure was mediated by SA, with lower SA yielding poor performance. Thus, the costs associated with automation failure are greater when automation is applied to higher order stages of information processing. Results have practical implications for automation design and development of SA training programs.
Panuccio, Giuseppe; Torsello, Giovanni Federico; Pfister, Markus; Bisdas, Theodosios; Bosiers, Michel J; Torsello, Giovanni; Austermann, Martin
2016-12-01
To assess the usability of a fully automated fusion imaging engine prototype, matching preinterventional computed tomography with intraoperative fluoroscopic angiography during endovascular aortic repair. From June 2014 to February 2015, all patients treated electively for abdominal and thoracoabdominal aneurysms were enrolled prospectively. Before each procedure, preoperative planning was performed with a fully automated fusion engine prototype based on computed tomography angiography, creating a mesh model of the aorta. In a second step, this three-dimensional dataset was registered with the two-dimensional intraoperative fluoroscopy. The main outcome measure was the applicability of the fully automated fusion engine. Secondary outcomes were freedom from failure of automatic segmentation or of the automatic registration as well as accuracy of the mesh model, measuring deviations from intraoperative angiography in millimeters, if applicable. Twenty-five patients were enrolled in this study. The fusion imaging engine could be used successfully in 92% of the cases (n = 23). Freedom from failure of automatic segmentation was 44% (n = 11). The freedom from failure of the automatic registration was 76% (n = 19); the median error of the automatic registration process was 0 mm (interquartile range, 0-5 mm). The fully automated fusion imaging engine was found to be applicable in most cases, although in several cases fully automated data processing was not possible, requiring manual intervention. The accuracy of the automatic registration yielded excellent results and promises a useful and simple-to-use technology. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Costa, Susana P F; Pinto, Paula C A G; Lapa, Rui A S; Saraiva, M Lúcia M F S
2015-03-02
A fully automated Vibrio fischeri methodology based on sequential injection analysis (SIA) has been developed. The methodology was based on the aspiration of 75 μL of bacteria and 50 μL of inhibitor followed by measurement of the luminescence of the bacteria. The assays were conducted for contact times of 5, 15, and 30 min, by means of three mixing chambers that ensured adequate mixing conditions. The optimized methodology provided precise control of the reaction conditions, which is an asset for the analysis of a large number of samples. The developed methodology was applied to the evaluation of the impact of a set of ionic liquids (ILs) on V. fischeri, and the results were compared with those provided by a conventional assay kit (Biotox®). The collected data evidenced the influence of different cation head groups and anion moieties on the toxicity of ILs. Generally, aromatic cations and fluorine-containing anions displayed a higher impact on V. fischeri, evidenced by lower EC50 values. The proposed methodology was validated through statistical analysis, which demonstrated a strong positive correlation (P > 0.98) between assays. It is expected that the automated methodology can be tested for more classes of compounds and used as an alternative to microplate-based V. fischeri assay kits. Copyright © 2014 Elsevier B.V. All rights reserved.
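EC50 values like those above are read off dose-response data. One simple approach, shown here as an illustration only (the study's exact fitting procedure is not specified), interpolates log-linearly between the two doses that bracket 50% inhibition:

```python
import math

def ec50(concentrations, inhibitions):
    """EC50 by log-linear interpolation between the two doses that
    bracket 50% inhibition. Doses are assumed sorted ascending, with
    inhibition given in percent. Returns None if 50% is never bracketed."""
    for (c0, i0), (c1, i1) in zip(zip(concentrations, inhibitions),
                                  zip(concentrations[1:], inhibitions[1:])):
        if i0 <= 50.0 <= i1:
            f = (50.0 - i0) / (i1 - i0)
            return 10 ** (math.log10(c0) + f * (math.log10(c1) - math.log10(c0)))
    return None
```

Lower EC50 values, as reported for aromatic cations and fluorinated anions, mean less compound is needed to halve the bacterial luminescence, i.e., higher toxicity.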
Weber, Emanuel; Pinkse, Martijn W. H.; Bener-Aksam, Eda; Vellekoop, Michael J.; Verhaert, Peter D. E. M.
2012-01-01
We present a fully automated setup for performing in-line mass spectrometry (MS) analysis of conditioned media in cell cultures, in particular focusing on the peptides therein. The goal is to assess peptides secreted by cells in different culture conditions. The developed system is compatible with MS as analytical technique, as this is one of the most powerful analysis methods for peptide detection and identification. Proof of concept was achieved using the well-known mating-factor signaling in baker's yeast, Saccharomyces cerevisiae. Our concept system holds 1 mL of cell culture medium and allows maintaining a yeast culture for at least 40 hours with continuous supernatant extraction (and medium replenishing). The device's small dimensions result in reduced costs for reagents and open perspectives towards full integration on-chip. Experimental data that can be obtained are time-resolved peptide profiles in a yeast culture, including information about the appearance of mating-factor-related peptides. We emphasize that the system operates without any manual intervention or pipetting steps, which allows for an improved overall sensitivity compared to non-automated alternatives. MS data confirmed previously reported aspects of the physiology of the yeast-mating process. Moreover, mating-factor breakdown products (as well as evidence for a potentially responsible protease) were found. PMID:23091722
Geraghty, Adam W A; Torres, Leandro D; Leykin, Yan; Pérez-Stable, Eliseo J; Muñoz, Ricardo F
2013-09-01
Worldwide automated Internet health interventions have the potential to greatly reduce health disparities. High attrition from automated Internet interventions is ubiquitous, and presents a challenge in the evaluation of their effectiveness. Our objective was to evaluate variables hypothesized to be related to attrition, by modeling predictors of attrition in a secondary data analysis of two cohorts of an international, dual language (English and Spanish) Internet smoking cessation intervention. The two cohorts were identical except for the approach to follow-up (FU): one cohort employed only fully automated FU (n = 16 430), while the other cohort also used 'live' contact conditional upon initial non-response (n = 1000). Attrition rates were 48.1 and 10.8% for the automated FU and live FU cohorts, respectively. Significant attrition predictors in the automated FU cohort included higher levels of nicotine dependency, lower education, lower quitting confidence and receiving more contact emails. Participants' younger age was the sole predictor of attrition in the live FU cohort. While research on large-scale deployment of Internet interventions is at an early stage, this study demonstrates that differences in attrition from trials on this scale are (i) systematic and predictable and (ii) can largely be eliminated by live FU efforts. In fully automated trials, targeting the predictors we identify may reduce attrition, a necessary precursor to effective behavioral Internet interventions that can be accessed globally.
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.
Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian
2018-03-26
In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
Parallel solution-phase synthesis of a 2-aminothiazole library including fully automated work-up.
Buchstaller, Hans-Peter; Anlauf, Uwe
2011-02-01
A straightforward and effective procedure for the solution phase preparation of a 2-aminothiazole combinatorial library is described. Reaction, work-up and isolation of the title compounds as free bases was accomplished in a fully automated fashion using the Chemspeed ASW 2000 automated synthesizer. The compounds were obtained in good yields and excellent purities without any further purification procedure.
RootGraph: a graphic optimization tool for automated image analysis of plant roots
Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N.; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J.
2015-01-01
This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions. PMID:26224880
Assessment of Automated Analyses of Cell Migration on Flat and Nanostructured Surfaces
Grădinaru, Cristian; Łopacińska, Joanna M.; Huth, Johannes; Kestler, Hans A.; Flyvbjerg, Henrik; Mølhave, Kristian
2012-01-01
Motility studies of cells often rely on computer software that analyzes time-lapse recorded movies and establishes cell trajectories fully automatically. This raises the question of reproducibility of results, since different programs could yield significantly different results of such automated analysis. The fact that the segmentation routines of such programs are often challenged by nanostructured surfaces makes the question more pertinent. Here we illustrate how it is possible to track cells on bright field microscopy images with image analysis routines implemented in an open-source cell tracking program, PACT (Program for Automated Cell Tracking). We compare the automated motility analysis of three cell tracking programs, PACT, Autozell, and TLA, using the same movies as input for all three programs. We find that different programs track overlapping, but different subsets of cells due to different segmentation methods. Unfortunately, population averages based on such different cell populations, differ significantly in some cases. Thus, results obtained with one software package are not necessarily reproducible by other software. PMID:24688640
Automated position control of a surface array relative to a liquid microjunction surface sampler
Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James
2007-11-13
A system and method utilizes an image analysis approach for controlling the probe-to-surface distance of a liquid junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables a hands-free formation of the liquid microjunction used to sample solution composition from the surface and for re-optimization, as necessary, of the microjunction thickness during a surface scan to achieve a fully automated surface sampling system.
DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.
Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang
2016-09-01
Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python. Copyright © 2016 Elsevier Inc. All rights reserved.
Pertuz, Said; McDonald, Elizabeth S.; Weinstein, Susan P.; Conant, Emily F.
2016-01-01
Purpose To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board–approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration–cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging–based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. © RSNA, 2015 Online supplemental material is available for this article. PMID:26491909
Does bacteriology laboratory automation reduce time to results and increase quality management?
Dauwalder, O; Landrieve, L; Laurent, F; de Montclos, M; Vandenesch, F; Lina, G
2016-03-01
Due to reductions in financial and human resources, many microbiological laboratories have merged to build very large clinical microbiology laboratories, which allow the use of fully automated laboratory instruments. For clinical chemistry and haematology, automation has reduced the time to results and improved the management of laboratory quality. The aim of this review was to examine whether fully automated laboratory instruments for microbiology can reduce time to results and impact quality management. This study focused on solutions that are currently available, including the BD Kiestra™ Work Cell Automation and Total Lab Automation and the Copan WASPLab®. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Automated structure determination of proteins with the SAIL-FLYA NMR method.
Takeda, Mitsuhiro; Ikeya, Teppei; Güntert, Peter; Kainosho, Masatsune
2007-01-01
The labeling of proteins with stable isotopes enhances the NMR method for the determination of 3D protein structures in solution. Stereo-array isotope labeling (SAIL) provides an optimal stereospecific and regiospecific pattern of stable isotopes that yields sharpened lines, spectral simplification without loss of information, and the ability to collect rapidly and evaluate fully automatically the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as those that can be analyzed using conventional methods. Here, we describe a protocol for the preparation of SAIL proteins by cell-free methods, including the preparation of S30 extract and their automated structure analysis using the FLYA algorithm and the program CYANA. Once efficient cell-free expression of the unlabeled or uniformly labeled target protein has been achieved, the NMR sample preparation of a SAIL protein can be accomplished in 3 d. A fully automated FLYA structure calculation can be completed in 1 d on a powerful computer system.
Rapid test for the detection of hazardous microbiological material
NASA Astrophysics Data System (ADS)
Mordmueller, Mario; Bohling, Christian; John, Andreas; Schade, Wolfgang
2009-09-01
Since the anthrax attacks committed around the world from 2001 onward, the fast detection and identification of biological samples has attracted considerable interest. A very promising method for a rapid test is Laser Induced Breakdown Spectroscopy (LIBS). LIBS is an optical method which uses time-resolved or time-integrated spectral analysis of optical plasma emission after pulsed laser excitation. Even though LIBS is well established for the determination of metals and other inorganic materials, the analysis of microbiological organisms is difficult due to their very similar stoichiometric composition. To analyze similar LIBS spectra, computer-assisted chemometrics is a very useful approach. In this paper we report on first results of developing a compact and fully automated rapid test for the detection of hazardous microbiological material. Experiments have been carried out with two setups: a bulky one composed of standard laboratory components and a compact one consisting of miniaturized industrial components. Both setups work at an excitation wavelength of λ = 1064 nm (Nd:YAG). Data analysis is done by Principal Component Analysis (PCA) with an adjacent neural network for fully automated sample identification.
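The chemometric step described above, PCA followed by a neural network classifier, can be illustrated with a minimal sketch. The spectra below are synthetic stand-ins (peak shapes, the spectral shift between "species", and the noise level are all assumptions, not LIBS data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for LIBS spectra: two "species" whose emission-line
# intensities differ only slightly, mimicking similar stoichiometry.
def make_spectra(n, shift):
    base = np.exp(-((np.arange(200) - 100) ** 2) / 50.0)
    return base + shift * np.roll(base, 5) + 0.05 * rng.standard_normal((n, 200))

X = np.vstack([make_spectra(50, 0.0), make_spectra(50, 0.3)])
y = np.array([0] * 50 + [1] * 50)

# PCA compresses each spectrum to a few scores; a small neural network
# then classifies the scores, as in the pipeline named in the abstract.
model = make_pipeline(PCA(n_components=5),
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                    random_state=0))
model.fit(X, y)
print(model.score(X, y))
```

In practice the training spectra would come from reference samples of the target organisms rather than from a synthetic generator.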
Lab-on-a-Chip Device for Rapid Measurement of Vitamin D Levels.
Peter, Harald; Bistolas, Nikitas; Schumacher, Soeren; Laurisch, Cecilia; Guest, Paul C; Höller, Ulrich; Bier, Frank F
2018-01-01
Lab-on-a-chip assays allow rapid analysis of one or more molecular analytes on an automated user-friendly platform. Here we describe a fully automated assay and readout for measurement of vitamin D levels in less than 15 min using the Fraunhofer in vitro diagnostics platform. Vitamin D (25-hydroxyvitamin D3 [25(OH)D3]) dilution series in buffer were successfully tested down to 2 ng/mL. This could be applied in the future as an inexpensive point-of-care analysis for patients suffering from a variety of conditions marked by vitamin D deficiencies.
Fully automated segmentation of callus by micro-CT compared to biomechanics.
Bissinger, Oliver; Götz, Carolin; Wolff, Klaus-Dietrich; Hapfelmeier, Alexander; Prodinger, Peter Michael; Tischer, Thomas
2017-07-11
A high percentage of closed femur fractures have slight comminution. Using micro-CT (μCT), segmentation of multiple fragments is much more difficult than segmentation of unfractured or osteotomied bone. Manual or semi-automated segmentation has been performed to date. However, such segmentation is extremely laborious, time-consuming and error-prone. Our aim was therefore to apply a fully automated segmentation algorithm to determine μCT parameters and examine their association with biomechanics. The femora of 64 rats, randomised to medication that was either inhibitory or neutral with respect to fracture healing, and of controls were closed fractured after a Kirschner wire was inserted. After 21 days, μCT and biomechanical parameters were determined by a fully automated method and correlated (Pearson's correlation). The fully automated segmentation algorithm automatically detected bone and simultaneously separated cortical bone from callus without requiring ROI selection for each single bony structure. We found an association of structural callus parameters obtained by μCT with the biomechanical properties. However, the results were only explicable by additionally considering the callus location. A large number of slightly comminuted fractures in combination with therapies that influence the callus qualitatively and/or quantitatively considerably affects the association between μCT and biomechanics. In the future, contrast-enhanced μCT imaging of the callus cartilage might provide more information to improve the non-destructive and non-invasive prediction of callus mechanical properties. As studies evaluating such important drugs increase, fully automated segmentation appears to be clinically important.
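The correlation step named above (Pearson's correlation between μCT and biomechanical parameters) can be sketched as follows; the paired values below are simulated illustrations, not the study's measurements, and the parameter names are assumptions:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical paired measurements for n = 64 femora: a structural callus
# parameter from uCT (e.g. callus bone volume) and a biomechanical outcome
# (e.g. maximum torque at failure), simulated with a linear relationship.
callus_volume = rng.uniform(20.0, 60.0, size=64)                   # mm^3
max_torque = 2.0 + 0.05 * callus_volume + rng.normal(0, 0.3, 64)   # N*mm

# Pearson's r quantifies the strength of the linear association.
r, p = pearsonr(callus_volume, max_torque)
print(f"r = {r:.2f}, p = {p:.3g}")
```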
Gene ARMADA: an integrated multi-analysis platform for microarray data implemented in MATLAB.
Chatziioannou, Aristotelis; Moulos, Panagiotis; Kolisis, Fragiskos N
2009-10-27
The microarray data analysis realm is ever growing through the development of various tools, open source and commercial. However, there is an absence of predefined, rational algorithmic analysis workflows or batch standardized processing to incorporate all steps, from raw data import up to the derivation of significantly differentially expressed gene lists. This absence obfuscates the analytical procedure and obstructs the massive comparative processing of genomic microarray datasets. Moreover, the solutions provided heavily depend on the programming skills of the user, whereas GUI-embedded solutions do not provide direct support for various raw image analysis formats or a versatile and simultaneously flexible combination of signal processing methods. We describe here Gene ARMADA (Automated Robust MicroArray Data Analysis), a MATLAB-implemented platform with a Graphical User Interface. This suite integrates all steps of microarray data analysis including automated data import, noise correction and filtering, normalization, statistical selection of differentially expressed genes, clustering, classification and annotation. In its current version, Gene ARMADA fully supports two-colour cDNA and Affymetrix oligonucleotide arrays, plus custom arrays for which experimental details are given in tabular form (Excel spreadsheet, comma separated values, tab-delimited text formats). It also supports the analysis of already processed results through its versatile import editor. Besides being fully automated, Gene ARMADA incorporates numerous functionalities of the Statistics and Bioinformatics Toolboxes of MATLAB. In addition, it provides numerous visualization and exploration tools plus customizable export data formats for seamless integration with other analysis tools or MATLAB for further processing. Gene ARMADA requires MATLAB 7.4 (R2007a) or higher and is also distributed as a stand-alone application with MATLAB Component Runtime.
Gene ARMADA provides a highly adaptable, integrative, yet flexible tool which can be used for automated quality control, analysis, annotation and visualization of microarray data, constituting a starting point for further data interpretation and integration with numerous other tools.
Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan
2016-01-01
The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to Breast Imaging Reporting and Data System (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists and BI-RADS density categories were assigned. Using fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories. With increased BI-RADS density category, an increase in mean volumetric breast density was also seen (P < 0.001). A significant positive correlation was found between BI-RADS categories and volumetric density grading by fully automated software (ρ = 0.728, P < 0.001 for the first radiologist and ρ = 0.725, P < 0.001 for the second radiologist). Pairwise estimates of the weighted kappa between Volpara density grade and BI-RADS density category by the two observers showed fair agreement (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using the fully automated volumetric method and density grading using BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
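The agreement statistics used above (weighted kappa between readers, Spearman's ρ between BI-RADS category and volumetric grade) can be sketched as follows; the ten category assignments below are hypothetical placeholders, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Hypothetical BI-RADS density categories (1-4) from two readers and the
# volumetric density grade from automated software for ten examinations.
reader1 = [1, 2, 2, 3, 4, 3, 2, 1, 4, 3]
reader2 = [1, 2, 3, 3, 4, 3, 2, 2, 4, 3]
volumetric_grade = [1, 2, 2, 3, 4, 4, 2, 1, 4, 3]

# Weighted kappa penalizes larger category disagreements more heavily,
# which suits ordinal scales such as BI-RADS density.
kappa_readers = cohen_kappa_score(reader1, reader2, weights="linear")

# Spearman's rho for the rank correlation between an observer's BI-RADS
# category and the automated volumetric grade.
rho, p = spearmanr(reader1, volumetric_grade)
print(kappa_readers, rho)
```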
Effects of imperfect automation on decision making in a simulated command and control task.
Rovira, Ericka; McGarry, Kathleen; Parasuraman, Raja
2007-02-01
Effects of four types of automation support and two levels of automation reliability were examined. The objective was to examine the differential impact of information and decision automation and to investigate the costs of automation unreliability. Research has shown that imperfect automation can lead to differential effects of stages and levels of automation on human performance. Eighteen participants performed a "sensor to shooter" targeting simulation of command and control. Dependent variables included accuracy and response time of target engagement decisions, secondary task performance, and subjective ratings of mental workload, trust, and self-confidence. Compared with manual performance, reliable automation significantly reduced decision times. Unreliable automation led to greater cost in decision-making accuracy under the higher automation reliability condition for three different forms of decision automation relative to information automation. At low automation reliability, however, there was a cost in performance for both information and decision automation. The results are consistent with a model of human-automation interaction that requires evaluation of the different stages of information processing to which automation support can be applied. If fully reliable decision automation cannot be guaranteed, designers should provide users with information automation support or other tools that allow for inspection and analysis of raw data.
NASA Astrophysics Data System (ADS)
Srivastava, Vishal; Dalal, Devjyoti; Kumar, Anuj; Prakash, Surya; Dalal, Krishna
2018-06-01
Moisture content is an important feature of fruits and vegetables. Because roughly 80% of an apple's content is water, a decrease in moisture content degrades the quality of apples (Golden Delicious). The computational and texture features of the apples were extracted from optical coherence tomography (OCT) images. A support vector machine with a Gaussian kernel model was used to perform automated classification. To evaluate the quality of wax-coated apples during storage in vivo, our proposed method opens up the possibility of fully automated quantitative analysis based on the morphological features of apples. Our results demonstrate that the analysis of the computational and texture features of OCT images may be a good non-destructive method for the assessment of the quality of apples.
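The classifier named in the abstract, an SVM with a Gaussian (RBF) kernel, can be sketched as follows; the two texture features and all their values are hypothetical stand-ins, not measurements from the study:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Hypothetical 2-D texture features (e.g. contrast, homogeneity) extracted
# from OCT images of fresh vs. storage-degraded apples; values illustrative.
fresh = rng.normal([0.8, 0.6], 0.05, size=(40, 2))
degraded = rng.normal([0.5, 0.3], 0.05, size=(40, 2))
X = np.vstack([fresh, degraded])
y = np.array([1] * 40 + [0] * 40)

# Standardize features, then fit an SVM with a Gaussian (RBF) kernel.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)
print(clf.score(X, y))
```

Real inputs would be texture descriptors computed from the OCT images (for example, grey-level co-occurrence statistics) rather than synthetic draws.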
Gokce, Sertan Kutal; Guo, Samuel X.; Ghorashian, Navid; Everett, W. Neil; Jarrell, Travis; Kottek, Aubri; Bovik, Alan C.; Ben-Yakar, Adela
2014-01-01
Femtosecond laser nanosurgery has been widely accepted as an axonal injury model, enabling nerve regeneration studies in the small model organism, Caenorhabditis elegans. To overcome the time limitations of manual worm handling techniques, automation and new immobilization technologies must be adopted to improve throughput in these studies. While new microfluidic immobilization techniques have been developed that promise to reduce the time required for axotomies, there is a need for automated procedures to minimize the required amount of human intervention and accelerate the axotomy processes crucial for high-throughput. Here, we report a fully automated microfluidic platform for performing laser axotomies of fluorescently tagged neurons in living Caenorhabditis elegans. The presented automation process reduces the time required to perform axotomies within individual worms to ∼17 s/worm, at least one order of magnitude faster than manual approaches. The full automation is achieved with a unique chip design and an operation sequence that is fully computer controlled and synchronized with efficient and accurate image processing algorithms. The microfluidic device includes a T-shaped architecture and three-dimensional microfluidic interconnects to serially transport, position, and immobilize worms. The image processing algorithms can identify and precisely position axons targeted for ablation. There were no statistically significant differences observed in reconnection probabilities between axotomies carried out with the automated system and those performed manually with anesthetics. The overall success rate of automated axotomies was 67.4±3.2% of the cases (236/350) at an average processing rate of 17.0±2.4 s. This fully automated platform establishes a promising methodology for prospective genome-wide screening of nerve regeneration in C. elegans in a truly high-throughput manner. PMID:25470130
Verplaetse, Ruth; Henion, Jack
2016-01-01
Opioids are well known, widely used painkillers. Increased stability of opioids in the dried blood spot (DBS) matrix compared to blood/plasma has been described. Other benefits provided by DBS techniques include point-of-care collection, less invasive micro sampling, more economical shipment, and convenient storage. Current methodology for analysis of micro whole blood samples for opioids is limited to the classical DBS workflow, including tedious manual punching of the DBS cards followed by extraction and liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. The goal of this study was to develop and validate a fully automated on-line sample preparation procedure for the analysis of DBS micro samples relevant to the detection of opioids in finger prick blood. To this end, automated flow-through elution of DBS cards was followed by on-line solid-phase extraction (SPE) and analysis by LC-MS/MS. Selective, sensitive, accurate, and reproducible quantitation of five representative opioids in human blood at sub-therapeutic, therapeutic, and toxic levels was achieved. The range of reliable response (R² ≥ 0.997) was 1 to 500 ng/mL whole blood for morphine, codeine, oxycodone, and hydrocodone, and 0.1 to 50 ng/mL for fentanyl. Inter-day, intra-day, and matrix inter-lot accuracy and precision were less than 15% (even at the lower limit of quantitation (LLOQ)). The method was successfully used to measure hydrocodone and its major metabolite norhydrocodone in incurred human samples. Our data support the enormous potential of DBS sampling and automated analysis for monitoring opioids as well as other pharmaceuticals in both anti-doping and pain management regimens. Copyright © 2015 John Wiley & Sons, Ltd.
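The linearity criterion quoted above (R² ≥ 0.997 over the calibration range) is the coefficient of determination of a least-squares calibration line. A minimal sketch with hypothetical morphine standards (the concentrations, response factor, and noise level are assumptions):

```python
import numpy as np

# Hypothetical calibration standards for morphine in whole blood (ng/mL)
# and simulated LC-MS/MS peak-area ratios with small random error.
conc = np.array([1, 5, 10, 50, 100, 250, 500], dtype=float)
rng = np.random.default_rng(3)
response = 0.02 * conc + rng.normal(0, 0.01, conc.size)

# Least-squares line and coefficient of determination R^2, the
# linearity measure used in the acceptance criterion R^2 >= 0.997.
slope, intercept = np.polyfit(conc, response, 1)
pred = slope * conc + intercept
ss_res = np.sum((response - pred) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 4))
```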
Arlt, Sönke; Buchert, Ralph; Spies, Lothar; Eichenlaub, Martin; Lehmbeck, Jan T; Jahn, Holger
2013-06-01
Fully automated magnetic resonance imaging (MRI)-based volumetry may serve as biomarker for the diagnosis in patients with mild cognitive impairment (MCI) or dementia. We aimed at investigating the relation between fully automated MRI-based volumetric measures and neuropsychological test performance in amnestic MCI and patients with mild dementia due to Alzheimer's disease (AD) in a cross-sectional and longitudinal study. In order to assess a possible prognostic value of fully automated MRI-based volumetry for future cognitive performance, the rate of change of neuropsychological test performance over time was also tested for its correlation with fully automated MRI-based volumetry at baseline. In 50 subjects, 18 with amnestic MCI, 21 with mild AD, and 11 controls, neuropsychological testing and T1-weighted MRI were performed at baseline and at a mean follow-up interval of 2.1 ± 0.5 years (n = 19). Fully automated MRI volumetry of the grey matter volume (GMV) was performed using a combined stereotactic normalisation and segmentation approach as provided by SPM8 and a set of pre-defined binary lobe masks. Left and right hippocampus masks were derived from probabilistic cytoarchitectonic maps. Volumes of the inner and outer liquor space were also determined automatically from the MRI. Pearson's test was used for the correlation analyses. Left hippocampal GMV was significantly correlated with performance in memory tasks, and left temporal GMV was related to performance in language tasks. Bilateral frontal, parietal and occipital GMVs were correlated to performance in neuropsychological tests comprising multiple domains. Rate of GMV change in the left hippocampus was correlated with decline of performance in the Boston Naming Test (BNT), Mini-Mental Status Examination, and trail making test B (TMT-B). The decrease of BNT and TMT-A performance over time correlated with the loss of grey matter in multiple brain regions. 
We conclude that fully automated MRI-based volumetry allows detection of regional grey matter volume loss that correlates with neuropsychological performance in patients with amnestic MCI or mild AD. Because of the high level of automation, MRI-based volumetry may easily be integrated into clinical routine to complement the current diagnostic procedure.
NASA Astrophysics Data System (ADS)
Chęciński, Jakub; Frankowski, Marek
2016-10-01
We present MAGE, a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing for automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.
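The Fourier-transform analysis that such generated scripts perform on simulation output can be sketched (here in Python rather than Matlab) on a hypothetical magnetization time series; the sampling step, mode frequency, and damping time are assumptions, not OOMMF output:

```python
import numpy as np

# Hypothetical micromagnetic output: average magnetization m_x(t) sampled
# every 5 ps, containing a damped 6 GHz precession mode.
dt = 5e-12
t = np.arange(4096) * dt
mx = 0.1 * np.sin(2 * np.pi * 6e9 * t) * np.exp(-t / 2e-9)

# FFT of the time series; the spectral peak recovers the mode frequency.
spectrum = np.abs(np.fft.rfft(mx))
freqs = np.fft.rfftfreq(mx.size, d=dt)
peak_ghz = freqs[np.argmax(spectrum)] / 1e9
print(round(peak_ghz, 2))
```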
Hlavnička, Jan; Čmejla, Roman; Tykalová, Tereza; Šonka, Karel; Růžička, Evžen; Rusz, Jan
2017-02-02
For generations, the evaluation of speech abnormalities in neurodegenerative disorders such as Parkinson's disease (PD) has been limited to perceptual tests or user-controlled laboratory analysis based upon rather small samples of human vocalizations. Our study introduces a fully automated method that yields significant features related to respiratory deficits, dysphonia, imprecise articulation and dysrhythmia from acoustic microphone data of natural connected speech for predicting early and distinctive patterns of neurodegeneration. We compared speech recordings of 50 subjects with rapid eye movement sleep behaviour disorder (RBD), 30 newly diagnosed, untreated PD patients and 50 healthy controls, and showed that subliminal parkinsonian speech deficits can be reliably captured even in RBD patients, who are at high risk of developing PD or other synucleinopathies. Thus, automated vocal analysis should soon be able to contribute to screening and diagnostic procedures for prodromal parkinsonian neurodegeneration in natural environments.
ERIC Educational Resources Information Center
Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank
2012-01-01
Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…
Fully Automated Sample Preparation for Ultrafast N-Glycosylation Analysis of Antibody Therapeutics.
Szigeti, Marton; Lew, Clarence; Roby, Keith; Guttman, Andras
2016-04-01
There is a growing demand in the biopharmaceutical industry for high-throughput, large-scale N-glycosylation profiling of therapeutic antibodies in all phases of product development, but especially during clone selection when hundreds of samples should be analyzed in a short period of time to assure their glycosylation-based biological activity. Our group has recently developed a magnetic bead-based protocol for N-glycosylation analysis of glycoproteins to alleviate the hard-to-automate centrifugation and vacuum-centrifugation steps of the currently used protocols. Glycan release, fluorophore labeling, and cleanup were all optimized, resulting in a <4 h magnetic bead-based process with excellent yield and good repeatability. This article demonstrates the next level of this work by automating all steps of the optimized magnetic bead-based protocol from endoglycosidase digestion, through fluorophore labeling and cleanup with high-throughput sample processing in 96-well plate format, using an automated laboratory workstation. Capillary electrophoresis analysis of the fluorophore-labeled glycans was also optimized for rapid (<3 min) separation to accommodate the high-throughput processing of the automated sample preparation workflow. Ultrafast N-glycosylation analyses of several commercially relevant antibody therapeutics are also shown and compared to their biosimilar counterparts, addressing the biological significance of the differences. © 2015 Society for Laboratory Automation and Screening.
Develop Advanced Nonlinear Signal Analysis Topographical Mapping System
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
1997-01-01
During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis was developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can process and identify diagnostic information hidden in a measured signal that is often unidentifiable with conventional signal analysis methods. Currently, because of the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis is performed manually, requiring immense man-hours and extensive human interaction. To eliminate this manual process, NASA implemented this program to develop an Advanced nonlinear signal Analysis Topographical Mapping System (ATMS) providing automatic, unsupervised engine diagnostic capabilities. The ATMS uses a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL performs automatic signal processing, archiving, and anomaly detection/identification tasks to provide an intelligent, fully automated engine diagnostic capability. The ATMS was successfully developed under this contract: the program objectives to design, develop, test, and evaluate the performance of an automated engine diagnostic system were achieved, and the entire ATMS system was implemented in software on MSFC's OISPS computer. The significance of the ATMS lies in its fully automated coherence-analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results demonstrated that ATMS can save substantial time and man-hours in analyzing engine test/flight data and evaluating large volumes of dynamic test data.
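The coherence-analysis idea behind ATMS can be illustrated with a small, hypothetical sketch (not ASRI's actual code): two sensor channels that share a hidden mechanical signature show near-unity magnitude-squared coherence at that frequency, flagging it against the noise background. The sampling rate, tone frequency, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                                      # sampling rate, Hz (invented)
t = np.arange(0, 4.0, 1 / fs)
shared = np.sin(2 * np.pi * 60.0 * t)            # hidden common 60 Hz signature
x = shared + 0.5 * rng.standard_normal(t.size)   # sensor channel 1
y = shared + 0.5 * rng.standard_normal(t.size)   # sensor channel 2

# Welch-style magnitude-squared coherence from 512-sample segments
nseg = 512
n = t.size // nseg
X = np.fft.rfft(x[: n * nseg].reshape(n, nseg), axis=1)
Y = np.fft.rfft(y[: n * nseg].reshape(n, nseg), axis=1)
Sxy = (X * np.conj(Y)).mean(axis=0)              # averaged cross-spectrum
Sxx = (np.abs(X) ** 2).mean(axis=0)              # averaged auto-spectra
Syy = (np.abs(Y) ** 2).mean(axis=0)
coh = np.abs(Sxy) ** 2 / (Sxx * Syy)

f = np.fft.rfftfreq(nseg, 1 / fs)
peak = f[np.argmax(coh)]                         # frequency of strongest coherence
```

The coherence peak lands at the shared 60 Hz component even though neither channel alone makes it obvious; anomaly detection then reduces to watching such coherence signatures appear or drift.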
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2010 CFR
2010-04-01
§ 864.5200 Automated cell counter. (a) Identification. An automated cell counter is a fully automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the…
21 CFR 864.5240 - Automated blood cell diluting apparatus.
Code of Federal Regulations, 2010 CFR
2010-04-01
§ 864.5240 Automated blood cell diluting apparatus. (a) Identification. An automated blood cell diluting apparatus is a fully automated or semi-automated device used to make appropriate dilutions of a blood sample…
21 CFR 864.5240 - Automated blood cell diluting apparatus.
Code of Federal Regulations, 2011 CFR
2011-04-01
§ 864.5240 Automated blood cell diluting apparatus. (a) Identification. An automated blood cell diluting apparatus is a fully automated or semi-automated device used to make appropriate dilutions of a blood sample…
Harder, Nathalie; Mora-Bermúdez, Felipe; Godinez, William J; Wünsche, Annelie; Eils, Roland; Ellenberg, Jan; Rohr, Karl
2009-11-01
Live-cell imaging allows detailed dynamic cellular phenotyping for cell biology and, in combination with small molecule or drug libraries, for high-content screening. Fully automated analysis of live cell movies has been hampered by the lack of computational approaches that allow tracking and recognition of individual cell fates over time in a precise manner. Here, we present a fully automated approach to analyze time-lapse movies of dividing cells. Our method dynamically categorizes cells into seven phases of the cell cycle and five aberrant morphological phenotypes over time. It reliably tracks cells and their progeny and can thus measure the length of mitotic phases and detect cause and effect if mitosis goes awry. We applied our computational scheme to annotate mitotic phenotypes induced by RNAi gene knockdown of CKAP5 (also known as ch-TOG) or by treatment with the drug nocodazole. Our approach can be readily applied to comparable assays aiming at uncovering the dynamic cause of cell division phenotypes.
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2011 CFR
2011-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system. Food and Drug Administration, Department of Health and Human Services: Medical Devices; Immunology and Microbiology Devices…
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2013 CFR
2013-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system. Food and Drug Administration, Department of Health and Human Services: Medical Devices; Immunology and Microbiology Devices…
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2014 CFR
2014-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system. Food and Drug Administration, Department of Health and Human Services: Medical Devices; Immunology and Microbiology Devices…
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2012 CFR
2012-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system. Food and Drug Administration, Department of Health and Human Services: Medical Devices; Immunology and Microbiology Devices…
Integrated Formulation of Beacon-Based Exception Analysis for Multimissions
NASA Technical Reports Server (NTRS)
Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail
2003-01-01
Further work on beacon-based exception analysis for multimissions (BEAM), a method for real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and range of application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM builds on previous advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning,
Anesthesiology, automation, and artificial intelligence.
Alexander, John C; Joshi, Girish P
2018-01-01
There have been many attempts to incorporate automation into the practice of anesthesiology, though none have been successful. Fundamentally, these failures are due to the underlying complexity of anesthesia practice and the inability of rule-based feedback loops to fully master it. Recent innovations in artificial intelligence, especially machine learning, may usher in a new era of automation across many industries, including anesthesiology. It would be wise to consider the implications of such potential changes before they have been fully realized.
Automated Morphological Analysis of Microglia After Stroke.
Heindl, Steffanie; Gesierich, Benno; Benakis, Corinne; Llovera, Gemma; Duering, Marco; Liesz, Arthur
2018-01-01
Microglia are the resident immune cells of the brain and react quickly to changes in their environment with transcriptional regulation and morphological changes. Brain tissue injury such as ischemic stroke induces a local inflammatory response encompassing microglial activation. The change in activation status of microglia is reflected in a gradual morphological transformation from a highly ramified into a less ramified or amoeboid cell shape. For this reason, the morphological changes of microglia are widely used to quantify microglial activation and to study their involvement in virtually all brain diseases. However, the currently available methods, which are mainly based on manual rating of immunofluorescent microscopic images, are often inaccurate, rater biased, and highly time consuming. To address these issues, we created a fully automated image analysis tool, which enables the analysis of microglia morphology from a confocal Z-stack and provides up to 59 morphological features. We developed the algorithm on an exploratory dataset of microglial cells from a stroke mouse model and validated the findings on an independent dataset. In both datasets, we could demonstrate the ability of the algorithm to sensitively discriminate between microglia morphology in the peri-infarct cortex and the contralateral, unaffected cortex. Dimensionality reduction by principal component analysis allowed us to generate a highly sensitive compound score for microglial shape analysis. Finally, we tested for concordance between the novel automated analysis tool and conventional manual analysis and found a high degree of correlation. In conclusion, our novel method for the fully automated analysis of microglia morphology shows excellent accuracy and time efficiency compared to traditional analysis methods. This tool, which we make openly available, could find application in studying microglia morphology with fluorescence imaging in a wide range of brain disease models.
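The PCA-based compound score described above can be sketched in a minimal, hedged form: z-score each morphology feature and project onto the first principal component, so one number per cell summarizes shape. The feature matrix, group sizes, and feature values below are invented stand-ins for the 59 real features.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical morphology features per cell (e.g. branch count, total process
# length, soma area); all values and group sizes are invented for illustration.
contra = rng.normal([10.0, 200.0, 50.0], [2.0, 30.0, 8.0], size=(40, 3))
peri = rng.normal([4.0, 80.0, 70.0], [2.0, 30.0, 8.0], size=(40, 3))
X = np.vstack([contra, peri])               # 80 cells x 3 features

# z-score each feature, then project onto the first principal component
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
score = Z @ Vt[0]                           # one compound score per cell

# contralateral vs peri-infarct cells separate along this single axis
separation = abs(score[:40].mean() - score[40:].mean())
```

Collapsing many correlated shape features onto the dominant variance axis is what makes such a compound score more sensitive than any single feature alone.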
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2014 CFR
2014-04-01
§ 864.5200 Automated cell counter. (a) Identification. An automated cell counter is a fully automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These…
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2011 CFR
2011-04-01
§ 864.5200 Automated cell counter. (a) Identification. An automated cell counter is a fully automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These…
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2012 CFR
2012-04-01
§ 864.5200 Automated cell counter. (a) Identification. An automated cell counter is a fully automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These…
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2013 CFR
2013-04-01
§ 864.5200 Automated cell counter. (a) Identification. An automated cell counter is a fully automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These…
Wyatt, S K; Barck, K H; Kates, L; Zavala-Solorio, J; Ross, J; Kolumam, G; Sonoda, J; Carano, R A D
2015-11-01
The ability to non-invasively measure body composition in mouse models of obesity and obesity-related disorders is essential for elucidating mechanisms of metabolic regulation and monitoring the effects of novel treatments. These studies aimed to develop a fully automated, high-throughput micro-computed tomography (micro-CT)-based image analysis technique for longitudinal quantitation of adipose, non-adipose and lean tissue as well as bone, and to demonstrate its utility for assessing the effects of two distinct treatments. An initial validation study was performed in diet-induced obesity (DIO) and control mice on a vivaCT 75 micro-CT system. Subsequently, four groups of DIO mice were imaged pre- and post-treatment with an experimental agonistic antibody specific for fibroblast growth factor receptor 1 (anti-FGFR1, R1MAb1), a control immunoglobulin G antibody, a known anorectic antiobesity drug (rimonabant, SR141716), or solvent control. The body composition analysis technique was then ported to a faster micro-CT system (CT120) to markedly increase throughput, and micro-CT image intensity was evaluated as a measure of hepatic lipid content in DIO and control mice. Ex vivo chemical analysis and colorimetric analysis of liver triglycerides served as the standard metrics for correlation with body composition and hepatic lipid status, respectively. Micro-CT-based body composition measures correlate with ex vivo chemical analysis metrics and enable distinction between DIO and control mice. R1MAb1 and rimonabant have differing effects on body composition as assessed by micro-CT. High-throughput body composition imaging is possible using a modified CT120 system, and micro-CT also provides a non-invasive assessment of hepatic lipid content. This work describes, validates and demonstrates the utility of a fully automated image analysis technique to quantify in vivo micro-CT-derived measures of adipose, non-adipose and lean tissue, as well as bone. These body composition metrics correlate highly with standard ex vivo chemical analysis and enable longitudinal evaluation of body composition and monitoring of therapeutic efficacy.
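The core of such CT-based body-composition analysis is classifying voxels by intensity window. A minimal sketch follows, assuming illustrative Hounsfield-unit ranges; the paper's calibrated thresholds are not published here, so the windows and the synthetic phantom image below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical CT slice in Hounsfield units (geometry and values invented)
img = np.full((64, 64), -1000.0)                    # air background
img[8:56, 8:56] = rng.normal(40, 10, (48, 48))      # lean tissue ~ +40 HU
img[16:32, 16:48] = rng.normal(-100, 15, (16, 32))  # adipose ~ -100 HU
img[40:48, 20:28] = rng.normal(700, 50, (8, 8))     # bone well above +300 HU

# Fixed HU windows (illustrative, not the paper's calibrated thresholds)
adipose = (img > -190) & (img < -30)
lean = (img >= -30) & (img < 150)
bone = img >= 300

# adipose fraction of soft tissue: the kind of longitudinal readout needed
adipose_fraction = adipose.sum() / (adipose.sum() + lean.sum())
```

Summing such masks over a whole scan, after automated exclusion of non-body voxels, yields the adipose/lean/bone volumes that the study correlates with ex vivo chemistry.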
Analysis of postural load during tasks related to milking cows-a case study.
Groborz, Anna; Tokarski, Tomasz; Roman-Liu, Danuta
2011-01-01
The aim of this study was to analyse postural load during tasks related to milking cows for 2 farmers on 2 different farms (one with a manual milk transport system, the other with a fully automated milk transport system) as a case study. The participants were full-time farmers, both healthy and experienced in their job. The Ovako Working Posture Analyzing System (OWAS) was used to evaluate postural load and postural risk. Postural load was medium for the farmer on the farm with a manual milk transport system and high for the farmer on the farm with a fully automated milk transport system. Thus, a higher level of farm mechanization does not always mean that the farmer's postural load is lower, although the limitations of OWAS should be considered.
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) was developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, owing to strong, non-negligible noise and diffraction rings surrounding particles, further research is impractical without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic estimation based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
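Frame-to-frame particle linking, the final stage of such a pipeline, can be sketched with greedy nearest-neighbour matching. This is a deliberate simplification of the kinematic-estimation tracker described above, and the coordinates are invented for illustration.

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_disp=5.0):
    """Greedy nearest-neighbour linking between two frames.

    Returns (i, j) index pairs matching prev_pts[i] to next_pts[j];
    a simplification of a kinematic-estimation tracker.
    """
    pairs = []
    used = set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)   # distances to all candidates
        d[list(used)] = np.inf                     # each target used at most once
        j = int(np.argmin(d))
        if d[j] <= max_disp:                       # reject implausible jumps
            pairs.append((i, j))
            used.add(j)
    return pairs

prev_pts = np.array([[10.0, 10.0], [30.0, 30.0], [50.0, 10.0]])
# same particles drifted by ~1 px, listed in a different order
next_pts = np.array([[30.5, 29.2], [10.8, 10.4], [49.6, 11.0]])
print(link_frames(prev_pts, next_pts))   # → [(0, 1), (1, 0), (2, 2)]
```

A kinematic tracker improves on this by predicting each particle's next position from its velocity before matching, which is what lets it survive crossings and brief occlusions.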
Automated, Ultra-Sterile Solid Sample Handling and Analysis on a Chip
NASA Technical Reports Server (NTRS)
Mora, Maria F.; Stockton, Amanda M.; Willis, Peter A.
2013-01-01
There are no existing ultra-sterile lab-on-a-chip systems that can accept solid samples and perform complete chemical analyses without human intervention. The proposed solution is to demonstrate completely automated lab-on-a-chip manipulation of powdered solid samples, followed by on-chip liquid extraction and chemical analysis. This technology utilizes a newly invented glass micro-device for solid manipulation, which mates with existing lab-on-a-chip instrumentation. Devices are fabricated in a Class 10 cleanroom at the JPL MicroDevices Lab, and are plasma-cleaned before and after assembly. Solid samples enter the device through a drilled hole in the top. Existing micro-pumping technology is used to transfer milligrams of powdered sample into an extraction chamber where it is mixed with liquids to extract organic material. Subsequent chemical analysis is performed using portable microchip capillary electrophoresis (CE) systems. These instruments have been used for ultrasensitive (parts-per-trillion, pptr) analysis of organic compounds including amines, amino acids, aldehydes, ketones, carboxylic acids, and thiols. Fully autonomous amino acid analyses in liquids were demonstrated; however, to date there have been no reports of completely automated analysis of solid samples on a chip. This approach utilizes an existing portable instrument that houses optics, high-voltage power supplies, and solenoids for fully autonomous microfluidic sample processing and CE analysis with laser-induced fluorescence (LIF) detection. Furthermore, the entire system can be sterilized and placed in a cleanroom environment for analyzing samples returned from extraterrestrial targets, if desired. This is an entirely new capability never demonstrated before.
The ability to manipulate solid samples, coupled with lab-on-a-chip analysis technology, will enable ultraclean and ultrasensitive end-to-end analysis of samples that is orders of magnitude more sensitive than the ppb goal given in the Science Instruments.
An ODE-Based Wall Model for Turbulent Flow Simulations
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Aftosmis, Michael J.
2017-01-01
Fully automated meshing for Reynolds-averaged Navier-Stokes (RANS) simulations. Mesh generation for complex geometry continues to be the biggest bottleneck in the RANS simulation process. Fully automated Cartesian methods are routinely used for inviscid simulations about arbitrarily complex geometry, but these methods lack an obvious and robust way to achieve near-wall anisotropy. Goal: extend these methods to RANS simulation without sacrificing automation, at an affordable cost. Note: nothing here is limited to Cartesian methods, and much becomes simpler in a body-fitted setting.
Campbell, J Q; Petrella, A J
2016-09-06
Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created, demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
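Compactness, one of the three shape-model metrics mentioned, is the cumulative variance captured by the first k PCA modes of the aligned landmark vectors. A minimal numerical sketch follows, with synthetic rank-2 "landmark" data standing in for real spine landmarks (specimen count, landmark count, and mode weights are all invented).

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic training shapes: 20 specimens x 15 landmark coordinates,
# generated from 2 dominant modes plus small noise (all values invented)
modes = rng.normal(size=(2, 15))
weights = rng.normal(size=(20, 2)) * np.array([5.0, 2.0])
shapes = weights @ modes + 0.1 * rng.normal(size=(20, 15))

# PCA of the centered shapes; compactness = cumulative explained variance
centered = shapes - shapes.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
compactness = np.cumsum(explained)
# the two underlying modes should capture nearly all shape variance
```

In a real statistical shape model, new instances are then synthesized as the mean shape plus a weighted sum of the leading modes, which is what makes fully parameterized, population-based finite element models possible.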
Kovacevic, Sanja; Rafii, Michael S.; Brewer, James B.
2008-01-01
Medial temporal lobe (MTL) atrophy is associated with increased risk for conversion to Alzheimer's disease (AD), but manual tracing techniques and even semi-automated techniques for volumetric assessment are not practical in the clinical setting. In addition, most studies that examined MTL atrophy in AD have focused only on the hippocampus, and the extent to which volumes of the amygdala and the temporal horn of the lateral ventricle predict subsequent clinical decline is unknown. This study examined whether measures of hippocampus, amygdala, and temporal horn volume predict clinical decline over the following 6-month period in patients with mild cognitive impairment (MCI). Fully automated volume measurements were performed in 269 MCI patients. Baseline volumes of the hippocampus, amygdala, and temporal horn were evaluated as predictors of change in Mini-Mental State Exam (MMSE) and Clinical Dementia Rating Sum of Boxes (CDR SB) scores over a 6-month interval. Fully automated measurements of baseline hippocampus and amygdala volumes correlated with baseline delayed recall scores. Patients with smaller baseline volumes of the hippocampus and amygdala, or larger baseline volumes of the temporal horn, had more rapid subsequent clinical decline on MMSE and CDR SB. Fully automated and rapid measurement of segmental MTL volumes may help clinicians predict clinical decline in MCI patients. PMID:19474571
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering, or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
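Of the twelve algorithms, the Ridler method (Ridler-Calvard, also called ISODATA) is easy to sketch: iterate the threshold to the midpoint of the two class means until it converges, with no calibration or prior spatial information. The uptake distributions below are synthetic stand-ins for real PET data, not the study's phantom measurements.

```python
import numpy as np

def ridler_threshold(values, tol=1e-3):
    """Ridler-Calvard (ISODATA) threshold: repeatedly set the threshold to
    the midpoint of the means of the two classes it induces."""
    t = values.mean()
    while True:
        lo, hi = values[values <= t], values[values > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

rng = np.random.default_rng(4)
# Synthetic uptake values: cold background vs hot sphere (arbitrary units)
background = rng.normal(1.0, 0.2, 5000)
tumor = rng.normal(8.0, 1.0, 500)
values = np.concatenate([background, tumor])

t_auto = ridler_threshold(values)        # chosen from the data alone
t_42 = 0.42 * values.max()               # the classical 42%-of-max reference
seg_auto = (values > t_auto).sum()       # voxels segmented as tumor
```

Unlike the 42%-of-max rule, the iterated threshold adapts to the actual intensity histogram, which is why such histogram-driven methods can outperform the fixed-percentage reference at low signal-to-background ratios.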
SEM AutoAnalysis: enhancing photomask and NIL defect disposition and review
NASA Astrophysics Data System (ADS)
Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Ehrlich, Christian; Garetto, Anthony
2017-06-01
For defect disposition and repair verification regarding printability, AIMS™ is the state-of-the-art measurement tool in industry. With its unique capability of capturing aerial images of photomasks, it is the method that comes closest to emulating the printing behaviour of a scanner. However, for nanoimprint lithography (NIL) templates, aerial images cannot be used to evaluate the success of a repair process; hence, for NIL defect disposition, scanning electron microscopy (SEM) imaging is the method of choice. SEM has also been a standard imaging method for further root-cause analysis of defects and for defect review on optical photomasks, enabling 2D or even 3D mask profiling at high resolution. In recent years, a trend observed in mask shops has been the automation of processes that were traditionally driven by operators. This has brought many advantages, one of which is freeing cost-intensive labour from repetitive and tedious work. Furthermore, it reduces process variability due to different operator skill and experience levels, which ultimately helps eliminate the human factor. Taking these factors into consideration, one of the software-based solutions available under the FAVOR® brand to support customer needs is the aerial image evaluation software AIMS™ AutoAnalysis (AAA). It provides fully automated analysis of AIMS™ images and runs in parallel with measurements, enabled by its direct connection and communication with the AIMS™ tools. As one of many positive outcomes, generating automated result reports is facilitated, standardizing the mask manufacturing workflow. Today, AAA has been successfully introduced into production at multiple customers and is supporting the workflow described above. These trends have triggered demand for similar automation of SEM measurements, leading to the development of SEM AutoAnalysis (SAA). SAA aims at a fully automated SEM image evaluation process and, owing to the different nature of SEM and aerial images, uses a completely different algorithm. Together, AAA and SAA are the building blocks of an image evaluation suite for the mask shop industry.
The role of 3-D interactive visualization in blind surveys of H I in galaxies
NASA Astrophysics Data System (ADS)
Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.
2015-09-01
Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control over an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing with human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities that aid the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are coupled 1-D/2-D/3-D visualization and quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, to enable collaborative work, the source code must be open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.
Bjørk, Marie Kjærgaard; Simonsen, Kirsten Wiese; Andersen, David Wederkinck; Dalsgaard, Petur Weihe; Sigurðardóttir, Stella Rögn; Linnet, Kristian; Rasmussen, Brian Schou
2013-03-01
An efficient method for analyzing illegal and medicinal drugs in whole blood using fully automated sample preparation and a short ultra-high-performance liquid chromatography-tandem mass spectrometry (MS/MS) run time is presented. A selection of 31 drugs, including amphetamines, cocaine, opioids, and benzodiazepines, was used. In order to increase the efficiency of routine analysis, a robotic system based on automated liquid handling and capable of handling all unit operations for sample preparation was built on a Freedom Evo 200 platform with several add-ons from Tecan and third-party vendors. Solid-phase extraction was performed using Strata X-C plates. Extraction time for 96 samples was less than 3 h. Chromatography was performed using an ACQUITY UPLC system (Waters Corporation, Milford, USA). Analytes were separated on a 100 mm × 2.1 mm, 1.7 μm Acquity UPLC CSH C(18) column using a 6.5 min 0.1 % ammonia (25 %) in water/0.1 % ammonia (25 %) in methanol gradient and quantified by MS/MS (Waters Quattro Premier XE) in multiple-reaction monitoring mode. Full validation, including linearity, precision and trueness, matrix effect, ion suppression/enhancement of co-eluting analytes, recovery, and specificity, was performed. The method was employed successfully in the laboratory and used for routine analysis of forensic material. In combination with tetrahydrocannabinol analysis, the method covered 96 % of cases involving driving under the influence of drugs. The manual labor involved in preparing blood samples, solvents, etc., was reduced to half an hour per batch. The automated sample preparation setup also minimized human exposure to hazardous materials, provided highly improved ergonomics, and eliminated manual pipetting.
AIRSAR Automated Web-based Data Processing and Distribution System
NASA Technical Reports Server (NTRS)
Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen
2005-01-01
In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
Semi-Supervised Clustering for High-Dimensional and Sparse Features
ERIC Educational Resources Information Center
Yan, Su
2010-01-01
Clustering is one of the most common data mining tasks, used frequently for data organization and analysis in various application domains. Traditional machine learning approaches to clustering are fully automated and unsupervised where class labels are unknown a priori. In real application domains, however, some "weak" form of side…
Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick
2013-01-01
We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis were performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high-throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations, and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen-presenting cells. PMID:24086124
Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang
2017-11-13
Prostate cancer (PCa) has been a major cause of death since ancient times, as documented in imaging of an Egyptian Ptolemaic mummy. PCa detection is critical to personalized medicine, and the appearance of the disease varies considerably on MRI. Morphologic images (2,602 axial 2D T2-weighted images) of the prostate were obtained from 172 patients. A deep learning method using a deep convolutional neural network (DCNN) and a non-deep-learning method using SIFT image features with a bag-of-words (BoW) model, a representative approach for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007 < 0.001). The AUCs were 0.84 (95% CI 0.78-0.89) for the deep learning method and 0.70 (95% CI 0.63-0.77) for the non-deep-learning method, respectively. Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BCs patients. Our deep learning method is extensible to other image modalities, such as MR imaging, CT and PET of other organs.
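The AUC comparison in this entry can be made concrete with a small sketch: the AUC equals the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case (the Mann-Whitney formulation). The scores below are hypothetical, not from the study.

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the probability that a random positive outranks a
    random negative (Mann-Whitney U divided by n_pos * n_neg)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores: higher means "more likely PCa".
pca = [0.9, 0.8, 0.75, 0.6]   # pathologically confirmed PCa
bc = [0.4, 0.55, 0.3, 0.65]   # benign conditions
print(auc_from_scores(pca, bc))  # → 0.9375
```

A perfectly separating classifier yields an AUC of 1.0; a random one, 0.5.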
Understanding reliance on automation: effects of error type, error distribution, age and experience
Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka
2015-01-01
An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over relying on it during non-alarms states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142
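The alarm-state vs non-alarm-state reliance distinction described above can be quantified by splitting the operator's agreement with the automation across the two system states; in the human-automation literature these fractions are often called compliance and reliance. This is an illustrative sketch with made-up trial data, not the authors' analysis code.

```python
def reliance_metrics(alarm_state, operator_agrees):
    """Fraction of trials on which the operator agreed with the
    automation, split into alarm trials ('compliance') and
    non-alarm trials ('reliance')."""
    alarm_trials = [a for s, a in zip(alarm_state, operator_agrees) if s]
    quiet_trials = [a for s, a in zip(alarm_state, operator_agrees) if not s]
    compliance = sum(alarm_trials) / len(alarm_trials)
    reliance = sum(quiet_trials) / len(quiet_trials)
    return compliance, reliance

# Hypothetical trials: did the automation alarm, did the operator follow it?
alarms = [True, True, False, False]
agrees = [1, 0, 1, 1]
print(reliance_metrics(alarms, agrees))  # → (0.5, 1.0)
```

A false-alarm-prone system would typically lower the first number; a miss-prone system would lower the second, matching the pattern the study reports.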
21 CFR 864.5620 - Automated hemoglobin system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated hemoglobin system. 864.5620 Section 864....5620 Automated hemoglobin system. (a) Identification. An automated hemoglobin system is a fully... hemoglobin content of human blood. (b) Classification. Class II (performance standards). [45 FR 60601, Sept...
21 CFR 864.5620 - Automated hemoglobin system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated hemoglobin system. 864.5620 Section 864....5620 Automated hemoglobin system. (a) Identification. An automated hemoglobin system is a fully... hemoglobin content of human blood. (b) Classification. Class II (performance standards). [45 FR 60601, Sept...
Kalal, M; Nugent, K A; Luther-Davies, B
1987-05-01
An interferometric technique which enables simultaneous phase and amplitude imaging of optically transparent objects is discussed with respect to its application for the measurement of spontaneous toroidal magnetic fields generated in laser-produced plasmas. It is shown that this technique can replace the normal independent pair of optical systems (interferometry and shadowgraphy) by one system and use computer image processing to recover both the plasma density and magnetic field information with high accuracy. A fully automatic algorithm for the numerical analysis of the data has been developed and its performance demonstrated for the case of simulated as well as experimental data.
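The numerical analysis of interferograms such as these typically begins by locating the dominant fringe frequency in a Fourier transform of the intensity profile. The toy sketch below (a direct DFT on a synthetic 1-D fringe signal, not the authors' algorithm) shows that step.

```python
import cmath
import math

def dominant_frequency(signal):
    """Return the index k (cycles per record) of the strongest
    nonzero DFT component of a real-valued signal."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k

# Synthetic fringe profile with exactly 8 cycles across 64 samples.
fringes = [math.cos(2 * math.pi * 8 * t / 64) for t in range(64)]
print(dominant_frequency(fringes))  # → 8
```

In a full phase-retrieval pipeline this carrier peak would then be isolated and inverse-transformed to recover the phase map encoding plasma density.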
Automated Microflow NMR: Routine Analysis of Five-Microliter Samples
Jansma, Ariane; Chuan, Tiffany; Geierstanger, Bernhard H.; Albrecht, Robert W.; Olson, Dean L.; Peck, Timothy L.
2006-01-01
A microflow CapNMR probe double-tuned for ¹H and ¹³C was installed on a 400-MHz NMR spectrometer and interfaced to an automated liquid handler. Individual samples dissolved in DMSO-d₆ are submitted for NMR analysis in vials containing as little as 10 μL of sample. Sets of samples are submitted in a low-volume 384-well plate. Of the 10 μL of sample per well, as with vials, 5 μL is injected into the microflow NMR probe for analysis. For quality control of chemical libraries, 1D NMR spectra are acquired under full automation from 384-well plates on as many as 130 compounds within 24 h using 128 scans per spectrum and a sample-to-sample cycle time of ∼11 min. Because of the low volume requirements and high mass sensitivity of the microflow NMR system, 30 nmol of a typical small molecule is sufficient to obtain high-quality, well-resolved, 1D proton or 2D COSY NMR spectra in ∼6 or 20 min of data acquisition time per experiment, respectively. Implementation of pulse programs with automated solvent peak identification and suppression allows for reliable data collection, even for samples submitted in fully protonated DMSO. The automated microflow NMR system is controlled and monitored using web-based software. PMID:16194121
Towards Automated Screening of Two-dimensional Crystals
Cheng, Anchi; Leung, Albert; Fellmann, Denis; Quispe, Joel; Suloway, Christian; Pulokas, James; Carragher, Bridget; Potter, Clinton S.
2007-01-01
Screening trials to determine the presence of two-dimensional (2D) protein crystals suitable for three-dimensional structure determination by electron crystallography are very labor-intensive. Methods compatible with fully automated screening have been developed for the process of crystal production by dialysis and for producing negatively stained grids of the resulting trials. Further automation via robotic handling of the EM grids, and semi-automated transmission electron microscopic imaging and evaluation of the trial grids, is also possible. We, and others, have developed working prototypes for several of these tools and tested and evaluated them in a simple screen of 24 crystallization conditions. While further development of these tools is certainly required for a turn-key system, the goal of fully automated screening appears to be within reach. PMID:17977016
Study of living single cells in culture: automated recognition of cell behavior.
Bodin, P; Papin, S; Meyer, C; Travo, P
1988-07-01
An automated system capable of analyzing the behavior, in real time, of single living cells in culture, in a noninvasive and nondestructive way, has been developed. A large number of cell positions in single culture dishes were recorded using a computer-controlled, robotized microscope. During subsequent observations, binary images obtained from video image analysis of the microscope visual field allowed the identification of the recorded cells. These cells could be revisited automatically every few minutes. Long-term studies of cell behavior make possible the analysis of cellular locomotory and mitotic activities, as well as determination of cell shape (chosen from a defined library), for several hours or days in a fully automated way with observations spaced up to 30 minutes apart. Short-term studies of cell behavior permit the study, in a semiautomatic way, of the acute effects of drugs (5 to 15 minutes) on changes in the surface area and length of cells.
Dzyubachyk, Oleh; Essers, Jeroen; van Cappellen, Wiggert A; Baldeyron, Céline; Inagaki, Akiko; Niessen, Wiro J; Meijering, Erik
2010-10-01
Complete, accurate and reproducible analysis of intracellular foci from fluorescence microscopy image sequences of live cells requires full automation of all processing steps involved: cell segmentation and tracking followed by foci segmentation and pattern analysis. Integrated systems for this purpose are lacking. Extending our previous work in cell segmentation and tracking, we developed a new system for performing fully automated analysis of fluorescent foci in single cells. The system was validated by applying it to two common tasks: intracellular foci counting (in DNA damage repair experiments) and cell-phase identification based on foci pattern analysis (in DNA replication experiments). Experimental results show that the system performs comparably to expert human observers. Thus, it may replace tedious manual analyses for the considered tasks, and enables high-content screening. The described system was implemented in MATLAB (The MathWorks, Inc., USA) and compiled to run within the MATLAB environment. The routines together with four sample datasets are available at http://celmia.bigr.nl/. The software is planned for public release, free of charge for non-commercial use, after publication of this article.
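Foci counting of the kind described above reduces, after segmentation, to counting connected components in a binary mask. The sketch below shows that final step in pure Python (a flood-fill labeling pass on a toy mask); the published system is a full MATLAB pipeline, so this is only an illustration of the principle.

```python
def count_foci(mask):
    """Count 4-connected components of nonzero pixels in a binary mask
    (a minimal stand-in for intracellular foci counting)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1               # new focus found
                stack = [(r, c)]
                while stack:             # flood-fill its pixels
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
print(count_foci(mask))  # → 3
```

In practice a size filter would follow, to reject single-pixel noise before reporting per-cell focus counts.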
Investigating Factors Affecting the Uptake of Automated Assessment Technology
ERIC Educational Resources Information Center
Dreher, Carl; Reiners, Torsten; Dreher, Heinz
2011-01-01
Automated assessment is an emerging innovation in educational praxis, however its pedagogical potential is not fully utilised in Australia, particularly regarding automated essay grading. The rationale for this research is that the usage of automated assessment currently lags behind the capacity that the technology provides, thus restricting the…
NASA Technical Reports Server (NTRS)
Steinberg, R.
1978-01-01
A low-cost communications system to provide meteorological data from commercial aircraft, in near real-time, on a fully automated basis has been developed. The complete system including the low profile antenna and all installation hardware weighs 34 kg. The prototype system was installed on a B-747 aircraft and provided meteorological data (wind angle and velocity, temperature, altitude and position as a function of time) on a fully automated basis. The results were exceptional. This concept is expected to have important implications for operational meteorology and airline route forecasting.
Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images
NASA Astrophysics Data System (ADS)
Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel
2016-02-01
Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
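A single-frequency Gabor analysis of the kind described responds strongly where an intensity profile is periodic at the sarcomere frequency and weakly where it is disordered. The sketch below applies one Gaussian-windowed complex exponential to a 1-D profile; it is a simplified illustration of the idea, not the authors' normalized 2-D implementation.

```python
import cmath
import math

def gabor_magnitude(profile, freq, sigma):
    """Magnitude of a single-frequency complex Gabor filter applied at
    the centre of a 1-D intensity profile. Higher values indicate a
    stronger periodic component at `freq` (cycles per sample)."""
    n = len(profile)
    c = (n - 1) / 2.0
    resp = sum(v * math.exp(-((i - c) ** 2) / (2 * sigma ** 2))
                 * cmath.exp(-2j * math.pi * freq * i)
               for i, v in enumerate(profile))
    return abs(resp)

# A regular "sarcomere-like" profile vs a featureless one.
regular = [math.cos(2 * math.pi * 0.25 * i) for i in range(41)]
flat = [0.0] * 41
print(gabor_magnitude(regular, 0.25, 8.0) > gabor_magnitude(flat, 0.25, 8.0))  # → True
```

Normalizing such responses to a 0-1 range, as the paper does, is what makes values comparable across images and biological conditions.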
Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca
2015-04-01
According to current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of the highly toxic and widespread mycotoxins aflatoxins (AFs) and ochratoxin A (OTA) in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE), with aqueous methanol (30%) at 110 °C, of the slurried dried fruit and online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AFs and OTA maximum levels imposed by EU regulation in dried fruit for direct human consumption. Also, recoveries (83-103%) and repeatability (RSD < 8, n = 3) meet the performance criteria required by EU regulation for the determination of the levels of mycotoxins in foodstuffs. The main advantage of the proposed method is full automation of the whole analytical procedure, which reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results.
Automated Detection and Analysis of Interplanetary Shocks Running Real-Time on the Web
NASA Astrophysics Data System (ADS)
Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.; Davis, A. J.
2008-05-01
The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. We have built a fully automated code that finds and analyzes interplanetary shocks as they occur and posts their solutions on the Web for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. At a previous meeting we reported on efforts to develop a fully automated code that used ACE Level-2 (science quality) data to prove the applicability and correctness of the code and the associated shock-finder. We have since adapted the code to run on ACE RTSW data provided by NOAA. This data lacks the full 3-dimensional velocity vector for the solar wind and contains only a single-component wind speed. We show that by assuming the wind velocity to be radial, strong shock solutions remain essentially unchanged and the analysis performs as well as it would if 3-D velocity components were available. This is due, at least in part, to the fact that strong shocks tend to have nearly radial shock normals, and it is the strong shocks that are most effective in space weather applications. Strong shocks are the only shocks that concern us in this application. The code is now running on the Web and the results are available to all.
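Two of the shock parameters mentioned above follow directly from the Rankine-Hugoniot mass-conservation relation in one dimension: the density compression ratio and the shock speed. The sketch below uses that single relation with hypothetical upstream/downstream values; a full solver would also enforce momentum, energy, and magnetic-field jump conditions.

```python
def shock_parameters(n1, v1, n2, v2):
    """Compression ratio and shock speed from 1-D mass-flux
    conservation across a shock: n1*(v1 - Vsh) = n2*(v2 - Vsh).
    n1, v1: upstream density and speed; n2, v2: downstream."""
    ratio = n2 / n1
    v_shock = (n2 * v2 - n1 * v1) / (n2 - n1)
    return ratio, v_shock

# Hypothetical values: density in cm^-3, speed in km/s.
ratio, v_sh = shock_parameters(5.0, 400.0, 15.0, 500.0)
print(ratio, v_sh)  # → 3.0 550.0
```

A compression ratio near the gas-dynamic limit of 4 would indicate a strong shock, the regime the authors note is most relevant for space weather.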
NASA Astrophysics Data System (ADS)
Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew
Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis.
The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
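The Dice's coefficient used to validate the AFLD algorithm has a simple set-based form: twice the number of matched detections divided by the total number of detections from both graders. The sketch below uses exact coordinate matches for clarity; a real validation would match detections within a tolerance radius.

```python
def dice(a, b):
    """Dice's coefficient between two sets of detected cone locations
    (1.0 = perfect agreement, 0.0 = no overlap)."""
    a, b = set(a), set(b)
    inter = len(a & b)
    return 2.0 * inter / (len(a) + len(b))

# Hypothetical cone coordinates from an algorithm and an expert grader.
auto = {(10, 12), (30, 5), (44, 44), (7, 7)}
manual = {(10, 12), (30, 5), (44, 44), (9, 9)}
print(dice(auto, manual))  # → 0.75
```

A mean Dice's coefficient of 0.95 against an expert, as reported, thus means the algorithm's detections almost entirely coincide with the manual ones.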
Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J
2007-08-01
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
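The kNN classification at the core of this method assigns each voxel the majority label among its k nearest training samples in feature space. The sketch below is a minimal pure-Python kNN on a toy 1-D "intensity" feature; the actual method uses multi-feature MR data and atlas-registered automatic training, which this does not reproduce.

```python
from collections import Counter

def knn_classify(train, k, point):
    """Classify `point` by majority vote among its k nearest training
    samples; `train` is a list of (feature_tuple, label) pairs."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(train, key=lambda s: sq_dist(s[0], point))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 1-D intensities standing in for MR voxel features.
train = [((0.2,), "CSF"), ((0.3,), "CSF"),
         ((0.5,), "GM"), ((0.55,), "GM"),
         ((0.8,), "WM"), ((0.9,), "WM")]
print(knn_classify(train, 3, (0.52,)))  # → GM
```

The paper's contribution is precisely in how `train` is built: atlas registration selects candidate samples automatically, removing the manual labeling step.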
Gao, Shan; van 't Klooster, Ronald; Brandts, Anne; Roes, Stijntje D; Alizadeh Dehnavi, Reza; de Roos, Albert; Westenberg, Jos J M; van der Geest, Rob J
2017-01-01
To develop and evaluate a method that can fully automatically identify the vessel wall boundaries and quantify the wall thickness for both common carotid artery (CCA) and descending aorta (DAO) from axial magnetic resonance (MR) images. 3T MRI data acquired with a T1-weighted gradient-echo black-blood imaging sequence from carotid (39 subjects) and aorta (39 subjects) were used to develop and test the algorithm. The vessel wall segmentation was achieved by respectively fitting a 3D cylindrical B-spline surface to the boundaries of lumen and outer wall. The tube-fitting was based on the edge detection performed on the signal intensity (SI) profile along the surface normal. To achieve a fully automated process, Hough Transform (HT) was developed to estimate the lumen centerline and radii for the target vessel. Using the outputs of HT, a tube model for lumen segmentation was initialized and deformed to fit the image data. Finally, lumen segmentation was dilated to initiate the adaptation procedure of the outer wall tube. The algorithm was validated by determining: 1) its performance against manual tracing; 2) its interscan reproducibility in quantifying vessel wall thickness (VWT); 3) its capability of detecting VWT difference in hypertensive patients compared with healthy controls. Statistical analysis including Bland-Altman analysis, t-test, and sample size calculation was performed for the purpose of algorithm evaluation. The mean distance between the manual and automatically detected lumen/outer wall contours was 0.00 ± 0.23/0.09 ± 0.21 mm for CCA and 0.12 ± 0.24/0.14 ± 0.35 mm for DAO. No significant difference was observed between the interscan VWT assessments using automated segmentation for both CCA (P = 0.19) and DAO (P = 0.94). Both manual and automated segmentation detected significantly higher carotid (P = 0.016 and P = 0.005) and aortic (P < 0.001 and P = 0.021) wall thickness in the hypertensive patients.
A reliable and reproducible pipeline for fully automatic vessel wall quantification was developed and validated on healthy volunteers as well as patients with increased vessel wall thickness. This method holds promise for helping in efficient image interpretation for large-scale cohort studies. Level of Evidence: 4. J. Magn. Reson. Imaging 2017;45:215-228. © 2016 International Society for Magnetic Resonance in Medicine.
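The Hough Transform initialization step described above can be illustrated with a toy circle search: candidate centres are scored by how consistently far the detected edge points lie from them, and the winning centre's mean distance gives the radius. This is a simplified stand-in for the paper's 3D centerline estimation, with synthetic edge points.

```python
import math

def estimate_center_radius(edge_points, candidates):
    """Hough-style search: pick the candidate centre for which the
    distances to the edge points have the smallest spread; the radius
    is the mean of those distances."""
    best = None
    for cx, cy in candidates:
        d = [math.hypot(x - cx, y - cy) for x, y in edge_points]
        mean = sum(d) / len(d)
        spread = sum((v - mean) ** 2 for v in d)
        if best is None or spread < best[0]:
            best = (spread, (cx, cy), mean)
    return best[1], best[2]

# Synthetic lumen edge: points on a circle of radius 5 centred at (10, 10).
pts = [(10 + 5 * math.cos(a), 10 + 5 * math.sin(a))
       for a in [k * math.pi / 4 for k in range(8)]]
cands = [(x, y) for x in range(8, 13) for y in range(8, 13)]
center, radius = estimate_center_radius(pts, cands)
print(center, round(radius, 2))  # → (10, 10) 5.0
```

The estimated centre and radius would then seed the deformable tube model, which refines the boundary against the actual image intensities.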
Automated image analysis of alpha-particle autoradiographs of human bone
NASA Astrophysics Data System (ADS)
Hatzialekou, Urania; Henshaw, Denis L.; Fews, A. Peter
1988-01-01
Further techniques [4,5] for the analysis of CR-39 α-particle autoradiographs have been developed for application to α-autoradiography of autopsy bone at natural levels of exposure. The most significant new approach is the use of fully automated image analysis using a system developed in this laboratory. A 5 cm × 5 cm autoradiograph of tissue in which the activity is below 1 Bq kg⁻¹ is scanned to both locate and measure the recorded α-particle tracks at a rate of 5 cm²/h. Improved methods of calibration have also been developed. The techniques are described and, in order to illustrate their application, a bone sample contaminated with ²³⁹Pu is analysed. Results from natural levels are the subject of a separate publication.
Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging
Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.
2017-01-01
Background Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s-1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. Comparison with existing method(s) We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
Eye Tracking Metrics for Workload Estimation in Flight Deck Operation
NASA Technical Reports Server (NTRS)
Ellis, Kyle; Schnell, Thomas
2010-01-01
Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements than in the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior, and so ostensibly of visual attention distribution in the cockpit, for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.
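As a toy illustration of the saccade-length metric, mean saccadic amplitude can be approximated as the average Euclidean distance between consecutive fixation centroids. The coordinates below are invented for illustration, not data from the study.

```python
import math

def mean_saccade_length(fixations):
    """Average Euclidean distance between consecutive fixation
    centroids: a simple proxy for mean saccadic amplitude."""
    if len(fixations) < 2:
        return 0.0
    dists = [math.dist(p, q) for p, q in zip(fixations, fixations[1:])]
    return sum(dists) / len(dists)

# Invented fixation centroids (degrees of visual angle), not study data
manual_scan = [(0, 0), (1, 0), (1, 1), (2, 1)]       # short, frequent shifts
automated_scan = [(0, 0), (5, 0), (5, 5), (10, 5)]   # fewer, larger sweeps
```

On these synthetic scanpaths the automated-condition trace yields a five-fold larger mean saccade length, mirroring the direction of the effect reported in the abstract.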
Automatic high throughput empty ISO container verification
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2007-04-01
Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm they are truly empty. A series of image processing algorithms is described that processes real-time data acquired during the actual inspection of each container and assigns each to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.
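The final three-way assignment can be sketched as a decision rule over per-radiograph summary statistics. The two features and all thresholds below are hypothetical placeholders; the paper's actual image-processing chain is not disclosed in the abstract.

```python
def classify_container(attenuation_fraction, dense_blob_area):
    """Toy three-way decision rule on two radiograph summary statistics:
    the fraction of pixels attenuating above background, and the area
    (in pixels) of the largest dense blob. Thresholds are illustrative
    placeholders, not values from the paper."""
    if attenuation_fraction < 0.01:      # almost nothing attenuating
        return "empty"
    if dense_blob_area > 500:            # a large, dense object: flag it
        return "suspect threat"
    return "not empty"
```

A rule of this shape keeps the conservative ordering the abstract implies: anything that is not confidently empty is escalated rather than passed.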
Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments
Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina
2016-01-01
Background Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results We present an image analysis pipeline for the automated processing of MM time lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) is analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Conclusion Presented is the software molyso, a ready-to-use open source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996
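The growth-rate comparison between automated and manual evaluation reduces to fitting an exponential to cell counts over time. A log-linear least-squares sketch (an illustration of the statistic, not molyso's implementation) looks like this:

```python
import math

def growth_rate(times_h, counts):
    """Least-squares slope of ln(count) vs. time: the exponential
    growth rate (per hour) that automated and manual tracking
    results can be compared on."""
    logs = [math.log(c) for c in counts]
    n = len(times_h)
    mt = sum(times_h) / n
    ml = sum(logs) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(times_h, logs))
    den = sum((t - mt) ** 2 for t in times_h)
    return num / den

# Synthetic counts doubling every 2 h -> rate = ln(2)/2 per hour
times = [0, 2, 4, 6, 8]
counts = [10, 20, 40, 80, 160]
rate = growth_rate(times, counts)
```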
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Alan M; Freer, Eva B; Omitaomu, Olufemi A
An ORNL team working on the Energy Awareness and Resiliency Standardized Services (EARSS) project developed a fully automated procedure to take wind speed and location estimates provided by hurricane forecasters and provide a geospatial estimate of the impact on the electric grid in terms of outage areas and projected duration of outages. Hurricane Sandy was one of the worst US storms ever, with reported injuries and deaths, millions of people without power for several days, and billions of dollars in economic impact. Hurricane advisories were released for Sandy from October 22 through 31, 2012. The fact that the geoprocessing was automated was significant: there were 64 advisories for Sandy. Manual analysis typically takes about one hour for each advisory. During a storm event, advisories are released every two to three hours around the clock, and an analyst capable of performing the manual analysis has other tasks to focus on. Initial predictions of a big impact and landfall usually occur three days in advance, so time is of the essence to prepare for utility repair. Automated processing developed at ORNL allowed this analysis to be completed and made publicly available within minutes of each new advisory being released.
Orbiter Autoland reliability analysis
NASA Technical Reports Server (NTRS)
Welch, D. Phillip
1993-01-01
The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
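A reliability block diagram evaluates to simple products: series paths multiply block reliabilities, while redundant (parallel) paths fail only if every block fails. The sketch below uses invented reliabilities, not the report's hardware data.

```python
def series(*reliabilities):
    """Series path: every block must work, so multiply reliabilities."""
    p = 1.0
    for r in reliabilities:
        p *= r
    return p

def parallel(*reliabilities):
    """Redundant blocks: the path fails only if all blocks fail."""
    q = 1.0
    for r in reliabilities:
        q *= 1.0 - r
    return 1.0 - q

# Invented example: two redundant flight computers (0.95 each) in
# series with a navigation sensor suite (0.99) and actuators (0.98).
computers = parallel(0.95, 0.95)          # redundancy lifts 0.95 to 0.9975
system = series(computers, 0.99, 0.98)
```

Composing these two operators over the block diagram is exactly the evaluation the report describes; the sensitivity to any one block can be read off by varying its reliability and re-evaluating.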
Costs to Automate Demand Response - Taxonomy and Results from Field Studies and Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piette, Mary A.; Schetrit, Oren; Kiliccote, Sila
During the past decade, the technology to automate demand response (DR) in buildings and industrial facilities has advanced significantly. Automation allows rapid, repeatable, reliable operation. This study focuses on costs for DR automation in commercial buildings, with some discussion of residential buildings and industrial facilities. DR automation technology relies on numerous components, including communication systems, hardware and software gateways, standards-based messaging protocols, controls and integration platforms, and measurement and telemetry systems. This report compares cost data from several DR automation programs and pilot projects, evaluates trends in the cost per unit of DR and kilowatts (kW) available from automated systems, and applies a standard naming convention and classification or taxonomy for system elements. Median costs for the 56 installed automated DR systems studied here are about $200/kW. The deviation around this median is large, with costs in some cases being an order of magnitude greater or less than the median. This wide range is a result of variations in system age, size of load reduction, sophistication, and type of equipment included in the cost analysis. The costs to automate fast DR systems for ancillary services are not fully analyzed in this report because additional research is needed to determine the total cost to install, operate, and maintain these systems. However, recent research suggests that they could be developed at costs similar to those of existing hot-summer DR automation systems. This report considers installation and configuration costs and does not include the costs of owning and operating DR automation systems. Future analysis of the latter costs should include the costs to the building or facility manager as well as to the utility or third-party program manager.
Validation of automated white matter hyperintensity segmentation.
Smart, Sean D; Firbank, Michael J; O'Brien, John T
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we had an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion.
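The voxel-overlap comparison can be made concrete with a Dice-style similarity between binary masks. The abstract does not state which overlap ratio was used, so Dice is an assumption here, and the tiny masks are invented.

```python
def dice_overlap(mask_a, mask_b):
    """Dice similarity of two binary masks given as sets of voxel
    coordinates: 2|A intersect B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0          # two empty masks agree trivially
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

# Invented 4-voxel masks sharing 3 voxels -> Dice = 2*3 / (4+4) = 0.75
manual = {(1, 1), (1, 2), (2, 1), (2, 2)}
auto = {(1, 2), (2, 1), (2, 2), (3, 2)}
score = dice_overlap(manual, auto)
```

Against this metric an inter-observer score of 0.63, as reported, sets a practical ceiling: an automated method at 0.622 is performing at essentially human-rater agreement.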
CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation
Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid
2013-08-09
The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab based command line software toolbox providing an automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image based screening. PMID:23938087
Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard
2018-04-01
To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structure. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance with segmentation accuracy superior to most state-of-the-art methods in the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
How a Fully Automated eHealth Program Simulates Three Therapeutic Processes: A Case Study
Holter, Marianne T S; Johansen, Ayna; Brendryen, Håvar
2016-06-28
Background eHealth programs may be better understood by breaking down the components of one particular program and discussing its potential for interactivity and tailoring in regard to concepts from face-to-face counseling. In the search for the efficacious elements within eHealth programs, it is important to understand how a program using lapse management may simultaneously support working alliance, internalization of motivation, and behavior maintenance. These processes have been applied to fully automated eHealth programs individually. However, given their significance in face-to-face counseling, it may be important to simulate the processes simultaneously in interactive, tailored programs. Objective We propose a theoretical model for how fully automated behavior change eHealth programs may be more effective by simulating a therapist’s support of a working alliance, internalization of motivation, and managing lapses. Methods We show how the model is derived from theory and its application to Endre, a fully automated smoking cessation program that engages the user in several “counseling sessions” about quitting. A descriptive case study based on tools from the intervention mapping protocol shows how each therapeutic process is simulated. Results The program supports the user’s working alliance through alliance factors, the nonembodied relational agent Endre and computerized motivational interviewing. Computerized motivational interviewing also supports internalized motivation to quit, whereas a lapse management component responds to lapses. The description operationalizes working alliance, internalization of motivation, and managing lapses, in terms of eHealth support of smoking cessation. Conclusions A program may simulate working alliance, internalization of motivation, and lapse management through interactivity and individual tailoring, potentially making fully automated eHealth behavior change programs more effective. PMID:27354373
CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.
Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali
2016-01-13
Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . 
We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, timesaving, but accurate and powerful tool to analyze large RNA-seq datasets, and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.
NASA Technical Reports Server (NTRS)
Cabrall, C.; Gomez, A.; Homola, J.; Hunt, S.; Martin, L.; Mercer, J.; Prevot, T.
2013-01-01
As part of an ongoing research effort on separation assurance and functional allocation in NextGen, a controller-in-the-loop study with ground-based automation was conducted at NASA Ames' Airspace Operations Laboratory in August 2012 to investigate the potential impact of introducing self-separating aircraft in progressively advanced NextGen timeframes. From this larger study, the current exploratory analysis of controller-automation interaction styles focuses on the last and most far-term time frame. Measurements were recorded that firstly verified the continued operational validity of this iteration of the ground-based functional allocation automation concept in forecast traffic densities up to 2x that of current day high altitude en-route sectors. Additionally, with greater levels of fully automated conflict detection and resolution as well as the introduction of intervention functionality, objective and subjective analyses showed a range of passive to active controller-automation interaction styles between the participants. Not only did the controllers work with the automation to meet their safety and capacity goals in the simulated future NextGen timeframe, they did so in different ways and with different attitudes of trust/use of the automation. Taken as a whole, the results showed that the prototyped controller-automation functional allocation framework was very flexible and successful overall.
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia
1996-01-01
The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.
Medical ADP Systems: Automated Medical Records Hold Promise to Improve Patient Care
1991-01-01
automated medical records. The report discusses the potential benefits that automation could make to the quality of patient care and the factors that impede...information systems, but no organization has fully automated one of the most critical types of information, patient medical records. The patient medical record...its review of automated medical records. GAO’s objectives in this study were to identify the (1) benefits of automating patient records and (2) factors
1987-06-01
commercial products. Typical cutout at a plumbline location where an automated monitoring system has been installed. The sensor used with the...This report provides a description of commercially available sensors, instruments, and ADP equipment that may be selected to fully automate...automated. The automated plumbline monitoring system includes up to twelve sensors, repeaters, a system controller, and a printer. The system may
An Automated, High-Throughput System for GISAXS and GIWAXS Measurements of Thin Films
NASA Astrophysics Data System (ADS)
Schaible, Eric; Jimenez, Jessica; Church, Matthew; Lim, Eunhee; Stewart, Polite; Hexemer, Alexander
Grazing incidence small-angle X-ray scattering (GISAXS) and grazing incidence wide-angle X-ray scattering (GIWAXS) are important techniques for characterizing thin films. In order to meet rapidly increasing demand, the SAXS/WAXS beamline at the Advanced Light Source (beamline 7.3.3) has implemented a fully automated, high-throughput system to conduct SAXS, GISAXS and GIWAXS measurements. An automated robot arm transfers samples from a holding tray to a measurement stage. Intelligent software aligns each sample in turn, and measures each according to user-defined specifications. Users mail in trays of samples on individually barcoded pucks, and can download and view their data remotely. Data will be pipelined to the NERSC supercomputing facility, and will be available to users via a web portal that facilitates highly parallelized analysis.
Automation and Preclinical Evaluation of a Dedicated Emission Mammotomography System for Fully 3-D Molecular Breast Imaging
2009-10-01
molecular breast imaging, with the ability to dynamically contour any sized breast, will improve detection and potentially in vivo characterization of... Having flexible 3D positioning about the breast yielded minimal RMSD differences, which is important for high resolution molecular emission imaging.
Bloomfield, M S
2002-12-06
4-Aminophenol (4AP) is the primary degradation product of paracetamol which is limited at a low level (50 ppm or 0.005% w/w) in the drug substance by the European, United States, British and German Pharmacopoeias, employing a manual colourimetric limit test. The 4AP limit is widened to 1000 ppm or 0.1% w/w for the tablet product monographs, which quote the use of a less sensitive automated HPLC method. The lower drug substance specification limit is applied to our products, (50 ppm, equivalent to 25 µg 4AP in a tablet containing 500 mg paracetamol) and the pharmacopoeial HPLC assay was not suitable at this low level due to matrix interference. For routine analysis a rapid, automated assay was required. This paper presents a highly sensitive, precise and automated method employing the technique of Flow Injection (FI) analysis to quantitatively assay low levels of this degradant. A solution of the drug substance, or an extract of the tablets, containing 4AP and paracetamol is injected into a solvent carrier stream and merged on-line with alkaline sodium nitroprusside reagent, to form a specific blue derivative which is detected spectrophotometrically at 710 nm. Standard HPLC equipment is used throughout. The procedure is fully quantitative and has been optimised for sensitivity and robustness using a multivariate experimental design (multi-level 'Central Composite' response surface) model. The method has been fully validated and is linear down to 0.01 µg ml(-1). The approach should be applicable to a range of paracetamol products.
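Quantitation in such a flow-injection assay comes down to a linear calibration of absorbance (here, at 710 nm) against 4AP concentration, followed by back-calculation of unknowns from the fitted line. The calibration points below are fabricated for illustration; only the linearity down to 0.01 µg/ml is from the abstract.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Fabricated calibration: absorbance at 710 nm vs. 4AP conc. (ug/ml)
conc = [0.01, 0.05, 0.1, 0.5, 1.0]
absorb = [0.002, 0.010, 0.020, 0.100, 0.200]   # perfectly linear here
slope, intercept = fit_line(conc, absorb)

# Back-calculate an unknown sample reading of 0.050 absorbance units
unknown_conc = (0.050 - intercept) / slope
```

In a validated method the same fit also supplies the figures of merit (slope precision, intercept bias) that underpin the stated linearity claim.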
DOT National Transportation Integrated Search
2014-09-01
The concept of Automated Transit Networks (ATN), in which fully automated vehicles on exclusive, grade-separated guideways provide on-demand, primarily non-stop, origin-to-destination service over an area network, has been around since the 1950s.
Burns, Joseph E.; Yao, Jianhua; Muñoz, Hector
2016-01-01
Purpose To design and validate a fully automated computer system for the detection and anatomic localization of traumatic thoracic and lumbar vertebral body fractures at computed tomography (CT). Materials and Methods This retrospective study was HIPAA compliant. Institutional review board approval was obtained, and informed consent was waived. CT examinations in 104 patients (mean age, 34.4 years; range, 14–88 years; 32 women, 72 men), consisting of 94 examinations with positive findings for fractures (59 with vertebral body fractures) and 10 control examinations (without vertebral fractures), were performed. There were 141 thoracic and lumbar vertebral body fractures in the case set. The locations of fractures were marked and classified by a radiologist according to Denis column involvement. The CT data set was divided into training and testing subsets (37 and 67 subsets, respectively) for analysis by means of prototype software for fully automated spinal segmentation and fracture detection. Free-response receiver operating characteristic analysis was performed. Results Training set sensitivity for detection and localization of fractures within each vertebra was 0.82 (28 of 34 findings; 95% confidence interval [CI]: 0.68, 0.90), with a false-positive rate of 2.5 findings per patient. The sensitivity for fracture localization to the correct vertebra was 0.88 (23 of 26 findings; 95% CI: 0.72, 0.96), with a false-positive rate of 1.3. Testing set sensitivity for the detection and localization of fractures within each vertebra was 0.81 (87 of 107 findings; 95% CI: 0.75, 0.87), with a false-positive rate of 2.7. The sensitivity for fracture localization to the correct vertebra was 0.92 (55 of 60 findings; 95% CI: 0.79, 0.94), with a false-positive rate of 1.6. The most common cause of false-positive findings was nutrient foramina (106 of 272 findings [39%]). 
Conclusion The fully automated computer system detects and anatomically localizes vertebral body fractures in the thoracic and lumbar spine on CT images with a high sensitivity and a low false-positive rate. © RSNA, 2015 Online supplemental material is available for this article. PMID:26172532
ERIC Educational Resources Information Center
Sheehan, Kathleen M.
2016-01-01
The "TextEvaluator"® text analysis tool is a fully automated text complexity evaluation tool designed to help teachers and other educators select texts that are consistent with the text complexity guidelines specified in the Common Core State Standards (CCSS). This paper provides an overview of the TextEvaluator measurement approach and…
Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T
2010-05-01
We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow, cerebral blood volume, and the mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms including the piecewise continuous regression method to determine bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method, and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. Anatomical locations of the automatically determined AIF were reviewed and validated. The automatically computed BAT values were statistically compared with BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and area of hypoperfusion mismatched with the restricted diffusion area on motion-corrected MTT maps and compared them with time-to-peak (TTP) maps generated using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. Total processing time was 10.9 ± 2.5 s (mean ± s.d.) without motion correction and 267 ± 80 s (mean ± s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with perfusion deficit and was significantly less affected by random noise of the PWI when compared with the TTP map.
Results of image quality assessment by two observers revealed that the MTT maps exhibited superior quality over the TTP maps (88% good rating for MTT as compared to 68% for TTP). Our software allowed fully automated deconvolution analysis of DSC PWI using proven efficient algorithms that can be applied to acute stroke treatment decisions. Our streamlined method also offers promise for further development of automated quantitative analysis of the ischemic penumbra. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
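The gamma-variate curve fitting mentioned above rests on a standard bolus model. As a rough illustration (the function names and the trapezoidal moment calculation below are ours, not the authors' implementation), a minimal sketch of the model and of the curve moments from which CBV- and MTT-like quantities derive:

```python
import math

def gamma_variate(t, t0, K, alpha, beta):
    """Gamma-variate bolus model: zero before the bolus arrival time t0."""
    if t <= t0:
        return 0.0
    dt = t - t0
    return K * dt ** alpha * math.exp(-dt / beta)

def bolus_moments(times, curve):
    """Trapezoidal area (proportional to CBV) and normalized first moment
    (a mean-transit-like time) of a concentration-time curve."""
    area = 0.0
    first = 0.0
    for i in range(1, len(times)):
        h = times[i] - times[i - 1]
        area += 0.5 * h * (curve[i] + curve[i - 1])
        first += 0.5 * h * (times[i] * curve[i] + times[i - 1] * curve[i - 1])
    return area, first / area
```

For a gamma-variate the analytic area is K·β^(α+1)·Γ(α+1) and the mean time is t0 + β(α+1), which makes such a sketch easy to sanity-check against closed-form values.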
MetAMOS: a modular and open source metagenomic assembly and analysis pipeline
2013-01-01
We describe MetAMOS, an open source and modular metagenomic assembly and analysis pipeline. MetAMOS represents an important step towards fully automated metagenomic analysis, starting with next-generation sequencing reads and producing genomic scaffolds, open-reading frames and taxonomic or functional annotations. MetAMOS can aid in reducing assembly errors, commonly encountered when assembling metagenomic samples, and improves taxonomic assignment accuracy while also reducing computational cost. MetAMOS can be downloaded from: https://github.com/treangen/MetAMOS. PMID:23320958
Validation of Automated White Matter Hyperintensity Segmentation
Smart, Sean D.; Firbank, Michael J.; O'Brien, John T.
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we had an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion. PMID:21904678
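The voxel overlap figures quoted above can be computed in several ways; since the abstract does not specify the formula, the Dice coefficient used in this illustrative sketch is an assumption:

```python
def overlap_ratio(seg_a, seg_b):
    """Dice overlap of two voxel label sets: 2|A ∩ B| / (|A| + |B|).

    seg_a, seg_b: iterables of voxel coordinates labelled as lesion.
    Returns 1.0 for identical masks and 0.0 for disjoint ones.
    NOTE: the Dice definition is an assumption; the paper may use another
    overlap measure (e.g. Jaccard).
    """
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```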
Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.
2014-01-01
We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933
Autoreject: Automated artifact rejection for MEG and EEG data.
Jas, Mainak; Engemann, Denis A; Bekhti, Yousra; Raimondo, Federico; Gramfort, Alexandre
2017-10-01
We present an automated algorithm for unified rejection and repair of bad trials in magnetoencephalography (MEG) and electroencephalography (EEG) signals. Our method capitalizes on cross-validation in conjunction with a robust evaluation metric to estimate the optimal peak-to-peak threshold - a quantity commonly used for identifying bad trials in M/EEG. This approach is then extended to a more sophisticated algorithm which estimates this threshold for each sensor yielding trial-wise bad sensors. Depending on the number of bad sensors, the trial is then repaired by interpolation or by excluding it from subsequent analysis. All steps of the algorithm are fully automated thus lending itself to the name Autoreject. In order to assess the practical significance of the algorithm, we conducted extensive validation and comparisons with state-of-the-art methods on four public datasets containing MEG and EEG recordings from more than 200 subjects. The comparisons include purely qualitative efforts as well as quantitatively benchmarking against human supervised and semi-automated preprocessing pipelines. The algorithm allowed us to automate the preprocessing of MEG data from the Human Connectome Project (HCP) going up to the computation of the evoked responses. The automated nature of our method minimizes the burden of human inspection, hence supporting scalability and reliability demanded by data analysis in modern neuroscience. Copyright © 2017 Elsevier Inc. All rights reserved.
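The core idea above, choosing a peak-to-peak rejection threshold by cross-validation against a robust reference, can be sketched in a few lines. This is a simplified illustration of the described approach, not the Autoreject implementation; the pointwise-median reference and the contiguous fold scheme are assumptions:

```python
def peak_to_peak(trial):
    return max(trial) - min(trial)

def mean_signal(trials):
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def median_signal(trials):
    """Pointwise median across trials: robust to a minority of bad trials."""
    out = []
    for i in range(len(trials[0])):
        vals = sorted(t[i] for t in trials)
        out.append(vals[len(vals) // 2])
    return out

def cv_threshold(trials, candidates, n_folds=5):
    """Pick the peak-to-peak threshold whose retained-trial average best
    matches a robust (median) estimate on held-out folds."""
    fold_size = len(trials) // n_folds
    best, best_err = None, float("inf")
    for thr in candidates:
        err = 0.0
        for f in range(n_folds):
            val = trials[f * fold_size:(f + 1) * fold_size]
            train = trials[:f * fold_size] + trials[(f + 1) * fold_size:]
            kept = [t for t in train if peak_to_peak(t) <= thr]
            if not kept:              # threshold rejects everything
                err = float("inf")
                break
            m = mean_signal(kept)
            med = median_signal(val)
            err += sum((a - b) ** 2 for a, b in zip(m, med)) / len(m)
        if err < best_err:
            best, best_err = thr, err
    return best
```

A too-low threshold rejects everything (infinite error), a too-high one lets spiky trials contaminate the training mean, and the cross-validated error is minimized in between.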
Schulze, H Georg; Turner, Robin F B
2014-01-01
Charge-coupled device detectors are vulnerable to cosmic rays that can contaminate Raman spectra with positive-going spikes. Because spikes can adversely affect spectral processing and data analyses, they must be removed. Although both hardware-based and software-based spike removal methods exist, they typically require parameter and threshold specification dependent on well-considered user input. Here, we present a fully automated spike removal algorithm that proceeds without requiring user input. It is minimally dependent on sample attributes, and those that are required (e.g., the standard deviation of spectral noise) can be determined with other fully automated procedures. At the core of the method is the identification and location of spikes with coincident second derivatives along both the spectral and spatiotemporal dimensions of two-dimensional datasets. The method can be applied to spectra that are relatively inhomogeneous because it provides fairly effective and selective targeting of spikes, resulting in minimal distortion of spectra. Relatively effective spike removal obtained with full automation could provide substantial benefits to users where large numbers of spectra must be processed.
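The coincident second-derivative test described above lends itself to a compact sketch. The following is an illustrative reimplementation under simplifying assumptions (a single known noise level, nearest-neighbour replacement), not the authors' code:

```python
def second_diff(seq):
    """Discrete second derivative, padded with zeros at the ends."""
    d = [0.0] * len(seq)
    for i in range(1, len(seq) - 1):
        d[i] = seq[i - 1] - 2.0 * seq[i] + seq[i + 1]
    return d

def despike(spectra, noise_sigma, k=5.0):
    """Flag pixels whose second derivative is large along BOTH the spectral
    axis (within a spectrum) and the spatiotemporal axis (across spectra),
    then replace them by the mean of their spatiotemporal neighbours."""
    n_spec, n_chan = len(spectra), len(spectra[0])
    thr = k * noise_sigma
    d_rows = [second_diff(row) for row in spectra]          # spectral axis
    d_cols = [second_diff([spectra[i][j] for i in range(n_spec)])
              for j in range(n_chan)]                       # spatiotemporal axis
    out = [row[:] for row in spectra]
    for i in range(n_spec):
        for j in range(n_chan):
            if abs(d_rows[i][j]) > thr and abs(d_cols[j][i]) > thr:
                neigh = [spectra[i2][j] for i2 in (i - 1, i + 1)
                         if 0 <= i2 < n_spec]
                out[i][j] = sum(neigh) / len(neigh)
    return out
```

Requiring coincidence along both axes is what spares sharp but genuine Raman bands, which are broad along the spatiotemporal dimension, while a cosmic spike is confined to one spectrum.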
Automated retinofugal visual pathway reconstruction with multi-shell HARDI and FOD-based analysis.
Kammen, Alexandra; Law, Meng; Tjan, Bosco S; Toga, Arthur W; Shi, Yonggang
2016-01-15
Diffusion MRI tractography provides a non-invasive modality to examine the human retinofugal projection, which consists of the optic nerves, optic chiasm, optic tracts, the lateral geniculate nuclei (LGN) and the optic radiations. However, the pathway has several anatomic features that make it particularly challenging to study with tractography, including its location near blood vessels and the bone-air interface at the base of the cerebrum, crossing fibers at the chiasm, a somewhat tortuous course around the temporal horn via Meyer's loop, and multiple closely neighboring fiber bundles. To date, these unique complexities of the visual pathway have impeded the development of a robust and automated reconstruction method using tractography. To overcome these challenges, we develop a novel, fully automated system to reconstruct the retinofugal visual pathway from high-resolution diffusion imaging data. Using multi-shell, high angular resolution diffusion imaging (HARDI) data, we reconstruct precise fiber orientation distributions (FODs) with high-order spherical harmonics (SPHARM) to resolve fiber crossings, which allows the tractography algorithm to successfully navigate the complicated anatomy surrounding the retinofugal pathway. We also develop automated algorithms for the identification of ROIs used for fiber bundle reconstruction. In particular, we develop a novel approach to extract the LGN region of interest (ROI) based on intrinsic shape analysis of a fiber bundle computed from a seed region at the optic chiasm to a target at the primary visual cortex. By combining automatically identified ROIs and FOD-based tractography, we obtain a fully automated system to compute the main components of the retinofugal pathway, including the optic tract and the optic radiation. We apply our method to the multi-shell HARDI data of 215 subjects from the Human Connectome Project (HCP).
Through comparisons with post-mortem dissection measurements, we demonstrate the retinotopic organization of the optic radiation including a successful reconstruction of Meyer's loop. Then, using the reconstructed optic radiation bundle from the HCP cohort, we construct a probabilistic atlas and demonstrate its consistency with a post-mortem atlas. Finally, we generate a shape-based representation of the optic radiation for morphometry analysis. Copyright © 2015 Elsevier Inc. All rights reserved.
Arsanjani, Reza; Xu, Yuan; Hayes, Sean W.; Fish, Mathews; Lemley, Mark; Gerlach, James; Dorbala, Sharmila; Berman, Daniel S.; Germano, Guido; Slomka, Piotr
2012-01-01
We compared the performance of a fully automated quantification of attenuation-corrected (AC) and non-corrected (NC) myocardial perfusion single photon emission computed tomography (MPS) with the corresponding performance of experienced readers for the detection of coronary artery disease (CAD). Methods 995 rest/stress 99mTc-sestamibi MPS studies [650 consecutive cases with coronary angiography and 345 with likelihood of CAD < 5% (LLk)] were obtained by MPS with AC. Total perfusion deficit (TPD) for AC and NC data was compared to the visual summed stress and rest scores of 2 experienced readers. Visual reads were performed in 4 consecutive steps with the following information progressively revealed: NC data, AC+NC data, computer results, all clinical information. Results The diagnostic accuracy of TPD for detection of CAD was similar to that of both readers (NC: 82% vs. 84%, AC: 86% vs. 85–87%, p = NS), with the exception of the second reader when using clinical information (89%, p < 0.05). The receiver operating characteristic areas under the curve (ROC-AUC) for TPD were significantly better than visual reads for NC (0.91 vs. 0.87 and 0.89, p < 0.01) and AC (0.92 vs. 0.90, p < 0.01), and comparable to visual reads incorporating all clinical information. Per-vessel accuracy of TPD was superior to one reader for NC (81% vs. 77%, p < 0.05) and AC (83% vs. 78%, p < 0.05) and equivalent to the second reader [NC (79%) and AC (81%)]. Per-vessel ROC-AUC for NC (0.83) and AC (0.84) for TPD were better than one reader's (0.78–0.80, p < 0.01), and comparable to the second reader's (0.82–0.84, p = NS), for all steps. Conclusion For the detection of ≥ 70% stenosis based on angiographic criteria, fully automated computer analysis of NC and AC MPS data is equivalent for per-patient analysis and can be superior for per-vessel analysis, when compared to expert analysis. PMID:23315665
NASA Technical Reports Server (NTRS)
Steinberg, R.
1978-01-01
The National Aeronautics and Space Administration has developed a low-cost communications system to provide meteorological data from commercial aircraft, in near real-time, on a fully automated basis. The complete system including the low profile antenna and all installation hardware weighs 34 kg. The prototype system has been installed on a Pan American B-747 aircraft and has been providing meteorological data (wind angle and velocity, temperature, altitude and position as a function of time) on a fully automated basis for the past several months. The results have been exceptional. This concept is expected to have important implications for operational meteorology and airline route forecasting.
Automated Reconstruction of Historic Roof Structures from Point Clouds - Development and Examples
NASA Astrophysics Data System (ADS)
Pöchtrager, M.; Styhler-Aydın, G.; Döring-Williams, M.; Pfeifer, N.
2017-08-01
The analysis of historic roof constructions is an important task for planning the adaptive reuse of buildings or for maintenance and restoration issues. Current approaches to modeling roof constructions consist of several consecutive operations that need to be done manually or using semi-automatic routines. To increase efficiency and allow the focus to be on analysis rather than on data processing, a set of methods was developed for the fully automated analysis of roof constructions, including integration of architectural and structural modeling. Terrestrial laser scanning permits high-detail surveying of large-scale structures within a short time. Whereas 3-D laser scan data consist of millions of single points on the object surface, we need a geometric description of structural elements in order to obtain a structural model consisting of beam axes and connections. Preliminary results showed that the developed methods work well for beams in flawless condition with a quadratic cross section and no bending. Deformations or damage such as cracks and cuts on the wooden beams can lead to incomplete representations in the model. Overall, a high degree of automation was achieved.
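Extracting a beam axis from a cloud of scanned surface points is essentially a dominant-direction fit. A minimal sketch using power iteration on the point covariance matrix follows; the method choice and function name are illustrative assumptions, not necessarily what the authors implemented:

```python
def beam_axis(points, iters=100):
    """Centroid and dominant unit direction of a 3-D point cloud,
    via power iteration on the 3x3 covariance matrix."""
    n = len(points)
    c = [sum(p[k] for p in points) / n for k in range(3)]
    centred = [[p[k] - c[k] for k in range(3)] for p in points]
    cov = [[sum(q[i] * q[j] for q in centred) / n for j in range(3)]
           for i in range(3)]
    v = [1.0, 1.0, 1.0]                     # must not be orthogonal to the axis
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return c, v
```

For a slender beam the largest covariance eigenvector is the beam axis; the connections between beams would then be found by intersecting such axes, which this sketch does not attempt.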
Shallow water benthic imaging and substrate characterization using recreational-grade sidescan-sonar
Buscombe, Daniel D.
2017-01-01
In recent years, lightweight, inexpensive, vessel-mounted ‘recreational grade’ sonar systems have rapidly grown in popularity among aquatic scientists, for swath imaging of benthic substrates. To promote an ongoing ‘democratization’ of acoustical imaging of shallow water environments, methods to carry out geometric and radiometric correction and georectification of sonar echograms are presented, based on simplified models for sonar-target geometry and acoustic backscattering and attenuation in shallow water. Procedures are described for automated removal of the acoustic shadows, identification of bed-water interface for situations when the water is too turbid or turbulent for reliable depth echosounding, and for automated bed substrate classification based on singlebeam full-waveform analysis. These methods are encoded in an open-source and freely-available software package, which should further facilitate use of recreational-grade sidescan sonar, in a fully automated and objective manner. The sequential correction, mapping, and analysis steps are demonstrated using a data set from a shallow freshwater environment.
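The radiometric correction mentioned above typically compensates geometric spreading and water-column absorption. A minimal sketch of a textbook time-varied gain follows; this is a generic formulation and may differ from the correction model in the software described:

```python
import math

def tvg_correct(raw_db, ranges_m, alpha_db_per_m):
    """Radiometric correction of a sonar return: add back spherical-spreading
    loss (20 log10 r) and two-way absorption (2 * alpha * r), all in dB."""
    return [s + 20.0 * math.log10(r) + 2.0 * alpha_db_per_m * r
            for s, r in zip(raw_db, ranges_m)]
```

After this correction a target of constant backscatter strength reads the same at every range, which is the precondition for range-independent substrate classification.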
Puścian, Alicja; Łęski, Szymon; Kasprowicz, Grzegorz; Winiarski, Maciej; Borowska, Joanna; Nikolaev, Tomasz; Boguszewski, Paweł M; Lipp, Hans-Peter; Knapska, Ewelina
2016-10-12
Eco-HAB is an open source, RFID-based system for automated measurement and analysis of social preference and in-cohort sociability in mice. The system closely follows murine ethology. It requires no contact between a human experimenter and tested animals, overcoming the confounding factors that lead to irreproducible assessment of murine social behavior between laboratories. In Eco-HAB, group-housed animals live in a spacious, four-compartment apparatus with shadowed areas and narrow tunnels, resembling natural burrows. Eco-HAB allows for assessment of the tendency of mice to voluntarily spend time together in ethologically relevant mouse group sizes. Custom-made software for automated tracking, data extraction, and analysis enables quick evaluation of social impairments. The developed protocols and standardized behavioral measures demonstrate high replicability. Unlike classic three-chambered sociability tests, Eco-HAB provides measurements of spontaneous, ecologically relevant social behaviors in group-housed animals. Results are obtained faster, with less manpower, and without confounding factors.
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T
2015-06-01
To evaluate the performance of a prototype, fully automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess the accuracy of post-processing applications, comparing phantom volumes determined via Archimedes' principle with MDCT segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance was compared between the manual approach and the automated prototype, assessing intraobserver variability and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%; automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments.
Automated whole-liver segmentation showed non-inferiority of fully-automated whole-liver segmentation compared to manual approaches with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed slight tendencies for underestimating the right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.
Advantages and challenges in automated apatite fission track counting
NASA Astrophysics Data System (ADS)
Enkelmann, E.; Ehlers, T. A.
2012-04-01
Fission track thermochronometer data are often a core element of modern tectonic and denudation studies. Soon after the development of the fission track methods, interest emerged in developing an automated counting procedure to replace the time-consuming labor of counting fission tracks under the microscope. Automated track counting became feasible in recent years with increasing improvements in computer software and hardware. One such example used in this study is the commercial automated fission track counting procedure from Autoscan Systems Pty, which has been highlighted through several venues. We conducted experiments designed to reliably and consistently test the ability of this fully automated counting system to recognize fission tracks in apatite and a muscovite external detector. Fission tracks were analyzed in samples with a step-wise increase in sample complexity. The first set of experiments used a large (mm-size) slice of Durango apatite cut parallel to the prism plane. Second, samples with 80-200 μm large apatite grains of Fish Canyon Tuff were analyzed. This second sample set is characterized by complexities often found in apatites in different rock types. In addition to the automated counting procedure, the same samples were also analyzed using conventional counting procedures. We found for all samples that the fully automated fission track counting procedure using the Autoscan System yields a larger scatter in the measured fission track densities compared to conventional (manual) track counting. This scatter typically resulted from the false identification of tracks due to surface and mineralogical defects, regardless of the image filtering procedure used. Large differences between track densities analyzed with the automated counting persisted between different grains analyzed in one sample as well as between different samples.
As a result of these differences, a manual correction of the fully automated fission track counts is necessary for each individual surface area and grain counted. This manual correction procedure significantly increases (up to four times) the time required to analyze a sample with the automated counting procedure compared to the conventional approach.
2015-01-01
Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
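Segregating structured pixels from background noise by connectivity, as described above, can be pictured with a plain connected-component pass. The following is an illustrative stand-in (breadth-first labelling with a size cutoff), not Crossword's actual iterative clustering:

```python
from collections import deque

def filter_small_components(grid, min_size):
    """Keep only connected foreground components (4-connectivity) with at
    least min_size pixels; smaller components are treated as noise."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:                      # flood-fill one component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:         # structured, not noise
                    for y, x in comp:
                        out[y][x] = 1
    return out
```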
NASA Astrophysics Data System (ADS)
Hopp, T.; Zapf, M.; Ruiter, N. V.
2014-03-01
An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% of differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated usage of our segmentation approach.
NASA Astrophysics Data System (ADS)
Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin
2012-02-01
This work is a part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge, such as papilloma, breast cancer and atypia. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step in analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field (CRF) with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in the galactographic image. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy-connectedness, vesselness filter and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measurements. For the proposed approach, the true positive rate was higher and the false negative rate was significantly lower compared to other fully automated methods. This indicates that segmentation based on a CRF incorporated with texture descriptors has the potential to efficiently support the analysis of the complex topology of the ducts and aid in the development of realistic breast anatomy phantoms.
Li, Wei; Abram, François; Pelletier, Jean-Pierre; Raynauld, Jean-Pierre; Dorais, Marc; d'Anjou, Marc-André; Martel-Pelletier, Johanne
2010-01-01
Joint effusion is frequently associated with osteoarthritis (OA) flare-up and is an important marker of therapeutic response. This study aimed at developing and validating a fully automated system based on magnetic resonance imaging (MRI) for the quantification of joint effusion volume in knee OA patients. MRI examinations consisted of two axial sequences: a T2-weighted true fast imaging with steady-state precession and a T1-weighted gradient echo. An automated joint effusion volume quantification system using MRI was developed and validated (a) with calibrated phantoms (cylinder and sphere) and effusion from knee OA patients; (b) with assessment by manual quantification; and (c) by direct aspiration. Twenty-five knee OA patients with joint effusion were included in the study. The automated joint effusion volume quantification was developed as a four stage sequencing process: bone segmentation, filtering of unrelated structures, segmentation of joint effusion, and subvoxel volume calculation. Validation experiments revealed excellent coefficients of variation with the calibrated cylinder (1.4%) and sphere (0.8%) phantoms. Comparison of the OA knee joint effusion volume assessed by the developed automated system and by manual quantification was also excellent (r = 0.98; P < 0.0001), as was the comparison with direct aspiration (r = 0.88; P = 0.0008). The newly developed fully automated MRI-based system provided precise quantification of OA knee joint effusion volume with excellent correlation with data from phantoms, a manual system, and joint aspiration. Such an automated system will be instrumental in improving the reproducibility/reliability of the evaluation of this marker in clinical application.
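Subvoxel refinements aside, the final stage described above, volume calculation from a segmentation, reduces to counting labelled voxels. A minimal sketch (plain voxel counting, not the authors' subvoxel method), validated against a digital sphere in the same spirit as the study's calibrated phantoms:

```python
def mask_volume_ml(mask, voxel_mm):
    """Volume of a binary voxel mask in millilitres.

    mask: nested [z][y][x] lists of 0/1; voxel_mm: (dx, dy, dz) in mm.
    1 mL = 1000 mm^3.
    """
    dx, dy, dz = voxel_mm
    n = sum(v for plane in mask for row in plane for v in row)
    return n * dx * dy * dz / 1000.0
```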
Applying machine learning to pattern analysis for automated in-design layout optimization
NASA Astrophysics Data System (ADS)
Cain, Jason P.; Fakhry, Moutaz; Pathak, Piyush; Sweis, Jason; Gennari, Frank; Lai, Ya-Chieh
2018-04-01
Building on previous work for cataloging unique topological patterns in an integrated circuit physical design, a new process is defined in which a risk-scoring methodology is used to rank patterns by manufacturing risk. Patterns with high risk are then mapped to functionally equivalent patterns with lower risk, and the higher-risk patterns are replaced in the design with their lower-risk equivalents. Pattern selection and replacement are fully automated and suitable for use on full-chip designs. Results from 14nm product designs show that the approach can identify and replace risk patterns with a quantifiable positive impact on the risk score distribution after replacement.
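As a toy illustration of the select-and-replace flow this abstract describes (score patterns, flag those above a risk threshold, swap in a functionally equivalent lower-risk pattern), the following sketch uses invented pattern names, scores, and threshold; it is not the authors' tool:

```python
# Toy sketch of risk-scored pattern replacement. All pattern names,
# scores, and the threshold below are invented for illustration.
def replace_risky_patterns(layout, risk_score, equivalents, threshold=0.7):
    """Return a new layout where high-risk patterns are replaced whenever
    a lower-risk functional equivalent is known."""
    out = []
    for pattern in layout:
        if risk_score[pattern] > threshold and pattern in equivalents:
            candidate = equivalents[pattern]
            if risk_score[candidate] < risk_score[pattern]:
                pattern = candidate
        out.append(pattern)
    return out

risk_score = {"P1": 0.9, "P1_alt": 0.3, "P2": 0.5, "P3": 0.8}
equivalents = {"P1": "P1_alt"}          # P3 has no known equivalent
layout = ["P1", "P2", "P3", "P1"]
print(replace_risky_patterns(layout, risk_score, equivalents))
```

Note that P3 stays in place: a high risk score alone is not enough, a functionally equivalent replacement must exist.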
Meyer, M.T.; Mills, M.S.; Thurman, E.M.
1993-01-01
An automated solid-phase extraction (SPE) method was developed for the pre-concentration of chloroacetanilide and triazine herbicides, and two triazine metabolites, from 100-ml water samples. Breakthrough experiments for the C18 SPE cartridge show that the two triazine metabolites are not fully retained and that increasing the flow-rate decreases their retention. Standard curve r2 values of 0.998-1.000 for each compound were consistently obtained, and a quantitation level of 0.05 µg/l was achieved for each compound tested. More than 10,000 surface and ground water samples have been analyzed by this method.
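The quantitation step mentioned above, a linear standard curve with r2 near 1 that is read back to concentration, can be sketched as follows; the standards and detector responses are made-up example values, not data from the study:

```python
# Sketch of linear standard-curve quantitation (illustrative values only).
def fit_standard_curve(conc, resp):
    """Return (slope, intercept, r_squared) for a least-squares linear fit."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    syy = sum((y - my) ** 2 for y in resp)
    slope = sxy / sxx
    intercept = my - slope * mx
    r_squared = sxy ** 2 / (sxx * syy)
    return slope, intercept, r_squared

def quantitate(response, slope, intercept):
    """Convert a detector response back to a concentration (ug/L)."""
    return (response - intercept) / slope

# Hypothetical standards spanning 0.05-2.0 ug/L, responses in arbitrary units.
conc = [0.05, 0.1, 0.5, 1.0, 2.0]
resp = [1.1, 2.0, 10.2, 20.1, 40.3]
slope, intercept, r2 = fit_standard_curve(conc, resp)
print(round(r2, 3))
```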
Microvessel prediction in H&E Stained Pathology Images using fully convolutional neural networks.
Yi, Faliu; Yang, Lin; Wang, Shidan; Guo, Lei; Huang, Chenglong; Xie, Yang; Xiao, Guanghua
2018-02-27
Pathological angiogenesis has been identified in many malignancies as a potential prognostic factor and target for therapy. In most cases, angiogenic analysis is based on the measurement of microvessel density (MVD) detected by immunostaining of CD31 or CD34. However, most retrievable public data is generally composed of Hematoxylin and Eosin (H&E)-stained pathology images, for which it is difficult to obtain the corresponding immunohistochemistry images. The role of microvessels in H&E-stained images has not been widely studied due to their complexity and heterogeneity. Furthermore, identifying microvessels manually is a labor-intensive task for pathologists, with high inter- and intra-observer variation. Therefore, it is important to develop automated microvessel-detection algorithms in H&E-stained pathology images for clinical association analysis. In this paper, we propose a microvessel prediction method using fully convolutional neural networks. The feasibility of our proposed algorithm is demonstrated through experimental results on H&E-stained images. Furthermore, the identified microvessel features were significantly associated with patient clinical outcomes. This is the first study to develop an algorithm for automated microvessel detection in H&E-stained pathology images.
Williams, James A; Eddleman, Laura; Pantone, Amy; Martinez, Regina; Young, Stephen; Van Der Pol, Barbara
2014-08-01
Next-generation diagnostics for Chlamydia trachomatis and Neisseria gonorrhoeae are available on semi- or fully-automated platforms. These systems require less hands-on time than older platforms and are user friendly. Four automated systems, the ABBOTT m2000 system, Becton Dickinson Viper System with XTR Technology, Gen-Probe Tigris DTS system, and Roche cobas 4800 system, were evaluated for total run time, hands-on time, and walk-away time. All of the systems evaluated in this time-motion study were able to complete a diagnostic test run within an 8-h work shift, instrument setup and operation were straightforward and uncomplicated, and walk-away time ranged from approximately 90 to 270 min in a head-to-head comparison of each system. All of the automated systems provide technical staff with increased time to perform other tasks during the run, offer easy expansion of the diagnostic test menu, and have the ability to increase specimen throughput. © 2013 Society for Laboratory Automation and Screening.
Automated determination of arterial input function for DCE-MRI of the prostate
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep
2011-03-01
Prostate cancer is one of the most common cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in the literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time domain information, and eliminate the pixels with falsely estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. Then, using spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it with a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries) and a very good match with an expert-traced manual AIF.
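A minimal sketch of the gamma-variate-fitting idea described above, using bounded least squares on a synthetic uptake curve; the bounds, initial guess, and parameter values are illustrative assumptions, not the paper's analytically derived ones:

```python
# Illustrative sketch (not the authors' implementation) of fitting a pixel
# uptake curve with a gamma variate function (GVF) under parameter bounds.
import numpy as np
from scipy.optimize import curve_fit

def gvf(t, A, t0, alpha, beta):
    """Gamma variate: zero before arrival time t0, then a skewed peak."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** alpha * np.exp(-dt / beta)

# Synthetic uptake curve with known parameters plus mild noise.
t = np.linspace(0, 60, 120)
true = gvf(t, 5.0, 8.0, 2.0, 4.0)
rng = np.random.default_rng(0)
signal = true + rng.normal(0, 0.5, t.size)

# These bounds stand in for the analytically derived bounds in the paper;
# curves whose fitted parameters escape them would be rejected.
lower = [0.0, 0.0, 0.5, 0.5]
upper = [100.0, 30.0, 5.0, 20.0]
popt, _ = curve_fit(gvf, t, signal, p0=[1.0, 5.0, 1.5, 3.0],
                    bounds=(lower, upper))
print(np.round(popt, 1))
```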
Payload Operations Control Center (POCC). [spacelab flight operations
NASA Technical Reports Server (NTRS)
Shipman, D. L.; Noneman, S. R.; Terry, E. S.
1981-01-01
The Spacelab payload operations control center (POCC) timeline analysis program, which provides POCC activity and resource information as a function of mission time, is described. This program is fully automated and interactive, and is equipped with tutorial displays. The tutorial displays are sufficiently detailed for use by a program analyst having no computer experience. The POCC timeline analysis program is designed to operate on the VAX/VMS version V2.1 computer system.
NASA Technical Reports Server (NTRS)
Corker, Kevin M.; Smith, Barry R.
1993-01-01
The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.
Fully automated urban traffic system
NASA Technical Reports Server (NTRS)
Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.
1977-01-01
The replacement of the driver with an automatic system that could perform the functions of guiding and routing a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed but was technically feasible.
A fully automated digitally controlled 30-inch telescope
NASA Technical Reports Server (NTRS)
Colgate, S. A.; Moore, E. P.; Carlson, R.
1975-01-01
A fully automated 30-inch (75-cm) telescope has been successfully designed and constructed from a military surplus Nike-Ajax radar mount. Novel features include: closed-loop operation between mountain telescope and campus computer 30 km apart via microwave link, a TV-type sensor which is photon shot-noise limited, a special lightweight primary mirror, and a stepping motor drive capable of slewing and settling one degree in one second or a radian in fifteen seconds.
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
NASA Astrophysics Data System (ADS)
Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.
2012-02-01
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans and differences in acquisition setup. Furthermore, images acquired with the presence of an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 +/- 0.33 mm and 3.10 +/- 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
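The normalized-gradient-fields representation underlying the method above, which discards intensity magnitude and keeps only gradient direction, can be sketched as follows; the `eps` regularizer and the toy image are assumptions for illustration, and this is not the authors' implementation:

```python
# Sketch of a normalized gradient field (NGF): per-pixel gradient direction
# with magnitude discarded, making the field invariant to global intensity
# scaling and offsets. eps is a small regularizer (a tuning assumption).
import numpy as np

def normalized_gradient_field(img, eps=1e-3):
    """Return the (x, y) components of the unit-normalized image gradient."""
    gy, gx = np.gradient(img.astype(float))   # axis 0 first, then axis 1
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return gx / mag, gy / mag

img = np.outer(np.linspace(0, 1, 8), np.ones(8))   # simple vertical ramp
ngf_a = normalized_gradient_field(img)
ngf_b = normalized_gradient_field(10.0 * img + 100.0)  # rescaled intensities
# The fields of the original and the rescaled image are nearly identical,
# which is what makes NGF cross-correlation insensitive to intensity shifts.
print(np.allclose(ngf_a[1], ngf_b[1], atol=1e-2))
```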
Fully automated chest wall line segmentation in breast MRI by using context information
NASA Astrophysics Data System (ADS)
Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Localio, A. Russell; Schnall, Mitchell D.; Kontos, Despina
2012-03-01
Breast MRI has emerged as an effective modality for the clinical management of breast cancer. Evidence suggests that computer-aided applications can further improve the diagnostic accuracy of breast MRI. A critical and challenging first step for automated breast MRI analysis is to separate the breast as an organ from the chest wall. Manual segmentation or user-assisted interactive tools are inefficient, tedious, and error-prone, making them impractical for processing large amounts of data from clinical trials. To address this challenge, we developed a fully automated and robust computerized segmentation method that intensively utilizes context information of breast MR imaging and the breast tissue's morphological characteristics to accurately delineate the breast and chest wall boundary. A critical component is the joint application of anisotropic diffusion and bilateral image filtering to enhance the edge that corresponds to the chest wall line (CWL) and to reduce the effect of adjacent non-CWL tissues. A CWL voting algorithm is proposed based on CWL candidates yielded from multiple sequential MRI slices, in which a CWL representative is generated and used through a dynamic time warping (DTW) algorithm to filter out inferior candidates, leaving the optimal one. Our method is validated on a representative dataset of 20 3D unilateral breast MRI scans that span the full range of the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) fibroglandular density categorization. A promising performance (average overlay percentage of 89.33%) is observed when the automated segmentation is compared to manually segmented ground truth obtained by an experienced breast imaging radiologist. The automated method runs time-efficiently at ~3 minutes for each breast MR image set (28 slices).
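The DTW comparison used in the CWL voting step can be illustrated with a minimal distance function; real CWL candidates are 2D boundary curves, reduced here to short 1D sequences for clarity, and the sample curves are invented:

```python
# Minimal dynamic time warping (DTW) distance with absolute-difference cost,
# sketched to show how a CWL representative could rank candidate curves.
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

representative = [0, 1, 2, 3, 2, 1, 0]
good_candidate = [0, 1, 1, 2, 3, 2, 1, 0]   # same shape, slightly warped
bad_candidate = [3, 3, 3, 0, 0, 0, 3]       # dissimilar shape
print(dtw_distance(representative, good_candidate) <
      dtw_distance(representative, bad_candidate))
```

Because DTW aligns sequences elastically, the warped-but-similar candidate scores far lower than the dissimilar one, which is the property the voting step relies on to filter out inferior candidates.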
Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-01-01
Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977
Automated MRI segmentation for individualized modeling of current flow in the human head
NASA Astrophysics Data System (ADS)
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-12-01
Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Maximizing Your Investment in Building Automation System Technology.
ERIC Educational Resources Information Center
Darnell, Charles
2001-01-01
Discusses how organizational issues and system standardization can be important factors that determine an institution's ability to fully exploit contemporary building automation systems (BAS). Further presented is management strategy for maximizing BAS investments. (GR)
Xu, Weiyi; Wan, Feng; Lou, Yufeng; Jin, Jiali; Mao, Weilin
2014-01-01
A number of automated devices for pretransfusion testing have recently become available. This study evaluated the Immucor Galileo System, a fully automated device based on the microplate hemagglutination technique, for ABO/Rh (D) determinations. Routine ABO/Rh typing tests were performed on 13,045 samples using the Immucor automated instruments. The manual tube method was used to resolve ABO forward and reverse grouping discrepancies. D-negative test results were investigated and confirmed manually by the indirect antiglobulin test (IAT). The system rejected 70 tests for sample inadequacy, and 87 samples were read as "no type determined" due to forward and reverse grouping discrepancies, 25 of them because of sample hemolysis. Further testing showed that 34 were caused by weakened RBC antibodies, 5 were attributable to weak A and/or B antigens, 4 were due to mixed-field reactions, and 8 had high-titer cold agglutinins that react only at temperatures below 34 degrees C. In the remaining 11 cases, a reference laboratory identified irregular RBC antibodies in 9 samples (seven anti-M and two anti-P) and subgroups in 2 samples (one A1 and one A2). As for D typing, 2 weak D+ samples missed by the automated system gave negative results but showed weak-positive reactions in the IAT. The Immucor Galileo System is reliable and well suited for ABO and D blood grouping, although several factors can cause discrepancies in ABO/D typing with a fully automated system. It is suggested that standardization of sample collection may improve the performance of the fully automated system.
Integrating Test-Form Formatting into Automated Test Assembly
ERIC Educational Resources Information Center
Diao, Qi; van der Linden, Wim J.
2013-01-01
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Wire-Guide Manipulator For Automated Welding
NASA Technical Reports Server (NTRS)
Morris, Tim; White, Kevin; Gordon, Steve; Emerich, Dave; Richardson, Dave; Faulkner, Mike; Stafford, Dave; Mccutcheon, Kim; Neal, Ken; Milly, Pete
1994-01-01
Compact motor drive positions guide for welding filler wire. Drive part of automated wire feeder in partly or fully automated welding system. Drive unit contains three parallel subunits. Rotations of lead screws in three subunits coordinated to obtain desired motions in three degrees of freedom. Suitable for both variable-polarity plasma arc welding and gas/tungsten arc welding.
ERIC Educational Resources Information Center
Sheehan, Kathleen M.
2015-01-01
The "TextEvaluator"® text analysis tool is a fully automated text complexity evaluation tool designed to help teachers, curriculum specialists, textbook publishers, and test developers select texts that are consistent with the text complexity guidelines specified in the Common Core State Standards. This paper documents the procedure used…
Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J
2017-08-01
Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
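The evaluation metric reported above, percent volume difference between an automated segmentation and the reference standard, can be sketched as follows; the masks and voxel size are made-up examples, not the study's data:

```python
# Sketch of the percent total kidney volume (TKV) difference metric.
# Masks are boolean voxel arrays; voxel_ml is the volume of one voxel in mL.
import numpy as np

def percent_volume_difference(auto_mask, ref_mask, voxel_ml):
    """Signed percent difference in segmented volume, relative to reference."""
    tkv_auto = auto_mask.sum() * voxel_ml
    tkv_ref = ref_mask.sum() * voxel_ml
    return 100.0 * (tkv_auto - tkv_ref) / tkv_ref

ref = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True            # 216-voxel reference "kidney"
auto = ref.copy()
auto[2, 2:8, 2:8] = False            # under-segment one 36-voxel slice
print(round(percent_volume_difference(auto, ref, voxel_ml=0.001), 1))
```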
Systematic review automation technologies
2014-01-01
Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends toward the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128
Xenon International Automated Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-08-05
The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL-designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.
Development of Fully Automated Low-Cost Immunoassay System for Research Applications.
Wang, Guochun; Das, Champak; Ledden, Bradley; Sun, Qian; Nguyen, Chien
2017-10-01
Enzyme-linked immunosorbent assay (ELISA) automation for routine operation in a small research environment would be very attractive. A portable, fully automated, low-cost immunoassay system was designed, developed, and evaluated with several protein analytes. It features disposable capillary columns as the reaction sites and uses real-time calibration for improved accuracy. It reduces the overall assay time to less than 75 min and can be easily adapted to new testing targets. The running cost is extremely low due to automation and reduced material requirements. Details about system configuration, component selection, disposable fabrication, system assembly, and operation are reported. The performance of the system was initially established with a rabbit immunoglobulin G (IgG) assay, and an example of assay adaptation with an interleukin 6 (IL6) assay is shown. This system is ideal for research use, but could work for broader testing applications with further optimization.
Fully printable, strain-engineered electronic wrap for customizable soft electronics.
Byun, Junghwan; Lee, Byeongmoon; Oh, Eunho; Kim, Hyunjong; Kim, Sangwoo; Lee, Seunghwan; Hong, Yongtaek
2017-03-24
Rapid growth of stretchable electronics stimulates broad uses in multidisciplinary fields as well as industrial applications. However, existing technologies are unsuitable for implementing versatile applications involving adaptable system design and functions in a cost/time-effective way because of vacuum-conditioned, lithographically-predefined processes. Here, we present a methodology for a fully printable, strain-engineered electronic wrap as a universal strategy which makes it more feasible to implement various stretchable electronic systems with customizable layouts and functions. The key aspects involve inkjet-printed rigid island (PRI)-based stretchable platform technology and corresponding printing-based automated electronic functionalization methodology, the combination of which provides fully printed, customized layouts of stretchable electronic systems with simplified process. Specifically, well-controlled contact line pinning effect of printed polymer solution enables the formation of PRIs with tunable thickness; and surface strain analysis on those PRIs leads to the optimized stability and device-to-island fill factor of strain-engineered electronic wraps. Moreover, core techniques of image-based automated pinpointing, surface-mountable device based electronic functionalizing, and one-step interconnection networking of PRIs enable customized circuit design and adaptable functionalities. To exhibit the universality of our approach, multiple types of practical applications ranging from self-computable digital logics to display and sensor system are demonstrated on skin in a customized form.
PMID:28338055
ProDeGe: A computational protocol for fully automated decontamination of genomes
Tennessen, Kristin; Andersen, Evan; Clingenpeel, Scott; ...
2015-06-09
Single amplified genomes and genomes assembled from metagenomes have enabled the exploration of uncultured microorganisms at an unprecedented scale. However, both these types of products are plagued by contamination. Since these genomes are now being generated in a high-throughput manner and sequences from them are propagating into public databases to drive novel scientific discoveries, rigorous quality controls and decontamination protocols are urgently needed. Here, we present ProDeGe (Protocol for fully automated Decontamination of Genomes), the first computational protocol for fully automated decontamination of draft genomes. ProDeGe classifies sequences into two classes—clean and contaminant—using a combination of homology- and feature-based methodologies. On average, 84% of sequence from the non-target organism is removed from the data set (specificity) and 84% of the sequence from the target organism is retained (sensitivity). Lastly, the procedure operates successfully at a rate of ~0.30 CPU core hours per megabase of sequence and can be applied to any type of genome sequence.
Kesner, Adam Leon; Kuntner, Claudia
2010-10-01
Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-driven algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of respiratory signal from PET data in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior in the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, respiratory signal was extracted from a collection of 22 clinical FDG-PET scans using this method, and compared to signal derived from several other software-based methods as well as a signal derived from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real-time while the accuracy surpassed the most accurate of the previously presented algorithms.
PET data inherently contains information about patient motion; information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data in potentially real-time and in a fully automated manner. This signal correlates well with hardware based signal for a large percentage of scans, and avoids the efforts and complications associated with hardware. The proposed method to extract a respiratory signal can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
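The rebinning-and-analysis idea described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the bin size, the PCA-style signal extraction, and all names are assumptions, not the authors' implementation. It rebins a dynamic sinogram into coarse spatial bins, takes the dominant temporal mode of the resulting time-activity curves as a 1D surrogate respiratory trace, and scores it against a reference trace by absolute Pearson correlation, mirroring the paper's hardware-versus-software comparison.

```python
import numpy as np

def extract_respiratory_trace(sinogram, bin_factor=8):
    """Sketch of a data-driven respiratory trace: rebin each time frame
    into coarse spatial bins, then take the first principal component of
    the bins' time-activity curves as a 1D surrogate signal.
    `sinogram` is a (time, radial, angular) array; names are illustrative."""
    t, r, a = sinogram.shape
    rb, ab = r // bin_factor, a // bin_factor
    # Rebin: average over bin_factor x bin_factor spatial blocks
    coarse = sinogram[:, :rb * bin_factor, :ab * bin_factor]
    coarse = coarse.reshape(t, rb, bin_factor, ab, bin_factor).mean(axis=(2, 4))
    curves = coarse.reshape(t, -1)          # one time-activity curve per bin
    curves = curves - curves.mean(axis=0)   # centre each curve
    # First left singular vector = dominant shared temporal mode
    u, s, vt = np.linalg.svd(curves, full_matrices=False)
    return u[:, 0] * s[0]

def trace_correlation(software, hardware):
    """Absolute Pearson correlation: the sign of a PCA trace is arbitrary."""
    return abs(np.corrcoef(software, hardware)[0, 1])
```

On a synthetic sinogram modulated by a known breathing waveform, the extracted trace correlates strongly with that waveform, which is the kind of agreement the study quantified against hardware traces.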
High-throughput mouse genotyping using robotics automation.
Linask, Kaari L; Lo, Cecilia W
2005-02-01
The use of mouse models is rapidly expanding in biomedical research. This has dictated the need for the rapid genotyping of mutant mouse colonies for more efficient utilization of animal holding space. We have established a high-throughput protocol for mouse genotyping using two robotics workstations: a liquid-handling robot to assemble PCR and a microfluidics electrophoresis robot for PCR product analysis. This dual-robotics setup incurs lower start-up costs than a fully automated system while still minimizing human intervention. Essential to this automation scheme is the construction of a database containing customized scripts for programming the robotics workstations. Using these scripts and the robotics systems, multiple combinations of genotyping reactions can be assembled simultaneously, allowing even complex genotyping data to be generated rapidly with consistency and accuracy. A detailed protocol, database, scripts, and additional background information are available at http://dir.nhlbi.nih.gov/labs/ldb-chd/autogene/.
Automated Analysis of Stateflow Models
NASA Technical Reports Server (NTRS)
Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier
2017-01-01
Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a framework for fully automated safety verification of Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.
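Once a Stateflow model has been compiled to a flat state machine, deciding a safety property reduces, in the simplest explicit-state case, to checking that no unsafe state is reachable. The toy sketch below uses a plain breadth-first search over an explicit transition map; the paper's engine is logic-based, so this encoding is an assumption made purely for illustration.

```python
from collections import deque

def safe(initial, transitions, bad):
    """BFS reachability: the safety property holds iff no state in `bad`
    is reachable from `initial`. `transitions` maps a state to an
    iterable of successor states. A toy stand-in for a logic-based engine."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in bad:
            return False          # counterexample found: property violated
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True                   # exhausted reachable states: property holds
```

For example, with transitions `{'idle': ['run'], 'run': ['idle', 'halt'], 'halt': []}`, the property "never reach `error`" holds, while "never reach `halt`" does not.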
Localization-based super-resolution imaging meets high-content screening.
Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste
2017-12-01
Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.
Automation in Clinical Microbiology
Ledeboer, Nathan A.
2013-01-01
Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547
The Objective Identification and Quantification of Interstitial Lung Abnormalities in Smokers.
Ash, Samuel Y; Harmouche, Rola; Ross, James C; Diaz, Alejandro A; Hunninghake, Gary M; Putman, Rachel K; Onieva, Jorge; Martinez, Fernando J; Choi, Augustine M; Lynch, David A; Hatabu, Hiroto; Rosas, Ivan O; Estepar, Raul San Jose; Washko, George R
2017-08-01
Previous investigation suggests that visually detected interstitial changes in the lung parenchyma of smokers are highly clinically relevant and predict outcomes, including death. Visual subjective analysis to detect these changes is time-consuming, insensitive to subtle changes, and requires training to enhance reproducibility. Objective detection of such changes could provide a method of disease identification without these limitations. The goal of this study was to develop and test a fully automated image processing tool to objectively identify radiographic features associated with interstitial abnormalities in the computed tomography scans of a large cohort of smokers. An automated tool that uses local histogram analysis combined with distance from the pleural surface was used to detect radiographic features consistent with interstitial lung abnormalities in computed tomography scans from 2257 individuals from the Genetic Epidemiology of COPD study, a longitudinal observational study of smokers. The sensitivity and specificity of this tool was determined based on its ability to detect the visually identified presence of these abnormalities. The tool had a sensitivity of 87.8% and a specificity of 57.5% for the detection of interstitial lung abnormalities, with a c-statistic of 0.82, and was 100% sensitive and 56.7% specific for the detection of the visual subtype of interstitial abnormalities called fibrotic parenchymal abnormalities, with a c-statistic of 0.89. In smokers, a fully automated image processing tool is able to identify those individuals who have interstitial lung abnormalities with moderate sensitivity and specificity. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
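The operating characteristics reported above (sensitivity, specificity, c-statistic) can be reproduced from a labeled confusion table. A generic sketch, not the study's code; the labels and scores below are hypothetical inputs:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    y_true / y_pred are iterables of 0 (absent) / 1 (abnormality present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def c_statistic(y_true, scores):
    """The c-statistic (AUC) is the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative one,
    counting ties as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly ranked score set gives a c-statistic of 1.0; chance-level ranking gives 0.5.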
Towards fully automated structure-based function prediction in structural genomics: a case study.
Watson, James D; Sanderson, Steve; Ezersky, Alexandra; Savchenko, Alexei; Edwards, Aled; Orengo, Christine; Joachimiak, Andrzej; Laskowski, Roman A; Thornton, Janet M
2007-04-13
As the global Structural Genomics projects have picked up pace, the number of structures annotated in the Protein Data Bank as hypothetical protein or unknown function has grown significantly. A major challenge now involves the development of computational methods to assign functions to these proteins accurately and automatically. As part of the Midwest Center for Structural Genomics (MCSG) we have developed a fully automated functional analysis server, ProFunc, which performs a battery of analyses on a submitted structure. The analyses combine a number of sequence-based and structure-based methods to identify functional clues. After the first stage of the Protein Structure Initiative (PSI), we review the success of the pipeline and the importance of structure-based function prediction. As a dataset, we have chosen all structures solved by the MCSG during the 5 years of the first PSI. Our analysis suggests that two of the structure-based methods are particularly successful and provide examples of local similarity that is difficult to identify using current sequence-based methods. No one method is successful in all cases, so, through the use of a number of complementary sequence and structural approaches, the ProFunc server increases the chances that at least one method will find a significant hit that can help elucidate function. Manual assessment of the results is a time-consuming process and subject to individual interpretation and human error. We present a method based on the Gene Ontology (GO) schema using GO-slims that can allow the automated assessment of hits with a success rate approaching that of expert manual assessment.
Interim Assessment of the VAL Automated Guideway Transit System.
DOT National Transportation Integrated Search
1981-11-01
This report describes an interim assessment of the VAL (Vehicules Automatiques Legers or Light Automated Vehicle) AGT system which is currently under construction in Lille, France, and which is to become fully operational in December 1983. This repor...
De Diego, Nuria; Fürst, Tomáš; Humplík, Jan F; Ugena, Lydia; Podlešáková, Kateřina; Spíchal, Lukáš
2017-01-01
High-throughput plant phenotyping platforms provide new possibilities for automated, fast scoring of several plant growth and development traits, followed over time using non-invasive sensors. Using Arabidopsis as a model offers important advantages for high-throughput screening, with the opportunity to extrapolate the results obtained to other crops of commercial interest. In this study we describe the development of a highly reproducible high-throughput Arabidopsis in vitro bioassay established using our OloPhen platform, suitable for analysis of rosette growth in multi-well plates. This method was successfully validated on the example of multivariate analysis of Arabidopsis rosette growth at different salt concentrations and its interaction with varying nutritional composition of the growth medium. Several traits, such as changes in rosette area, relative growth rate, survival rate, and homogeneity of the population, are scored using fully automated RGB imaging and subsequent image analysis. The assay can be used for fast screening of the biological activity of chemical libraries, phenotypes of transgenic or recombinant inbred lines, or to search for potential quantitative trait loci. It is especially valuable for selecting genotypes or growth conditions that improve plant stress tolerance.
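Relative growth rate, one of the traits scored above, is conventionally computed from an area time series as RGR = (ln A2 − ln A1)/(t2 − t1). A minimal sketch assuming projected rosette areas from the RGB images; the function and variable names are illustrative, not the platform's API:

```python
import math

def relative_growth_rate(areas, times):
    """Classical RGR between consecutive time points:
    RGR = (ln A2 - ln A1) / (t2 - t1), in units of 1/time.
    `areas` are projected rosette areas (e.g. mm^2 from RGB imaging)."""
    return [
        (math.log(a2) - math.log(a1)) / (t2 - t1)
        for (a1, t1), (a2, t2) in zip(zip(areas, times),
                                      zip(areas[1:], times[1:]))
    ]
```

An exponentially growing rosette yields a constant RGR, which is why the log transform is used.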
Shenouda, Ninette; Proudfoot, Nicole A; Currie, Katharine D; Timmons, Brian W; MacDonald, Maureen J
2018-05-01
Many commercial ultrasound systems are now including automated analysis packages for the determination of carotid intima-media thickness (cIMT); however, details regarding their algorithms and methodology are not published. Few studies have compared their accuracy and reliability with previously established automated software, and those that have were in asymptomatic adults. Therefore, this study compared cIMT measures from a fully automated ultrasound edge-tracking software (EchoPAC PC, Version 110.0.2; GE Medical Systems, Horten, Norway) to an established semi-automated reference software (Artery Measurement System (AMS) II, Version 1.141; Gothenburg, Sweden) in 30 healthy preschool children (ages 3-5 years) and 27 adults with coronary artery disease (CAD; ages 48-81 years). For both groups, Bland-Altman plots revealed good agreement with a negligible mean cIMT difference of -0·03 mm. Software differences were statistically, but not clinically, significant for preschool images (P = 0·001) and were not significant for CAD images (P = 0·09). Intra- and interoperator repeatability was high and comparable between software for preschool images (ICC, 0·90-0·96; CV, 1·3-2·5%), but slightly higher with the automated ultrasound than the semi-automated reference software for CAD images (ICC, 0·98-0·99; CV, 1·4-2·0% versus ICC, 0·84-0·89; CV, 5·6-6·8%). These findings suggest that the automated ultrasound software produces valid cIMT values in healthy preschool children and adults with CAD. Automated ultrasound software may be useful for ensuring consistency among multisite research initiatives or large cohort studies involving repeated cIMT measures, particularly in adults with documented CAD. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
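Bland-Altman agreement of the kind reported above is computed from paired differences: the bias is the mean difference between methods, and the 95% limits of agreement are bias ± 1.96 SD of the differences. A generic sketch, not tied to either software package:

```python
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, lower_limit, upper_limit) for paired measurements:
    bias is the mean paired difference; the 95% limits of agreement
    are bias +/- 1.96 standard deviations of the differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Two methods agree well when the bias is clinically negligible and the limits of agreement are narrow, which is the interpretation applied to the −0·03 mm cIMT difference above.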
Beijbom, Oscar; Edmunds, Peter J.; Roelfsema, Chris; Smith, Jennifer; Kline, David I.; Neal, Benjamin P.; Dunlap, Matthew J.; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B. Greg; Kriegman, David
2015-01-01
Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi-automated and fully automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157
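Point-annotation cover estimation itself is simple: each category's cover is the fraction of annotated points assigned to it, whether the assignments come from a human expert or a classifier. A minimal sketch with illustrative category names:

```python
from collections import Counter

def cover_estimates(point_labels):
    """Percent cover per benthic category from point annotations:
    cover(c) = (# points labelled c) / (total points)."""
    counts = Counter(point_labels)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}
```

Comparing such estimates between annotators (or between a human and the automated classifier) is what the inter-annotator error figures above quantify.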
ERIC Educational Resources Information Center
Gbadamosi, Belau Olatunde
2011-01-01
The paper examines the level of library automation and virtual library development in four academic libraries. A validated questionnaire was used to capture the responses from academic librarians of the libraries under study. The paper discovers that none of the four academic libraries is fully automated. The libraries make use of librarians with…
Joslin, John; Gilligan, James; Anderson, Paul; Garcia, Catherine; Sharif, Orzala; Hampton, Janice; Cohen, Steven; King, Miranda; Zhou, Bin; Jiang, Shumei; Trussell, Christopher; Dunn, Robert; Fathman, John W; Snead, Jennifer L; Boitano, Anthony E; Nguyen, Tommy; Conner, Michael; Cooke, Mike; Harris, Jennifer; Ainscow, Ed; Zhou, Yingyao; Shaw, Chris; Sipes, Dan; Mainquist, James; Lesley, Scott
2018-05-01
The goal of high-throughput screening is to enable screening of compound libraries in an automated manner to identify quality starting points for optimization. This often involves screening a large diversity of compounds in an assay that preserves a connection to the disease pathology. Phenotypic screening is a powerful tool for drug identification, in that assays can be run without prior understanding of the target and with primary cells that closely mimic the therapeutic setting. Advanced automation and high-content imaging have enabled many complex assays, but these are still relatively slow and low throughput. To address this limitation, we have developed an automated workflow that is dedicated to processing complex phenotypic assays for flow cytometry. The system can achieve a throughput of 50,000 wells per day, resulting in a fully automated platform that enables robust phenotypic drug discovery. Over the past 5 years, this screening system has been used for a variety of drug discovery programs, across many disease areas, with many molecules advancing quickly into preclinical development and into the clinic. This report will highlight a diversity of approaches that automated flow cytometry has enabled for phenotypic drug discovery.
Lloyd, T L; Perschy, T B; Gooding, A E; Tomlinson, J J
1992-01-01
A fully automated assay for the analysis of ranitidine in serum and plasma, with and without an internal standard, was validated. It utilizes robotic solid phase extraction with on-line high performance liquid chromatographic (HPLC) analysis. The ruggedness of the assay was demonstrated over a three-year period. A Zymark Py Technology II robotic system was used for serial processing from initial aspiration of samples from original collection containers, to final direct injection onto the on-line HPLC system. Automated serial processing with on-line analysis provided uniform sample history and increased productivity by freeing the chemist to analyse data and perform other tasks. The solid phase extraction efficiency was 94% throughout the assay range of 10-250 ng/mL. The coefficients of variation for within- and between-day quality control samples ranged from 1 to 6% and 1 to 5%, respectively. Mean accuracy for between-day standards and quality control results ranged from 97 to 102% of the respective theoretical concentrations.
Patel, Darshan C; Lyu, Yaqi Fara; Gandarilla, Jorge; Doherty, Steve
2018-04-03
In-process sampling and analysis is an important aspect of monitoring kinetic profiles and impurity formation or rejection, both in development and during commercial manufacturing. In pharmaceutical process development, the technology of choice for a substantial portion of this analysis is high-performance liquid chromatography (HPLC). Traditionally, the sample extraction and preparation for reaction characterization have been performed manually. This can be time consuming, laborious, and impractical for long processes. Depending on the complexity of the sample preparation, there can be variability introduced by different analysts, and in some cases, the integrity of the sample can be compromised during handling. While there are commercial instruments available for on-line monitoring with HPLC, they lack capabilities in many key areas. Some do not provide integration of the sampling and analysis, while others afford limited flexibility in sample preparation. The current offerings provide a limited number of unit operations available for sample processing and no option for workflow customizability. This work describes development of a microfluidic automated program (MAP) which fully automates the sample extraction, manipulation, and on-line LC analysis. The flexible system is controlled using an intuitive Microsoft Excel based user interface. The autonomous system is capable of unattended reaction monitoring that allows flexible unit operations and workflow customization to enable complex operations and on-line sample preparation. The automated system is shown to offer advantages over manual approaches in key areas while providing consistent and reproducible in-process data. Copyright © 2017 Elsevier B.V. All rights reserved.
Czerniecki, Stefan M; Cruz, Nelly M; Harder, Jennifer L; Menon, Rajasree; Annis, James; Otto, Edgar A; Gulieva, Ramila E; Islas, Laura V; Kim, Yong Kyun; Tran, Linh M; Martins, Timothy J; Pippin, Jeffrey W; Fu, Hongxia; Kretzler, Matthias; Shankland, Stuart J; Himmelfarb, Jonathan; Moon, Randall T; Paragas, Neal; Freedman, Benjamin S
2018-05-15
Organoids derived from human pluripotent stem cells are a potentially powerful tool for high-throughput screening (HTS), but the complexity of organoid cultures poses a significant challenge for miniaturization and automation. Here, we present a fully automated, HTS-compatible platform for enhanced differentiation and phenotyping of human kidney organoids. The entire 21-day protocol, from plating to differentiation to analysis, can be performed automatically by liquid-handling robots, or alternatively by manual pipetting. High-content imaging analysis reveals both dose-dependent and threshold effects during organoid differentiation. Immunofluorescence and single-cell RNA sequencing identify previously undetected parietal, interstitial, and partially differentiated compartments within organoids and define conditions that greatly expand the vascular endothelium. Chemical modulation of toxicity and disease phenotypes can be quantified for safety and efficacy prediction. Screening in gene-edited organoids in this system reveals an unexpected role for myosin in polycystic kidney disease. Organoids in HTS formats thus establish an attractive platform for multidimensional phenotypic screening. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertesz, Vilmos; Van Berkel, Gary J
A fully automated liquid extraction-based surface sampling system utilizing a commercially available autosampler coupled to high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) detection is reported. Discrete spots selected for droplet-based sampling and automated sample queue generation for both the autosampler and MS were enabled by using in-house developed software. In addition, co-registration of spatially resolved sampling position and HPLC-MS information to generate heatmaps of compounds monitored for subsequent data analysis was also available in the software. The system was evaluated with whole-body thin tissue sections from a propranolol-dosed rat. The hands-free operation of the system was demonstrated by creating heatmaps of the parent drug and its hydroxypropranolol glucuronide metabolites with 1 mm resolution in the areas of interest. The sample throughput was approximately 5 min/sample, defined by the time needed for chromatographic separation. The spatial distributions of both the drug and its metabolites were consistent with previous studies employing other liquid extraction-based surface sampling methodologies.
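The co-registration step described above can be sketched as mapping each sampled spot's position and MS response onto a regular grid. The sketch assumes (x, y, intensity) triples on a regular 1 mm pitch; the grid layout and all names are assumptions for illustration, not the reported software.

```python
import numpy as np

def build_heatmap(samples, nx, ny, spacing=1.0):
    """Place each sampled spot's MS response on a regular grid.
    `samples` is an iterable of (x_mm, y_mm, intensity); `spacing` is
    the sampling pitch in mm. Unsampled cells remain NaN so they can
    be masked when the heatmap is rendered."""
    grid = np.full((ny, nx), np.nan)
    for x, y, intensity in samples:
        grid[int(round(y / spacing)), int(round(x / spacing))] = intensity
    return grid
```

One grid per monitored compound (parent drug, each metabolite) yields the per-compound heatmaps described in the abstract.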
Puścian, Alicja; Łęski, Szymon; Kasprowicz, Grzegorz; Winiarski, Maciej; Borowska, Joanna; Nikolaev, Tomasz; Boguszewski, Paweł M; Lipp, Hans-Peter; Knapska, Ewelina
2016-01-01
Eco-HAB is an open source, RFID-based system for automated measurement and analysis of social preference and in-cohort sociability in mice. The system closely follows murine ethology. It requires no contact between a human experimenter and tested animals, overcoming the confounding factors that lead to irreproducible assessment of murine social behavior between laboratories. In Eco-HAB, group-housed animals live in a spacious, four-compartment apparatus with shadowed areas and narrow tunnels, resembling natural burrows. Eco-HAB allows for assessment of the tendency of mice to voluntarily spend time together in ethologically relevant mouse group sizes. Custom-made software for automated tracking, data extraction, and analysis enables quick evaluation of social impairments. The developed protocols and standardized behavioral measures demonstrate high replicability. Unlike classic three-chambered sociability tests, Eco-HAB provides measurements of spontaneous, ecologically relevant social behaviors in group-housed animals. Results are obtained faster, with less manpower, and without confounding factors. DOI: http://dx.doi.org/10.7554/eLife.19532.001 PMID:27731798
Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.
Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P
2017-10-01
In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of these datasets, often captured robotically, precludes manual inspection, hence the motivation for a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.
ST-Segment Analysis Using Wireless Technology in Acute Myocardial Infarction (STAT-MI) trial.
Dhruva, Vivek N; Abdelhadi, Samir I; Anis, Ather; Gluckman, William; Hom, David; Dougan, William; Kaluski, Edo; Haider, Bunyad; Klapholz, Marc
2007-08-07
Our goal was to examine the effects of implementing a fully automated wireless network to reduce door-to-intervention times (D2I) in ST-segment elevation myocardial infarction (STEMI). Wireless technologies used to transmit prehospital electrocardiograms (ECGs) have helped to decrease D2I times but have unrealized potential. A fully automated wireless network that facilitates simultaneous 12-lead ECG transmission from emergency medical services (EMS) personnel in the field to the emergency department (ED) and offsite cardiologists via smartphones was developed. The system is composed of preconfigured Bluetooth devices, preprogrammed receiving/transmitting stations, dedicated e-mail servers, and smartphones. The network facilitates direct communication between offsite cardiologists and EMS personnel, allowing for patient triage directly to the cardiac catheterization laboratory from the field. Demographic, laboratory, and time interval data were prospectively collected and compared with calendar year 2005 data. From June to December 2006, 80 ECGs with suspected STEMI were transmitted via the network. Twenty patients with ECGs consistent with STEMI were triaged to the catheterization laboratory. Improvement was seen in mean door-to-cardiologist notification (-14.6 vs. 61.4 min, p < 0.001), door-to-arterial access (47.6 vs. 108.1 min, p < 0.001), time-to-first angiographic injection (52.8 vs. 119.2 min, p < 0.001), and D2I times (80.1 vs. 145.6 min, p < 0.001) compared with 2005 data. A fully automated wireless network that transmits ECGs simultaneously to the ED and offsite cardiologists for the early evaluation and triage of patients with suspected STEMI can decrease D2I times to <90 min and has the potential to be broadly applied in clinical practice.
Automated tetraploid genotype calling by hierarchical clustering
USDA-ARS?s Scientific Manuscript database
SNP arrays are transforming breeding and genetics research for autotetraploids. To fully utilize these arrays, however, the relationship between signal intensity and allele dosage must be inferred independently for each marker. We developed an improved computational method to automate this process, ...
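The truncated abstract names hierarchical clustering of signal intensities per marker; as a hedged stand-in, here is a simple 1-D k-means sketch that clusters normalized allele-intensity ratios into the five possible tetraploid dosages (0-4), with cluster centers seeded at the expected ratios. The data and seeding are illustrative, not the manuscript's actual algorithm:

```python
def call_dosages(ratios, k=5, iters=50):
    """Cluster allele-intensity ratios (theta in [0, 1]) into k dosage
    classes via simple 1-D k-means; returns one dosage call per sample."""
    centers = [i / (k - 1) for i in range(k)]  # seeds: 0, 0.25, 0.5, 0.75, 1
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for r in ratios:
            j = min(range(k), key=lambda i: abs(r - centers[i]))
            groups[j].append(r)
        # Move each center to its group mean; keep empty clusters in place
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: abs(r - centers[i])) for r in ratios]

ratios = [0.02, 0.05, 0.26, 0.24, 0.51, 0.49, 0.74, 0.98]
print(call_dosages(ratios))  # [0, 0, 1, 1, 2, 2, 3, 4]
```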
Buck, Thomas; Hwang, Shawn M; Plicht, Björn; Mucci, Ronald A; Hunold, Peter; Erbel, Raimund; Levine, Robert A
2008-06-01
Cardiac ultrasound imaging systems are limited in the noninvasive quantification of valvular regurgitation due to indirect measurements and inaccurate hemodynamic assumptions. We recently demonstrated that the principle of integration of backscattered acoustic Doppler power times velocity can be used for flow quantification in valvular regurgitation directly at the vena contracta of a regurgitant flow jet. We now aimed to implement automated Doppler power flow analysis software on a standard cardiac ultrasound system utilizing novel matrix-array transducer technology, with a detailed description of the system requirements, components and software. This system, based on a 3.5 MHz matrix-array cardiac ultrasound scanner (Sonos 5500, Philips Medical Systems), was validated by means of comprehensive experimental signal generator trials, in vitro flow phantom trials and in vivo testing in 48 patients with mitral regurgitation of different severity and etiology, using magnetic resonance imaging (MRI) for reference. All measurements displayed good correlation to the reference values, indicating successful implementation of automated Doppler power flow analysis on a matrix-array ultrasound imaging system. Systematic underestimation of effective regurgitant orifice areas >0.65 cm(2) and volumes >40 ml was found due to the currently limited Doppler beam width, which could be readily overcome by the use of new-generation 2D matrix-array technology. Automated flow quantification in valvular heart disease based on backscattered Doppler power can thus be fully implemented on board a routinely used matrix-array ultrasound imaging system. Such automated Doppler power flow analysis quantifies valvular regurgitant flow directly, noninvasively, and user-independently, overcoming the practical limitations of current techniques.
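The power-times-velocity principle can be illustrated with a toy calculation: if backscattered Doppler power is taken as proportional to the cross-sectional area of moving blood in each sample volume, flow is the power-weighted sum of velocities. This is a simplified sketch under that assumption, with hypothetical values and an assumed power-to-area calibration constant:

```python
def doppler_power_flow(powers, velocities, area_per_unit_power):
    """Flow rate as the sum of backscattered-power-weighted velocities:
    each Doppler sample contributes (area ~ power) * velocity."""
    return sum(p * v for p, v in zip(powers, velocities)) * area_per_unit_power

# Hypothetical vena contracta samples: relative power and velocity (cm/s),
# with an assumed calibration of 0.01 cm^2 per unit power
powers = [0.2, 0.5, 0.8, 0.5, 0.2]
velocities = [120, 180, 200, 180, 120]
print(doppler_power_flow(powers, velocities, area_per_unit_power=0.01))  # cm^3/s
```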
The Environmental Control and Life Support System (ECLSS) advanced automation project
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, Ray
1990-01-01
The objective of the environmental control and life support system (ECLSS) Advanced Automation Project is to influence the design of the initial and evolutionary Space Station Freedom Program (SSFP) ECLSS toward a man-made closed environment in which minimal flight and ground manpower is needed. Another objective is capturing ECLSS design and development knowledge for future missions. Our approach has been to (1) analyze the SSFP ECLSS, (2) envision as our goal a fully automated evolutionary environmental control system - an augmentation of the baseline, and (3) document the advanced software systems, hooks, and scars which will be necessary to achieve this goal. From this analysis, prototype software is being developed, and will be tested using air and water recovery simulations and hardware subsystems. In addition, the advanced software is being designed, developed, and tested using an automation software management plan and lifecycle tools. Automated knowledge acquisition, engineering, verification and testing tools are being used to develop the software. In this way, we can capture ECLSS development knowledge for future use, develop more robust and complex software, provide feedback to the knowledge-based system tool community, and ensure proper visibility of our efforts.
NASA Astrophysics Data System (ADS)
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey
2012-12-01
This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
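A difference-of-Gaussians filter, as used above for myocardial contour delineation, can be sketched in 1-D: subtracting a wide Gaussian kernel from a narrow one yields a band-pass kernel whose response exhibits a zero-crossing at edges. A minimal pure-Python illustration (the sigmas, radius, and test signal are arbitrary choices, not the paper's parameters):

```python
import math

def gauss_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def dog_kernel(sigma1, sigma2, radius):
    """Difference of Gaussians: a band-pass kernel sensitive to structure
    at scales between sigma1 (narrow) and sigma2 (wide)."""
    g1, g2 = gauss_kernel(sigma1, radius), gauss_kernel(sigma2, radius)
    return [a - b for a, b in zip(g1, g2)]

def convolve(signal, kernel):
    """Direct convolution with clamped (replicated) borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += signal[idx] * kv
        out.append(acc)
    return out

# A step edge between indices 9 and 10: the DoG response changes sign there
signal = [0.0] * 10 + [1.0] * 10
resp = convolve(signal, dog_kernel(1.0, 2.0, 5))
print(resp[9] < 0 < resp[10])  # sign change brackets the edge
```

In the flat regions the response is near zero because the kernel sums to zero, which is what makes the filter useful for isolating boundaries before snake-based refinement.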
Discovery informatics in biological and biomedical sciences: research challenges and opportunities.
Honavar, Vasant
2015-01-01
New discoveries in biological, biomedical and health sciences are increasingly being driven by our ability to acquire, share, integrate and analyze, and construct and simulate predictive models of biological systems. While much attention has focused on automating routine aspects of management and analysis of "big data", realizing the full potential of "big data" to accelerate discovery calls for automating many other aspects of the scientific process that have so far largely resisted automation: identifying gaps in the current state of knowledge; generating and prioritizing questions; designing studies; designing, prioritizing, planning, and executing experiments; interpreting results; forming hypotheses; drawing conclusions; replicating studies; validating claims; documenting studies; communicating results; reviewing results; and integrating results into the larger body of knowledge in a discipline. Against this background, the PSB workshop on Discovery Informatics in Biological and Biomedical Sciences explores the opportunities and challenges of automating discovery or assisting humans in discovery through advances in (i) understanding, formalization, and information-processing accounts of the entire scientific process; (ii) design, development, and evaluation of the computational artifacts (representations, processes) that embody such understanding; and (iii) application of the resulting artifacts and systems to advance science (by augmenting individual or collective human efforts, or by fully automating science).
Takemura, Hiroyuki; Ai, Tomohiko; Kimura, Konobu; Nagasaka, Kaori; Takahashi, Toshihiro; Tsuchiya, Koji; Yang, Haeun; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Tabe, Yoko; Ohsaka, Akimichi
2018-01-01
The XN series automated hematology analyzer has been equipped with a body fluid (BF) mode to count and differentiate leukocytes in BF samples including cerebrospinal fluid (CSF). However, its diagnostic accuracy is not reliable for CSF samples with low cell concentration at the border between normal and pathologic level. To overcome this limitation, a new flow cytometry-based technology, termed "high sensitive analysis (hsA) mode," has been developed. In addition, the XN series analyzer has been equipped with the automated digital cell imaging analyzer DI-60 to classify cell morphology including normal leukocytes differential and abnormal malignant cells detection. Using various BF samples, we evaluated the performance of the XN-hsA mode and DI-60 compared to manual microscopic examination. The reproducibility of the XN-hsA mode showed good results in samples with low cell densities (coefficient of variation; % CV: 7.8% for 6 cells/μL). The linearity of the XN-hsA mode was established up to 938 cells/μL. The cell number obtained using the XN-hsA mode correlated highly with the corresponding microscopic examination. Good correlation was also observed between the DI-60 analyses and manual microscopic classification for all leukocyte types, except monocytes. In conclusion, the combined use of cell counting with the XN-hsA mode and automated morphological analyses using the DI-60 mode is potentially useful for the automated analysis of BF cells.
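The reproducibility figure reported above is a coefficient of variation; %CV is simply the replicate standard deviation expressed relative to the mean. A minimal sketch with hypothetical replicate counts for a low-density sample (not the study's data):

```python
from statistics import mean, stdev

def percent_cv(counts):
    """Coefficient of variation (%): between-replicate spread relative to the mean."""
    return 100 * stdev(counts) / mean(counts)

# Hypothetical replicate counts of a low-density CSF sample (~6 cells/uL)
replicates = [6, 6, 5, 7, 6, 6, 7, 5]
print(round(percent_cv(replicates), 1))  # 12.6
```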
Development of a Novel and Rapid Fully Automated Genetic Testing System.
Uehara, Masayuki
2016-01-01
We have developed a rapid genetic testing system integrating nucleic acid extraction, purification, amplification, and detection in a single cartridge. The system performs real-time polymerase chain reaction (PCR) after nucleic acid purification in a fully automated manner. RNase P, a housekeeping gene, was purified from human nasal epithelial cells using silica-coated magnetic beads and subjected to real-time PCR using a novel droplet-real-time-PCR machine. The process was completed within 13 min. This system will be widely applicable for research and diagnostic uses.
2010-01-01
Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells led to miscalculation of migration rates by up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures. PMID:20377897
Frizzarin, Rejane M; Maya, Fernando; Estela, José M; Cerdà, Víctor
2016-12-01
A novel fully-automated magnetic stirring-assisted lab-in-syringe analytical procedure has been developed for the fast and efficient dispersive liquid-liquid microextraction (DLLME) of caffeine in coffee beverages. The procedure is based on the microextraction of caffeine with a minute amount of dichloromethane, isolating caffeine from the sample matrix with no further sample pretreatment. Relevant extraction parameters such as the dispersive solvent, the proportion of aqueous/organic phase, pH and flow rates have been carefully evaluated. Caffeine quantification was linear from 2 to 75 mg L(-1), with detection and quantification limits of 0.46 mg L(-1) and 1.54 mg L(-1), respectively. A coefficient of variation (n=8; 5 mg L(-1)) of 2.1% and a sampling rate of 16 h(-1) were obtained. The procedure was satisfactorily applied to the determination of caffeine in brewed, instant and decaf coffee samples, with the results validated using high-performance liquid chromatography. Copyright © 2016 Elsevier Ltd. All rights reserved.
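Detection and quantification limits like those quoted above are conventionally derived from a calibration line; a hedged sketch using ordinary least squares and the common 3.3s/slope and 10s/slope conventions (the calibration points and blank standard deviation below are hypothetical, not the paper's data):

```python
def fit_line(x, y):
    """Ordinary least squares for a calibration line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b  # intercept, slope

def detection_limits(sd_blank, slope):
    """IUPAC-style limits: LOD = 3.3*s/slope, LOQ = 10*s/slope."""
    return 3.3 * sd_blank / slope, 10 * sd_blank / slope

# Hypothetical calibration: caffeine standards (mg/L) vs. detector signal
conc = [2, 10, 25, 50, 75]
signal = [0.21, 1.02, 2.49, 5.01, 7.48]
intercept, slope = fit_line(conc, signal)
lod, loq = detection_limits(sd_blank=0.015, slope=slope)
print(round(lod, 2), round(loq, 2))  # limits in mg/L
```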
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2017-04-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.
Al-Fahdawi, Shumoos; Qahwaji, Rami; Al-Waisy, Alaa S; Ipson, Stanley; Ferdousi, Maryam; Malik, Rayaz A; Brahma, Arun
2018-07-01
Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been manually performed by ophthalmologists using time consuming and highly subjective semi-automatic tools, which require an operator interaction. We developed and applied a fully-automated and real-time system, termed the Corneal Endothelium Analysis System (CEAS) for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. First, a Fast Fourier Transform (FFT) Band-pass filter is applied to reduce noise and enhance the image quality to make the cells more visible. Secondly, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images based on a database consisting of 40 corneal confocal endothelial cell images in terms of segmentation accuracy and obtained clinical features. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). The Pearson correlation coefficient between automated and manual endothelial cell densities is 0.9 (p < 0.0001) and a Bland-Altman plot shows that 95% of the data are between the 2SD agreement lines. 
We demonstrate the effectiveness and robustness of the CEAS system, and the possibility of utilizing it in a real world clinical setting to enable rapid diagnosis and for patient follow-up, with an execution time of only 6 seconds per image. Copyright © 2018 Elsevier B.V. All rights reserved.
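The Pearson correlation reported between automated and manual cell densities is straightforward to compute; a minimal sketch with hypothetical paired densities (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired endothelial cell densities (cells/mm^2)
auto = [2510, 2730, 2390, 2905, 2620]
manual = [2480, 2760, 2350, 2950, 2600]
r = pearson_r(auto, manual)
print(round(r, 3))
```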
Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta
2016-05-01
Toxigenic cyanobacteria are one of the main health risks associated with water resources worldwide, as their toxins can affect humans and fauna exposed via drinking water, aquaculture and recreation. Microscopy monitoring of cyanobacteria in water bodies and massive growth systems is a routine operation for cell abundance and growth estimation. Here we present ACQUA (Automated Cyanobacterial Quantification Algorithm), a new fully automated image analysis method designed for filamentous genera in Bright field microscopy. A pre-processing algorithm has been developed to highlight filaments of interest from background signals due to other phytoplankton and dust. A spline-fitting algorithm has been designed to recombine interrupted and crossing filaments in order to perform accurate morphometric analysis and to extract the surface pattern information of highlighted objects. In addition, 17 specific pattern indicators have been developed and used as input data for a machine-learning algorithm dedicated to the recognition between five widespread toxic or potentially toxic filamentous genera in freshwater: Aphanizomenon, Cylindrospermopsis, Dolichospermum, Limnothrix and Planktothrix. The method was validated using freshwater samples from three Italian volcanic lakes comparing automated vs. manual results. ACQUA proved to be a fast and accurate tool to rapidly assess freshwater quality and to characterize cyanobacterial assemblages in aquatic environments. Copyright © 2016 Elsevier B.V. All rights reserved.
Summers, Ronald M; Baecher, Nicolai; Yao, Jianhua; Liu, Jiamin; Pickhardt, Perry J; Choi, J Richard; Hill, Suvimol
2011-01-01
To show the feasibility of calculating the bone mineral density (BMD) from computed tomographic colonography (CTC) scans using fully automated software. Automated BMD measurement software was developed that measures the BMD of the first and second lumbar vertebrae on computed tomography and calculates the mean of the 2 values to provide a per patient BMD estimate. The software was validated in a reference population of 17 consecutive women who underwent quantitative computed tomography and in a population of 475 women from a consecutive series of asymptomatic patients enrolled in a CTC screening trial conducted at 3 medical centers. The mean (SD) BMD was 133.6 (34.6) mg/mL (95% confidence interval, 130.5-136.7; n = 475). In women aged 42 to 60 years (n = 316) and 61 to 79 years (n = 159), the mean (SD) BMDs were 143.1 (33.5) and 114.7 (28.3) mg/mL, respectively (P < 0.0001). Fully automated BMD measurements were reproducible for a given patient with 95% limits of agreement of -9.79 to 8.46 mg/mL for the mean difference between paired assessments on supine and prone CTC. Osteoporosis screening can be performed simultaneously with screening for colorectal polyps.
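The 95% limits of agreement quoted above for supine/prone pairs follow the standard Bland-Altman construction: mean of the paired differences plus or minus 1.96 standard deviations. A sketch with hypothetical BMD pairs (not the study's data):

```python
from statistics import mean, stdev

def limits_of_agreement(pairs):
    """Bland-Altman 95% limits of agreement for paired measurements,
    e.g. per-patient supine vs. prone BMD estimates."""
    diffs = [a - b for a, b in pairs]
    d, s = mean(diffs), stdev(diffs)
    return d - 1.96 * s, d + 1.96 * s

# Hypothetical paired supine/prone BMD estimates (mg/mL)
pairs = [(131.0, 133.5), (120.2, 118.9), (145.7, 149.0),
         (110.4, 112.1), (138.8, 137.2)]
lo, hi = limits_of_agreement(pairs)
print(round(lo, 2), round(hi, 2))
```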
Automated measurement of cell motility and proliferation
Bahnson, Alfred; Athanassiou, Charalambos; Koebler, Douglas; Qian, Lei; Shun, Tongying; Shields, Donna; Yu, Hui; Wang, Hong; Goff, Julie; Cheng, Tao; Houck, Raymond; Cowsert, Lex
2005-01-01
Background Time-lapse microscopic imaging provides a powerful approach for following changes in cell phenotype over time. Visible responses of whole cells can yield insight into functional changes that underlie physiological processes in health and disease. For example, features of cell motility accompany molecular changes that are central to the immune response, to carcinogenesis and metastasis, to wound healing and tissue regeneration, and to the myriad developmental processes that generate an organism. Previously reported image processing methods for motility analysis required custom viewing devices and manual interactions that may introduce bias, that slow throughput, and that constrain the scope of experiments in terms of the number of treatment variables, time period of observation, replication and statistical options. Here we describe a fully automated system in which images are acquired 24/7 from 384 well plates and are automatically processed to yield high-content motility and morphological data. Results We have applied this technology to study the effects of different extracellular matrix compounds on human osteoblast-like cell lines to explore functional changes that may underlie processes involved in bone formation and maintenance. We show dose-response and kinetic data for induction of increased motility by laminin and collagen type I without significant effects on growth rate. Differential motility response was evident within 4 hours of plating cells; long-term responses differed depending upon cell type and surface coating. Average velocities were increased approximately 0.1 um/min by ten-fold increases in laminin coating concentration in some cases. Comparison with manual tracking demonstrated the accuracy of the automated method and highlighted the comparative imprecision of human tracking for analysis of cell motility data. 
Quality statistics are reported that associate with stage noise, interference by non-cell objects, and uncertainty in the outlining and positioning of cells by automated image analysis. Exponential growth, as monitored by total cell area, did not linearly correlate with absolute cell number, but proved valuable for selection of reliable tracking data and for disclosing between-experiment variations in cell growth. Conclusion These results demonstrate the applicability of a system that uses fully automated image acquisition and analysis to study cell motility and growth. Cellular motility response is determined in an unbiased and comparatively high throughput manner. Abundant ancillary data provide opportunities for uniform filtering according to criteria that select for biological relevance and for providing insight into features of system performance. Data quality measures have been developed that can serve as a basis for the design and quality control of experiments that are facilitated by automation and the 384 well plate format. This system is applicable to large-scale studies such as drug screening and research into effects of complex combinations of factors and matrices on cell phenotype. PMID:15831094
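Average cell velocity from tracked positions, the core motility readout above, reduces to path length over elapsed time; a minimal sketch with a hypothetical track (requires Python 3.8+ for math.dist):

```python
import math

def average_velocity(track, dt_min):
    """Mean speed along a track of (x, y) positions in micrometers,
    sampled every dt_min minutes: total path length / total time."""
    steps = [math.dist(p, q) for p, q in zip(track, track[1:])]
    return sum(steps) / (dt_min * len(steps))

# Hypothetical track: a cell moving ~1 um per 10-minute frame
track = [(0, 0), (1, 0), (1, 1), (2, 1), (3, 1)]
print(round(average_velocity(track, dt_min=10), 2))  # 0.1 um/min
```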
NASA Astrophysics Data System (ADS)
Palaniswamy, Hariharasudhan; Kanthadai, Narayan; Roy, Subir; Beauchesne, Erwan
2011-08-01
Crash, NVH (Noise, Vibration, Harshness), and durability analysis are commonly deployed in structural CAE analysis for mechanical design of components especially in the automotive industry. Components manufactured by stamping constitute a major portion of the automotive structure. In CAE analysis they are modeled at a nominal state with uniform thickness and no residual stresses and strains. However, in reality the stamped components have non-uniformly distributed thickness and residual stresses and strains resulting from stamping. It is essential to consider the stamping information in CAE analysis to accurately model the behavior of the sheet metal structures under different loading conditions. Especially with the current emphasis on weight reduction by replacing conventional steels with aluminum and advanced high strength steels it is imperative to avoid over design. Considering this growing need in industry, a highly automated and robust method has been integrated within Altair Hyperworks® to initialize sheet metal components in CAE models with stamping data. This paper demonstrates this new feature and the influence of stamping data for a full car frontal crash analysis.
Space science experimentation automation and support
NASA Technical Reports Server (NTRS)
Frainier, Richard J.; Groleau, Nicolas; Shapiro, Jeff C.
1994-01-01
This paper outlines recent work done at the NASA Ames Artificial Intelligence Research Laboratory on automation and support of science experiments on the US Space Shuttle in low earth orbit. Three approaches to increasing the science return of these experiments using emerging automation technologies are described: remote control (telescience), science advisors for astronaut operators, and fully autonomous experiments. The capabilities and limitations of these approaches are reviewed.
Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B.; Torres, Vicente E.; Yu, Alan S.L.; Mrug, Michal; Bennett, William M.; Flessner, Michael F.; Landsittel, Doug P.
2016-01-01
Background and objectives Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method with the reference manual segmentation method. Design, setting, participants, & measurements Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was on the basis of a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2–weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into the training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: Dice similarity coefficient and intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for crossvalidation and reanalyzed. Results Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each crossvalidation set). The results from the crossvalidation sets were highly comparable. 
Conclusions We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. PMID:26797708
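The Dice similarity coefficient used to score the segmentations is defined as 2|A∩B|/(|A|+|B|) over the two sets of segmented voxels; a minimal sketch over hypothetical voxel-index sets:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) for two voxel sets."""
    inter = len(mask_a & mask_b)
    return 2 * inter / (len(mask_a) + len(mask_b))

# Hypothetical voxel-index sets from automated vs. manual kidney masks
auto = {(i, j) for i in range(10) for j in range(10)}        # 100 voxels
manual = {(i, j) for i in range(1, 11) for j in range(10)}   # shifted one row
print(dice_coefficient(auto, manual))  # 2*90/200 = 0.9
```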
Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B; Torres, Vicente E; Yu, Alan S L; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P; Bae, Kyongtae T
2016-04-07
Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method with the reference manual segmentation method. Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was on the basis of a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into the training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: Dice similarity coefficient and intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for crossvalidation and reanalyzed. Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each crossvalidation set). The results from the crossvalidation sets were highly comparable. 
We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. Copyright © 2016 by the American Society of Nephrology.
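The Dice similarity coefficient used above to compare automated and manual segmentations is simple to compute; a minimal stdlib Python sketch (function names and masks are illustrative, not from the study's code):

```python
# Dice similarity coefficient for binary segmentation masks:
# Dice = 2|A ∩ B| / (|A| + |B|). Masks are flat binary sequences.

def dice_coefficient(auto_mask, manual_mask):
    assert len(auto_mask) == len(manual_mask)
    intersection = sum(a and b for a, b in zip(auto_mask, manual_mask))
    total = sum(auto_mask) + sum(manual_mask)
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / total if total else 1.0

# Toy example: two overlapping "kidney" masks.
auto_mask = [0, 1, 1, 1, 0, 0]
manual_mask = [0, 1, 1, 0, 0, 0]
score = dice_coefficient(auto_mask, manual_mask)  # 2*2 / (3+2) = 0.8
```

In practice the masks would be the flattened voxel labels of the automated and manual 3D segmentations.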
Automated acquisition system for routine, noninvasive monitoring of physiological data.
Ogawa, M; Tamura, T; Togawa, T
1998-01-01
A fully automated, noninvasive data-acquisition system was developed to permit long-term measurement of physiological functions at home, without disturbing subjects' normal routines. The system consists of unconstrained monitors built into furnishings and structures in a home environment. An electrocardiographic (ECG) monitor in the bathtub measures heart function during bathing, a temperature monitor in the bed measures body temperature, and a weight monitor built into the toilet serves as a scale to record weight. All three monitors are connected to one computer and function with data-acquisition programs and a data format rule. The unconstrained physiological parameter monitors and fully automated measurement procedures collect data noninvasively without the subject's awareness. The system was tested for 1 week by a healthy male subject, aged 28, in laboratory-based facilities.
Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features.
Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate
2017-08-01
Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Compared with existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or numbers of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time-capable, and practical tool that avoids the need for time-consuming manual selection of ICs during artifact removal.
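The range-filtering feature mentioned above can be sketched in a few lines; the toy spectra, window width, and threshold rule below are illustrative stand-ins for the paper's trained artificial neural network:

```python
# "Range filter": sliding-window (max - min), applied here to a component's
# power spectrum. Spiky artifact-like spectra produce large local ranges.

def range_filter(values, width=3):
    out = []
    for i in range(len(values) - width + 1):
        window = values[i:i + width]
        out.append(max(window) - min(window))
    return out

def looks_like_artifact(spectrum, threshold=2.0):
    # Illustrative threshold classifier standing in for a trained network.
    return max(range_filter(spectrum)) > threshold

# Toy spectra: a flat "EEG-like" spectrum vs. a spiky "artifact-like" one.
flat = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
spiky = [1.0, 5.0, 1.0, 4.0, 1.0, 6.0]
```

A real pipeline would compute such features per IC, classify each IC, zero out artifact components, and re-project the remaining ICs back to channel space.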
Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L.; Page, Terry L.; Bhuva, Bharat; Broadie, Kendal
2016-01-01
Background Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. New Method The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. Results The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24 hours) are comparable to traditional manual experiments, while minimizing experimenter involvement. Comparison with Existing Methods The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ~$500US, making it affordable to a wide range of investigators. Conclusions This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. 
PMID:26703418
Suppa, Per; Anker, Ulrich; Spies, Lothar; Bopp, Irene; Rüegger-Frey, Brigitte; Klaghofer, Richard; Gocke, Carola; Hampel, Harald; Beck, Sacha; Buchert, Ralph
2015-01-01
Hippocampal volume is a promising biomarker to enhance the accuracy of the diagnosis of dementia due to Alzheimer's disease (AD). However, whereas hippocampal volume is well studied in patient samples from clinical trials, its value in routine clinical patient care is still rather unclear. The aim of the present study, therefore, was to evaluate fully automated atlas-based hippocampal volumetry for detection of AD in the setting of a secondary care expert memory clinic for outpatients. One hundred consecutive patients with memory complaints were clinically evaluated and categorized into three diagnostic groups: AD, intermediate AD, and non-AD. A software tool based on open source software (Statistical Parametric Mapping, SPM8) was employed for fully automated tissue segmentation and stereotactical normalization of high-resolution three-dimensional T1-weighted magnetic resonance images. Predefined standard masks were used for computation of the grey matter volume of the left and right hippocampus, which was then scaled to the patient's total grey matter volume. The right hippocampal volume provided an area under the receiver operating characteristic curve of 84% for detection of AD patients in the whole sample. This indicates that fully automated MR-based hippocampal volumetry meets the requirements for a feasible core biomarker for detection of AD in everyday patient care in a secondary care memory clinic for outpatients. The software used in the present study has been made freely available as an SPM8 toolbox. It is robust and fast, so it is easily integrated into routine workflow.
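An area under the ROC curve such as the 84% reported above can be computed directly from case-level scores via the rank-sum (Mann-Whitney) formulation; a stdlib sketch with invented toy numbers, not the study's data:

```python
# AUC = probability that a randomly chosen positive case scores higher than a
# randomly chosen negative case (ties count half).

def roc_auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy "atrophy scores" (hypothetical): AD cases tend to score higher.
ad = [0.9, 0.8, 0.7, 0.6]
non_ad = [0.5, 0.65, 0.3, 0.4]
auc = roc_auc(ad, non_ad)  # 15 of 16 pairs ordered correctly = 0.9375
```

For hippocampal volumetry the score would be derived from the (scaled) hippocampal volume, with smaller volumes indicating AD.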
Hamzeiy, Hamid; Cox, Jürgen
2017-02-01
Computational workflows for mass spectrometry-based shotgun proteomics and untargeted metabolomics share many steps. Despite the similarities, untargeted metabolomics is lagging behind in terms of reliable, fully automated quantitative data analysis. We argue that metabolomics will strongly benefit from the adaptation of successful automated proteomics workflows to metabolomics. MaxQuant is a popular platform for proteomics data analysis and is widely considered to be superior in achieving high precursor mass accuracies through advanced nonlinear recalibration, usually leading to five- to tenfold better accuracy in complex LC-MS/MS runs. This translates to a sharp decrease in the number of peptide candidates per measured feature, thereby strongly improving the coverage of identified peptides. We argue that similar strategies can be applied to untargeted metabolomics, leading to equivalent improvements in metabolite identification. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Svetnik, Vladimir; Ma, Junshui; Soper, Keith A.; Doran, Scott; Renger, John J.; Deacon, Steve; Koblan, Ken S.
2007-01-01
Objective: To evaluate the performance of 2 automated systems, Morpheus and Somnolyzer24X7, with various levels of human review/editing, in scoring polysomnographic (PSG) recordings from a clinical trial using zolpidem in a model of transient insomnia. Methods: 164 all-night PSG recordings from 82 subjects, collected during 2 nights of sleep, one under placebo and one under zolpidem (10 mg) treatment, were used. For each recording, 6 different methods were used to provide sleep stage scores based on Rechtschaffen & Kales criteria: 1) full manual scoring, 2) automated scoring by Morpheus, 3) automated scoring by Somnolyzer24X7, 4) automated scoring by Morpheus with full manual review, 5) automated scoring by Morpheus with partial manual review, 6) automated scoring by Somnolyzer24X7 with partial manual review. Ten traditional clinical efficacy measures of sleep initiation, maintenance, and architecture were calculated. Results: Pair-wise epoch-by-epoch agreements between fully automated and manual scores were in the range of intersite manual scoring agreements reported in the literature (70%-72%). Pair-wise epoch-by-epoch agreements between manually reviewed automated scores were higher (73%-76%). The direction and statistical significance of treatment effect sizes using traditional efficacy endpoints were essentially the same whichever method was used. As the degree of manual review increased, the magnitude of the effect size approached those estimated with fully manual scoring. Conclusion: Automated or semi-automated sleep PSG scoring offers valuable alternatives to manual scoring, which is costly, time-consuming, and variable within and between sites, especially in large multicenter clinical trials. Reduction in scoring variability may also reduce the sample size of a clinical trial. Citation: Svetnik V; Ma J; Soper KA; Doran S; Renger JJ; Deacon S; Koblan KS.
Evaluation of automated and semi-automated scoring of polysomnographic recordings from a clinical trial using zolpidem in the treatment of insomnia. SLEEP 2007;30(11):1562-1574. PMID:18041489
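The epoch-by-epoch agreement used above is simply the fraction of identically staged epochs; a stdlib sketch (the stage sequences are invented toy data, not trial recordings):

```python
# Percent agreement between two sleep-stage scorings of the same recording,
# compared epoch by epoch (one label per 30-s epoch).

def epoch_agreement(scores_a, scores_b):
    assert len(scores_a) == len(scores_b)
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

# Toy epoch labels from a manual and an automated scorer (R&K-style stages).
manual_stages = ["W", "S1", "S2", "S2", "REM", "S2", "W", "S2"]
auto_stages   = ["W", "S2", "S2", "S2", "REM", "S1", "W", "S2"]
agreement = epoch_agreement(manual_stages, auto_stages)  # 6/8 = 0.75
```

Chance-corrected statistics such as Cohen's kappa are often reported alongside raw agreement, since some stages dominate the night.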
An automatic experimental apparatus to study arm reaching in New World monkeys.
Yin, Allen; An, Jehi; Lehew, Gary; Lebedev, Mikhail A; Nicolelis, Miguel A L
2016-05-01
Several species of the New World monkeys have been used as experimental models in biomedical and neurophysiological research. However, a method for controlled arm reaching tasks has not been developed for these species. We have developed a fully automated, pneumatically driven, portable, and reconfigurable experimental apparatus for arm-reaching tasks suitable for these small primates. We have utilized the apparatus to train two owl monkeys in a visually-cued arm-reaching task. Analysis of neural recordings demonstrates directional tuning of the M1 neurons. Our apparatus allows automated control, freeing the experimenter from manual experiments. The presented apparatus provides a valuable tool for conducting neurophysiological research on New World monkeys. Copyright © 2016. Published by Elsevier B.V.
An Automated Directed Spectral Search Methodology for Small Target Detection
NASA Astrophysics Data System (ADS)
Grossman, Stanley I.
Much of the current effort in remote sensing tackles macro-level problems such as determining the extent of wheat in a field, the general health of vegetation, or the extent of mineral deposits in an area. However, for many of the remaining remote sensing challenges currently being studied, such as border protection, drug smuggling, treaty verification, and the war on terror, most targets are very small in nature: a vehicle or even a person. While in typical macro-level problems the object of interest, such as vegetation, is known to be in the scene, in small target detection problems it is usually not even known whether the desired target exists in the scene, let alone in abundance. The ability to find specific small targets, such as vehicles, typifies this problem. Complicating the analyst's life, the growing number of available sensors is generating mountains of imagery that outstrip the analysts' ability to visually peruse them. This work presents the important factors influencing spectral exploitation using multispectral data and suggests a different approach to small target detection. The methodology of directed search is presented, including the use of scene-modeled spectral libraries, various search algorithms, and traditional statistical and ROC curve analysis. The work suggests a new metric to calibrate analysis, labeled the analytic sweet spot, as well as an estimation method for identifying the sweet spot threshold for an image. It also suggests a new visualization aid for highlighting the target in its entirety, called nearest neighbor inflation (NNI). It brings these together to propose that these additions to the target detection arena allow for the construction of a fully automated target detection scheme. This dissertation next details experiments to support the hypothesis that the optimum detection threshold is the analytic sweet spot and that the estimation method adequately predicts it.
Experimental results and analysis are presented for the proposed directed search techniques of spectral image based small target detection. It offers evidence of the functionality of the NNI visualization and also provides evidence that the increased spectral dimensionality of the 8-band Worldview-2 datasets provides noteworthy improvement in results over traditional 4-band multispectral datasets. The final experiment presents the results from a prototype fully automated target detection scheme in support of the overarching premise. This work establishes the analytic sweet spot as the optimum threshold defined as the point where error detection rate curves -- false detections vs. missing detections -- cross. At this point the errors are minimized while the detection rate is maximized. It then demonstrates that taking the first moment statistic of the histogram of calculated target detection values from a detection search with test threshold set arbitrarily high will estimate the analytic sweet spot for that image. It also demonstrates that directed search techniques -- when utilized with appropriate scene-specific modeled signatures and atmospheric compensations -- perform at least as well as in-scene search techniques 88% of the time and grossly under-performing only 11% of the time; the in-scene only performs as well or better 50% of the time. It further demonstrates the clear advantage increased multispectral dimensionality brings to detection searches improving performance in 50% of the cases while performing at least as well 72% of the time. Lastly, it presents evidence that a fully automated prototype performs as anticipated laying the groundwork for further research into fully automated processes for small target detection.
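The analytic sweet spot defined above, the threshold where the false-detection and missed-detection curves cross, can be sketched as a simple threshold sweep; the scores and labels below are toy data, not the dissertation's imagery:

```python
# Sweep candidate detection thresholds and return the one where the
# false-detection count and missed-detection count are closest (the crossing
# of the two error curves).

def sweet_spot(target_scores, background_scores, thresholds):
    best_t, best_gap = None, float("inf")
    for t in thresholds:
        missed = sum(s < t for s in target_scores)        # targets below threshold
        false_d = sum(s >= t for s in background_scores)  # background at/above it
        gap = abs(missed - false_d)
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t

# Toy detector outputs for known target pixels and background pixels.
targets = [0.8, 0.9, 0.7, 0.85]
background = [0.1, 0.3, 0.2, 0.75, 0.4]
t_star = sweet_spot(targets, background, [i / 20 for i in range(21)])  # 0.75
```

The dissertation's estimation method instead predicts this threshold from the first moment of the histogram of detection values, avoiding the need for labeled pixels.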
Ackermann, Uwe; Lewis, Jason S; Young, Kenneth; Morris, Michael J; Weickhardt, Andrew; Davis, Ian D; Scott, Andrew M
2016-08-01
Imaging of androgen receptor expression in prostate cancer using [18F]FDHT is becoming increasingly popular. With the radiolabelling precursor now commercially available, developing a fully automated synthesis of [18F]FDHT is important. We have fully automated the synthesis of [18F]FDHT using the iPhase FlexLab module with only commercially available components. Total synthesis time was 90 min, and radiochemical yields were 25-33% (n = 11). Radiochemical purity of the final formulation was > 99%, and specific activity was > 18.5 GBq/µmol for all batches. This method can be up-scaled as desired, thus making it possible to study multiple patients in a day. Furthermore, our procedure uses only 4 mg of precursor and is therefore cost-effective. The synthesis has now been validated at Austin Health and is currently used for [18F]FDHT studies in patients. We believe that this method can easily be adapted by other modules to further widen the availability of [18F]FDHT. Copyright © 2016 John Wiley & Sons, Ltd.
Jiang, Xiaogang; Feng, Shun; Tian, Ruijun; Han, Guanghui; Jiang, Xinning; Ye, Mingliang; Zou, Hanfa
2007-02-01
An approach was developed to automate sample introduction for nanoflow LC-MS/MS (microLC-MS/MS) analysis using a strong cation exchange (SCX) trap column. The system consisted of a 100 microm id x 2 cm SCX trap column and a 75 microm id x 12 cm C18 RP analytical column. During the sample loading step, the flow passing through the SCX trap column was directed to waste for loading a large volume of sample at high flow rate. Then the peptides bound on the SCX trap column were eluted onto the RP analytical column by a high salt buffer followed by RP chromatographic separation of the peptides at nanoliter flow rate. It was observed that higher performance of separation could be achieved with the system using SCX trap column than with the system using C18 trap column. The high proteomic coverage using this approach was demonstrated in the analysis of tryptic digest of BSA and yeast cell lysate. In addition, this system was also applied to two-dimensional separation of tryptic digest of human hepatocellular carcinoma cell line SMMC-7721 for large scale proteome analysis. This system was fully automated and required minimum changes on current microLC-MS/MS system. This system represented a promising platform for routine proteome analysis.
Automated 3D renal segmentation based on image partitioning
NASA Astrophysics Data System (ADS)
Yeghiazaryan, Varduhi; Voiculescu, Irina D.
2016-03-01
Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
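The volume-based and size-based measures named above reduce to one-liners over binary masks; a stdlib sketch with toy masks, not the paper's data:

```python
# Jaccard coefficient (volume-based overlap) and relative volume difference
# (size-based measure) for comparing an automated mask against a reference.

def jaccard(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0

def relative_volume_difference(auto_volume, reference_volume):
    return abs(auto_volume - reference_volume) / reference_volume

auto_mask = [1, 1, 1, 0, 0]
ref_mask = [0, 1, 1, 1, 0]
j = jaccard(auto_mask, ref_mask)                                  # 2/4 = 0.5
rvd = relative_volume_difference(sum(auto_mask), sum(ref_mask))   # 0.0
```

Note that Jaccard and Dice are monotonically related (Dice = 2J / (1 + J)), so either can be derived from the other.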
Fully automated MR liver volumetry using watershed segmentation coupled with active contouring.
Huynh, Hieu Trung; Le-Trong, Ngoc; Bao, Pham The; Oto, Aytek; Suzuki, Kenji
2017-02-01
Our purpose is to develop a fully automated scheme for liver volume measurement in abdominal MR images, without requiring any user input or interaction. The proposed scheme is fully automatic for liver volumetry from 3D abdominal MR images, and it consists of three main stages: preprocessing, rough liver shape generation, and liver extraction. The preprocessing stage reduced noise and enhanced the liver boundaries in 3D abdominal MR images. The rough liver shape was revealed fully automatically by using watershed segmentation, a thresholding transform, morphological operations, and statistical properties of the liver. An active contour model was applied to refine the rough liver shape to precisely obtain the liver boundaries. The liver volumes calculated by the proposed scheme were compared to the "gold standard" references estimated by an expert abdominal radiologist. The liver volumes computed using our developed scheme agreed excellently (intraclass correlation coefficient, 0.94) with the "gold standard" manual volumes by the radiologist in the evaluation with 27 cases from multiple medical centers. The running time was 8.4 min per case on average. We developed a fully automated liver volumetry scheme for MR that does not require any interaction by users. It was evaluated with cases from multiple medical centers. The liver volumetry performance of our developed system was comparable to that of the gold standard manual volumetry, saving radiologists 24.7 min of manual volumetry per case.
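The thresholding transform in the rough-shape stage can be any automatic threshold choice; below is a stdlib sketch of Otsu's method as an illustrative stand-in (the abstract does not specify which thresholding transform the scheme uses):

```python
# Otsu's method: choose the threshold that maximizes between-class variance
# of the intensity histogram, splitting pixels into two classes.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # class 0: intensities below t
        w1 = total - w0             # class 1: intensities at/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Two clearly separated intensity clusters: the threshold lands between them.
dark = [10, 12, 11, 13]
bright = [200, 210, 205, 198]
t = otsu_threshold(dark + bright)
```

In the full scheme this binary split would then be cleaned up with morphological operations before active-contour refinement.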
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain. The former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task, which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Peak picking multidimensional NMR spectra with the contour geometry based algorithm CYPICK.
Würz, Julia M; Güntert, Peter
2017-01-01
The automated identification of signals in multidimensional NMR spectra is a challenging task, complicated by signal overlap, noise, and spectral artifacts, for which no universally accepted method is available. Here, we present a new peak picking algorithm, CYPICK, that follows, as far as possible, the manual approach taken by a spectroscopist who analyzes peak patterns in contour plots of the spectrum, but is fully automated. Human visual inspection is replaced by the evaluation of geometric criteria applied to contour lines, such as local extremality, approximate circularity (after appropriate scaling of the spectrum axes), and convexity. The performance of CYPICK was evaluated for a variety of spectra from different proteins by systematic comparison with peak lists obtained by other, manual or automated, peak picking methods, as well as by analyzing the results of automated chemical shift assignment and structure calculation based on input peak lists from CYPICK. The results show that CYPICK yielded peak lists that compare in most cases favorably to those obtained by other automated peak pickers with respect to the criteria of finding a maximal number of real signals, a minimal number of artifact peaks, and maximal correctness of the chemical shift assignments and the three-dimensional structure obtained by fully automated assignment and structure calculation.
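Local extremality, the first of the geometric criteria named above, is the core test; a stdlib sketch over a toy 2D intensity grid (CYPICK additionally checks approximate circularity and convexity of contour lines, omitted here):

```python
# Flag grid points that are strictly greater than their 4-neighbours and
# above a noise floor -- a minimal 2D local-maximum peak picker.

def pick_peaks(grid, noise_floor):
    peaks = []
    rows, cols = len(grid), len(grid[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = grid[r][c]
            if v > noise_floor and all(
                v > grid[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                peaks.append((r, c))
    return peaks

# Toy 2D "spectrum" with two isolated signals.
spectrum = [
    [0, 0, 0, 0, 0],
    [0, 5, 0, 0, 0],
    [0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0],
]
found = pick_peaks(spectrum, noise_floor=1)  # [(1, 1), (2, 3)]
```

Real NMR spectra demand the extra contour-geometry tests precisely because noise and overlapping signals also produce local maxima.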
Christiaens, B; Chiap, P; Rbeida, O; Cello, D; Crommen, J; Hubert, Ph
2003-09-25
A new fully automated method for the quantitative analysis of an antiandrogenic substance, cyproterone acetate (CPA), in plasma samples has been developed using on-line solid-phase extraction (SPE) prior to determination by reversed-phase liquid chromatography (LC). The automated method was based on the use of a precolumn packed with an internal-surface reversed-phase packing material (LiChrospher RP-4 ADS) for sample clean-up, coupled to LC analysis on an octadecyl stationary phase using a column-switching system. A 200-microL volume of plasma sample was injected directly onto the precolumn packed with restricted access material, using a mixture of water-acetonitrile (90:10, v/v) as washing liquid. The analyte was then eluted in the back-flush mode with the LC mobile phase, which consisted of a mixture of phosphate buffer, pH 7.0-acetonitrile (54:46, v/v). The elution profiles of CPA and blank plasma samples on the precolumn and the time needed for analyte transfer from the precolumn to the analytical column were determined. Different compositions of washing liquid and mobile phase were tested to reduce the interference of endogenous plasma components. UV detection was achieved at 280 nm. Finally, the developed method was validated using a new approach, namely the application of the accuracy profile based on the 90% confidence interval of the total measurement error (bias + standard deviation). The limit of quantification of cyproterone acetate in plasma was determined to be 15 ng/mL. The validated method should be applicable to the determination of CPA in patients treated with at least 50 mg/day.
Zhang, Jie; Bai, Ruoshi; Yi, Xiaoli; Yang, Zhendong; Liu, Xingyu; Zhou, Jun; Liang, Wei
2016-01-01
A fully automated method for the detection of four tobacco-specific nitrosamines (TSNAs) in mainstream cigarette smoke (MSS) has been developed. The newly developed method is based on two-dimensional online solid-phase extraction-liquid chromatography-tandem mass spectrometry (SPE/LC-MS/MS). The two-dimensional SPE was performed utilizing two cartridges with different extraction mechanisms to clean up disturbances of different polarity and minimize sample matrix effects on each analyte. Chromatographic separation was achieved using a UPLC C18 reversed-phase analytical column. Under the optimum online SPE/LC-MS/MS conditions, N'-nitrosonornicotine (NNN), N'-nitrosoanatabine (NAT), N'-nitrosoanabasine (NAB), and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) were baseline separated with good peak shapes. This appears to be the most sensitive method yet reported for the determination of TSNAs in mainstream cigarette smoke. The limits of quantification for NNN, NNK, NAT and NAB reached levels of 6.0, 1.0, 3.0 and 0.6 pg/cig, respectively, which were well below the lowest levels of TSNAs in MSS of current commercial cigarettes. The accuracy of the measurement of the four TSNAs was from 92.8 to 107.3%. The relative standard deviations of intra- and inter-day analyses were less than 5.4% and 7.5%, respectively. The main advantages of the developed method are fairly high sensitivity, selectivity and accuracy of results, minimal sample pre-treatment, full automation, and high throughput. As part of the validation procedure, the developed method was applied to evaluate TSNA yields for 27 top-selling commercial cigarettes in China. Copyright © 2015 Elsevier B.V. All rights reserved.
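The intra- and inter-day precision figures above are relative standard deviations over replicate measurements; a stdlib sketch with invented replicate values, not the paper's measurements:

```python
import statistics

# Relative standard deviation (coefficient of variation) in percent:
# RSD = 100 * sample standard deviation / mean.

def relative_std_dev(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical intra-day NNK replicates (pg/cig), illustrative only.
runs = [24.8, 25.3, 25.1, 24.6, 25.2]
rsd = relative_std_dev(runs)  # about 1.2%, well under a 5.4% criterion
```

Validation reports typically quote one RSD per analyte for within-day replicates and another across days.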
Rashno, Abdolreza; Nazari, Behzad; Koozekanani, Dara D.; Drayna, Paul M.; Sadri, Saeed; Rabbani, Hossein
2017-01-01
A fully-automated method based on graph shortest path, graph cut and neutrosophic (NS) sets is presented for fluid segmentation in OCT volumes for exudative age related macular degeneration (EAMD) subjects. The proposed method includes three main steps: 1) The inner limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers are segmented using proposed methods based on graph shortest path in NS domain. A flattened RPE boundary is calculated such that all three types of fluid regions, intra-retinal, sub-retinal and sub-RPE, are located above it. 2) Seed points for fluid (object) and tissue (background) are initialized for graph cut by the proposed automated method. 3) A new cost function is proposed in kernel space, and is minimized with max-flow/min-cut algorithms, leading to a binary segmentation. Important properties of the proposed steps are proven and quantitative performance of each step is analyzed separately. The proposed method is evaluated using a publicly available dataset referred as Optima and a local dataset from the UMN clinic. For fluid segmentation in 2D individual slices, the proposed method outperforms the previously proposed methods by 18%, 21% with respect to the dice coefficient and sensitivity, respectively, on the Optima dataset, and by 16%, 11% and 12% with respect to the dice coefficient, sensitivity and precision, respectively, on the local UMN dataset. Finally, for 3D fluid volume segmentation, the proposed method achieves true positive rate (TPR) and false positive rate (FPR) of 90% and 0.74%, respectively, with a correlation of 95% between automated and expert manual segmentations using linear regression analysis. PMID:29059257
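Step 1 above casts layer-boundary detection as a graph shortest-path problem; a stdlib Dijkstra sketch over a toy weighted graph (the actual method builds cost graphs from OCT data in the neutrosophic domain, not this toy adjacency dict):

```python
import heapq

# Dijkstra's shortest path over a weighted directed graph given as
# {node: {neighbour: edge_weight}}.

def dijkstra(graph, start, goal):
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

toy_graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
path, cost = dijkstra(toy_graph, "A", "D")  # ['A', 'B', 'C', 'D'], cost 3
```

In layer segmentation, nodes are pixels in an A-scan column grid, edge weights encode gradient-based costs, and the minimal-cost left-to-right path traces the boundary.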
van den Hoven, Allard T; Mc-Ghie, Jackie S; Chelu, Raluca G; Duijnhouwer, Anthonie L; Baggen, Vivan J M; Coenen, Adriaan; Vletter, Wim B; Dijkshoorn, Marcel L; van den Bosch, Annemien E; Roos-Hesselink, Jolien W
2017-12-01
Integration of volumetric heart chamber quantification by 3D echocardiography into clinical practice has been hampered by several factors which a new fully automated algorithm (Left Heart Model, LHM) may help overcome. This study therefore aims to evaluate the feasibility and accuracy of the LHM software in quantifying left atrial and left ventricular volumes and left ventricular ejection fraction in a cohort of patients with a bicuspid aortic valve. Patients with a bicuspid aortic valve were prospectively included. All patients underwent 2D and 3D transthoracic echocardiography and computed tomography. Left atrial and ventricular volumes were obtained using the automated program, which did not require manual contour detection. For comparison, manual and semi-automated measurements were performed using conventional 2D and 3D datasets. 53 patients were included; in four of these patients no 3D dataset could be acquired. Additionally, 12 patients were excluded based on poor imaging quality. Left ventricular end-diastolic and end-systolic volumes and ejection fraction calculated by the LHM correlated well with manual 2D and 3D measurements (Pearson's r between 0.43 and 0.97, p < 0.05). Left atrial volume (LAV) also correlated significantly, although the LHM estimated larger LAVs than both 2DE and 3DE (Pearson's r between 0.61 and 0.81, p < 0.01). The fully automated software works well in a real-world setting and helps to overcome some of the major hurdles in integrating 3D analysis into daily practice, as it is user-independent and highly reproducible in a group of patients with a clearly defined and well-studied valvular abnormality.
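For reference, the ejection fraction that all three methods estimate reduces to a one-line volume calculation; the volumes below are hypothetical, not study data:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes -- the quantity compared across LHM, 2DE and 3DE."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(ejection_fraction(120.0, 48.0))  # → 60.0
```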
TreeRipper web application: towards a fully automated optical tree recognition software.
Hughes, Joseph
2011-05-20
Relationships between species, genes and genomes have been printed as trees for over a century. Whilst this may have been the best format for exchanging and sharing phylogenetic hypotheses during the 20th century, the worldwide web now provides faster and automated ways of transferring and sharing phylogenetic knowledge. However, novel software is needed to defrost these published phylogenies for the 21st century. TreeRipper is a simple website for the fully automated recognition of multifurcating phylogenetic trees (http://linnaeus.zoology.gla.ac.uk/~jhughes/treeripper/). The program accepts a range of input image formats (PNG, JPG/JPEG or GIF). The underlying command-line C++ program follows a number of cleaning steps to detect lines, remove node labels, patch up broken lines and corners and detect line edges. The edge contour is then determined to detect the branch lengths, tip label positions and the topology of the tree. Optical Character Recognition (OCR) is used to convert the tip labels into text with the freely available tesseract-ocr software. 32% of images meeting the prerequisites for TreeRipper were successfully recognised; the largest tree had 115 leaves. Although the diversity of ways in which phylogenies have been illustrated makes the design of fully automated tree recognition software difficult, TreeRipper is a step towards automating the digitization of past phylogenies. We also provide a dataset of 100 tree images and associated tree files for training and/or benchmarking future software. TreeRipper is an open source project licensed under the GNU General Public Licence v3.
ATLAS from Data Research Associates: A Fully Integrated Automation System.
ERIC Educational Resources Information Center
Mellinger, Michael J.
1987-01-01
This detailed description of a fully integrated, turnkey library system includes a complete profile of the system (functions, operational characteristics, hardware, operating system, minimum memory and pricing); history of the technologies involved; and descriptions of customer services and availability. (CLB)
Flood mapping in ungauged basins using fully continuous hydrologic-hydraulic modeling
NASA Astrophysics Data System (ADS)
Grimaldi, Salvatore; Petroselli, Andrea; Arcangeletti, Ettore; Nardi, Fernando
2013-04-01
In this work, a fully continuous hydrologic-hydraulic modeling framework for flood mapping is introduced and tested. It is characterized by the simulation of a long rainfall time series at sub-daily resolution that feeds a continuous rainfall-runoff model producing a discharge time series that is directly given as input to a bi-dimensional hydraulic model. The main advantage of the proposed approach is to avoid the use of the design hyetograph and the design hydrograph, which constitute the main source of subjective analysis and uncertainty in standard methods. The proposed procedure is optimized for small and ungauged watersheds where empirical models are commonly applied. Results of a simple real case study confirm that this experimental fully continuous framework may pave the way for the implementation of a less subjective and potentially automated procedure for flood hazard mapping.
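The hydrologic half of such a framework can be caricatured as a continuous rainfall-to-discharge transform; a single linear reservoir is used here purely as an illustrative stand-in for the paper's rainfall-runoff model, with made-up parameter values:

```python
def linear_reservoir(rainfall_mm_h, k_h=6.0, dt_h=0.5):
    """Toy continuous rainfall-runoff transform: a single linear
    reservoir (dS/dt = P - S/k) stepped through a rainfall series."""
    storage, discharge = 0.0, []
    for p in rainfall_mm_h:
        storage += (p - storage / k_h) * dt_h
        discharge.append(storage / k_h)
    return discharge

q = linear_reservoir([0, 10, 10, 0, 0, 0, 0, 0])  # mm/h at half-hour steps
```

Discharge rises while it rains and recedes smoothly afterwards; a long synthetic series of this kind, rather than a single design hydrograph, is what the continuous framework feeds to the hydraulic model.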
Glaciated valleys in Europe and western Asia
Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar
2015-01-01
In recent years, remote sensing, morphometric analysis, and other computational concepts and tools have invigorated the field of geomorphological mapping. Automated interpretation of digital terrain data based on impartial rules holds substantial promise for large-dataset processing and objective landscape classification. However, the geomorphological realm presents tremendous complexity and challenges in the translation of qualitative descriptions into geomorphometric semantics. Here, the simple, conventional distinction between V-shaped fluvial and U-shaped glacial valleys was analyzed quantitatively using multi-scale curvature and a novel morphometric variable termed Difference of Minimum Curvature (DMC). We used this automated terrain analysis approach to produce a raster map at a scale of 1:6,000,000 showing the distribution of glaciated valleys across Europe and western Asia. The dataset has a cell size of 3 arc seconds and consists of more than 40 billion grid cells. Glaciated U-shaped valleys, commonly associated with erosion by warm-based glaciers, are abundant in the alpine regions of central Europe and western Asia but also occur at the margins of mountain ice sheets in Scandinavia. The high-level correspondence with field mapping and the fully transferable semantics validate this approach for automated analysis of as-yet unexplored terrain around the globe and qualify it for potential applications on other planetary bodies such as Mars. PMID:27019665
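The V-versus-U contrast that DMC quantifies can be seen in a minimal 1-D sketch (second differences as a curvature proxy); the profiles and coefficients below are invented for illustration and are not the paper's 2-D multi-scale formulation:

```python
import numpy as np

def profile_curvature(z, cell=1.0):
    """Second derivative of a valley cross-section: a 1-D curvature proxy."""
    return np.gradient(np.gradient(z, cell), cell)

x = np.linspace(-50, 50, 101)     # cross-valley distance, 1 m cells
v_valley = np.abs(x)              # fluvial, V-shaped profile
u_valley = 0.02 * x ** 2          # glacial, U-shaped profile

# A U-shaped floor is curved across its whole width, while a V-shaped
# valley concentrates curvature at the axis -- the scale-dependent
# contrast that DMC exploits.
v_floor = profile_curvature(v_valley)[45:56]
u_floor = profile_curvature(u_valley)[45:56]
```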
Liese, Jan; Winter, Karsten; Glass, Änne; Bertolini, Julia; Kämmerer, Peer Wolfgang; Frerich, Bernhard; Schiefke, Ingolf; Remmerbach, Torsten W
2017-11-01
Uncertainties in the detection of oral epithelial dysplasia (OED) frequently result from sampling error, especially in inflammatory oral lesions. Endomicroscopy allows non-invasive, "en face" imaging of the upper oral epithelium, but parameters of OED are unknown. Mucosal nuclei were imaged in 34 toluidine blue-stained oral lesions with a commercial endomicroscope. Histopathological diagnosis placed four biopsies in the "dys-/neoplastic," 23 in the "inflammatory," and seven in the "others" disease groups. The strength of different assessment strategies (nuclear scoring, nuclear count, and automated nuclear analysis) in identifying the histopathological "dys-/neoplastic" group was measured by the area under the ROC curve (AUC). Nuclear objects from automated image analysis were visually corrected. The best-performing parameters of nuclear-to-image ratios were the count of large nuclei (AUC=0.986) and the 6-nearest neighborhood relation (AUC=0.896), and the best parameters of nuclear polymorphism were the count of atypical nuclei (AUC=0.996) and the compactness of nuclei (AUC=0.922). Excluding low-grade OED, nuclear scoring and count reached 100% sensitivity and 98% specificity for the detection of dys-/neoplastic lesions. In automated analysis, combining parameters enhanced diagnostic strength: sensitivity of 100% and specificity of 87% were seen for distances of 6-nearest neighbors and aspect ratios, even in uncorrected objects. Correction improved measures of nuclear polymorphism only. The hue of the background color was stronger than nuclear density (AUC=0.779 vs 0.687) at detecting the dys-/neoplastic group, indicating that the macroscopic aspect is biased. Nuclear-to-image ratios are applicable to automated optical in vivo diagnostics for oral potentially malignant disorders. Nuclear endomicroscopy may promote non-invasive, early detection of dys-/neoplastic lesions by reducing sampling error. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
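AUC values like those quoted can be computed for any scored parameter with the rank (Mann-Whitney) formulation; the scores below are hypothetical:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a dys-/neoplastic case scores above a benign one,
    with ties counted as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical nuclear scores: 4 dys-/neoplastic vs 6 benign lesions
a = auc([9, 8, 8, 7], [6, 5, 8, 4, 3, 2])
```

An AUC of 1.0 would mean the parameter separates the two groups perfectly; 0.5 is chance level.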
NASA Astrophysics Data System (ADS)
Anderson, D.; Andrais, B.; Mirzayans, R.; Siegbahn, E. A.; Fallone, B. G.; Warkentin, B.
2013-06-01
Microbeam radiation therapy (MRT) delivers single fractions of very high doses of synchrotron x-rays using arrays of microbeams. In animal experiments, MRT has achieved higher tumour control and less normal tissue toxicity compared to single-fraction broad beam irradiations of much lower dose. The mechanism behind the normal tissue sparing of MRT has yet to be fully explained. An accurate method for evaluating DNA damage, such as the γ-H2AX immunofluorescence assay, will be important for understanding the role of cellular communication in the radiobiological response of normal and cancerous cell types to MRT. We compare two methods of quantifying γ-H2AX nuclear fluorescence for uniformly irradiated cell cultures: manual counting of γ-H2AX foci by eye, and an automated, MATLAB-based fluorescence intensity measurement. We also demonstrate the automated analysis of cell cultures irradiated with an array of microbeams. In addition to offering a relatively high dynamic range of γ-H2AX signal versus irradiation dose ( > 10 Gy), our automated method provides speed, robustness, and objectivity when examining a series of images. Our in-house analysis facilitates the automated extraction of the spatial distribution of the γ-H2AX intensity with respect to the microbeam array — for example, the intensities in the peak (high dose area) and valley (area between two microbeams) regions. The automated analysis is particularly beneficial when processing a large number of samples, as is needed to systematically study the relationship between the numerous dosimetric and geometric parameters involved with MRT (e.g., microbeam width, microbeam spacing, microbeam array dimensions, peak dose, valley dose, and geometric arrangement of multiple arrays) and the resulting DNA damage.
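The peak/valley extraction step reduces to averaging γ-H2AX intensity over columns aligned with the microbeam array; this pure-Python sketch assumes vertical beams with a known pitch and width in pixels (the published analysis is MATLAB-based, so everything here is a hypothetical simplification):

```python
def peak_valley_means(image, pitch, width):
    """Mean fluorescence intensity in peak (in-beam) and valley
    (between-beam) columns of a microbeam-irradiated culture image."""
    rows, cols = len(image), len(image[0])
    peak, valley = [], []
    for c in range(cols):
        col_mean = sum(image[r][c] for r in range(rows)) / rows
        # columns within `width` of a beam axis belong to a peak
        (peak if c % pitch < width else valley).append(col_mean)
    return sum(peak) / len(peak), sum(valley) / len(valley)

# Toy 2x8 image: bright columns 0 and 4 mimic two microbeam peaks
img = [[10, 1, 1, 1, 10, 1, 1, 1],
       [10, 1, 1, 1, 10, 1, 1, 1]]
print(peak_valley_means(img, pitch=4, width=1))  # → (10.0, 1.0)
```

The peak-to-valley intensity ratio follows directly from the two returned means.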
Utility of an automated thermal-based approach for monitoring evapotranspiration
USDA-ARS?s Scientific Manuscript database
A very simple remote sensing-based model for water use monitoring is presented. The model acronym DATTUTDUT, (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature) is a Dutch word which loosely translates as “It’s unbelievable that it works”. DATTUTDUT is fully automated and o...
Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.
2015-01-01
Purpose: Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods: Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results: Fully automated segmentation performed well in mice and showed coefficients of variation (CV) below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values than manual segmentation (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions: Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance: The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634
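The <5% repeatability criterion refers to the coefficient of variation, which is straightforward to compute; the sample volumes below are invented:

```python
def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean, as used for the
    repeatability of total retinal volume measurements."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

cv = coefficient_of_variation([2.0, 2.1, 1.9])  # hypothetical volumes, mm^3
```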
Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.
Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong
2008-04-01
The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
Gap-free segmentation of vascular networks with automatic image processing pipeline.
Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas
2017-03-01
Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time-prohibitive given that vascular trees can have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Microfluidics for the analysis of membrane proteins: how do we get there?
Battle, Katrina N; Uba, Franklin I; Soper, Steven A
2014-08-01
The development of fully automated and high-throughput systems for proteomics is now in demand because of the need to generate new protein-based disease biomarkers. Unfortunately, it is difficult to identify protein biomarkers that are of low abundance in the presence of highly abundant proteins, especially in complex biological samples such as serum, cell lysates, and other biological fluids. Membrane proteins, which are in many cases of low abundance compared to cytosolic proteins, have various functions and can provide insight into the state of a disease and serve as targets for new drugs, making them attractive biomarker candidates. Traditionally, proteins are identified through the use of gel electrophoretic techniques, which are not always suitable for particular protein samples such as membrane proteins. Microfluidics offers the potential to serve as a fully automated platform for the efficient and high-throughput analysis of complex samples, such as membrane proteins, and to do so with performance metrics that exceed those of bench-top counterparts. In recent years, there have been various improvements to microfluidics and their use for proteomic analysis as reported in the literature. Consequently, this review presents an overview of the traditional proteomic-processing pipelines for membrane proteins and insights into new technological developments, with a focus on the applicability of microfluidics for the analysis of membrane proteins. Sample preparation techniques are discussed in detail and novel interfacing strategies as they relate to MS are highlighted. Lastly, some general conclusions and future perspectives are presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Garrido, Terhilda; Kumar, Sudheen; Lekas, John; Lindberg, Mark; Kadiyala, Dhanyaja; Whippy, Alan; Crawford, Barbara; Weissberg, Jed
2014-01-01
Using electronic health records (EHR) to automate publicly reported quality measures is receiving increasing attention and is one of the promises of EHR implementation. Kaiser Permanente has fully or partly automated six of the 13 Joint Commission measure sets. We describe our experience with automation and the resulting time savings: a reduction of approximately 50% in the abstractor time required for one measure set alone (the surgical care improvement project). However, our experience illustrates the gap between the current and desired states of automated public quality reporting, which has important implications for measure developers, accrediting entities, EHR vendors, public/private payers, and government. PMID:23831833
AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Yingssu; McPhillips, Scott E.
New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data
NASA Technical Reports Server (NTRS)
Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.
2011-01-01
A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software distinguishes itself from other possible re-slicing software solutions through its complete automation and advanced processing and analysis capabilities.
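The unwrap-and-re-slice idea can be sketched in a few lines of NumPy; nearest-neighbour sampling and the centred-axis assumption are simplifications, not the NASA tool's actual interpolation scheme:

```python
import numpy as np

def unwrap_cylinder(vol, radius, n_theta=360):
    """Re-slice a cylindrical CT volume into one 2-D sheet at a fixed
    radius (nearest-neighbour sampling; axis assumed at the array centre)."""
    z_dim, y_dim, x_dim = vol.shape
    cy, cx = (y_dim - 1) / 2.0, (x_dim - 1) / 2.0
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = np.clip(np.rint(cy + radius * np.sin(theta)).astype(int), 0, y_dim - 1)
    xs = np.clip(np.rint(cx + radius * np.cos(theta)).astype(int), 0, x_dim - 1)
    return vol[:, ys, xs]        # shape (z, n_theta): one unwrapped sheet

vol = np.zeros((2, 9, 9))
vol[0, 4, 7] = 5.0               # synthetic flaw at theta = 0, r = 3
sheet = unwrap_cylinder(vol, radius=3, n_theta=4)
```

Sweeping `radius` from the interior to the exterior surface yields the stack of 2-D sheets described in the abstract.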
Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.
2016-01-01
While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674
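The frame-rate effect has a simple back-of-envelope model: an event time read from video is quantized to the frame interval, so the worst-case timing error shrinks with fps. This is an illustrative bound, not AGATHA's actual error model:

```python
def frame_quantization_error_ms(fps):
    """Worst-case error when a gait event is timestamped at the nearest
    captured frame: one full inter-frame interval."""
    return 1000.0 / fps

for fps in (1000, 125, 30):
    print(fps, "fps ->", frame_quantization_error_ms(fps), "ms")
```

At 125 fps the bound is 8 ms, small relative to typical rodent stance durations, which is consistent with the observation that accuracy plateaus at frame rates above about 125 fps.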
Kolocouri, Filomila; Dotsikas, Yannis; Apostolou, Constantinos; Kousoulos, Constantinos; Soumelas, Georgios-Stefanos; Loukas, Yannis L
2011-01-01
An HPLC/MS/MS method characterized by complete automation and high throughput was developed for the determination of cilazapril and its active metabolite cilazaprilat in human plasma. All sample preparation and analysis steps were performed using 2.2-mL 96-deep-well plates, while robotic liquid-handling workstations were utilized for all liquid transfer steps, including liquid-liquid extraction. The whole procedure was much faster than a manual procedure with vials and no automation. The method also had a very short chromatographic run time of 1.5 min. Sample analysis was performed by RP-HPLC/MS/MS with positive electrospray ionization using multiple reaction monitoring. The calibration curve was linear in the range of 0.500-300 and 0.250-150 ng/mL for cilazapril and cilazaprilat, respectively. The proposed method was fully validated and proved to be selective, accurate, precise, reproducible, and suitable for the determination of cilazapril and cilazaprilat in human plasma. Therefore, it was applied to a bioequivalence study after per os administration of 2.5 mg tablet formulations of cilazapril.
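Linearity over a stated range is established with an ordinary least-squares calibration line; the responses below are synthetic, not data from the study:

```python
def fit_line(x, y):
    """Least-squares slope and intercept of detector response vs
    concentration (the calibration curve)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic calibration standards over the cilazapril range (ng/mL)
conc = [0.5, 1.0, 5.0, 50.0, 150.0, 300.0]
resp = [2.0 * c + 1.0 for c in conc]        # idealized linear detector
slope, intercept = fit_line(conc, resp)
unknown = (11.0 - intercept) / slope        # back-calculated concentration
```

Back-calculating each standard through the fitted line is also how accuracy figures such as the 92.8-107.3% quoted above are obtained.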
Sun, Wanxin; Chang, Shi; Tai, Dean C S; Tan, Nancy; Xiao, Guangfa; Tang, Huihuan; Yu, Hanry
2008-01-01
Liver fibrosis is associated with an abnormal increase in extracellular matrix in chronic liver diseases. Quantitative characterization of fibrillar collagen in intact tissue is essential for both fibrosis studies and clinical applications. Commonly used methods, histological staining followed by either semiquantitative or computerized image analysis, have limited sensitivity and accuracy, as well as operator-dependent variation. The fibrillar collagen in the sinusoids of normal livers could be observed through second-harmonic generation (SHG) microscopy. The two-photon excited fluorescence (TPEF) images, recorded simultaneously with SHG, clearly revealed the hepatocyte morphology. We have systematically optimized the parameters for quantitative SHG/TPEF imaging of liver tissue and developed fully automated image analysis algorithms to extract information on collagen changes and cell necrosis. Subtle changes in the distribution and amount of collagen and in cell morphology are quantitatively characterized in SHG/TPEF images. Compared to traditional staining, such as Masson's trichrome and Sirius red, SHG/TPEF is a sensitive quantitative tool for automated collagen characterization in liver tissue. Our system allows for enhanced detection and quantification of sinusoidal collagen fibers in fibrosis research and clinical diagnostics.
NASA Astrophysics Data System (ADS)
Watmough, Gary R.; Atkinson, Peter M.; Hutton, Craig W.
2011-04-01
The automated cloud cover assessment (ACCA) algorithm has provided automated estimates of cloud cover for the Landsat ETM+ mission since 2001. However, due to the lack of a band around 1.375 μm, cloud edges and transparent clouds such as cirrus cannot be detected. Use of Landsat ETM+ imagery for terrestrial land analysis is further hampered by the relatively long revisit period due to the nadir-only viewing sensor. In this study, the ACCA threshold parameters were altered to minimise omission errors in the cloud masks. Object-based analysis was used to reduce the commission errors from the extended cloud filters. The method resulted in the removal of optically thin cirrus cloud and cloud edges, which are often missed by other methods in sub-tropical areas. Although not fully automated, the principles of the method developed here provide an opportunity for using otherwise sub-optimal or completely unusable Landsat ETM+ imagery for operational applications. Where specific images are required for particular research goals, the method can be used to remove cloud and transparent cloud, helping to reduce bias in subsequent land cover classifications.
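The two-stage logic (a permissive spectral threshold, then object-based removal of small false detections) can be sketched with SciPy's connected-component labelling; the threshold and minimum object size here are invented, not the ACCA values:

```python
import numpy as np
from scipy import ndimage

def cloud_mask(reflectance, thresh=0.35, min_pixels=4):
    """Permissive threshold (low omission error), then drop tiny
    connected components to trim the commission error the relaxed
    threshold introduced."""
    mask = reflectance > thresh
    labels, n = ndimage.label(mask)          # 4-connected components
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() < min_pixels:
            mask[component] = False
    return mask

refl = np.zeros((5, 5))
refl[0:2, 0:3] = 0.5     # a real cloud, 6 pixels
refl[4, 4] = 0.6         # an isolated bright pixel (commission error)
mask = cloud_mask(refl)
```

Object-based filtering like this keeps the relaxed thresholds from flagging isolated bright land pixels as cloud.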
3D marker-controlled watershed for kidney segmentation in clinical CT exams.
Wieclawek, Wojciech
2018-02-27
Image segmentation is an essential and nontrivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques for visualizing the interior of a patient's body. Among computer-aided diagnostic systems, applications dedicated to kidney segmentation represent a relatively small group, and literature solutions are typically verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation, designed for large-database analysis covering both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform, consisting of morphological operations and shape analysis. The implementation was conducted in MATLAB (Version 2017a), using, among others, the Image Processing Toolbox. 170 clinical abdominal CT studies were subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations were used as a gold standard. Among 67 delineated medical cases, 62 cases were 'Very good', whereas only 5 were 'Good' according to Cohen's Kappa interpretation. The segmentation results show that the mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89%, respectively.
All 170 medical cases (with and without outlines) were classified by three independent medical experts as 'Very good' in 143-148 cases, 'Good' in 15-21 cases, and 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies, able to compete with commonly known solutions, was developed. The algorithm gives promising results, which were confirmed during a validation procedure performed on a relatively large database of 170 CTs including both physiological and pathological cases.
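The marker-controlled watershed at the core of this approach can be illustrated in miniature with SciPy (a 2D toy sketch, not the authors' 3D MATLAB implementation; the intensity values and marker positions are invented):

```python
import numpy as np
from scipy import ndimage

# Toy 2-D "CT slice": two dark basins (the kidneys) separated by a bright ridge.
img = np.array([[5, 5, 9, 5, 5],
                [5, 1, 9, 1, 5],
                [5, 5, 9, 5, 5]], dtype=np.uint8)

# Marker image: labels 1 and 2 seed the two kidney basins; label 3 seeds background.
markers = np.zeros(img.shape, dtype=np.int16)
markers[1, 1] = 1
markers[1, 3] = 2
markers[0, 0] = 3

# Flood from the markers over the intensity image; each pixel ends up
# with the label of the marker whose basin reaches it first.
labels = ndimage.watershed_ift(img, markers)
print(labels)
```

In the paper's pipeline the hard part is generating such markers automatically in 3D; given good markers, the watershed itself is a standard library call.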
Laboratory systems integration: robotics and automation.
Felder, R A
1991-01-01
Robotic technology is going to have a profound impact on the clinical laboratory of the future. Faced with increased pressure to reduce health care spending yet increase services to patients, many laboratories are looking for alternatives to the inflexible or "fixed" automation found in many clinical analyzers. Robots are being examined by many clinical pathologists as an attractive technology which can adapt to the constant changes in laboratory testing. Already, laboratory designs are being altered to accommodate robotics and automated specimen processors. However, the use of robotics and computer intelligence in the clinical laboratory is still in its infancy. Successful examples of robotic automation exist in several laboratories. Investigators have used robots to automate endocrine testing, high performance liquid chromatography, and specimen transportation. Large commercial laboratories are investigating the use of specimen processors which combine fixed automation and robotics. Robotics has also reduced the exposure of medical technologists to specimens infected with viral pathogens. The successful examples of clinical robotics applications were a result of the cooperation of clinical chemists, engineers, and medical technologists. At the University of Virginia we have designed and implemented a robotic critical care laboratory. Initial clinical experience suggests that robotic performance is reliable; however, staff acceptance and utilization require continuing education. We are also developing a robotic cyclosporine assay which promises to greatly reduce the labor costs of this analysis. The future will bring lab-wide automation that will fully integrate computer artificial intelligence and robotics. Specimens will be transported by mobile robots. Specimen processing, aliquotting, and scheduling will be automated. (ABSTRACT TRUNCATED AT 250 WORDS)
Andersen, David W; Linnet, Kristian
2014-01-01
A screening method for 18 frequently measured exogenous anabolic steroids and the testosterone/epitestosterone (T/E) ratio in forensic cases has been developed and validated. The method involves fully automated sample preparation, including enzyme treatment, addition of internal standards and solid phase extraction, followed by analysis by liquid chromatography-tandem mass spectrometry (LC-MS-MS) using electrospray ionization with adduct formation for two compounds. Urine samples from 580 forensic cases were analyzed to determine the T/E ratio and the occurrence of exogenous anabolic steroids. Extraction recoveries ranged from 77 to 95%, matrix effects from 48 to 78%, overall process efficiencies from 40 to 54%, and the lower limit of identification from 2 to 40 ng/mL. In the 580 urine samples analyzed from routine forensic cases, 17 (2.9%) were found positive for one or more anabolic steroids. Only seven different steroids, including testosterone, were found in the material, suggesting that only a small number of common steroids are likely to occur in a forensic context. The steroids were often present in high concentrations (>100 ng/mL), and a combination of steroids and/or other drugs of abuse was seen in the majority of cases. The method presented serves as a fast and automated screening procedure, proving the suitability of LC-MS-MS for analyzing anabolic steroids. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Morris, Olivia; McMahon, Adam; Boutin, Herve; Grigg, Julian; Prenant, Christian
2016-06-15
[18F]Fluoroacetaldehyde is a biocompatible prosthetic group that has been implemented pre-clinically using a semi-automated remotely controlled system. Automation of radiosyntheses permits use of higher levels of [18F]fluoride whilst minimising radiochemist exposure and enhancing reproducibility. In order to achieve full automation of [18F]fluoroacetaldehyde peptide radiolabelling, a customised GE Tracerlab FX-FN with fully programmed automated synthesis was developed. The automated synthesis of [18F]fluoroacetaldehyde is carried out using a commercially available precursor, with reproducible yields of 26% ± 3 (decay-corrected, n = 10) within 45 min. Fully automated radiolabelling of a protein, recombinant human interleukin-1 receptor antagonist (rhIL-1RA), with [18F]fluoroacetaldehyde was achieved within 2 h. Radiolabelling efficiency of rhIL-1RA with [18F]fluoroacetaldehyde was confirmed using HPLC and reached 20% ± 10 (n = 5). Overall RCY of [18F]rhIL-1RA was 5% ± 2 (decay-corrected, n = 5) within 2 h starting from 35 to 40 GBq of [18F]fluoride. Specific activity measurements of 8.11-13.5 GBq/µmol were attained (n = 5), a near three-fold improvement over those achieved using the semi-automated approach. The strategy can be applied to radiolabelling a range of peptides and proteins with [18F]fluoroacetaldehyde, analogous to other aldehyde-bearing prosthetic groups, yet automation of the method provides reproducibility, thereby aiding translation to Good Manufacturing Practice manufacture and the transition from pre-clinical to clinical production. Copyright © 2016 The Authors. Journal of Labelled Compounds and Radiopharmaceuticals published by John Wiley & Sons, Ltd.
Analysis of communication in the standard versus automated aircraft
NASA Technical Reports Server (NTRS)
Veinott, Elizabeth S.; Irwin, Cheryl M.
1993-01-01
Past research has shown crew communication patterns to be associated with overall crew performance, recent flight experience together, low- and high-error crew performance, and personality variables. However, differences in communication patterns as a function of aircraft type and level of aircraft automation have not been fully addressed. Crew communications from ten MD-88 and twelve DC-9 crews were obtained during a full-mission simulation. In addition to large differences in the overall amount of communication during the normal and abnormal phases of flight (DC-9 crews generating less speech than MD-88 crews), differences in specific speech categories were also found. Log-linear analyses also revealed speaker-response patterns related to each aircraft type, although future analyses will need to account for variations due to crew performance.
Reeves, Anthony P.; Xie, Yiting; Liu, Shuang
2017-01-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets with documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for building such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; the evaluation methods usually used in academic studies do not scale to large datasets. The method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated with new image data and improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images, which are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, computer algorithms have been developed that achieve over 90% acceptable image segmentation on the complete dataset. PMID:28612037
Zeng, Wei-Fang; Liu, Ming; Kang, Yuan-Yuan; Li, Yan; Wang, Ji-Guang
2013-08-01
The present study aimed to evaluate the accuracy of the fully automated oscillometric upper-arm blood pressure monitor TM-2656 according to the British Hypertension Society (BHS) Protocol 1993. We recruited individuals until there were 85 eligible participants whose blood pressures met the distribution requirements specified by the BHS Protocol. For each individual, we sequentially measured the systolic and diastolic blood pressures using a mercury sphygmomanometer (two observers) and the TM-2656 device (one supervisor). Data analysis was carried out according to the BHS Protocol. The device achieved grade A. The percentages of blood pressure differences within 5, 10, and 15 mmHg were 62, 85, and 96%, respectively, for systolic blood pressure, and 71, 93, and 99%, respectively, for diastolic blood pressure. The average (±SD) of the device-observer differences was -2.1±7.8 mmHg (P<0.0001) for systolic and -1.1±5.8 mmHg (P<0.0001) for diastolic blood pressure. The A&D upper-arm blood pressure monitor TM-2656 has passed the requirements of the BHS Protocol and can thus be recommended for blood pressure measurement.
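The BHS 1993 grading reported above is a simple cumulative-tolerance calculation: the percentage of absolute device-observer differences falling within 5, 10, and 15 mmHg, with Grade A requiring at least 60/85/95%. A sketch of that calculation (the paired readings below are hypothetical, not the study's data):

```python
# BHS 1993 grading: cumulative % of absolute device-observer differences
# within 5, 10 and 15 mmHg. Grade A requires >= 60 / 85 / 95 %.
GRADE_A = (60, 85, 95)

def bhs_percentages(device, observer):
    """Percent of paired readings within 5, 10, 15 mmHg of each other."""
    diffs = [abs(d - o) for d, o in zip(device, observer)]
    n = len(diffs)
    return tuple(round(100 * sum(x <= t for x in diffs) / n) for t in (5, 10, 15))

def meets_grade_a(pcts):
    return all(p >= req for p, req in zip(pcts, GRADE_A))

# Hypothetical paired systolic readings (mmHg).
dev = [118, 124, 131, 140, 152, 160, 171, 95, 102, 110]
obs = [120, 122, 135, 141, 150, 166, 170, 97, 101, 112]
pcts = bhs_percentages(dev, obs)
print(pcts, meets_grade_a(pcts))  # → (90, 100, 100) True
```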
A fully automated microfluidic-based electrochemical sensor for real-time bacteria detection.
Altintas, Zeynep; Akgun, Mete; Kokturk, Guzin; Uludag, Yildiz
2018-02-15
A fully automated microfluidic-based electrochemical biosensor was designed and manufactured for pathogen detection. The quantification of Escherichia coli was investigated with standard and nanomaterial-amplified immunoassays in the concentration ranges of 0.99 × 10⁴ to 3.98 × 10⁹ cfu mL⁻¹ and 10 to 3.97 × 10⁷ cfu mL⁻¹, which resulted in detection limits of 1.99 × 10⁴ cfu mL⁻¹ and 50 cfu mL⁻¹, respectively. The developed methodology was then applied to E. coli quantification in water samples using the nanomaterial-modified assay. The same detection limit for E. coli was achieved for real-sample analysis, with a slight decrease in the sensor signal. Cross-reactivity studies were conducted by testing Shigella, Salmonella spp., Salmonella typhimurium and Staphylococcus aureus on the E. coli-specific antibody surface, which confirmed the high specificity of the developed immunoassays. The sensor surface could be regenerated multiple times, which significantly reduces the cost of the system. Our custom-designed biosensor is capable of detecting bacteria with high sensitivity and specificity, and can serve as a promising tool for pathogen detection. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Bohan; Wang, Hsing-Wen; Guo, Hengchang; Anderson, Erik; Tang, Qinggong; Wu, Tongtong; Falola, Reuben; Smith, Tikina; Andrews, Peter M.; Chen, Yu
2017-12-01
Chronic kidney disease (CKD) is characterized by a progressive loss of renal function over time. Histopathological analysis of the condition of glomeruli and the proximal convoluted tubules over time can provide valuable insights into the progression of CKD. Optical coherence tomography (OCT) is a technology that can analyze the microscopic structures of a kidney in a nondestructive manner. Recently, we have shown that OCT can provide real-time imaging of kidney microstructures in vivo without administering exogenous contrast agents. A model of CKD induced by intravenous Adriamycin (ADR) injection was evaluated by OCT. OCT images of the rat kidneys were captured every week for up to eight weeks. Tubular diameter and hypertrophic tubule population at multiple time points after ADR injection were evaluated through a fully automated computer-vision system. Results revealed that mean tubular diameter and hypertrophic tubule population increase with time in the post-ADR-injection period. The results suggest that OCT images of the kidney contain abundant information about kidney histopathology. Fully automated computer-aided diagnosis based on OCT has the potential for clinical evaluation of CKD conditions.
Application of Artificial Intelligence to Improve Aircraft Survivability.
1985-12-01
may be as smooth and effective as possible. 3. Fully Automatic Digital Engine Control (FADEC) Under development at the Naval Weapons Center, a major...goal of the FADEC program is to significantly reduce engine vulnerability by fully automating the regulation of engine controls. Given a thrust
NASA Astrophysics Data System (ADS)
Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter
2008-03-01
Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is the evaluation of a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing, and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps, including state-of-the-art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10⁻⁵ < p < 10⁻⁴) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10⁻⁶) and semi-automated (r = -0.83, p < 10⁻¹³) measurements. It achieved high accuracy while maximizing observer independence and time savings, and thus usefulness for clinical routine.
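The validation step above, relating the automated atrophy score to manual hippocampus segmentation, is a plain Pearson correlation. A self-contained sketch (the paired values are invented; the sign convention mirrors the negative r reported in the abstract, since atrophy scores rise as volumes fall):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: automated atrophy index vs. manual hippocampal volume (mL).
auto_score = [0.9, 0.7, 0.6, 0.4, 0.2]
manual_vol = [2.1, 2.6, 2.8, 3.3, 3.6]
r = pearson_r(auto_score, manual_vol)
print(round(r, 3))  # strongly negative, as expected for atrophy vs. volume
```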
Man-Robot Symbiosis: A Framework For Cooperative Intelligence And Control
NASA Astrophysics Data System (ADS)
Parker, Lynne E.; Pin, Francois G.
1988-10-01
The man-robot symbiosis concept has the fundamental objective of bridging the gap between fully human-controlled and fully autonomous systems to achieve true man-robot cooperative control and intelligence. Such a system would allow improved speed, accuracy, and efficiency of task execution, while retaining the man in the loop for innovative reasoning and decision-making. The symbiont would have capabilities for supervised and unsupervised learning, allowing an increase of expertise in a wide task domain. This paper describes a robotic system architecture facilitating the symbiotic integration of teleoperative and automated modes of task execution. The architecture reflects a unique blend of many disciplines of artificial intelligence into a working system, including job or mission planning, dynamic task allocation, man-robot communication, automated monitoring, and machine learning. These disciplines are embodied in five major components of the symbiotic framework: the Job Planner, the Dynamic Task Allocator, the Presenter/Interpreter, the Automated Monitor, and the Learning System.
Lange, Paul P; James, Keith
2012-10-08
A novel methodology for the synthesis of druglike heterocycle libraries has been developed through the use of flow reactor technology. The strategy employs orthogonal modification of a heterocyclic core, which is generated in situ, and was used to construct both a 25-membered library of druglike 3-aminoindolizines, and selected examples of a 100-member virtual library. This general protocol allows a broad range of acylation, alkylation and sulfonamidation reactions to be performed in conjunction with a tandem Sonogashira coupling/cycloisomerization sequence. All three synthetic steps were conducted under full automation in the flow reactor, with no handling or isolation of intermediates, to afford the desired products in good yields. This fully automated, multistep flow approach opens the way to highly efficient generation of druglike heterocyclic systems as part of a lead discovery strategy or within a lead optimization program.
Automated Semantic Indices Related to Cognitive Function and Rate of Cognitive Decline
ERIC Educational Resources Information Center
Pakhomov, Serguei V. S.; Hemmy, Laura S.; Lim, Kelvin O.
2012-01-01
The objective of our study is to introduce a fully automated, computational linguistic technique to quantify semantic relations between words generated on a standard semantic verbal fluency test and to determine its cognitive and clinical correlates. Cognitive differences between patients with Alzheimer's disease and mild cognitive impairment are…
ERIC Educational Resources Information Center
Hartt, Richard W.
This report discusses the characteristics, operations, and automation requirements of technical libraries providing services to organizations involved in aerospace and defense scientific and technical work, and describes the Local Automation Model project. This on-going project is designed to demonstrate the concept of a fully integrated library…
Robotic implementation of assays: tissue-nonspecific alkaline phosphatase (TNAP) case study.
Chung, Thomas D Y
2013-01-01
Laboratory automation and robotics have "industrialized" the execution of large-scale, high-capacity, high-throughput screening (HTS) campaigns (100 K-1 MM compounds/day), enabling large "libraries" of compounds (>200 K-2 MM) to be screened in a few days or weeks. Critical to the success of these HTS campaigns is the ability of a competent assay development team to convert a validated research-grade laboratory "benchtop" assay, suitable for manual or semi-automated operation on a few hundred compounds, into a robust, miniaturized (384- or 1,536-well format), well-engineered, scalable, industrialized assay that can be seamlessly implemented on a fully automated, fully integrated robotic screening platform for cost-effective screening of hundreds of thousands of compounds. Here, we provide a review of the theoretical guiding principles and practical considerations necessary to reduce often complex research biology to a "lean manufacturing" engineering endeavor comprising adaptation, automation, and implementation of HTS. Furthermore, we provide a detailed example, a cell-free in vitro biochemical enzymatic phosphatase assay for tissue-nonspecific alkaline phosphatase, that illustrates these principles and considerations.
Hoehl, Melanie M; Weißert, Michael; Dannenberg, Arne; Nesch, Thomas; Paust, Nils; von Stetten, Felix; Zengerle, Roland; Slocum, Alexander H; Steigert, Juergen
2014-06-01
This paper introduces a disposable battery-driven heating system for loop-mediated isothermal DNA amplification (LAMP) inside a centrifugally-driven DNA purification platform (LabTube). We demonstrate LabTube-based fully automated DNA purification of as low as 100 cell-equivalents of verotoxin-producing Escherichia coli (VTEC) in water, milk and apple juice in a laboratory centrifuge, followed by integrated and automated LAMP amplification with a reduction of hands-on time from 45 to 1 min. The heating system consists of two parallel SMD thick film resistors and a NTC as heating and temperature sensing elements. They are driven by a 3 V battery and controlled by a microcontroller. The LAMP reagents are stored in the elution chamber and the amplification starts immediately after the eluate is purged into the chamber. The LabTube, including a microcontroller-based heating system, demonstrates contamination-free and automated sample-to-answer nucleic acid testing within a laboratory centrifuge. The heating system can be easily parallelized within one LabTube and it is deployable for a variety of heating and electrical applications.
Development of a fully automated network system for long-term health-care monitoring at home.
Motoi, K; Kubota, S; Ikarashi, A; Nogawa, M; Tanaka, S; Nemoto, T; Yamakoshi, K
2007-01-01
Daily monitoring of health condition at home is very important not only as an effective scheme for early diagnosis and treatment of cardiovascular and other diseases, but also for prevention and control of such diseases. From this point of view, we have developed a prototype room for fully automated monitoring of various vital signs. Preliminary experiments using this room confirmed that (1) ECG and respiration during bathing, (2) excretion weight and blood pressure, and (3) respiration and cardiac beat during sleep could be monitored with reasonable accuracy by the sensor systems installed in the bathtub, toilet, and bed, respectively.
Fully automated, deep learning segmentation of oxygen-induced retinopathy images
Xiao, Sa; Bucher, Felicitas; Wu, Yue; Rokem, Ariel; Lee, Cecilia S.; Marra, Kyle V.; Fallon, Regis; Diaz-Aguilar, Sophia; Aguilar, Edith; Friedlander, Martin; Lee, Aaron Y.
2017-01-01
Oxygen-induced retinopathy (OIR) is a widely used model to study ischemia-driven neovascularization (NV) in the retina and to serve in proof-of-concept studies in evaluating antiangiogenic drugs for ocular, as well as nonocular, diseases. The primary parameters that are analyzed in this mouse model include the percentage of retina with vaso-obliteration (VO) and NV areas. However, quantification of these two key variables comes with a great challenge due to the requirement of human experts to read the images. Human readers are costly, time-consuming, and subject to bias. Using recent advances in machine learning and computer vision, we trained deep learning neural networks using over a thousand segmentations to fully automate segmentation in OIR images. While determining the percentage area of VO, our algorithm achieved a similar range of correlation coefficients to that of expert inter-human correlation coefficients. In addition, our algorithm achieved a higher range of correlation coefficients compared with inter-expert correlation coefficients for quantification of the percentage area of neovascular tufts. In summary, we have created an open-source, fully automated pipeline for the quantification of key values of OIR images using deep learning neural networks. PMID:29263301
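Once a per-pixel segmentation exists, the two key OIR readouts reduce to area percentages over the label map. A sketch assuming an encoding of 0 = normal retina, 1 = VO, 2 = NV (this label scheme is an assumption for illustration, not the authors' published format):

```python
import numpy as np

def area_percentages(label_map):
    """Percentage of the image covered by vaso-obliteration (VO) and
    neovascular (NV) tufts, given a per-pixel label map where
    0 = normal retina, 1 = VO, 2 = NV (assumed encoding)."""
    labels = np.asarray(label_map)
    total = labels.size
    return {"VO_pct": 100 * np.count_nonzero(labels == 1) / total,
            "NV_pct": 100 * np.count_nonzero(labels == 2) / total}

# Toy 10x10 "retina": 20 VO pixels and 5 NV pixels.
demo = np.zeros((10, 10), dtype=int)
demo[:2, :] = 1   # 20 pixels of VO
demo[9, :5] = 2   # 5 pixels of NV
print(area_percentages(demo))  # → {'VO_pct': 20.0, 'NV_pct': 5.0}
```

In the paper, the label map itself comes from the trained neural network; the percentage computation downstream is this simple.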
Designs and concept reliance of a fully automated high-content screening platform.
Radu, Constantin; Adrar, Hosna Sana; Alamir, Ab; Hatherley, Ian; Trinh, Trung; Djaballah, Hakim
2012-10-01
High-content screening (HCS) is becoming an accepted platform in academic and industry screening labs and does require slightly different logistics for execution. To automate our stand-alone HCS microscopes, namely, an alpha IN Cell Analyzer 3000 (INCA3000), originally a Praelux unit hooked to a Hudson Plate Crane with a maximum capacity of 50 plates per run, and the IN Cell Analyzer 2000 (INCA2000), in which up to 320 plates could be fed per run using the Thermo Fisher Scientific Orbitor, we opted for a 4 m linear track system harboring both microscopes, plate washer, bulk dispensers, and a high-capacity incubator allowing us to perform both live and fixed cell-based assays while accessing both microscopes on deck. Considerations in design were given to the integration of the alpha INCA3000, a new gripper concept to access the onboard nest, and peripheral locations on deck to ensure a self-reliant system capable of achieving higher throughput. The resulting system, referred to as Hestia, has been fully operational since the new year, has an onboard capacity of 504 plates, and harbors the only fully automated alpha INCA3000 unit in the world.
Déglon, Julien; Thomas, Aurélien; Daali, Youssef; Lauer, Estelle; Samer, Caroline; Desmeules, Jules; Dayer, Pierre; Mangin, Patrice; Staub, Christian
2011-01-25
This paper illustrates the development of an automated system for the on-line bioanalysis of dried blood spots (on-line DBS). A prototype was designed for integration into a conventional LC/MS/MS system, allowing the successive extraction of 30 DBS into the analytical system without any sample pretreatment. The developed method was assessed for the DBS analysis of flurbiprofen (FLB) and its metabolite 4-hydroxyflurbiprofen (OH-FLB) in human whole blood (i.e. 5 μL). The automated procedure was fully validated based on international criteria and showed good precision, trueness, and linearity over the expected concentration range (10 to 1000 ng/mL and 100 to 10,000 ng/mL for OH-FLB and FLB, respectively). Furthermore, the prototype showed good results in terms of recovery and carry-over. Stability of both analytes on filter paper was also investigated, and the results suggested that DBS could be stored at ambient temperature for over 1 month. The on-line DBS automated system was then successfully applied to a pharmacokinetic study performed on healthy male volunteers after oral administration of a single 50-mg dose of FLB. Additionally, a comparison between finger-capillary DBS and classic venous plasma concentrations was investigated. A good correlation was observed, demonstrating the complementarity of both sampling forms. The automated system described in this article represents an efficient tool for the LC/MS/MS analysis of DBS samples in many bioanalytical applications. Copyright © 2010 Elsevier B.V. All rights reserved.
Gibb, Stuart W.; Wood, John W.; Fauzi, R.; Mantoura, C.
1995-01-01
The automation and improved design and performance of Flow Injection Gas Diffusion-Ion Chromatography (FIGD-IC), a novel technique for the simultaneous analysis of trace ammonia (NH3) and methylamines (MAs) in aqueous media, is presented. Automated Flow Injection Gas Diffusion (FIGD) promotes the selective transmembrane diffusion of MAs and NH3 from an aqueous sample under strongly alkaline (pH > 12, NaOH), chelated (EDTA) conditions into a recycled acidic acceptor stream. The acceptor is then injected onto an ion chromatograph, where NH3 and the MAs are fully resolved as their cations and detected conductimetrically. A versatile PC-interfaced control unit and data capture unit (DCU) are employed in series to direct the solenoid valve switching sequence, IC operation, and collection of data. Automation, together with other modifications, improved both the linearity (R² > 0.99; MAs 0-100 nM, NH3 0-1000 nM) and precision (<8%) of FIGD-IC at nanomolar concentrations compared with the manual procedure. The system was successfully applied to the determination of MAs and NH3 in seawater and in trapped particulate and gaseous atmospheric samples during an oceanographic research cruise. PMID:18925047
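The linearity figure quoted above (R² > 0.99) comes from an ordinary least-squares calibration fit of signal against standard concentration. A minimal sketch (the NH3 standard concentrations and peak areas below are invented):

```python
def linearity(conc, signal):
    """Least-squares slope, intercept and R² for a calibration series."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in signal)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy ** 2 / (sxx * syy)

# Hypothetical NH3 standards (nM) and conductimetric peak areas.
conc = [0, 250, 500, 750, 1000]
peak = [0.1, 25.4, 50.2, 74.9, 100.3]
slope, intercept, r2 = linearity(conc, peak)
print(r2 > 0.99)  # the acceptance criterion reported for the automated system
```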
Automated detection of diabetic retinopathy on digital fundus images.
Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D
2002-02-01
The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
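Region growing, the segmentation workhorse named in the abstract above, can be sketched in a few lines. This toy version uses an explicit queue rather than recursion (avoiding Python's recursion limit) and invented intensity values; the published system combines region growing with the specialized 'Moat Operator':

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    intensity lies within `tol` of the seed value."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Bright "exudate" patch (value 200) on a darker retinal background (50).
img = [[50, 50, 50, 50],
       [50, 200, 200, 50],
       [50, 200, 200, 50],
       [50, 50, 50, 50]]
print(len(region_grow(img, seed=(1, 1), tol=20)))  # → 4
```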
Automation of the Environmental Control and Life Support System
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, J. Ray
1990-01-01
The objective of the Environmental Control and Life Support System (ECLSS) Advanced Automation Project is to recommend and develop advanced software for the initial and evolutionary Space Station Freedom (SSF) ECLS system that will minimize the crew and ground manpower needed for operations. Another objective is to capture ECLSS design and development knowledge for future missions. This report summarizes our results from Phase I, the ECLSS domain analysis phase, which we broke down into three steps: 1) analyze and document the baselined ECLS system, 2) envision as our goal an evolution to a fully automated regenerative life support system, built upon an augmented baseline, and 3) document the augmentations (hooks and scars) and advanced software systems that we see as necessary to achieve minimal manpower support for ECLSS operations. In addition, Phase I included development of an advanced software life cycle and the testing tools that will be used in developing the software, in preparation for Phases II and III, the development and integration phases, respectively. Automated knowledge acquisition, engineering, and verification can capture ECLSS development knowledge for future use, support the development of more robust and complex software, provide feedback to the KBS tool community, and ensure proper visibility of our efforts.
Lefman, Jonathan; Morrison, Robert; Subramaniam, Sriram
2007-01-01
We report the development of a novel, multi-specimen imaging system for high-throughput transmission electron microscopy. Our cartridge-based loading system, called the “Gatling”, permits the sequential examination of as many as 100 specimens in the microscope for room-temperature electron microscopy, using mechanisms for rapid and automated specimen exchange. The software for the operation of the Gatling and automated data acquisition has been implemented in an updated version of our in-house program AutoEM. In the current implementation of the system, the time required to deliver 95 specimens into the microscope and collect overview images from each is about 13 hours. Regions of interest are identified from a low-magnification atlas generated for each specimen, and an unlimited number of higher-magnification images can subsequently be acquired from these regions using fully automated data acquisition procedures that can be controlled from a remote interface. We anticipate that the availability of the Gatling will greatly accelerate the speed of data acquisition for a variety of applications in biology, materials science, and nanotechnology that require rapid screening and image analysis of multiple specimens. PMID:17240161
Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.
Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R
2018-01-01
Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis, which represents the current clinical standard approach, of the PET phantom scans by four experts. The automated analysis method successfully detected and measured all inserts in all test phantom scans. 
It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom activity ratios 9.7:1, 4:1, and 2:1, respectively. For all phantoms and at all contrast ratios, the average RMS error was found to be significantly lower for the proposed automated method compared to the manual analysis of the phantom scans. The uptake measurements produced by the automated method showed high correlation with the independent reference standard (R2 ≥ 0.9987). In addition, the average computing time for the automated method was 30.6 s and was found to be significantly lower (P ≪ 0.001) compared to manual analysis (mean: 247.8 s). The proposed automated approach was found to have less error when measured against the independent reference than the manual approach. It can be easily adapted to other phantoms with spherical inserts. In addition, it eliminates inter- and intraoperator variability in PET phantom analysis and is significantly more time efficient, and therefore represents a promising approach to facilitate and simplify PET standardization and harmonization efforts. © 2017 American Association of Physicists in Medicine.
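An RMS localization error of the kind reported above can be illustrated with a small sketch; the function and the insert coordinates below are hypothetical, not taken from the paper.

```python
import math

def rms_error(detected, reference):
    """Root-mean-square Euclidean distance between paired detected
    and reference insert centers (coordinates in mm)."""
    assert len(detected) == len(reference)
    sq = [sum((d - r) ** 2 for d, r in zip(det, ref))
          for det, ref in zip(detected, reference)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical centers for three inserts (x, y, z in mm).
det = [(10.0, 20.0, 30.0), (40.0, 50.0, 60.0), (70.0, 80.0, 90.0)]
ref = [(10.0, 20.0, 31.0), (40.0, 51.0, 60.0), (71.0, 80.0, 90.0)]
print(round(rms_error(det, ref), 3))  # 1.0
```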
Large and Small Magellanic Clouds age-metallicity relationships
NASA Astrophysics Data System (ADS)
Perren, G. I.; Piatti, A. E.; Vázquez, R. A.
2017-10-01
We present a new determination of the age-metallicity relation for both Magellanic Clouds, estimated through the homogeneous analysis of 239 observed star clusters. All clusters in our set were observed with the filters of the Washington photometric system. The Automated Stellar cluster Analysis package (ASteCA) was employed to derive the clusters' fundamental parameters, in particular their ages and metallicities, through an unassisted process. We find that our age-metallicity relations (AMRs) cannot be fully matched to any of the estimations found in twelve previous works, and are better explained by a combination of several of them in different age intervals.
Automated System for Early Breast Cancer Detection in Mammograms
NASA Technical Reports Server (NTRS)
Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.
1993-01-01
The increasing demand for mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest the need for an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications, which are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.
Results from the first fully automated PBS-mask process and pelliclization
NASA Astrophysics Data System (ADS)
Oelmann, Andreas B.; Unger, Gerd M.
1994-02-01
Automation is widely discussed in IC and mask manufacturing and is partially realized everywhere. The idea for this automation goes back to 1978, when it turned out that the operators for the then newly installed PBS process line (the first in Europe) should be trained to behave like robots for particle reduction, to gain lower defect densities on the masks. More than this goal has been achieved. It turned out recently that the automation, with its dedicated work routes and detailed documentation of every lot (individual mask or reticle), made it easy to obtain the CEEC certificate, which includes ISO 9001.
Li, Tim M H; Chau, Michael; Wong, Paul W C; Lai, Eliza S Y; Yip, Paul S F
2013-05-15
Internet-based learning programs provide people with extensive health care information and self-help guidelines on improving their health. The advent of Web 2.0 and social networks provides significant flexibility for embedding highly interactive components, such as games, to foster learning processes. The effectiveness of game-based learning on social networks has not yet been fully evaluated. The aim of this study was to assess the effectiveness of a fully automated, Web-based, social network electronic game on enhancing the mental health knowledge and problem-solving skills of young people. We investigated potential motivational constructs directly affecting the learning outcome. Gender differences in learning outcome and motivation were also examined. A pre/posttest design was used to evaluate the fully automated Web-based intervention. Participants, recruited from a closed online user group, self-assessed their mental health literacy and motivational constructs before and after completing the game within a 3-week period. The electronic game was designed according to cognitive-behavioral approaches. Completers and intent-to-treat analyses, using multiple imputation for missing data, were performed. Regression analysis with backward selection was employed when examining the relationship between knowledge enhancement and motivational constructs. The sample included 73 undergraduates (42 females) for the completers analysis. The gaming approach was effective in enhancing young people's mental health literacy (d=0.65). The finding was also consistent with the intent-to-treat analysis, which included 127 undergraduates (75 females). No gender differences were found in learning outcome (P=.97). Intrinsic goal orientation was the primary factor in learning motivation, whereas test anxiety was successfully alleviated in the game setting. No gender differences were found on any learning motivation subscales (P>.10).
We also found that participants' self-efficacy for learning and performance, as well as test anxiety, significantly affected their learning outcomes, whereas other motivational subscales were statistically nonsignificant. Electronic games implemented through social networking sites appear to effectively enhance users' mental health literacy.
Some selected quantitative methods of thermal image analysis in Matlab.
Koprowski, Robert
2016-05-01
The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the area of the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
StrAuto: automation and parallelization of STRUCTURE analysis.
Chhatre, Vikram E; Emerson, Kevin J
2017-03-24
Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa, including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational overload of this analysis, it does not fully automate the use of replicate STRUCTURE runs required for downstream inference of the optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach, in addition to implementing parallel computation, a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and is available for download from http://strauto.popgen.org.
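The Evanno ΔK step that such pipelines automate can be sketched as below. This is a minimal, assumed implementation of the published formula (mean absolute second difference of replicate log-likelihoods, divided by their standard deviation at K), not StrAuto's own code; the replicate values are invented for illustration.

```python
from statistics import mean, stdev

def delta_k(lnP):
    """Evanno et al. (2005) ΔK from replicate STRUCTURE log-likelihoods.
    `lnP` maps K -> list of ln P(D) values over replicate runs (equal
    replicate counts assumed). Returns {K: ΔK} for every K whose
    neighbours K-1 and K+1 are both present."""
    out = {}
    for k in sorted(lnP):
        if k - 1 in lnP and k + 1 in lnP:
            second = [abs(lnP[k + 1][i] - 2 * lnP[k][i] + lnP[k - 1][i])
                      for i in range(len(lnP[k]))]
            out[k] = mean(second) / stdev(lnP[k])
    return out

# Hypothetical replicate log-likelihoods for K = 1..4 (3 runs each).
lnP = {1: [-500.0, -502.0, -501.0],
       2: [-400.0, -401.0, -399.0],
       3: [-395.0, -396.0, -394.0],
       4: [-394.0, -395.0, -393.0]}
dk = delta_k(lnP)
# ΔK peaks at the K with the sharpest change in slope of ln P(D).
print(max(dk, key=dk.get))  # 2
```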
Meyer, Denny; Austin, David William; Kyrios, Michael
2011-01-01
Background The development of e-mental health interventions to treat or prevent mental illness and to enhance wellbeing has risen rapidly over the past decade. This development assists the public in sidestepping some of the obstacles that are often encountered when trying to access traditional face-to-face mental health care services. Objective The objective of our study was to investigate the posttreatment effectiveness of five fully automated self-help cognitive behavior e-therapy programs for generalized anxiety disorder (GAD), panic disorder with or without agoraphobia (PD/A), obsessive–compulsive disorder (OCD), posttraumatic stress disorder (PTSD), and social anxiety disorder (SAD) offered to the international public via Anxiety Online, an open-access full-service virtual psychology clinic for anxiety disorders. Methods We used a naturalistic participant choice, quasi-experimental design to evaluate each of the five Anxiety Online fully automated self-help e-therapy programs. Participants were required to have at least subclinical levels of one of the anxiety disorders to be offered the associated disorder-specific fully automated self-help e-therapy program. These programs are offered free of charge via Anxiety Online. Results A total of 225 people self-selected one of the five e-therapy programs (GAD, n = 88; SAD, n = 50; PD/A, n = 40; PTSD, n = 30; OCD, n = 17) and completed their 12-week posttreatment assessment. Significant improvements were found on 21/25 measures across the five fully automated self-help programs. At postassessment we observed significant reductions on all five anxiety disorder clinical disorder severity ratings (Cohen d range 0.72–1.22), increased confidence in managing one’s own mental health care (Cohen d range 0.70–1.17), and decreases in the total number of clinical diagnoses (except for the PD/A program, where a positive trend was found) (Cohen d range 0.45–1.08). 
In addition, we found significant improvements in quality of life for the GAD, OCD, PTSD, and SAD e-therapy programs (Cohen d range 0.11–0.96) and significant reductions relating to general psychological distress levels for the GAD, PD/A, and PTSD e-therapy programs (Cohen d range 0.23–1.16). Overall, treatment satisfaction was good across all five e-therapy programs, and posttreatment assessment completers reported using their e-therapy program an average of 395.60 (SD 272.2) minutes over the 12-week treatment period. Conclusions Overall, all five fully automated self-help e-therapy programs appear to be delivering promising high-quality outcomes; however, the results require replication. Trial Registration Australian and New Zealand Clinical Trials Registry ACTRN121611000704998; http://www.anzctr.org.au/trial_view.aspx?ID=336143 (Archived by WebCite at http://www.webcitation.org/618r3wvOG) PMID:22057287
Grab a coffee: your aerial images are already analyzed
NASA Astrophysics Data System (ADS)
Garetto, Anthony; Rademacher, Thomas; Schulz, Kristian
2015-07-01
For over two decades the AIM™ platform has been utilized in mask shops as the standard for actinic review of photomask sites in order to perform defect disposition and repair review. Throughout this time the measurement throughput of the systems has been improved in order to keep pace with the requirements demanded by a manufacturing environment; however, the analysis of the captured sites has seen little improvement and has remained a manual process. This manual analysis of aerial images is time-consuming, subject to error and unreliability, and contributes to holding up turn-around time (TAT) and slowing process flow in a manufacturing environment. AutoAnalysis, the first application available for the FAVOR® platform, offers a solution to these problems by providing fully automated data transfer and analysis of AIM™ aerial images. The data is automatically output in a customizable format that can be tailored to your internal needs and the requests of your customers. Savings in operator time arise from the analysis, which no longer needs to be performed manually. Reliability is improved as human error is eliminated, ensuring the most defective region is always and consistently captured. Finally, the TAT is shortened and process flow for the back end of the line improved, as the analysis is fast and runs in parallel to the measurements. In this paper the concept and approach of AutoAnalysis are presented, as well as an update on the status of the project. A look at the benefits arising from the automation and the customizable approach of the solution will be shown.
Fully automated gynecomastia quantification from low-dose chest CT
NASA Astrophysics Data System (ADS)
Liu, Shuang; Sonnenblick, Emily B.; Azour, Lea; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2018-02-01
Gynecomastia is characterized by the enlargement of male breasts and is a common and sometimes distressing condition found in over half of adult men over the age of 44. Although the majority of gynecomastia is physiologic or idiopathic, its occurrence may also be associated with an extensive variety of underlying systemic diseases or drug toxicity. With the recent large-scale implementation of annual lung cancer screening using low-dose chest CT (LDCT), gynecomastia is believed to be a frequent incidental finding on LDCT. A fully automated system for gynecomastia quantification from LDCT is presented in this paper. The whole breast region is first segmented using an anatomy-orientated approach based on the propagation of pectoral muscle fronts in the vertical direction. The subareolar region is then localized, and the fibroglandular tissue within it is measured for the assessment of gynecomastia. The presented system was validated using 454 breast regions from non-contrast LDCT scans of 227 adult men. The ground truth was established by an experienced radiologist by classifying each breast into one of five categorical scores. The automated measurements have been demonstrated to achieve promising performance for gynecomastia diagnosis, with an AUC of 0.86 for the ROC curve and a statistically significant Spearman correlation of r=0.70 (p < 0.001) with the reference categorical grades. The encouraging results demonstrate the feasibility of fully automated gynecomastia quantification from LDCT, which may aid the early detection as well as the treatment of both gynecomastia and the underlying medical problems, if any, that cause it.
Fully Automated Driving: Impact of Trust and Practice on Manual Control Recovery.
Payre, William; Cestac, Julien; Delhomme, Patricia
2016-03-01
An experiment was performed in a driving simulator to investigate the impacts of practice, trust, and interaction on manual control recovery (MCR) when employing fully automated driving (FAD). To increase the efficiency of partially or highly automated driving use and to improve safety, some studies have addressed trust in driving automation and training, but few studies have focused on FAD. FAD is an autonomous system that has full control of a vehicle without any need for intervention by the driver. A total of 69 drivers with a valid license practiced with FAD. They were distributed evenly across two conditions: simple practice and elaborate practice. When examining emergency MCR, a correlation was found between trust and reaction time in the simple practice group (i.e., higher trust meant a longer reaction time), but not in the elaborate practice group. This result indicated that, to mitigate the negative impact of overtrust on reaction time, more appropriate practice may be needed. Drivers should be trained in how the automated device works so as to improve MCR performance in case of an emergency. The practice format used in this study could be used for the first interaction with an FAD car when acquiring such a vehicle. © 2015, Human Factors and Ergonomics Society.
MannDB: A microbial annotation database for protein characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C; Lam, M; Smith, J
2006-05-19
MannDB was created to meet a need for rapid, comprehensive automated protein sequence analyses to support selection of proteins suitable as targets for driving the development of reagents for pathogen or protein toxin detection. Because a large number of open-source tools were needed, it was necessary to produce a software system to scale the computations for whole-proteome analysis. Thus, we built a fully automated system for executing software tools and for storage, integration, and display of automated protein sequence analysis and annotation data. MannDB is a relational database that organizes data resulting from fully automated, high-throughput protein-sequence analyses using open-source tools. Types of analyses provided include predictions of cleavage, chemical properties, classification, features, functional assignment, post-translational modifications, motifs, antigenicity, and secondary structure. Proteomes (lists of hypothetical and known proteins) are downloaded and parsed from Genbank and then inserted into MannDB, and annotations from SwissProt are downloaded when identifiers are found in the Genbank entry or when identical sequences are identified. Currently 36 open-source tools are run against MannDB protein sequences either on local systems or by means of batch submission to external servers. In addition, BLAST against protein entries in MvirDB, our database of microbial virulence factors, is performed. A web client browser enables viewing of computational results and downloaded annotations, and a query tool enables structured and free-text search capabilities. When available, links to external databases, including MvirDB, are provided. MannDB contains whole-proteome analyses for at least one representative organism from each category of biological threat organism listed by APHIS, CDC, HHS, NIAID, USDA, USFDA, and WHO.
MannDB comprises a large number of genomes and comprehensive protein sequence analyses representing organisms listed as high-priority agents on the websites of several governmental organizations concerned with bio-terrorism. MannDB provides the user with a BLAST interface for comparison of native and non-native sequences and a query tool for conveniently selecting proteins of interest. In addition, the user has access to a web-based browser that compiles comprehensive and extensive reports.
Manual Vital Signs Reliably Predict Need for Life-Saving Interventions in Trauma Patients
2005-10-01
…and pulse oximetry (SpO2); and Group 3, Group 2 plus fully automated noninvasive blood pressure measurements, heart rate, end-tidal carbon dioxide, and… …Infection, and Critical Care, Volume 59, Number 4, 821. …pulse character were not initially recorded and resulted in elimination of 339 records created from August 2001 until May 2002. Analysis of the SpO2…
Building Flexible User Interfaces for Solving PDEs
NASA Astrophysics Data System (ADS)
Logg, Anders; Wells, Garth N.
2010-09-01
FEniCS is a collection of software tools for the automated solution of differential equations by finite element methods. In this note, we describe how FEniCS can be used to solve a simple nonlinear model problem with varying levels of automation. At one extreme, FEniCS provides tools for the fully automated and adaptive solution of nonlinear partial differential equations. At the other extreme, FEniCS provides a range of tools that allow the computational scientist to experiment with novel solution algorithms.
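FEniCS itself generates such solvers automatically from a variational formulation. Purely as an illustration of the underlying finite element method, and not FEniCS code, here is a self-contained 1D sketch for -u'' = f on (0, 1) with homogeneous Dirichlet conditions, using linear elements and a tridiagonal solve.

```python
def fem_poisson_1d(f, n):
    """P1 finite elements for -u'' = f on (0,1) with u(0) = u(1) = 0.
    Uniform mesh with n elements; returns u at the n-1 interior nodes.
    The stiffness matrix is tridiag(-1, 2, -1)/h; the load vector uses
    vertex quadrature, b_i = h * f(x_i)."""
    h = 1.0 / n
    m = n - 1
    diag = [2.0 / h] * m
    off = -1.0 / h
    rhs = [h * f((i + 1) * h) for i in range(m)]
    # Thomas algorithm (tridiagonal Gaussian elimination).
    cp = [0.0] * m
    dp = [0.0] * m
    cp[0] = off / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, m):
        denom = diag[i] - off * cp[i - 1]
        cp[i] = off / denom
        dp[i] = (rhs[i] - off * dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# For f = 1 the exact solution is u(x) = x(1-x)/2, and P1 FEM
# reproduces it exactly at the mesh nodes.
u = fem_poisson_1d(lambda x: 1.0, 10)
print(round(u[4], 6))  # node x = 0.5
```

In FEniCS the assembly and solve above are replaced by a symbolic variational form, and nonlinear or adaptive variants change only a few lines.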
Verhey, Janko F; Nathan, Nadia S
2004-01-01
Background Finite element method (FEM) analysis for intraoperative modeling of the left ventricle (LV) is presently not possible. Since 3D structural data of the LV is now obtainable using standard transesophageal echocardiography (TEE) devices intraoperatively, the present study describes a method to transfer this data into a commercially available FEM analysis system: ABAQUS©. Methods In this prospective study TomTec LV Analysis TEE© Software was used for semi-automatic endocardial border detection, reconstruction, and volume-rendering of the clinical 3D echocardiographic data. A newly developed software program, MVCP FemCoGen©, written in Delphi, reformats the TomTec file structures in five patients for use in ABAQUS and allows visualization of regional deformation of the LV. Results This study demonstrates that a fully automated importation of 3D TEE data into FEM modeling is feasible and can be efficiently accomplished in the operating room. Conclusion For complete intraoperative 3D LV finite element analysis, three input elements are necessary: 1. time-gated, reality-based structural information, 2. continuous LV pressure, and 3. instantaneous tissue elastance. The first of these elements is now available using the methods presented herein. PMID:15473901
Keller, Brad M; Chen, Jinbo; Daye, Dania; Conant, Emily F; Kontos, Despina
2015-08-25
Breast density, commonly quantified as the percentage of mammographically dense tissue area, is a strong breast cancer risk factor. We investigated associations between breast cancer and fully automated measures of breast density made by a new publicly available software tool, the Laboratory for Individualized Breast Radiodensity Assessment (LIBRA). Digital mammograms from 106 invasive breast cancer cases and 318 age-matched controls were retrospectively analyzed. Density estimates acquired by LIBRA were compared with commercially available software and standard Breast Imaging-Reporting and Data System (BI-RADS) density estimates. Associations between the different density measures and breast cancer were evaluated by using logistic regression after adjustment for Gail risk factors and body mass index (BMI). Area under the curve (AUC) of the receiver operating characteristic (ROC) was used to assess discriminatory capacity, and odds ratios (ORs) for each density measure are provided. All automated density measures had a significant association with breast cancer (OR = 1.47-2.23, AUC = 0.59-0.71, P < 0.01) which was strengthened after adjustment for Gail risk factors and BMI (OR = 1.96-2.64, AUC = 0.82-0.85, P < 0.001). In multivariable analysis, absolute dense area (OR = 1.84, P < 0.001) and absolute dense volume (OR = 1.67, P = 0.003) were jointly associated with breast cancer (AUC = 0.77, P < 0.01), having a larger discriminatory capacity than models considering the Gail risk factors alone (AUC = 0.64, P < 0.001) or the Gail risk factors plus standard area percent density (AUC = 0.68, P = 0.01). After BMI was further adjusted for, absolute dense area retained significance (OR = 2.18, P < 0.001) and volume percent density approached significance (OR = 1.47, P = 0.06). This combined area-volume density model also had a significantly (P < 0.001) improved discriminatory capacity (AUC = 0.86) relative to a model considering the Gail risk factors plus BMI (AUC = 0.80). 
Our study suggests that new automated density measures may ultimately augment the current standard breast cancer risk factors. In addition, the ability to fully automate density estimation with digital mammography, particularly through the use of publicly available breast density estimation software, could accelerate the translation of density reporting in routine breast cancer screening and surveillance protocols and facilitate broader research into the use of breast density as a risk factor for breast cancer.
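The AUC values quoted above follow the standard Mann-Whitney interpretation of the ROC curve: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with hypothetical risk scores (not the study's data):

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    count case/control pairs where the case scores higher, with ties
    counting half, and normalize by the number of pairs."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical density-based risk scores.
cases = [0.9, 0.8, 0.7, 0.4]
controls = [0.6, 0.5, 0.4, 0.2]
print(auc(cases, controls))  # 0.84375
```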
JPLEX: Java Simplex Implementation with Branch-and-Bound Search for Automated Test Assembly
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Dodd, Barbara G.; Chung, Hyewon
2011-01-01
JPLEX, short for Java simPLEX, is an automated test assembly (ATA) program. It is a mixed integer linear programming (MILP) solver written in Java. It reads in a configuration file, solves the minimization problem, and produces an output file for postprocessing. It implements the simplex algorithm to create a fully relaxed solution and…
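Branch-and-bound over a relaxed (fractional) solution, as used by such MILP solvers, can be sketched on a deliberately tiny selection problem. The items, objective, and bound below are hypothetical stand-ins for a real automated-test-assembly model: choose items to maximize total information subject to a total-length limit, pruning with the fractional-relaxation bound.

```python
def bnb_select(info, length, max_len):
    """Tiny branch-and-bound: choose items (test questions) to maximize
    total information with total length <= max_len. The bound at each
    node is the fractional (relaxed) value over the remaining items."""
    order = sorted(range(len(info)), key=lambda i: info[i] / length[i],
                   reverse=True)

    def bound(idx, cur_info, cur_len):
        # Relaxation: fill remaining capacity greedily, allowing a
        # fractional final item.
        b, room = cur_info, max_len - cur_len
        for i in order[idx:]:
            if length[i] <= room:
                room -= length[i]
                b += info[i]
            else:
                b += info[i] * room / length[i]
                break
        return b

    best = [0.0, frozenset()]

    def branch(idx, cur_info, cur_len, chosen):
        if cur_info > best[0]:
            best[0], best[1] = cur_info, frozenset(chosen)
        if idx == len(order) or bound(idx, cur_info, cur_len) <= best[0]:
            return  # prune: relaxation cannot beat the incumbent
        i = order[idx]
        if cur_len + length[i] <= max_len:          # include item i
            branch(idx + 1, cur_info + info[i], cur_len + length[i],
                   chosen | {i})
        branch(idx + 1, cur_info, cur_len, chosen)  # exclude item i

    branch(0, 0.0, 0, frozenset())
    return best[0], best[1]

# Hypothetical item information values and administration lengths.
info = [10.0, 7.0, 6.0, 3.0]
length = [4, 3, 3, 1]
val, picked = bnb_select(info, length, max_len=7)
print(val, sorted(picked))  # 17.0 [0, 1]
```

A production ATA solver replaces the greedy bound with a full simplex solve of the LP relaxation at each node, but the search skeleton is the same.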
ECMS--Educational Contest Management System for Selecting Elite Students
ERIC Educational Resources Information Center
Schneider, Thorsten
2004-01-01
Selecting elite students from a large pool is a difficult task; the main problem is providing automated processes that reduce human work. ECMS (Educational Contest Management System) is an online tool that helps, in a fully or partly automated way, with the task of selecting such elite students out of a mass of candidates. International tests…
An Automated Distillation Column for the Unit Operations Laboratory
ERIC Educational Resources Information Center
Perkins, Douglas M.; Bruce, David A.; Gooding, Charles H.; Butler, Justin T.
2005-01-01
A batch distillation apparatus has been designed and built for use in the undergraduate unit operations laboratory course. The column is fully automated and is accompanied by data acquisition and control software. A mixture of 1-propanol and 2-propanol is separated in the column, using either a constant distillate rate or constant composition…
An automated system for global atmospheric sampling using B-747 airliners
NASA Technical Reports Server (NTRS)
Lew, K. Q.; Gustafsson, U. R. C.; Johnson, R. E.
1981-01-01
The global air sampling program utilizes commercial aircraft in scheduled service to measure atmospheric constituents. A fully automated system designed for the 747 aircraft is described. Airline operational constraints and data and control subsystems are treated. The overall program management, system monitoring, and data retrieval from four aircraft in global service are described.
Toward best practice: leveraging the electronic patient record as a clinical data warehouse.
Ledbetter, C S; Morgan, M W
2001-01-01
Automating clinical and administrative processes via an electronic patient record (EPR) gives clinicians the point-of-care tools they need to deliver better patient care. However, to improve clinical practice as a whole and then evaluate it, healthcare must go beyond basic automation and convert EPR data into aggregated, multidimensional information. Unfortunately, few EPR systems have the established, powerful analytical clinical data warehouses (CDWs) required for this conversion. This article describes how an organization can support best practice by leveraging a CDW that is fully integrated into its EPR and clinical decision support (CDS) system. The article (1) discusses the requirements for comprehensive CDS, including on-line analytical processing (OLAP) of data at both transactional and aggregate levels, (2) suggests that the transactional data acquired by an OLTP EPR system must be remodeled to support retrospective, population-based, aggregate analysis of those data, and (3) concludes that this aggregate analysis is best provided by a separate CDW system.
Automated Propulsion Data Screening demonstration system
NASA Technical Reports Server (NTRS)
Hoyt, W. Andes; Choate, Timothy D.; Whitehead, Bruce A.
1995-01-01
A fully-instrumented firing of a propulsion system typically generates a very large quantity of data. In the case of the Space Shuttle Main Engine (SSME), data analysis from ground tests and flights is currently a labor-intensive process. Human experts spend a great deal of time examining the large volume of sensor data generated by each engine firing. These experts look for any anomalies in the data which might indicate engine conditions warranting further investigation. The contract effort was to develop a 'first-cut' screening system for application to SSME engine firings that would identify the relatively small volume of data which is unusual or anomalous in some way. With such a system, limited and expensive human resources could focus on this small volume of unusual data for thorough analysis. The overall project objective was to develop a fully operational Automated Propulsion Data Screening (APDS) system with the capability of detecting significant trends and anomalies in transient and steady-state data. However, the effort limited screening of transient data to ground test data for throttle-down cases typical of the 3-g acceleration, and for engine throttling required to reach the maximum dynamic pressure limits imposed on the Space Shuttle. This APDS is based on neural networks designed to detect anomalies in propulsion system data that are not part of the data used for neural network training. The delivered system allows engineers to build their own screening sets for application to completed or planned firings of the SSME. ERC developers also built some generic screening sets that NASA engineers could apply immediately to their data analysis efforts.
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
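The Bayesian tilted-ring idea can be illustrated with a toy example: recover a flat rotation curve from a noisy 2D velocity field by Metropolis-Hastings sampling under broad uniform priors. This is a minimal sketch, not 2DBAT itself; the inclination, noise level, proposal widths and prior ranges below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock velocity field: flat rotation curve, fixed inclination (hypothetical values)
inc = np.deg2rad(45.0)
vrot_true, vsys_true = 150.0, 10.0
x, y = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))
theta = np.arctan2(y / np.cos(inc), x)            # azimuth in the deprojected disc plane
vfield = vsys_true + vrot_true * np.cos(theta) * np.sin(inc)
vfield += rng.normal(0.0, 5.0, vfield.shape)      # observational noise (5 km/s)

def log_post(p):
    """Log-posterior: Gaussian likelihood, flat priors over broad ranges."""
    vrot, vsys = p
    if not (0.0 < vrot < 500.0 and -100.0 < vsys < 100.0):
        return -np.inf
    model = vsys + vrot * np.cos(theta) * np.sin(inc)
    return -0.5 * np.sum((vfield - model) ** 2) / 5.0 ** 2

# Metropolis-Hastings random walk
p = np.array([100.0, 0.0])
lp = log_post(p)
chain = []
for _ in range(4000):
    q = p + rng.normal(0.0, [1.0, 0.3])
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:           # accept/reject step
        p, lp = q, lq
    chain.append(p.copy())
chain = np.array(chain[1000:])                    # discard burn-in
vrot_fit, vsys_fit = chain.mean(axis=0)
```

Because only broad prior ranges are needed (no careful starting guess), the same loop can run unattended over many galaxies, which is the automation point the abstract makes.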
Automated seismic waveform location using Multichannel Coherency Migration (MCM)-I. Theory
NASA Astrophysics Data System (ADS)
Shi, Peidong; Angus, Doug; Rost, Sebastian; Nowacki, Andy; Yuan, Sanyi
2018-03-01
With the proliferation of dense seismic networks sampling the full seismic wavefield, recorded seismic data volumes are getting bigger and automated analysis tools to locate seismic events are essential. Here, we propose a novel Multichannel Coherency Migration (MCM) method to locate earthquakes in continuous seismic data and reveal the location and origin time of seismic events directly from recorded waveforms. By continuously calculating the coherency between waveforms from different receiver pairs, MCM greatly expands the available information which can be used for event location. MCM does not require phase picking or phase identification, which allows fully automated waveform analysis. By migrating the coherency between waveforms, MCM leads to improved source energy focusing. We have tested and compared MCM to other migration-based methods in noise-free and noisy synthetic data. The tests and analysis show that MCM is noise resistant and can achieve more accurate results compared with other migration-based methods. MCM is able to suppress strong interference from other seismic sources occurring at a similar time and location. It can be used with arbitrary 3D velocity models and is able to obtain reasonable location results with smooth but inaccurate velocity models. MCM exhibits excellent location performance and can be easily parallelized giving it large potential to be developed as a real-time location method for very large datasets.
Advances in Mössbauer data analysis
NASA Astrophysics Data System (ADS)
de Souza, Paulo A.
1998-08-01
Since Rudolf Mössbauer's first publication, the Mössbauer community has generated a huge amount of data across several fields of knowledge. Interlaboratory measurements of the same substance may yield minor differences in the Mössbauer parameters (MP) of isomer shift, quadrupole splitting and internal magnetic field. A conventional data bank of published MP is therefore of limited help in identifying substances: an exact search cannot differentiate values of Mössbauer parameters that agree within experimental error (e.g., IS = 0.22 mm/s versus IS = 0.23 mm/s), even though physically both values may be considered the same. An artificial neural network (ANN), by contrast, can identify a substance and its crystalline structure from measured MP, and slight variations in those parameters do not hinder the identification. A barrier to the popularization of Mössbauer spectroscopy as an analytical technique has been the absence of fully automated equipment, since the analysis of a Mössbauer spectrum is normally time-consuming and requires a specialist. In this work, the fitting of a Mössbauer spectrum was completely automated through the use of genetic algorithms and fuzzy logic. Both software and hardware systems were implemented, yielding a fully automated Mössbauer data analysis system. The developed system will be presented.
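A genetic-algorithm spectrum fit of the kind described can be sketched for a single Lorentzian absorption line. This is a generic GA (tournament selection, blend crossover, mutation, elitism), not the authors' implementation, and the line parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic transmission spectrum: one Lorentzian absorption line (hypothetical values)
v = np.linspace(-4.0, 4.0, 200)                  # velocity axis (mm/s)
def lorentz(p):
    depth, centre, width = p
    return 1.0 - depth * width**2 / ((v - centre)**2 + width**2)
true = np.array([0.3, 0.5, 0.25])
spec = lorentz(true) + rng.normal(0.0, 0.005, v.size)

def fitness(p):
    return -np.sum((spec - lorentz(p))**2)       # negative chi-squared

# Minimal genetic algorithm over (depth, centre, width)
lo = np.array([0.0, -2.0, 0.05])
hi = np.array([1.0, 2.0, 1.0])
pop = rng.uniform(lo, hi, (60, 3))
for gen in range(80):
    f = np.array([fitness(p) for p in pop])
    new = [pop[f.argmax()].copy()]               # elitism: carry best individual over
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if f[i] > f[j] else pop[j]    # tournament parent 1
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if f[i] > f[j] else pop[j]    # tournament parent 2
        w = rng.uniform(size=3)
        child = w * a + (1.0 - w) * b            # blend crossover
        child += rng.normal(0.0, 0.02, 3)        # mutation
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)
best = pop[np.array([fitness(p) for p in pop]).argmax()]
```

No starting guess or operator supervision is needed, which is the property that makes GA fitting attractive for a fully automated analysis chain.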
Evaluating the efficacy of fully automated approaches for the selection of eye blink ICA components
Pontifex, Matthew B.; Miskovic, Vladimir; Laszlo, Sarah
2017-01-01
Independent component analysis (ICA) offers a powerful approach for the isolation and removal of eye blink artifacts from EEG signals. Manual identification of the eye blink ICA component by inspection of scalp map projections, however, is prone to error, particularly when non-artifactual components exhibit topographic distributions similar to the blink. The aim of the present investigation was to determine the extent to which automated approaches for selecting eye blink related ICA components could be utilized to replace manual selection. We evaluated popular blink selection methods relying on spatial features [EyeCatch()], combined stereotypical spatial and temporal features [ADJUST()], and a novel method relying on time-series features alone [icablinkmetrics()] using both simulated and real EEG data. The results of this investigation suggest that all three methods of automatic component selection are able to accurately identify eye blink related ICA components at or above the level of trained human observers. However, icablinkmetrics(), in particular, appears to provide an effective means of automating ICA artifact rejection while at the same time eliminating human errors inevitable during manual component selection and false positive component identifications common in other automated approaches. Based upon these findings, best practices for 1) identifying artifactual components via automated means and 2) reducing the accidental removal of signal-related ICA components are discussed. PMID:28191627
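The time-series strategy attributed to icablinkmetrics(), scoring components against an eye-movement reference rather than scalp maps, can be illustrated by correlating candidate component time courses with a simulated vertical EOG channel. The blink waveform and mixing below are simulated; this is not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated sources: a stereotyped blink train plus two neural-like signals
t = np.arange(0.0, 10.0, 0.01)
blink = np.exp(-((t[:, None] - np.array([2.0, 5.0, 8.0])) ** 2) / 0.01).sum(axis=1)
sources = np.vstack([blink,
                     np.sin(2 * np.pi * 10 * t),     # 10 Hz oscillation
                     rng.normal(size=t.size)])       # broadband noise

# A vertical EOG reference channel dominated by blink activity
veog = 5.0 * blink + 0.5 * rng.normal(size=t.size)

def blink_component(ics, eog):
    """Pick the component whose time course best matches the EOG reference."""
    scores = []
    for ic in ics:
        r = np.corrcoef(ic, eog)[0, 1]
        scores.append(abs(r))                        # sign of an IC is arbitrary
    return int(np.argmax(scores)), scores

idx, scores = blink_component(sources, veog)
```

Because the decision uses the temporal signature rather than topography, a neural component with a blink-like scalp map cannot be confused with the artifact, which is the failure mode the abstract highlights for spatial methods.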
Laboratory Testing Protocols for Heparin-Induced Thrombocytopenia (HIT) Testing.
Lau, Kun Kan Edwin; Mohammed, Soma; Pasalic, Leonardo; Favaloro, Emmanuel J
2017-01-01
Heparin-induced thrombocytopenia (HIT) represents a significant, high-morbidity complication of heparin therapy. The clinicopathological diagnosis of HIT remains challenging for many reasons; thus, laboratory testing represents an important component of an accurate diagnosis. Although there are many assays available to assess HIT, these essentially fall into two categories: (a) immunological assays and (b) functional assays. The current chapter presents protocols for several HIT assays, being those that are most commonly performed in laboratory practice and have the widest geographic distribution. These comprise a manual lateral flow-based system (STiC), a fully automated latex immunoturbidimetric assay, a fully automated chemiluminescent assay (CLIA), light transmission aggregation (LTA), and whole blood aggregation (Multiplate).
An Open-Source Automated Peptide Synthesizer Based on Arduino and Python.
Gali, Hariprasad
2017-10-01
The development of the first open-source automated peptide synthesizer, PepSy, using an Arduino UNO and readily available components is reported. PepSy was primarily designed to synthesize small peptides at a relatively small scale (<100 µmol). Scripts to operate PepSy in fully automatic or manual mode were written in Python. The fully automatic script includes functions to carry out resin swelling, resin washing, single coupling, double coupling, Fmoc deprotection, ivDde deprotection, on-resin oxidation, end capping, and amino acid/reagent line cleaning. Several small peptides and peptide conjugates were successfully synthesized on PepSy with reasonably good yields and purity depending on the complexity of the peptide.
Automated Antibody De Novo Sequencing and Its Utility in Biopharmaceutical Discovery
NASA Astrophysics Data System (ADS)
Sen, K. Ilker; Tang, Wilfred H.; Nayak, Shruti; Kil, Yong J.; Bern, Marshall; Ozoglu, Berk; Ueberheide, Beatrix; Davis, Darryl; Becker, Christopher
2017-05-01
Applications of antibody de novo sequencing in the biopharmaceutical industry range from the discovery of new antibody drug candidates to identifying reagents for research and determining the primary structure of innovator products for biosimilar development. When murine, phage display, or patient-derived monoclonal antibodies against a target of interest are available, but the cDNA or the original cell line is not, de novo protein sequencing is required to humanize and recombinantly express these antibodies, followed by in vitro and in vivo testing for functional validation. Availability of fully automated software tools for monoclonal antibody de novo sequencing enables efficient and routine analysis. Here, we present a novel method to automatically de novo sequence antibodies using mass spectrometry and the Supernovo software. The robustness of the algorithm is demonstrated through a series of stress tests.
Automated segmentation of murine lung tumors in x-ray micro-CT images
NASA Astrophysics Data System (ADS)
Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis
2014-03-01
Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with fully automated aids remaining almost non-existent. We present a new approach to automate the segmentation of murine lung tumors designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies that addresses the specific requirements of such studies, as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages, and begins by utilizing edge enhancing and vessel enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, and consists of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity=86%, specificity=89%) and structural recovery (Dice Similarity=0.88) when compared against manual specialist annotation.
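One building block of the pipeline above, Otsu thresholding, picks the intensity cut that maximizes the between-class variance of the histogram. A minimal NumPy version, run on a synthetic bimodal intensity distribution (distribution parameters invented for illustration):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    # Cumulative class probability and cumulative mean for every candidate split
    w0 = np.cumsum(w)
    w1 = 1.0 - w0
    mu = np.cumsum(w * centers)
    mu_t = mu[-1]
    # Between-class variance: (mu_t*w0 - mu)^2 / (w0*w1), ignoring empty classes
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# Bimodal toy "image": dark background plus a smaller bright (tumor-like) class
rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(50.0, 10.0, 9000),
                      rng.normal(200.0, 15.0, 1000)])
thr = otsu_threshold(img)
mask = img > thr
```

In the actual pipeline this threshold is applied within the ROI after diffusion filtering, and the resulting mask is refined by morphology and watershed; the sketch shows only the thresholding step.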
Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S
2016-07-01
Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.
Automation study for space station subsystems and mission ground support
NASA Technical Reports Server (NTRS)
1985-01-01
An automation concept for the autonomous operation of space station subsystems (electric power, thermal control, and communications and tracking) is discussed. To assure that functions essential for autonomous operations are not neglected, an operations function (systems monitoring and control) is included in the discussion. It is recommended that automated speech recognition and synthesis be considered a basic mode of man/machine interaction for space station command and control, and that the data management system (DMS) and other systems on the space station be designed to accommodate fully automated fault detection, isolation, and recovery within the system monitoring function of the DMS.
Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images.
Salvi, Massimo; Molinari, Filippo
2018-06-20
Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size and morphology. Most of the proposed algorithms for the automated segmentation of nuclei were designed for a specific organ or tissue. The aim of this study was to develop and validate a fully automated multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and magnifications. MANA was tested on a dataset of H&E stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and three open-source software tools designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s, independently of the number of nuclei to be detected (in any case, higher than 1000), indicating the efficiency of the proposed technique. To the best of our knowledge, MANA is the first fully automated multi-scale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA allowed it to achieve, on different organs and magnifications, performances in line with or better than those of state-of-the-art algorithms optimized for single tissues.
Fully automatic detection of salient features in 3-d transesophageal images.
Curiale, Ariel H; Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Ren, Ben; Aja-Fernández, Santiago; Bosch, Johan G
2014-12-01
Most automated segmentation approaches to the mitral valve and left ventricle in 3-D echocardiography require a manual initialization. In this article, we propose a fully automatic scheme to initialize a multicavity segmentation approach in 3-D transesophageal echocardiography by detecting the left ventricle long axis, the mitral valve and the aortic valve location. Our approach uses a probabilistic and structural tissue classification to find structures such as the mitral and aortic valves; the Hough transform for circles to find the center of the left ventricle; and multidimensional dynamic programming to find the best position for the left ventricle long axis. For accuracy and agreement assessment, the proposed method was evaluated in 19 patients with respect to manual landmarks and as initialization of a multicavity segmentation approach for the left ventricle, the right ventricle, the left atrium, the right atrium and the aorta. The segmentation results revealed no statistically significant differences between manual and automated initialization in a paired t-test (p > 0.05). Additionally, small biases between manual and automated initialization were detected in the Bland-Altman analysis (bias, variance) for the left ventricle (-0.04, 0.10); right ventricle (-0.07, 0.18); left atrium (-0.01, 0.03); right atrium (-0.04, 0.13); and aorta (-0.05, 0.14). These results indicate that the proposed approach provides robust and accurate detection to initialize a multicavity segmentation approach without any user interaction. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
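The circular Hough transform used above to find the left-ventricle centre works by letting every edge point vote for all candidate centres a fixed radius away; the accumulator peak marks the best centre. A minimal single-radius sketch on synthetic edge points (geometry invented for illustration):

```python
import numpy as np

def hough_circle(edge_pts, radius, shape):
    """Accumulate centre votes for one fixed radius and return the peak bin."""
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0.0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_pts:
        # Each edge point votes along a circle of the given radius around itself
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)

# Synthetic edge map: points on a circle of radius 20 centred at (50, 60)
theta = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)
pts = [(50 + 20 * np.sin(a), 60 + 20 * np.cos(a)) for a in theta]
centre = hough_circle(pts, 20, (100, 120))
```

In practice the radius is also unknown, so the accumulator gains a third dimension over candidate radii; the voting principle is unchanged.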
NASA Astrophysics Data System (ADS)
Shah, Bhavana; Jiang, Xinzhao Grace; Chen, Louise; Zhang, Zhongqi
2014-06-01
Protein N-Glycan analysis is traditionally performed by high pH anion exchange chromatography (HPAEC), reversed phase liquid chromatography (RPLC), or hydrophilic interaction liquid chromatography (HILIC) on fluorescence-labeled glycans enzymatically released from the glycoprotein. These methods require time-consuming sample preparations and do not provide site-specific glycosylation information. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) peptide mapping is frequently used for protein structural characterization and, as a bonus, can potentially provide glycan profile on each individual glycosylation site. In this work, a recently developed glycopeptide fragmentation model was used for automated identification, based on their MS/MS, of N-glycopeptides from proteolytic digestion of monoclonal antibodies (mAbs). Experimental conditions were optimized to achieve accurate profiling of glycoforms. Glycan profiles obtained from LC-MS/MS peptide mapping were compared with those obtained from HPAEC, RPLC, and HILIC analyses of released glycans for several mAb molecules. Accuracy, reproducibility, and linearity of the LC-MS/MS peptide mapping method for glycan profiling were evaluated. The LC-MS/MS peptide mapping method with fully automated data analysis requires less sample preparation, provides site-specific information, and may serve as an alternative method for routine profiling of N-glycans on immunoglobulins as well as other glycoproteins with simple N-glycans.
Green, Walton A.; Little, Stefan A.; Price, Charles A.; Wing, Scott L.; Smith, Selena Y.; Kotrc, Benjamin; Doria, Gabriela
2014-01-01
The reticulate venation that is characteristic of a dicot leaf has excited interest from systematists for more than a century, and from physiological and developmental botanists for decades. The tools of digital image acquisition and computer image analysis, however, are only now approaching the sophistication needed to quantify aspects of the venation network found in real leaves quickly, easily, accurately, and reliably enough to produce biologically meaningful data. In this paper, we examine 120 leaves distributed across vascular plants (representing 118 genera and 80 families) using two approaches: a semiquantitative scoring system called “leaf ranking,” devised by the late Leo Hickey, and an automated image-analysis protocol. In the process of comparing these approaches, we review some methodological issues that arise in trying to quantify a vein network, and discuss the strengths and weaknesses of automatic data collection and human pattern recognition. We conclude that subjective leaf rank provides a relatively consistent, semiquantitative measure of areole size among other variables; that modal areole size is generally consistent across large sections of a leaf lamina; and that both approaches—semiquantitative, subjective scoring; and fully quantitative, automated measurement—have appropriate places in the study of leaf venation. PMID:25202646
Poritz, Mark A.; Blaschke, Anne J.; Byington, Carrie L.; Meyers, Lindsay; Nilsson, Kody; Jones, David E.; Thatcher, Stephanie A.; Robbins, Thomas; Lingenfelter, Beth; Amiott, Elizabeth; Herbener, Amy; Daly, Judy; Dobrowolski, Steven F.; Teng, David H. -F.; Ririe, Kirk M.
2011-01-01
The ideal clinical diagnostic system should deliver rapid, sensitive, specific and reproducible results while minimizing the requirements for specialized laboratory facilities and skilled technicians. We describe an integrated diagnostic platform, the “FilmArray”, which fully automates the detection and identification of multiple organisms from a single sample in about one hour. An unprocessed biologic/clinical sample is subjected to nucleic acid purification, reverse transcription, a high-order nested multiplex polymerase chain reaction and amplicon melt curve analysis. Biochemical reactions are enclosed in a disposable pouch, minimizing the PCR contamination risk. FilmArray has the potential to detect greater than 100 different nucleic acid targets at one time. These features make the system well-suited for molecular detection of infectious agents. Validation of the FilmArray technology was achieved through development of a panel of assays capable of identifying 21 common viral and bacterial respiratory pathogens. Initial testing of the system using both cultured organisms and clinical nasal aspirates obtained from children demonstrated an analytical and clinical sensitivity and specificity comparable to existing diagnostic platforms. We demonstrate that automated identification of pathogens from their corresponding target amplicon(s) can be accomplished by analysis of the DNA melting curve of the amplicon. PMID:22039434
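Identification from a melt curve reduces to locating the melting temperature Tm as the peak of the negative derivative -dF/dT, then matching Tm against expected values for the panel's amplicons. The sketch below uses a synthetic sigmoid melt curve and an invented three-target Tm table, not FilmArray's actual chemistry or panel:

```python
import numpy as np

# Synthetic melt curve: fluorescence drops sigmoidally around the amplicon's Tm
temps = np.arange(60.0, 95.0, 0.5)               # temperature ramp (deg C)
tm_true = 78.0                                    # hypothetical amplicon Tm
fluor = 1.0 / (1.0 + np.exp((temps - tm_true) / 1.5))
rng = np.random.default_rng(6)
fluor += rng.normal(0.0, 0.004, temps.size)       # instrument noise

# Tm is the peak of the negative first derivative of fluorescence
dfdt = -np.gradient(fluor, temps)
tm_est = temps[np.argmax(dfdt)]

# Classify the amplicon by nearest expected Tm (hypothetical lookup table)
panel = {"RSV": 74.5, "Influenza A": 78.0, "Adenovirus": 82.5}
call = min(panel, key=lambda k: abs(panel[k] - tm_est))
```

Real instruments typically smooth the curve before differentiation and require the peak to exceed a height threshold before making a call; those refinements are omitted here.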
CognitionMaster: an object-based image analysis framework
2013-01-01
Background Automated image analysis methods are becoming more and more important to extract and quantify image features in microscopy-based biomedical studies and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and if user-interactivity on the object-level is desired. Results In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects as basic processing unit instead of individual pixels. Our approach enables also users without programming knowledge to compose “analysis pipelines” that exploit the object-level approach. We demonstrate the design and use of example pipelines for the immunohistochemistry-based cell proliferation quantification in breast cancer and two-photon fluorescence microscopy data about bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions We introduce an open source software system that offers object-based image analysis. The object-based concept allows for a straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542
Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing
NASA Technical Reports Server (NTRS)
Ordaz, Irian
2011-01-01
Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capability of VSP is demonstrated for component-based point definition geometries in a conceptual analysis and design framework.
High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.
Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C
2007-10-09
High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in house has been employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles. This system has proved to be an essential tool in our study. The cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were sampled using an IN Cell Analyzer 1000. A fully automated cellular image analysis system has been developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we have achieved the following: (1) Gaussian filtering at a proper scale has been applied to the cellular images to generate a local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field has been established; and (3) a statistical model based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment the images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
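Two steps of the described method, Gaussian filtering so each nucleus carries a single intensity maximum, followed by local-maximum detection, can be sketched in NumPy on two synthetic blob "nuclei". The blob sizes, noise level and peak threshold are invented; the paper's gradient-vector-field detector and statistical splitting model are not reproduced here.

```python
import numpy as np

def gaussian_smooth(img, sigma):
    """Separable Gaussian filtering with reflect padding, pure NumPy."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, tmp)

def local_maxima(img, thr):
    """Pixels above thr that are strictly greater than all 8 neighbours."""
    c = img[1:-1, 1:-1]
    neigh = [img[i:i + c.shape[0], j:j + c.shape[1]]
             for i in range(3) for j in range(3) if (i, j) != (1, 1)]
    peaks = np.all([c > n for n in neigh], axis=0) & (c > thr)
    ys, xs = np.nonzero(peaks)
    return list(zip(ys + 1, xs + 1))             # shift back to full-image coords

# Two synthetic "nuclei" as Gaussian blobs plus noise
yy, xx = np.mgrid[0:60, 0:60]
img = (np.exp(-((yy - 20)**2 + (xx - 20)**2) / 30.0)
       + np.exp(-((yy - 40)**2 + (xx - 42)**2) / 30.0))
rng = np.random.default_rng(5)
img += 0.05 * rng.normal(size=img.shape)
peaks = local_maxima(gaussian_smooth(img, 2.0), 0.3)
```

Smoothing suppresses noise maxima so each nucleus yields exactly one seed; in the full system those seeds initialize the segmentation and the splitting model handles touching nuclei.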
``Hands-Free'' Asteroid Astrometry
NASA Astrophysics Data System (ADS)
Monet, A. K. B.; Bowell, E.; Monet, D. G.
1997-12-01
How do you undertake a major new astrometric program with no additional financial or personnel resources? The answer: automation! Early in 1992, the authors began a collaboration to obtain astrometric positions for several classes of asteroids (V_lim 17.5 mag) whose orbits required improvement or that were otherwise of special interest. The telescope used for this work is the USNOFS 0.2-meter transit telescope, equipped with a CCD camera. The operation of this instrument has been fully automated (Stone et al. 1996, AJ, 111, 1721). Nightly observing rosters are constructed from a ranked listing of all asteroids of interest, prepared each month by Bowell. In a typical month, about 200 observations are made, although this number can range from 0 to over 400. Reductions are done automatically as well. A typical 10-hr nightly run can be fully reduced in less than 1/2 hr. Reductions are made on a frame-by-frame basis and positions of the asteroids computed with respect to the USNO-A1.0 catalog (Monet, D.G. 1996, USNO-A1.0 Catalog -- 10 CD-ROM Set, US Naval Observatory). Observational quality is checked by Bowell, who also recomputes orbits and reports final results to the Minor Planet Center. Orbit residuals hover around 0.3 arcsec. This poster will present a brief overview of the observing and analysis methods, an account of the first five years of results, and a description of planned improvements in instrumentation and analysis techniques.
NASA Technical Reports Server (NTRS)
Bayless, E. O.; Lawless, K. G.; Kurgan, C.; Nunes, A. C.; Graham, B. F.; Hoffman, D.; Jones, C. S.; Shepard, R.
1993-01-01
Fully automated variable-polarity plasma arc (VPPA) welding system developed at Marshall Space Flight Center. System eliminates defects caused by human error. Integrates many sensors with mathematical model of the weld and computer-controlled welding equipment. Sensors provide real-time information on geometry of weld bead, location of weld joint, and wire-feed entry. Mathematical model relates geometry of weld to critical parameters of welding process.
Fully Automated Deep Learning System for Bone Age Assessment.
Lee, Hyunkwang; Tajmir, Shahein; Lee, Jenny; Zissen, Maurice; Yeshiwas, Bethel Ayele; Alkasab, Tarik K; Choy, Garry; Do, Synho
2017-08-01
Skeletal maturity progresses through discrete phases, a fact that is used routinely in pediatrics where bone age assessments (BAAs) are compared to chronological age in the evaluation of endocrine and metabolic disorders. While central to many disease evaluations, little has changed to improve the tedious process since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline to segment a region of interest, standardize and preprocess input radiographs, and perform BAA. Our models use an ImageNet pretrained, fine-tuned convolutional neural network (CNN) to achieve 57.32 and 61.40% accuracies for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year 90.39% and within 2 years 98.11% of the time. Male test radiographs were assigned 94.18% within 1 year and 99.00% within 2 years. Using the input occlusion method, attention maps were created which reveal what features the trained model uses to perform BAA. These correspond to what human experts look at when manually performing BAA. Finally, the fully automated BAA system was deployed in the clinical environment as a decision supporting system for more accurate and efficient BAAs at much faster interpretation time (<2 s) than the conventional method.
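The reported "within 1 year / within 2 years" figures are simply the fraction of test radiographs whose absolute BAA error falls under each cutoff. With hypothetical truth/prediction pairs (the values below are invented, not the study's data):

```python
import numpy as np

# Hypothetical ground-truth vs. predicted bone ages (years) for a small test set
truth = np.array([5.0, 8.5, 10.0, 12.5, 14.0, 16.0])
pred  = np.array([5.5, 8.0, 11.5, 12.0, 13.0, 18.5])

err = np.abs(pred - truth)
within_1y = np.mean(err <= 1.0) * 100.0   # percent of cases within 1 year
within_2y = np.mean(err <= 2.0) * 100.0   # percent of cases within 2 years
```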
A new fully automated FTIR system for total column measurements of greenhouse gases
NASA Astrophysics Data System (ADS)
Geibel, M. C.; Gerbig, C.; Feist, D. G.
2010-10-01
This article introduces a new fully automated FTIR system that is part of the Total Carbon Column Observing Network (TCCON). It will provide continuous ground-based measurements of column-averaged volume mixing ratio for CO2, CH4 and several other greenhouse gases in the tropics. Housed in a 20-foot shipping container, it was developed as a transportable system that could be deployed almost anywhere in the world. We describe the automation concept, which relies on three autonomous subsystems and their interaction. Crucial components like a sturdy and reliable solar tracker dome are described in detail. The automation software employs a new approach relying on multiple processes, database logging and web-based remote control. First results of total column measurements at Jena, Germany show that the instrument works well and can provide parts of the diurnal as well as the seasonal cycle for CO2. Instrument line shape measurements with an HCl cell suggest that the instrument stays well-aligned over several months. After a short test campaign for side-by-side intercomparison with an existing TCCON instrument in Australia, the system will be transported to its final destination, Ascension Island.
Fully Automated Data Collection Using PAM and the Development of PAM/SPACE Reversible Cassettes
NASA Astrophysics Data System (ADS)
Hiraki, Masahiko; Watanabe, Shokei; Chavas, Leonard M. G.; Yamada, Yusuke; Matsugaki, Naohiro; Igarashi, Noriyuki; Wakatsuki, Soichi; Fujihashi, Masahiro; Miki, Kunio; Baba, Seiki; Ueno, Go; Yamamoto, Masaki; Suzuki, Mamoru; Nakagawa, Atsushi; Watanabe, Nobuhisa; Tanaka, Isao
2010-06-01
To remotely control and automatically collect data in high-throughput X-ray data collection experiments, the Structural Biology Research Center at the Photon Factory (PF) developed and installed sample exchange robots, PAM (PF Automated Mounting system), at the PF macromolecular crystallography beamlines BL-5A, BL-17A, AR-NW12A and AR-NE3A. We developed and installed software that manages the flow of the automated X-ray experiments: sample exchange, loop centering and X-ray diffraction data collection. The fully automated data collection function has been available since February 2009. To identify sample cassettes, PAM employs a two-dimensional bar code reader. New beamlines, BL-1A at the Photon Factory and BL32XU at SPring-8, are currently under construction as part of the Targeted Proteins Research Program (TPRP) of the Ministry of Education, Culture, Sports, Science and Technology of Japan. However, different robots, PAM and SPACE (SPring-8 Precise Automatic Cryo-sample Exchanger), will be installed at BL-1A and BL32XU, respectively. For the convenience of the users of both facilities, pins and cassettes compatible with both PAM and SPACE are being developed as part of the TPRP.
Multi-Modal Glioblastoma Segmentation: Man versus Machine
Pica, Alessia; Schucht, Philippe; Beck, Jürgen; Verma, Rajeev Kumar; Slotboom, Johannes; Reyes, Mauricio; Wiest, Roland
2014-01-01
Background and Purpose Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. Methods We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema) and the contrast-enhancing tumor volume CETV were identified. We quantified the overlap between manual and automated segmentation by calculation of diameter measurements as well as the Dice coefficients, the positive predictive values, sensitivity, relative volume error and absolute volume error. Results Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual segmentation of volumetric segmentations showed significant differences for TV+ and TV (p<0.05) but no significant differences for CETV (p>0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. Conclusions In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity. PMID:24804720
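The overlap statistics used above (Dice coefficient, positive predictive value, sensitivity, relative volume error) have standard definitions on binary masks; a minimal sketch:

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """Standard segmentation-overlap statistics between an automated mask
    and a manual reference mask (both binary arrays of equal shape)."""
    a = auto_mask.astype(bool)
    m = manual_mask.astype(bool)
    tp = np.logical_and(a, m).sum()          # voxels both masks agree on
    dice = 2 * tp / (a.sum() + m.sum())      # overlap, 1.0 = identical
    ppv = tp / a.sum()                       # positive predictive value
    sens = tp / m.sum()                      # sensitivity (recall)
    rel_vol_err = (a.sum() - m.sum()) / m.sum()
    return dice, ppv, sens, rel_vol_err
```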
Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato
2014-01-01
Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Nevertheless, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886
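As a simplified illustration of statistics-based epoch rejection (a stand-in only, not the SAR, BSS-SOBI-CCA, or wICA algorithms themselves), epochs whose peak amplitude is a statistical outlier relative to the rest of the recording can be flagged and dropped:

```python
import numpy as np

def reject_epochs(epochs, z_thresh=4.0):
    """epochs: (n_epochs, n_channels, n_samples). Flag epochs whose peak
    absolute amplitude is a z-score outlier across all epochs, a crude
    automated substitute for manual artifact-free epoch selection."""
    peaks = np.abs(epochs).max(axis=(1, 2))   # per-epoch peak amplitude
    z = (peaks - peaks.mean()) / peaks.std()
    keep = z < z_thresh
    return epochs[keep], keep
```

Real AAR algorithms instead decompose the signal (ICA, CCA, wavelets) and remove artifact components, preserving the underlying epochs.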
AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.
Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J
2015-04-01
A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three-dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells from 3D fluorescence microscopic images. Guided by the characteristics of fluorescence imaging, we regulated the image gradient field by gradient vector flow (GVF) on an interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells, achieving (1) low false-detection and miss rates for individual cells and (2) negligible over- and under-segmentation of clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.
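The gradient vector flow step can be sketched as an explicit finite-difference iteration that diffuses the edge-map gradient into homogeneous regions while keeping it anchored to the data near edges. A 2-D sketch with illustrative parameters (the paper works on 3-D volumes with additional smoothing and mode tracking):

```python
import numpy as np

def laplacian(a):
    """Five-point discrete Laplacian with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gvf(edge_map, mu=0.1, iters=100):
    """Gradient vector flow of a 2-D edge map: returns a vector field
    (u, v) that equals the edge gradient near edges and is smoothly
    extended elsewhere (Xu & Prince style explicit iteration)."""
    fy, fx = np.gradient(edge_map)
    u, v = fx.copy(), fy.copy()
    mag2 = fx**2 + fy**2                     # data-attachment weight
    for _ in range(iters):
        u += mu * laplacian(u) - mag2 * (u - fx)
        v += mu * laplacian(v) - mag2 * (v - fy)
    return u, v
```

Voxel grouping then follows the field uphill to its gradient modes; that tracking step is omitted here.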
General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.
2011-01-01
The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. 
The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.
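The core error-budget computation, combining sensitivity matrices with assumed thermal and jitter motions, reduces to a root-sum-square when the perturbations are modeled as independent and zero-mean. A minimal sketch (the matrix shapes and names are hypothetical, not the CPEB interface):

```python
import numpy as np

def contrast_error_budget(S, sigma):
    """S: (n_terms, n_dof) linear sensitivities (contrast change per unit
    motion of each degree of freedom); sigma: (n_dof,) 1-sigma motion
    allocations. Assuming independent zero-mean perturbations, each
    contribution adds in quadrature."""
    contrib = (S * sigma) ** 2          # variance contribution per term/dof
    per_term = contrib.sum(axis=1)      # variance rolled up per error term
    total_rss = np.sqrt(per_term.sum()) # root-sum-square grand total
    return per_term, total_rss
```

Trade studies like those described above then amount to re-evaluating this roll-up with modified `sigma` allocations or optic-quality sensitivities.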
NASA Astrophysics Data System (ADS)
McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.
2017-08-01
Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organs at risk criteria levels evaluated compared with clinical plans. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical plans across the 12 patients tested and automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose in one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction.
It is a promising approach for fully automated treatment planning and can be readily applied to different treatment sites and modalities.
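High-dose conformity above is assessed with the conformation number; a minimal sketch of the van't Riet conformation number evaluated on voxel arrays (the dose grid, target mask, and reference isodose level are illustrative inputs):

```python
import numpy as np

def conformation_number(dose, target_mask, ref_dose):
    """van't Riet conformation number: (TV_ref / TV) * (TV_ref / V_ref),
    where TV_ref = target voxels receiving >= ref_dose, TV = target
    voxels, V_ref = all voxels receiving >= ref_dose. 1.0 is perfect
    conformity; lower values mean under-coverage or dose spillage."""
    covered = dose >= ref_dose
    tv_ref = np.logical_and(covered, target_mask).sum()
    return (tv_ref / target_mask.sum()) * (tv_ref / covered.sum())
```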
Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M
2008-01-01
To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility.
Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.
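Test-retest reliability above is quantified with the intraclass correlation coefficient. One common variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement), can be computed from a two-way ANOVA decomposition; the exact ICC form used in the study is not specified, so this is an illustrative sketch:

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1) for an (n_subjects, k_sessions) measurement matrix:
    partitions total variance into between-subject (MSR), between-session
    (MSC), and residual (MSE) mean squares."""
    n, k = X.shape
    grand = X.mean()
    row = X.mean(axis=1)                                  # subject means
    col = X.mean(axis=0)                                  # session means
    msr = k * ((row - grand) ** 2).sum() / (n - 1)
    msc = n * ((col - grand) ** 2).sum() / (k - 1)
    sse = ((X - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```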
A taxonomy of integral reaction path analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grcar, Joseph F.; Day, Marcus S.; Bell, John B.
2004-12-23
W. C. Gardiner observed that achieving understanding through combustion modeling is limited by the ability to recognize the implications of what has been computed and to draw conclusions about the elementary steps underlying the reaction mechanism. This difficulty can be overcome in part by making better use of reaction path analysis in the context of multidimensional flame simulations. Following a survey of current practice, an integral reaction flux is formulated in terms of conserved scalars that can be calculated in a fully automated way. Conditional analyses are then introduced, and a taxonomy for bidirectional path analysis is explored. Many examples illustrate the resulting path analysis and uncover some new results about nonpremixed methane-air laminar jets.
NASA Astrophysics Data System (ADS)
Brown, James M.; Campbell, J. Peter; Beers, Andrew; Chang, Ken; Donohue, Kyra; Ostmo, Susan; Chan, R. V. Paul; Dy, Jennifer; Erdogmus, Deniz; Ioannidis, Stratis; Chiang, Michael F.; Kalpathy-Cramer, Jayashree
2018-03-01
Retinopathy of prematurity (ROP) is a disease that affects premature infants, where abnormal growth of the retinal blood vessels can lead to blindness unless treated accordingly. Infants considered at risk of severe ROP are monitored for symptoms of plus disease, characterized by arterial tortuosity and venous dilation at the posterior pole, with a standard photographic definition. Disagreement among ROP experts in diagnosing plus disease has driven the development of computer-based methods that classify images based on hand-crafted features extracted from the vasculature. However, most of these approaches are semi-automated, making them time-consuming and subject to variability. In contrast, deep learning is a fully automated approach that has shown great promise in a wide variety of domains, including medical genetics, informatics and imaging. Convolutional neural networks (CNNs) are deep networks which learn rich representations of disease features that are highly robust to variations in acquisition and image quality. In this study, we utilized a U-Net architecture to perform vessel segmentation and then a GoogLeNet to perform disease classification. The classifier was trained on 3,000 retinal images and validated on an independent test set of patients with different observed progressions and treatments. We show that our fully automated algorithm can be used to monitor the progression of plus disease over multiple patient visits with results that are consistent with the experts' consensus diagnosis. Future work will aim to further validate the method on larger cohorts of patients to assess its applicability within the clinic as a treatment monitoring tool.
A fully automated FTIR system for remote sensing of greenhouse gases in the tropics
NASA Astrophysics Data System (ADS)
Geibel, M. C.; Gerbig, C.; Feist, D. G.
2010-07-01
This article introduces a new fully automated FTIR system that is part of the Total Carbon Column Observing Network (TCCON). It will provide continuous ground-based measurements of column-averaged volume mixing ratio for CO2, CH4 and several other greenhouse gases in the tropics. Housed in a 20-foot shipping container, it was developed as a transportable system that could be deployed almost anywhere in the world. We describe the automation concept, which relies on three autonomous subsystems and their interaction. Crucial components like a sturdy and reliable solar tracker dome are described in detail. First results of total column measurements at Jena, Germany show that the instrument works well and can provide the diurnal as well as the seasonal cycle for CO2. Instrument line shape measurements with an HCl cell suggest that the instrument stays well-aligned over several months. After a short test campaign for side-by-side intercomparison with an existing TCCON instrument in Australia, the system will be transported to its final destination, Ascension Island.
Automation of surface observations program
NASA Technical Reports Server (NTRS)
Short, Steve E.
1988-01-01
At present, surface weather observing methods are still largely manual and labor intensive. Through the nationwide implementation of Automated Surface Observing Systems (ASOS), this situation can be improved. Two ASOS capability levels are planned. The first is a basic-level system which will automatically observe the weather parameters essential for aviation operations and will operate either with or without supplemental contributions by an observer. The second is a more fully automated, stand-alone system which will observe and report the full range of weather parameters and will operate primarily in the unattended mode. Approximately 250 systems are planned by the end of the decade. When deployed, these systems will generate the standard hourly and special long-line transmitted weather observations, as well as provide continuous weather information direct to airport users. Specific ASOS configurations will vary depending upon whether the operation is unattended, minimally attended, or fully attended. The major functions of ASOS are data collection, data processing, product distribution, and system control. The program phases of development, demonstration, production system acquisition, and operational implementation are described.
Automating Trend Analysis for Spacecraft Constellations
NASA Technical Reports Server (NTRS)
Davis, George; Cooter, Miranda; Updike, Clark; Carey, Everett; Mackey, Jennifer; Rykowski, Timothy; Powers, Edward I. (Technical Monitor)
2001-01-01
Spacecraft trend analysis is a vital mission operations function performed by satellite controllers and engineers, who perform detailed analyses of engineering telemetry data to diagnose subsystem faults and to detect trends that may potentially lead to degraded subsystem performance or failure in the future. It is this latter function that is of greatest importance, for careful trending can often predict or detect events that may lead to a spacecraft's entry into safe-hold. Early prediction and detection of such events could result in the avoidance of, or rapid return to service from, spacecraft safing, which not only results in reduced recovery costs but also in a higher overall level of service for the satellite system. Contemporary spacecraft trending activities are manually intensive and are primarily performed diagnostically after a fault occurs, rather than proactively to predict its occurrence. They also tend to rely on information systems and software that are outdated when compared to current technologies. When coupled with the fact that flight operations teams often have limited resources, proactive trending opportunities are limited, and detailed trend analysis is often reserved for critical responses to safe holds or other on-orbit events such as maneuvers. While the contemporary trend analysis approach has sufficed for current single-spacecraft operations, it will be unfeasible for NASA's planned and proposed space science constellations. Missions such as the Dynamics, Reconnection and Configuration Observatory (DRACO), for example, are planning to launch as many as 100 'nanospacecraft' to form a homogenous constellation. A simple extrapolation of resources and manpower based on single-spacecraft operations suggests that trending for such a large spacecraft fleet will be unmanageable, unwieldy, and cost-prohibitive.
It is therefore imperative that an approach to automating the spacecraft trend analysis function be studied, developed, and applied to missions such as DRACO with the intent that mission operations costs be significantly reduced. The goal of the Constellation Spacecraft Trend Analysis Toolkit (CSTAT) project is to serve as the pathfinder for a fully automated trending system to support spacecraft constellations. The development approach to be taken is evolutionary. In the first year of the project, the intent is to significantly advance the state of the art in current trending systems through improved functionality and increased automation. In the second year, the intent is to add an expert system shell, likely through the adaptation of an existing commercial-off-the-shelf (COTS) or government-off-the-shelf (GOTS) tool to implement some level of the trending intelligence that humans currently provide in manual operations. In the third year, the intent is to infuse the resulting technology into a near-term constellation or formation-flying mission to test it and gain experience in automated trending. The lessons learned from the real missions operations experience will then be used to improve the system, and to ultimately incorporate it into a fully autonomous, closed-loop mission operations system that is truly capable of supporting large constellations. In this paper, the process of automating trend analysis for spacecraft constellations will be addressed. First, the results of a survey on automation in spacecraft mission operations in general, and in trending systems in particular will be presented to provide an overview of the current state of the art. Next, a rule-based model for implementing intelligent spacecraft subsystem trending will be then presented, followed by a survey of existing COTS/GOTS tools that could be adapted for implementing such a model. The baseline design and architecture of the CSTAT system will be presented. 
Finally, some results obtained from initial software tests and demonstrations will be presented.
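A minimal example of the kind of rule such a trending system can automate: fit a linear trend to a telemetry parameter and estimate when it will cross an engineering limit, flagging the subsystem proactively rather than after a fault. This is a sketch of the general idea, not the CSTAT rule model:

```python
def trend_time_to_limit(times, values, limit):
    """Least-squares linear trend through (time, value) telemetry samples;
    returns (slope, crossing_time) where the trend line reaches `limit`,
    or (0.0, None) for a flat trend."""
    n = len(times)
    tm = sum(times) / n
    vm = sum(values) / n
    den = sum((t - tm) ** 2 for t in times)
    slope = sum((t - tm) * (v - vm) for t, v in zip(times, values)) / den
    if slope == 0:
        return 0.0, None
    intercept = vm - slope * tm
    return slope, (limit - intercept) / slope
```

An operational rule would wrap this with thresholds on the slope and on the margin between the predicted crossing time and the next planned contact.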
Chan, Adrian C H; Adachi, Jonathan D; Papaioannou, Alexandra; Wong, Andy Kin On
Lower peripheral quantitative computed tomography (pQCT)-derived leg muscle density has been associated with fragility fractures in postmenopausal women. Limb movement during image acquisition may result in motion streaks in muscle that could dilute this relationship. This cross-sectional study examined a subset of women from the Canadian Multicentre Osteoporosis Study. pQCT leg scans were qualitatively graded (1-5) for motion severity. Muscle and motion streak were segmented using semi-automated (watershed) and fully automated (threshold-based) methods, computing area and density. Binary logistic regression evaluated odds ratios (ORs) for fragility or all-cause fractures related to each of these measures with covariate adjustment. Among the 223 women examined (mean age: 72.7 ± 7.1 years, body mass index: 26.30 ± 4.97 kg/m²), muscle density was significantly lower after removing motion (p < 0.001) for both methods. Motion streak areas segmented using the semi-automated method correlated better with visual motion grades (rho = 0.90, p < 0.01) compared to the fully automated method (rho = 0.65, p < 0.01). Although the analysis-reanalysis precision of motion streak area segmentation using the semi-automated method is above 5% error (6.44%), motion-corrected muscle density measures remained well within 2% analytical error. The effect of motion-correction on strengthening the association between muscle density and fragility fractures was significant when motion grade was ≥3 (p-interaction < 0.05). This observation was most dramatic for the semi-automated algorithm (OR: 1.62 [0.82,3.17] before to 2.19 [1.05,4.59] after correction). Although muscle density showed an overall association with all-cause fractures (OR: 1.49 [1.05,2.12]), the effect of motion-correction was again most impactful within individuals with scans showing grade 3 or above motion.
Correcting for motion in pQCT leg scans strengthened the relationship between muscle density and fragility fractures, particularly in scans with motion grades of 3 or above. Motion streaks are not confounders to the relationship between pQCT-derived leg muscle density and fractures, but may introduce heterogeneity in muscle density measurements, rendering associations with fractures weaker. Copyright © 2016. Published by Elsevier Inc.
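The odds ratios above come from binary logistic regression. A minimal from-scratch sketch using Newton-Raphson (for a single binary predictor the fitted odds ratio equals the 2x2 table odds ratio; the study's covariate-adjusted models add further columns to the design matrix):

```python
import numpy as np

def logistic_or(x, y, iters=25):
    """Fit y ~ intercept + x by Newton-Raphson maximum likelihood and
    return the odds ratio exp(beta_1) for the predictor x."""
    X = np.column_stack([np.ones_like(x, dtype=float), x.astype(float)])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
        W = p * (1.0 - p)                       # IRLS weights
        H = X.T @ (X * W[:, None])              # Fisher information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return np.exp(beta[1])
```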
Lemaire, C; Libert, L; Franci, X; Genon, J-L; Kuci, S; Giacomelli, F; Luxen, A
2015-06-15
An efficient, fully automated, enantioselective multi-step synthesis of no-carrier-added (nca) 6-[(18)F]fluoro-L-dopa ([(18)F]FDOPA) and 2-[(18)F]fluoro-L-tyrosine ([(18)F]FTYR) on a GE FASTlab synthesizer in conjunction with an additional high-performance liquid chromatography (HPLC) purification has been developed. A phase-transfer catalyst (PTC) strategy was used to synthesize these two important radiopharmaceuticals. Building on recent chemistry improvements, the whole process was automated in a commercially available GE FASTlab module, with slight hardware modification, using single-use cassettes and a stand-alone HPLC. [(18)F]FDOPA and [(18)F]FTYR were produced in 36.3 ± 3.0% (n = 8) and 50.5 ± 2.7% (n = 10) FASTlab radiochemical yield (decay corrected). The automated radiosynthesis on the FASTlab module requires about 52 min. Total synthesis time including HPLC purification and formulation was about 62 min. Enantiomeric excesses for these two aromatic amino acids were always >95%, and the specific activity was >740 GBq/µmol. This automated synthesis provides high amounts of [(18)F]FDOPA and [(18)F]FTYR (>37 GBq at end of synthesis (EOS)). The process, fully adaptable for reliable production across multiple PET sites, could be readily implemented into a clinical good manufacturing practice (GMP) environment. Copyright © 2015 John Wiley & Sons, Ltd.
QuickNGS elevates Next-Generation Sequencing data analysis to a new level of automation.
Wagle, Prerana; Nikolić, Miloš; Frommolt, Peter
2015-07-01
Next-Generation Sequencing (NGS) has emerged as a widely used tool in molecular biology. While time and cost for the sequencing itself are decreasing, the analysis of the massive amounts of data remains challenging. Since multiple algorithmic approaches for the basic data analysis have been developed, there is now an increasing need to efficiently use these tools to obtain results in reasonable time. We have developed QuickNGS, a new workflow system for laboratories with the need to analyze data from multiple NGS projects at a time. QuickNGS takes advantage of parallel computing resources, a comprehensive back-end database, and a careful selection of previously published algorithmic approaches to build fully automated data analysis workflows. We demonstrate the efficiency of our new software by a comprehensive analysis of 10 RNA-Seq samples which we can finish in only a few minutes of hands-on time. The approach we have taken is suitable to process even much larger numbers of samples and multiple projects at a time. Our approach considerably reduces the barriers that still limit the usability of the powerful NGS technology and finally decreases the time to be spent before proceeding to further downstream analysis and interpretation of the data.
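The workflow pattern described above, running many independent samples through a fixed chain of analysis steps on parallel resources, can be sketched with Python's standard thread pool; `align` and `quantify` are hypothetical placeholders, not QuickNGS components:

```python
from concurrent.futures import ThreadPoolExecutor

def align(sample):
    # Placeholder for read alignment of one sample.
    return sample + ".bam"

def quantify(bam):
    # Placeholder for expression quantification of one alignment.
    return bam + ".counts"

def run_pipeline(samples, workers=4):
    # Independent samples pass through the same fixed chain of steps;
    # a pool of workers processes them in parallel, as a workflow
    # system does on cluster resources. Executor.map preserves order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        bams = list(pool.map(align, samples))
        return list(pool.map(quantify, bams))
```

In a real pipeline each step would shell out to an aligner and a counting tool; the point here is only the fan-out/fan-in structure that keeps hands-on time low.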
System Analysis Applied to Autonomy: Application to Human-Rated Lunar/Mars Landers
NASA Technical Reports Server (NTRS)
Young, Larry A.
2006-01-01
System analysis is an essential technical discipline for the modern design of spacecraft and their associated missions. Specifically, system analysis is a powerful aid in identifying and prioritizing the required technologies needed for mission and/or vehicle development efforts. Maturation of intelligent systems technologies, and their incorporation into spacecraft systems, are dictating the development of new analysis tools, and incorporation of such tools into existing system analysis methodologies, in order to fully capture the trade-offs of autonomy on vehicle and mission success. A "system analysis of autonomy" methodology will be outlined and applied to a set of notional human-rated lunar/Mars lander missions toward answering these questions: 1. what is the optimum level of vehicle autonomy and intelligence required? and 2. what are the specific attributes of an autonomous system implementation essential for a given surface lander mission/application in order to maximize mission success? Future human-rated lunar/Mars landers, though nominally under the control of their crew, will, nonetheless, be highly automated systems. These automated systems will range from mission/flight control functions, to vehicle health monitoring and prognostication, to life-support and other "housekeeping" functions. The optimum degree of autonomy afforded to these spacecraft systems/functions has profound implications from an exploration system architecture standpoint.
Fully Automated Anesthesia, Analgesia and Fluid Management
2017-01-03
General Anesthetic Drug Overdose; Adverse Effect of Intravenous Anesthetics, Sequela; Complication of Anesthesia; Drug Delivery System Malfunction; Hemodynamic Instability; Underdosing of Other General Anesthetics
NASA Technical Reports Server (NTRS)
Tenney, Yvette J.; Rogers, William H.; Pew, Richard W.
1995-01-01
There has been much concern in recent years about the rapid increase in automation on commercial flight decks. To assess pilots' attitudes toward this automation, a survey was conducted, composed of three major sections. The first section asked pilots to rate different automation components that exist on the latest commercial aircraft regarding their obtrusiveness and the attention and effort required in using them. The second section addressed general 'automation philosophy' issues. The third section focused on issues related to levels and amount of automation. The results indicate that pilots of advanced aircraft like their automation, use it, and would welcome more automation. However, they also believe that automation has many disadvantages, especially fully autonomous automation. They want their automation to be simple and reliable and to produce predictable results. The biggest needs for higher levels of automation were in pre-flight, communication, systems management, and task management functions, planning as well as response tasks, and high workload situations. There is an irony and a challenge in the implications of these findings. On the one hand pilots would like new automation to be simple and reliable, but they need it to support the most complex part of the job--managing and planning tasks in high workload situations.
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
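The leave-one-out validation above can be sketched generically: every case is segmented once per candidate atlas and compared to its expert reference. A minimal Python sketch; `estimate` and `reference` are hypothetical stand-ins for the dose-computation steps, not the paper's code:

```python
from statistics import median

def leave_one_out_median_error(cases, estimate, reference):
    """Leave-one-out validation: evaluate every case once with each of
    the remaining cases as the expert atlas, and summarize the percent
    error of the resulting organ dose against the expert segmentation."""
    errors = []
    for case in cases:
        truth = reference(case)
        for atlas in cases:
            if atlas == case:
                continue  # a case never serves as its own atlas
            errors.append(abs(estimate(case, atlas) - truth) / truth * 100.0)
    return median(errors)

# Toy stand-ins: every auto-segmented dose is 5% above its reference.
med = leave_one_out_median_error(
    cases=[0, 1, 2],
    estimate=lambda case, atlas: 10.5,
    reference=lambda case: 10.0,
)
```

With twenty cases this yields the paper's nineteen automated segmentations per dataset; the median over all case/atlas pairs is the summary statistic reported per organ region.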
Raterink, Robert-Jan; Witkam, Yoeri; Vreeken, Rob J; Ramautar, Rawi; Hankemeier, Thomas
2014-10-21
In the field of bioanalysis, there is an increasing demand for miniaturized, automated, robust sample pretreatment procedures that can be easily connected to direct-infusion mass spectrometry (DI-MS) in order to allow the high-throughput screening of drugs and/or their metabolites in complex body fluids like plasma. Liquid-liquid extraction (LLE) is a common sample pretreatment technique often used for complex aqueous samples in bioanalysis. Despite significant developments in automated and miniaturized LLE procedures, fully automated LLE techniques allowing high-throughput bioanalytical studies on small-volume samples using direct-infusion mass spectrometry have not yet matured. Here, we introduce a new fully automated micro-LLE technique based on gas-pressure-assisted mixing followed by passive phase separation, coupled online to nanoelectrospray DI-MS. Our method was characterized by varying the gas flow and its duration through the solvent mixture. For evaluation of the analytical performance, four drugs were spiked into human plasma, resulting in highly acceptable precision (RSD down to 9%) and linearity (R(2) ranging from 0.990 to 0.998). We demonstrate that our new method not only allows the reliable extraction of analytes from small sample volumes of a few microliters in an automated and high-throughput manner, but also performs comparably to or better than conventional offline LLE, in which the handling of small volumes remains challenging. Finally, we demonstrate the applicability of our method for drug screening on dried blood spots, showing excellent linearity (R(2) of 0.998) and precision (RSD of 9%). In conclusion, we present the proof of principle of a new high-throughput screening platform for bioanalysis based on a new automated micro-LLE method, coupled online to a commercially available nano-ESI-DI-MS system.
Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F
2007-01-01
This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and manual gold standard. We conclude that this fully automatic ICA-based method shows an excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.
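The per-frame displacement step can be illustrated in miniature. The paper derives its time-varying reference from ICA; the sketch below simply assumes a reference profile is available and recovers an integer shift by exhaustive search over 1D intensity profiles, a deliberate simplification of the 2D registration:

```python
def best_shift(reference, frame, max_shift=5):
    """Integer displacement (in pixels) that best aligns `frame` to
    `reference`: exhaustive search minimizing the mean squared difference
    over overlapping samples. Shifts leaving less than half the profile
    overlapping are skipped to avoid spurious small-overlap matches."""
    n = len(reference)
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                err += (frame[j] - reference[i]) ** 2
                count += 1
        if count < n // 2:
            continue
        err /= count
        if err < best_err:
            best, best_err = s, err
    return best

# A 1D intensity profile and the same profile displaced by 2 pixels,
# standing in for an image column before and after breathing motion:
reference = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
frame = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
```

The recovered shift for each frame is then undone, which is what reduces the average LV motion reported above.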
Besmer, Michael D.; Weissbrodt, David G.; Kratochvil, Bradley E.; Sigrist, Jürg A.; Weyland, Mathias S.; Hammes, Frederik
2014-01-01
Fluorescent staining coupled with flow cytometry (FCM) is often used for the monitoring, quantification and characterization of bacteria in engineered and environmental aquatic ecosystems including seawater, freshwater, drinking water, wastewater, and industrial bioreactors. However, infrequent grab sampling hampers accurate characterization and subsequent understanding of microbial dynamics in all of these ecosystems. A logical technological progression is high-throughput operation and full automation of the sampling, staining, measurement, and data analysis steps. Here we assess the feasibility and applicability of automated FCM by means of actual data sets produced with prototype instrumentation. As proof-of-concept we demonstrate examples of microbial dynamics in (i) flowing tap water from a municipal drinking water supply network and (ii) river water from a small creek subject to two rainfall events. In both cases, automated measurements were done at 15-min intervals during 12–14 consecutive days, yielding more than 1000 individual data points for each ecosystem. The extensive data sets derived from the automated measurements allowed for the establishment of baseline data for each ecosystem, as well as for the recognition of daily variations and specific events that would most likely be missed (or mischaracterized) by infrequent sampling. In addition, the online FCM data from the river water was combined and correlated with online measurements of abiotic parameters, showing considerable potential for a better understanding of cause-and-effect relationships in aquatic ecosystems. Although several challenges remain, the successful operation of an automated online FCM system and the basic interpretation of the resulting data sets represent a breakthrough toward the eventual establishment of fully automated online microbiological monitoring technologies. PMID:24917858
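Establishing a baseline and flagging events in such high-frequency FCM data can be sketched with a rolling-statistics rule; a window of 96 samples corresponds to one day at 15-min intervals. This is an illustrative detector under assumed thresholds, not the instrument's actual processing:

```python
from statistics import mean, stdev

def flag_events(counts, window=96, k=3.0):
    """Flag time points whose cell count deviates more than k standard
    deviations from a rolling baseline of the preceding `window` samples
    (96 samples = 1 day at 15-min intervals)."""
    events = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(counts[i] - mu) > k * sigma:
            events.append(i)
    return events
```

A real rainfall event in the river data would show up as a run of flagged indices; combining such flags with the abiotic sensor streams is what enables the cause-and-effect analysis described above.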
Matthews, Stephen G; Miller, Amy L; Clapp, James; Plötz, Thomas; Kyriazakis, Ilias
2016-11-01
Early detection of health and welfare compromises in commercial piggeries is essential for timely intervention to enhance treatment success, reduce impact on welfare, and promote sustainable pig production. Behavioural changes that precede or accompany subclinical and clinical signs may have diagnostic value. Often referred to as sickness behaviour, this encompasses changes in feeding, drinking, and elimination behaviours, social behaviours, and locomotion and posture. Such subtle changes in behaviour are not easy to quantify and require lengthy observation input by staff, which is impractical on a commercial scale. Automated early-warning systems may provide an alternative by objectively measuring behaviour with sensors to automatically monitor and detect behavioural changes. This paper aims to: (1) review the quantifiable changes in behaviours with potential diagnostic value; (2) subsequently identify available sensors for measuring behaviours; and (3) describe the progress towards automating monitoring and detection, which may allow such behavioural changes to be captured, measured, and interpreted and thus lead to automation in commercial, housed piggeries. Multiple sensor modalities are available for automatic measurement and monitoring of behaviour, which require humans to actively identify behavioural changes. This has been demonstrated for the detection of small deviations in diurnal drinking, deviations in feeding behaviour, monitoring coughs and vocalisation, and monitoring thermal comfort, but not social behaviour. However, current progress is in the early stages of developing fully automated detection systems that do not require humans to identify behavioural changes; e.g., through automated alerts sent to mobile phones. Challenges for achieving automation are multifaceted and trade-offs are considered between health, welfare, and costs, between analysis of individuals and groups, and between generic and compromise-specific behaviours. 
Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.
2016-09-01
There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, evaluated over a voxel radius of 2^n (n = 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, and variance filters and the Gaussian and Kuwahara filters.
Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm-segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82-0.98), specificity of 0.89 (range: 0.70-0.98), and accuracy of 0.90 (range: 0.76-0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus and scan parameters.
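The Dice similarity coefficient used above compares two segmentations as voxel sets, DSC = 2|A ∩ B| / (|A| + |B|): 1.0 for identical segmentations, 0.0 for disjoint ones. A minimal sketch with invented voxel coordinates:

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel label sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Invented 2D voxel coordinates labelled "muscle" by each method:
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
```

Here three of four voxels overlap, giving DSC = 0.75; the study's median values around 0.85-0.86 indicate substantially better agreement per material class.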
Long-term maintenance of human induced pluripotent stem cells by automated cell culture system.
Konagaya, Shuhei; Ando, Takeshi; Yamauchi, Toshiaki; Suemori, Hirofumi; Iwata, Hiroo
2015-11-17
Pluripotent stem cells, such as embryonic stem cells and induced pluripotent stem (iPS) cells, are regarded as new sources for cell replacement therapy. These cells can expand without limit under undifferentiated conditions and be differentiated into multiple cell types. Automated culture systems enable the large-scale production of cells. In addition to reducing the time and effort of researchers, an automated culture system improves the reproducibility of cell cultures. In the present study, we newly designed a fully automated cell culture system for human iPS cell maintenance. Using the automated culture system, human iPS (hiPS) cells maintained their undifferentiated state for 60 days. Automatically prepared hiPS cells retained the potency to differentiate into cells of all three germ layers, including dopaminergic neurons and pancreatic cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques on cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia, defined as more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting the coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve a good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%.
Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
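The ROC analysis above reduces to ranking patient heterogeneity scores against the PCI gold standard; the AUC equals the Mann-Whitney probability that an ischemic case scores above a non-ischemic one. A small sketch with invented scores:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive (e.g., ischemic)
    case scores higher than a randomly chosen negative one.
    Ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 is chance-level ranking; the study's 0.82 means a randomly chosen ischemic patient's heterogeneity score exceeds a non-ischemic one's about 82% of the time.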
Larrabide, Ignacio; Cruz Villa-Uriol, Maria; Cárdenes, Rubén; Pozo, Jose Maria; Macho, Juan; San Roman, Luis; Blasco, Jordi; Vivas, Elio; Marzo, Alberto; Hose, D Rod; Frangi, Alejandro F
2011-05-01
Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection for intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improve the three-dimensional quantification of aneurysmal morphology, to automate the analysis, and hence to reduce the inherent intra- and interobserver variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy. This methodology is based on the analysis of the vasculature skeleton's topology and the subsequent application of concepts from deformable cylinders. These are expanded inside the parent vessel to identify different regions and discriminate the aneurysm sac from the parent vessel wall. The method renders as output the surface representation of the isolated aneurysm sac, which can then be quantified automatically. The proposed method provides the means for identifying the aneurysm neck in a deterministic way. The results obtained by the method were assessed in two ways: they were compared to manual measurements obtained by three independent clinicians as normally done during diagnosis and to automated measurements from manually isolated aneurysms by three independent operators, nonclinicians, experts in vascular image analysis. All the measurements were obtained using in-house tools. The results were qualitatively and quantitatively compared for a set of saccular intracranial aneurysms (n = 26). Measurements performed on a synthetic phantom showed that the automated measurements obtained from manually isolated aneurysms were the most accurate. The differences between the measurements obtained by the clinicians and the manually isolated sacs were statistically significant (neck width: p <0.001, sac height: p = 0.002).
When comparing clinicians' measurements to automatically isolated sacs, only the differences for the neck width were significant (neck width: p <0.001, sac height: p = 0.95). However, correlation and agreement were found between the measurements obtained from manually and automatically isolated aneurysms (neck width: p = 0.43, sac height: p = 0.95). The proposed method allows the automated isolation of intracranial aneurysms, eliminating the interobserver variability. On average, the computational cost of the automated method (2 min 36 s) was similar to the time required by a manual operator (measurement by clinicians: 2 min 51 s, manual isolation: 2 min 21 s) but without human interaction. The automated measurements are irrespective of the viewing angle, eliminating any bias or difference between the observer criteria. Finally, the qualitative assessment of the results showed acceptable agreement between manually and automatically isolated aneurysms.
Fully automated adipose tissue measurement on abdominal CT
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Sussman, Daniel L.; Summers, Ronald M.
2011-03-01
Obesity has become widespread in America and has been associated as a risk factor for many illnesses. Adipose tissue (AT) content, especially visceral AT (VAT), is an important indicator for risks of many disorders, including heart disease and diabetes. Measuring AT with traditional means is often unreliable and inaccurate. CT provides a means to measure AT accurately and consistently. We present a fully automated method to segment and measure abdominal AT in CT. Our method integrates image preprocessing which attempts to correct for image artifacts and inhomogeneities. We use fuzzy c-means to cluster AT regions and active contour models to separate subcutaneous and visceral AT. We tested our method on 50 abdominal CT scans and evaluated the correlations between several measurements.
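The clustering step can be illustrated with a one-dimensional fuzzy c-means on intensity values, since fat and lean tissue separate well in Hounsfield units. This is a generic two-cluster FCM sketch with invented HU-like values, not the paper's implementation:

```python
def fuzzy_cmeans_1d(values, m=2.0, iters=50):
    """Minimal 1D fuzzy c-means with two clusters (fuzzifier m):
    alternates soft membership updates and weighted center updates.
    Returns (centers, memberships)."""
    centers = [min(values), max(values)]  # crude initialization
    k = 2
    u = [[0.0] * k for _ in values]
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        # Membership update: u_ic = 1 / sum_o (d_ic / d_io)^(2/(m-1))
        for i, x in enumerate(values):
            dists = [abs(x - c) or 1e-12 for c in centers]  # guard zero distance
            for c in range(k):
                u[i][c] = 1.0 / sum((dists[c] / dists[o]) ** p for o in range(k))
        # Center update: membership-weighted mean of the values
        for c in range(k):
            weights = [u[i][c] ** m for i in range(len(values))]
            centers[c] = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    return centers, u
```

On real CT the clustering runs over voxel intensities (and typically in 2D/3D); here two intensity groups around -100 HU (fat-like) and 50 HU (lean-like) are recovered as the two centers.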
Srinivasan, Pratul P.; Kim, Leo A.; Mettu, Priyatham S.; Cousins, Scott W.; Comer, Grant M.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
We present a novel fully automated algorithm for the detection of retinal diseases via optical coherence tomography (OCT) imaging. Our algorithm utilizes multiscale histograms of oriented gradient descriptors as feature vectors of a support vector machine based classifier. The spectral domain OCT data sets used for cross-validation consisted of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 patients with dry age-related macular degeneration (AMD), and 15 patients with diabetic macular edema (DME). Our classifier correctly identified 100% of cases with AMD, 100% cases with DME, and 86.67% cases of normal subjects. This algorithm is a potentially impactful tool for the remote diagnosis of ophthalmic diseases. PMID:25360373
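The feature vectors above are built from histograms of oriented gradients. The core operation, a magnitude-weighted histogram of gradient orientations over one cell, can be sketched as follows; real HOG descriptors add multiscale cells, block normalization, and the SVM classifier on top:

```python
import math

def orientation_histogram(img, bins=8):
    """Histogram of gradient orientations for a 2D grayscale image
    (one cell of a HOG descriptor), weighted by gradient magnitude.
    Uses central differences on interior pixels and unsigned angles."""
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi  # fold to [0, pi)
            h[min(int(ang / math.pi * bins), bins - 1)] += mag
    return h
```

A vertical edge, for instance, concentrates all gradient energy in the horizontal-orientation bin; the concatenated per-cell histograms form the feature vector fed to the SVM.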
NASA Technical Reports Server (NTRS)
Miller, R. H.; Minsky, M. L.; Smith, D. B. S.
1982-01-01
Applications of automation, robotics, and machine intelligence systems (ARAMIS) to space activities and their related ground support functions are studied, so that informed decisions can be made on which aspects of ARAMIS to develop. The specific tasks which will be required by future space project tasks are identified and the relative merits of these options are evaluated. The ARAMIS options defined and researched span the range from fully human to fully machine, including a number of intermediate options (e.g., humans assisted by computers, and various levels of teleoperation). By including this spectrum, the study searches for the optimum mix of humans and machines for space project tasks.
Automated Medical Supply Chain Management: A Remedy for Logistical Shortcomings
2016-08-01
Regional case study in which the hospital compared its utilization of automated inventory management technologies (Pyxis) to previous SCM practice in the... management practices within the 96 Medical Group (MDG), Eglin Hospital. It was known that the Defense Medical Logistics Standard was used at Eglin... Hospital but was not fully integrated down to the unit level. Casual manual inventory management practices were used explicitly, resulting in
Greg C. Liknes; Dacia M. Meneguzzo; Todd A. Kellerman
2017-01-01
Windbreaks are an important ecological resource across the large expanse of agricultural land in the central United States and are often planted in straight-line or L-shaped configurations to serve specific functions. As high-resolution (i.e., <5 m) land cover datasets become more available for these areas, semi- or fully-automated methods for distinguishing...
Lab-on-a-Chip Proteomic Assays for Psychiatric Disorders.
Peter, Harald; Wienke, Julia; Guest, Paul C; Bistolas, Nikitas; Bier, Frank F
2017-01-01
Lab-on-a-chip assays allow rapid identification of multiple parameters on an automated user-friendly platform. Here we describe a fully automated multiplex immunoassay and readout in less than 15 min using the Fraunhofer in vitro diagnostics (ivD) platform to enable inexpensive point-of-care profiling of sera or a single drop of blood from patients with various diseases such as psychiatric disorders.
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
The automation of an inlet mass flow control system
NASA Technical Reports Server (NTRS)
Supplee, Frank; Tcheng, Ping; Weisenborn, Michael
1989-01-01
The automation of a closed-loop, computer-controlled system for the inlet mass flow system (IMFS) developed for a wind tunnel facility at Langley Research Center is presented. This new PC-based control system is intended to replace the manual control system presently in use in order to fully automate the plug positioning of the IMFS during wind tunnel testing. Provision is also made for communication between the PC and a host computer in order to allow total automation of the plug positioning and data acquisition during the complete sequence of predetermined plug locations. As extensive running time is programmed for the IMFS, this new automated system will save both manpower and tunnel running time.
Automated liver elasticity calculation for 3D MRE
NASA Astrophysics Data System (ADS)
Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.
2017-03-01
Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue- and edge-contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was -0.8% +/- 9.45% and was better than discrepancy with the same reader for 2D MRE (-3.2% +/- 10.43%), and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and to be easily performed on the large 3D MRE datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bates, T.R. Jr.; Tait, S.; Mumford, G.
The authors discuss how improvements in equipment, regulations, and personnel stability can increase rig safety. With regard to equipment, exposure to material handling must be reduced through automation, and well-control technology must be improved through enhanced use of computers and better systems for handling gas. According to this analysis, regulations are needed that are global in scope and whose costs and benefits have been fully and fairly assessed. Self-regulation must be used effectively throughout the industry. Job security and wages should be adequate to maintain an experienced, motivated, and safe work force.
Continuous QKD and high speed data encryption
NASA Astrophysics Data System (ADS)
Zbinden, Hugo; Walenta, Nino; Guinnard, Olivier; Houlmann, Raphael; Wen, Charles Lim Ci; Korzh, Boris; Lunghi, Tommaso; Gisin, Nicolas; Burg, Andreas; Constantin, Jeremy; Legré, Matthieu; Trinkler, Patrick; Caselunghe, Dario; Kulesza, Natalia; Trolliet, Gregory; Vannel, Fabien; Junod, Pascal; Auberson, Olivier; Graf, Yoan; Curchod, Gilles; Habegger, Gilles; Messerli, Etienne; Portmann, Christopher; Henzen, Luca; Keller, Christoph; Pendl, Christian; Mühlberghuber, Michael; Roth, Christoph; Felber, Norbert; Gürkaynak, Frank; Schöni, Daniel; Muheim, Beat
2013-10-01
We present the results of a Swiss project dedicated to the development of high-speed quantum key distribution (QKD) and data encryption. The QKD engine features fully automated key exchange, hardware key distillation based on finite-key security analysis, efficient authentication, wavelength-division multiplexing of the quantum and classical channels, and one-time pad encryption. The encryption device allows authenticated symmetric-key encryption (e.g., AES) at rates of up to 100 Gb/s. A new quantum key can be uploaded up to 1000 times per second from the QKD engine.
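One-time pad encryption of the classical channel reduces to a byte-wise XOR with a key as long as the message; a minimal sketch, with `secrets.token_bytes` standing in for a QKD-derived key:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each message byte with a key byte. The key must be
    at least as long as the message and must never be reused."""
    if len(key) < len(plaintext):
        raise ValueError("one-time pad key must cover the whole message")
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"quantum-secured link"
key = secrets.token_bytes(len(message))   # stand-in for a QKD-derived key
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)  # XOR is its own inverse
```

The 1000-keys-per-second figure above is what makes this information-theoretically secure scheme practical: each key is consumed by a single message.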
Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro
2013-02-01
The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™), referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. The new automated lumen measurements showed better agreement with manual lumen area tracings than those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated smaller systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of difference: 2.21 vs. 4.02 mm²) than the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus offer a more reliable quantitative evaluation of lumen geometry.
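The agreement figures reported above (correlation coefficient, mean difference, standard deviation of the differences) can be reproduced for any pair of paired measurement series; the lumen areas below are synthetic, not the study's data:

```python
import math

def agreement_stats(auto, manual):
    """Correlation coefficient, mean difference (bias), and standard
    deviation of the differences between two sets of paired measurements."""
    n = len(auto)
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    mean_a, mean_m = sum(auto) / n, sum(manual) / n
    cov = sum((a - mean_a) * (m - mean_m) for a, m in zip(auto, manual))
    var_a = sum((a - mean_a) ** 2 for a in auto)
    var_m = sum((m - mean_m) ** 2 for m in manual)
    return cov / math.sqrt(var_a * var_m), bias, sd

# Synthetic paired lumen areas in mm^2 (illustrative only, not study data).
manual = [5.2, 7.8, 6.1, 9.4, 4.8, 8.3]
auto = [5.4, 7.6, 6.3, 9.7, 4.9, 8.1]
r, bias, sd = agreement_stats(auto, manual)
```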
Yousef Kalafi, Elham; Tan, Wooi Boon; Town, Christopher; Dhillon, Sarinder Kaur
2016-12-22
Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks and reproductive parts (male and female copulatory organs), as well as soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012; ISDA 457-462, 2011; J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of hard parts of the haptoral organs, such as bars and anchors, were used to develop a fully automated identification technique for monogenean species by implementing image processing techniques and machine learning methods. Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop the automated technique. K-nearest neighbour (KNN) was applied to classify the monogenean specimens based on the extracted features. 50% of the dataset was used for training and the other 50% for testing in the system evaluation. Our approach demonstrated an overall classification accuracy of 90%. Leave-one-out (LOO) cross-validation was also used to validate the system, giving an accuracy of 91.25%. The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level.
In future studies, more classes will be included in the model, the time needed to capture monogenean images will be reduced, and feature extraction and selection will be improved.
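KNN classification with leave-one-out validation, as used above, can be sketched as follows; the two-dimensional feature vectors are hypothetical stand-ins for the extracted haptoral-image features:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

def loo_accuracy(data, k=3):
    """Leave-one-out cross-validation: hold out each specimen in turn."""
    hits = sum(
        knn_predict(data[:i] + data[i + 1:], features, k) == label
        for i, (features, label) in enumerate(data)
    )
    return hits / len(data)

# Toy feature vectors (hypothetical anchor length and bar width, arbitrary
# units) for two stand-in species classes "A" and "B".
data = [([10.2, 3.1], "A"), ([10.5, 3.0], "A"), ([10.1, 3.3], "A"),
        ([15.8, 5.2], "B"), ([16.1, 5.0], "B"), ([15.9, 5.4], "B")]
acc = loo_accuracy(data, k=3)
```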
Integrated microfluidic systems for cell lysis, mixing/pumping and DNA amplification
NASA Astrophysics Data System (ADS)
Lee, Chia-Yen; Lee, Gwo-Bin; Lin, Jr-Lung; Huang, Fu-Chun; Liao, Chia-Sheng
2005-06-01
The present paper reports a fully automated microfluidic system for the DNA amplification process by integrating an electroosmotic pump, an active micromixer and an on-chip temperature control system. In this DNA amplification process, the cell lysis is initially performed in a micro cell lysis reactor. Extracted DNA samples, primers and reagents are then driven electroosmotically into a mixing region where they are mixed by the active micromixer. The homogeneous mixture is then thermally cycled in a micro-PCR (polymerase chain reaction) chamber to perform DNA amplification. Experimental results show that the proposed device can successfully automate the sample pretreatment operation for DNA amplification, thereby delivering significant time and effort savings. The new microfluidic system, which facilitates cell lysis, sample driving/mixing and DNA amplification, could provide a significant contribution to ongoing efforts to miniaturize bio-analysis systems by utilizing a simple fabrication process and cheap materials.
Wang, Amy Y; Lancaster, William J; Wyatt, Matthew C; Rasmussen, Luke V; Fort, Daniel G; Cimino, James J
2017-01-01
A major challenge in using electronic health record repositories for research is the difficulty of matching subject eligibility criteria to the query capabilities of the repositories. We propose categories for study criteria corresponding to the effort needed for querying those criteria: "easy" (supporting automated queries), "mixed" (initial automated querying with manual review), "hard" (fully manual record review), and "impossible" or "point of enrollment" (not typically in health repositories). We obtained a sample of 292 criteria from 20 studies from ClinicalTrials.gov. Six independent reviewers, three each from two academic research institutions, rated criteria according to our four types. We observed high interrater reliability both within and between institutions. The analysis demonstrated typical features of criteria that map with varying levels of difficulty to repositories. We propose using these features to improve enrollment workflow through more standardized study criteria, self-service repository queries, and analyst-mediated retrievals.
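Interrater reliability for this kind of fixed-panel rating task is commonly summarised with Fleiss' kappa; a sketch assuming six raters and four categories (the toy counts are illustrative, not the study's ratings):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table with one row per item and one column per
    category; every item must be rated by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    total = n_items * n_raters
    # Overall proportion of assignments per category (chance agreement).
    p = [sum(row[j] for row in ratings) / total for j in range(n_categories)]
    # Observed agreement per item.
    agree = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
             for row in ratings]
    p_bar = sum(agree) / n_items
    p_exp = sum(pj * pj for pj in p)
    return (p_bar - p_exp) / (1 - p_exp)

# Toy table: 5 criteria rated by 6 reviewers into 4 difficulty classes
# ("easy", "mixed", "hard", "impossible"); counts are illustrative only.
ratings = [[6, 0, 0, 0],
           [5, 1, 0, 0],
           [0, 6, 0, 0],
           [0, 0, 5, 1],
           [0, 0, 0, 6]]
kappa = fleiss_kappa(ratings)
```

Values near 1 indicate agreement well above chance, matching the "high interrater reliability" reported above.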
Mapping Cortical Laminar Structure in the 3D BigBrain.
Wagstyl, Konrad; Lepage, Claude; Bludau, Sebastian; Zilles, Karl; Fletcher, Paul C; Amunts, Katrin; Evans, Alan C
2018-07-01
Histological sections offer high spatial resolution to examine laminar architecture of the human cerebral cortex; however, they are restricted by being 2D, hence only regions with sufficiently optimal cutting planes can be analyzed. Conversely, noninvasive neuroimaging approaches are whole brain but have relatively low resolution. Consequently, correct 3D cross-cortical patterns of laminar architecture have never been mapped in histological sections. We developed an automated technique to identify and analyze laminar structure within the high-resolution 3D histological BigBrain. We extracted white matter and pial surfaces, from which we derived histologically verified surfaces at the layer I/II boundary and within layer IV. Layer IV depth was strongly predicted by cortical curvature but varied between areas. This fully automated 3D laminar analysis is an important requirement for bridging high-resolution 2D cytoarchitecture and in vivo 3D neuroimaging. It lays the foundation for in-depth, whole-brain analyses of cortical layering.
2017-01-01
Direct analysis by mass spectrometry (imaging) has become increasingly deployed in preclinical and clinical research due to its rapid and accurate readouts. However, when it comes to biomarker discovery or histopathological diagnostics, more sensitive and in-depth profiling from localized areas is required. We developed a comprehensive, fully automated online platform for high-resolution liquid extraction surface analysis (HR-LESA) followed by micro-liquid chromatography (LC) separation and a data-independent acquisition strategy for untargeted identification of low-abundance analytes directly from tissue sections. Applied to tissue sections of rat pituitary, the platform demonstrated improved spatial resolution, allowing sample areas as small as 400 μm to be studied, a major advantage over conventional LESA. The platform integrates an online buffer exchange and washing step for removal of salts and other endogenous contamination that originates from local tissue extraction. Our carryover-free platform showed high reproducibility, with an interextraction variability below 30%. Another strength of the platform is the additional selectivity provided by postsampling gas-phase ion mobility separation. This allowed coeluted isobaric compounds to be distinguished without additional separation time. Furthermore, we identified untargeted and low-abundance analytes, including neuropeptides derived from the pro-opiomelanocortin precursor protein, and localized them to a specific area of the pituitary gland (the adenohypophysis) known to secrete neuropeptides and other small metabolites related to development, growth, and metabolism. This platform can thus be applied for the in-depth study of small samples of complex tissues with histologic features of ∼400 μm or more, including potential neuropeptide markers involved in diseases such as neurodegenerative diseases, obesity, bulimia, and anorexia nervosa. PMID:28945354
Beltrán, Blanca; Avivar, Jessica; Mola, Montserrat; Ferrer, Laura; Cerdà, Víctor; Leal, Luz O
2013-09-03
A new automated, sensitive, and fast system for the simultaneous online isolation and preconcentration of lead and strontium, based on sorption on a microcolumn packed with Sr-resin and detection by inductively coupled plasma mass spectrometry (ICP-MS), was developed by hyphenating lab-on-valve (LOV) and multisyringe flow injection analysis (MSFIA). Pb and Sr are directly retained on the sorbent column and eluted with a solution of 0.05 mol L⁻¹ ammonium oxalate. The detection limits achieved were 0.04 ng for lead and 0.03 ng for strontium. Mass calibration curves were used, since the proposed system allows the use of different sample volumes for preconcentration. The linear working ranges were 0.13-50 ng for lead and 0.1-50 ng for strontium. The repeatability of the method, expressed as RSD, was 2.1% for Pb and 2.7% for Sr. Environmental samples such as rainwater and airborne particulate (PM10) filters, as well as the certified reference material SLRS-4 (river water), were satisfactorily analyzed, with recoveries between 90% and 110% for both elements. The main features of the proposed LOV-MSFIA-ICP-MS system are the capability to renew the solid-phase extraction column at will in a fully automated way, the remarkable stability of the column, which can be reused up to 160 times, and the potential to perform isotopic analysis.
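A mass calibration curve of the kind used above is an ordinary least-squares line of detector response against analyte mass, inverted to quantify unknowns; the masses and counts below are hypothetical, not the study's calibration data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration points: analyte mass (ng) vs ICP-MS counts.
mass = [0.13, 1.0, 5.0, 10.0, 25.0, 50.0]
counts = [52, 400, 2000, 4000, 10000, 20000]
slope, intercept = linear_fit(mass, counts)
predicted_ng = (6000 - intercept) / slope  # quantify an unknown signal
```

Calibrating against mass rather than concentration is what lets the system trade sample volume for preconcentration, as the abstract notes.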
Polonchuk, Liudmila
2014-01-01
Patch-clamping is a powerful technique for investigating ion channel function and regulation. However, its low throughput has hampered the profiling of large compound series in early drug development. Fortunately, automation has revolutionized the area of experimental electrophysiology over the past decade. Whereas the first automated patch-clamp instruments using planar patch-clamp technology demonstrated rather moderate throughput, several second-generation automated platforms recently launched by various companies have significantly increased the ability to form a high number of high-resistance seals. Among them is the SyncroPatch® 96 (Nanion Technologies GmbH, Munich, Germany), a fully automated giga-seal patch-clamp system with the highest throughput on the market. By recording from up to 96 cells simultaneously, the SyncroPatch® 96 allows throughput to be substantially increased without compromising data quality. This chapter describes the features of this innovative automated electrophysiology system and the protocols used for a successful transfer of the established hERG assay to this high-throughput automated platform.
An Automated Solar Synoptic Analysis Software System
NASA Astrophysics Data System (ADS)
Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.
2012-12-01
We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, which are three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. In an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygrams and magnetograms as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images from the global H-alpha network and SDO AIA 193 are used for morphological identification, with SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A few preliminary scientific results will be presented using available outputs. ASSA will be deployed at the Korean Space Weather Center and serve its customers in an operational status by the end of 2012.
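The thresholding and region-growing steps mentioned above can be illustrated with a minimal connected-component labeller; this is a generic sketch, not ASSA's implementation:

```python
from collections import deque

def label_regions(image, threshold):
    """Threshold a 2D intensity array, then label 4-connected bright
    components, as a stand-in for active-region extraction."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and labels[r][c] == 0:
                current += 1                      # start a new region
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # grow it breadth-first
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# Toy "intensitygram" with two bright patches.
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 9, 0, 0, 8],
       [0, 0, 0, 0, 8]]
labels, n_regions = label_regions(img, threshold=5)
```

Per-region morphological properties (area, extent, flux) can then be accumulated from the label map for classification.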
Optimizing transformations for automated, high throughput analysis of flow cytometry data.
Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael
2010-11-04
In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. 
We find that parameter-optimized transformations improve visualization, reduce variability in the location of discovered cell populations across samples, and decrease the misclassification (mis-gating) of individual events when compared to default-parameter counterparts. Our results indicate that the preferred transformation for fluorescence channels is a parameter-optimized biexponential or generalized Box-Cox, in accordance with current best practices. Interestingly, for populations in the scatter channels, we find that the optimized hyperbolic arcsine may be a better choice in a high-throughput setting than the current standard practice of no transformation. However, generally speaking, the choice of transformation remains data-dependent. We have implemented our algorithm in the BioConductor package flowTrans, which is publicly available.
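The maximum-likelihood parameter optimization described above can be sketched for the hyperbolic arcsine transform: maximise the normal log-likelihood of the transformed data, including the Jacobian of the transformation, over candidate cofactors. This uses a simple grid search rather than the package's own optimizer, and the data are synthetic:

```python
import math
import random

def arcsinh_loglik(data, b):
    """Log-likelihood of a normal model fitted to arcsinh(x / b)-transformed
    data, including the Jacobian of the transformation."""
    t = [math.asinh(x / b) for x in data]
    n = len(t)
    mu = sum(t) / n
    var = sum((v - mu) ** 2 for v in t) / n
    normal_ll = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    # d/dx arcsinh(x / b) = 1 / sqrt(x^2 + b^2)
    jacobian = -sum(0.5 * math.log(x * x + b * b) for x in data)
    return normal_ll + jacobian

def best_cofactor(data, candidates):
    """Pick the cofactor maximising the transformed-data likelihood."""
    return max(candidates, key=lambda b: arcsinh_loglik(data, b))

# Synthetic positively skewed "fluorescence" values (log-normal, illustrative).
random.seed(0)
data = [math.exp(random.gauss(5.0, 1.0)) for _ in range(500)]
b_opt = best_cofactor(data, [1, 5, 10, 50, 100, 500, 1000])
```

For log-normal-like data, small cofactors (which make arcsinh behave like a logarithm) should win, since they render the transformed data approximately normal.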
Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter
2015-01-20
While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
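Balanced regional parallelization starts from splitting the genome into contiguous, near-equal chunks that never cross a chromosome boundary; a simplified sketch (the chromosome lengths are toy values, and Churchill's actual load balancing is more sophisticated):

```python
def balanced_regions(chrom_lengths, n_chunks):
    """Split a genome into roughly n_chunks contiguous regions of near-equal
    size without crossing chromosome boundaries; yields (chrom, start, end)."""
    total = sum(chrom_lengths.values())
    target = total / n_chunks
    regions = []
    for chrom, length in chrom_lengths.items():
        pieces = max(1, round(length / target))
        step = -(-length // pieces)  # ceiling division
        for start in range(0, length, step):
            regions.append((chrom, start, min(start + step, length)))
    return regions

# Toy genome with hypothetical chromosome lengths (not real sizes).
genome = {"chr1": 1000, "chr2": 600, "chr3": 400}
regions = balanced_regions(genome, n_chunks=4)
```

Each region can then be aligned, deduplicated, and genotyped independently, which is what makes the end-to-end pipeline deterministic and scalable across nodes.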
Steiding, Christian; Kolditz, Daniel; Kalender, Willi A
2014-03-01
Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. Quantitative evaluation of system performance over time by comparison to previous examinations was also verified. 
The maximum percentage interscan variation of repeated measurements was less than 4%, and 1.7% on average, for all investigated quality criteria. The NPS-based image noise differed by less than 5% from the conventional standard deviation approach, and spatially selective 10% MTF values were well comparable to subjective results obtained with the 3D resolution patterns. Determining only transverse spatial resolution and global noise behavior in the central field of measurement turned out to be insufficient. The proposed framework transfers QA routines employed in conventional CT, in an advanced form, to CBCT for fully automated and time-efficient evaluation of technical equipment. With the modular phantom design, both a routine version and an expert version for assessing IQ are provided. The QA program can be used for arbitrary CT units to evaluate 3D imaging characteristics automatically.
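Computing an MTF from an edge profile, as done above for the aluminum spheres, follows a standard recipe: differentiate the edge-spread function into a line-spread function, Fourier-transform it, and normalise to the zero-frequency component. A minimal sketch on an ideal step edge:

```python
import math

def mtf_from_edge(esf):
    """MTF from a sampled edge-spread function: differentiate into the
    line-spread function, take the magnitude of its DFT, and normalise
    to the zero-frequency component."""
    lsf = [b - a for a, b in zip(esf, esf[1:])]
    n = len(lsf)
    mtf = []
    for k in range(n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        mtf.append(math.hypot(re, im))
    return [m / mtf[0] for m in mtf]

# Ideal step edge: its LSF is a single impulse, so the MTF should be flat.
step = [0.0] * 8 + [1.0] * 8
mtf = mtf_from_edge(step)
```

A real edge profile yields a falling curve, from which the 10% MTF point quoted above is read off as a spatial-resolution figure.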
Ultramap v3 - a Revolution in Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.
2012-07-01
In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system that enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in ortho processing. A fully automated dense-matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions to minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap V3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.
Suppa, Per; Hampel, Harald; Spies, Lothar; Fiebach, Jochen B; Dubois, Bruno; Buchert, Ralph
2015-01-01
Hippocampus volumetry based on magnetic resonance imaging (MRI) has not yet been translated into everyday clinical diagnostic patient care, at least in part due to limited availability of appropriate software tools. In the present study, we evaluate a fully-automated and computationally efficient processing pipeline for atlas based hippocampal volumetry using freely available Statistical Parametric Mapping (SPM) software in 198 amnestic mild cognitive impairment (MCI) subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI1). Subjects were grouped into MCI stable and MCI to probable Alzheimer's disease (AD) converters according to follow-up diagnoses at 12, 24, and 36 months. Hippocampal grey matter volume (HGMV) was obtained from baseline T1-weighted MRI and then corrected for total intracranial volume and age. Average processing time per subject was less than 4 minutes on a standard PC. The area under the receiver operator characteristic curve of the corrected HGMV for identification of MCI to probable AD converters within 12, 24, and 36 months was 0.78, 0.72, and 0.71, respectively. Thus, hippocampal volume computed with the fully-automated processing pipeline provides similar power for prediction of MCI to probable AD conversion as computationally more expensive methods. The whole processing pipeline has been made freely available as an SPM8 toolbox. It is easily set up and integrated into everyday clinical patient care.
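The discrimination figures reported above (AUC 0.78, 0.72, 0.71) can be reproduced from group scores using the Mann-Whitney formulation of the ROC AUC. The sketch below is a generic implementation, not the study's code, and the example values are illustrative, not ADNI data; note that converters have *smaller* hippocampal volumes, so volumes would be negated (or the groups swapped) so that a higher score indicates the positive class.

```python
# Generic ROC AUC via the Mann-Whitney U statistic:
# AUC = P(score_pos > score_neg) + 0.5 * P(tie).

def roc_auc(pos, neg):
    """pos, neg: score lists for the positive and negative groups."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```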
Aptima HIV-1 Quant Dx--A fully automated assay for both diagnosis and quantification of HIV-1.
Nair, Sangeetha Vijaysri; Kim, Hee Cheol; Fortunko, Jacqueline; Foote, Tracy; Peling, Tashi; Tran, Cuong; Nugent, Charles Thomas; Joo, Sunghae; Kang, Youna; Wilkins, Bana; Lednovich, Kristen; Worlock, Andrew
2016-04-01
Separate assays are available for diagnosis and viral load (VL) monitoring of HIV-1. Studies have shown that using a single test for both confirmatory diagnosis and VL increases linkage to care. The objective was to validate a single assay for both diagnosis and VL monitoring of HIV-1 on the fully automated Panther platform. The assay was validated by assessing specificity, sensitivity, subtype detection, seroconversion, reproducibility and linearity, as well as diagnostic agreement with the Procleix® Ultrio Elite™ discriminatory assay (Procleix) and agreement of VL results (method comparison) with Ampliprep/COBAS TaqMan HIV-1 version 2.0 (CAP/CTM), using clinical samples. The assay was specific (100%) and sensitive, with a 95% limit of detection of 12 copies/mL against the 3rd WHO standard. Aptima detected HIV in seroconversion panels 6 and 11 days before p24 antigen and antibody tests, respectively. Diagnostic agreement with Procleix was 100%. Regression analysis showed good agreement of VL results between Aptima and CAP/CTM, with a slope of 1.02, intercept of 0.07, and correlation coefficient (R²) of 0.97. Aptima was more sensitive than CAP/CTM. Equivalent quantification was seen on testing clinical samples and isolates belonging to HIV groups M, N, O and P and commercially available subtype panels. Assay results were linear (R² 0.9994) with a standard deviation of <0.17 log copies across the assay range. The good specificity, sensitivity, precision, subtype performance and clinical agreement with other assays demonstrated by Aptima, combined with the complete automation provided by the Panther platform, make Aptima a good candidate for both VL monitoring and diagnosis of HIV-1. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
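The slope, intercept, and R² quoted for the Aptima-versus-CAP/CTM comparison come from a straight-line fit of paired (typically log-transformed) viral loads. Below is a minimal ordinary-least-squares sketch; the study may have used a different regression model (e.g. Deming or Passing-Bablok), so this is illustrative only.

```python
# Ordinary least-squares fit y ≈ slope*x + intercept, with R², from paired
# measurements (e.g. log10 VL by assay A vs assay B).

def linfit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot
```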
Hockley, Brian G; Stewart, Megan N; Sherman, Phillip; Quesada, Carole; Kilbourn, Michael R; Albin, Roger L; Scott, Peter J H
2013-10-01
(-)-[18F]Flubatine was selected for clinical imaging of α4β2 nicotinic acetylcholine receptors because of its high affinity and appropriate kinetic profile. A fully automated synthesis of (-)-[18F]flubatine as a sterile isotonic solution suitable for clinical use is reported, as well as the first evaluation in nonhuman primates (rhesus macaques). (-)-[18F]Flubatine was prepared by fluorination of the Boc-protected trimethylammonium iodide precursor with [18F]fluoride in an automated synthesis module. Subsequent deprotection of the Boc group with 1 M HCl yielded (-)-[18F]flubatine, which was purified by semi-preparative HPLC. (-)-[18F]Flubatine was prepared in 25% radiochemical yield (formulated for clinical use at end of synthesis, n = 3), >95% radiochemical purity, and specific activity = 4647 Ci/mmol (171.9 GBq/µmol). Doses met all quality control criteria confirming their suitability for clinical use. Evaluation of (-)-[18F]flubatine in rhesus macaques was performed with a Concorde MicroPET P4 scanner (Concorde MicroSystems, Knoxville, TN). The brain was imaged for 90 min, and data were reconstructed using the 3-D maximum a posteriori algorithm. Image analysis revealed higher uptake and slower washout in the thalamus than in other areas of the brain, with peak uptake at 45 min. Injection of 2.5 µg/kg of nifene at 60 min initiated a slow washout of [18F]flubatine, with about 25% clearance from the thalamus by the end of imaging at 90 min. Copyright © 2013 John Wiley & Sons, Ltd.
Verplaetse, Ruth; Henion, Jack
2016-07-05
A workflow overcoming microsample collection issues and hematocrit (HCT)-related bias would facilitate more widespread use of dried blood spots (DBS). This report describes comparative results between the use of a pipet and a microfluidic-based sampling device for the creation of volumetric DBS. Both approaches were successfully coupled to HCT-independent, fully automated sample preparation and online liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis allowing detection of five stimulants in finger prick blood. Reproducible, selective, accurate, and precise responses meeting generally accepted regulated bioanalysis guidelines were observed over the range of 5-1000 ng/mL whole blood. The applied heated flow-through solvent desorption of the entire spot and online solid phase extraction (SPE) procedure were unaffected by the blood's HCT value within the tested range of 28.0-61.5% HCT. Enhanced stability for mephedrone on DBS compared to liquid whole blood was observed. Finger prick blood samples were collected using both volumetric sampling approaches over a time course of 25 h after intake of a single oral dose of phentermine. A pharmacokinetic curve for the incurred phentermine was successfully produced using the described validated method. These results suggest that either volumetric sample collection method may be amenable to field-use followed by fully automated, HCT-independent DBS-SPE-LC-MS/MS bioanalysis for the quantitation of these representative controlled substances. Analytical data from DBS prepared with a pipet and microfluidic-based sampling devices were comparable, but the latter is easier to operate, making this approach more suitable for sample collection by unskilled persons.
Jiang, Jiyang; Liu, Tao; Zhu, Wanlin; Koncz, Rebecca; Liu, Hao; Lee, Teresa; Sachdev, Perminder S; Wen, Wei
2018-07-01
We present 'UBO Detector', a cluster-based, fully automated pipeline for extracting and calculating variables for regions of white matter hyperintensities (WMH) (available for download at https://cheba.unsw.edu.au/group/neuroimaging-pipeline). It takes T1-weighted and fluid attenuated inversion recovery (FLAIR) scans as input, and SPM12 and FSL functions are utilised for pre-processing. The candidate clusters are then generated by FMRIB's Automated Segmentation Tool (FAST). A supervised machine learning algorithm, k-nearest neighbor (k-NN), is applied to determine whether the candidate clusters are WMH or non-WMH. UBO Detector generates both image and text (volumes and the number of WMH clusters) outputs for whole brain, periventricular, deep, and lobar WMH, as well as WMH in arterial territories. The computation time for each brain is approximately 15 min. We validated the performance of UBO Detector by showing a) high segmentation (similarity index (SI) = 0.848) and volumetric (intraclass correlation coefficient (ICC) = 0.985) agreement between the UBO Detector-derived and manually traced WMH; b) highly correlated (r² > 0.9) WMH volumes that increased steadily over time; and c) significant associations of periventricular (t = 22.591, p < 0.001) and deep (t = 14.523, p < 0.001) WMH volumes generated by UBO Detector with Fazekas rating scores. With parallel computing enabled in UBO Detector, the processing can take advantage of the multi-core CPUs that are commonly available on workstations. In conclusion, UBO Detector is a reliable, efficient and fully automated WMH segmentation pipeline. Copyright © 2018 Elsevier Inc. All rights reserved.
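The similarity index (SI) used above to validate segmentations against manual tracing is the Dice coefficient, 2|A∩B| / (|A| + |B|). A minimal sketch over masks represented as sets of voxel coordinates (a generic formula, not UBO Detector's code):

```python
# Dice similarity index between two binary masks given as iterables of
# voxel coordinates (tuples). 1.0 = identical masks, 0.0 = no overlap.

def dice(mask_a, mask_b):
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b))
```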
NASA Technical Reports Server (NTRS)
Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.
1984-01-01
Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products that can be incorporated into existing inventory procedures, as well as automated alternatives to traditional inventory techniques and to those that currently employ Landsat imagery.
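The band-ratio-thresholding idea can be sketched generically as follows: classify a pixel as the target class when a two-band ratio falls inside an analyst-chosen window. The band pair (near-IR over red) and the thresholds below are illustrative assumptions, not the published parameters.

```python
# Generic band-ratio thresholding sketch (NOT the published procedure):
# label a pixel 1 ("small grains") when nir/red lies in [lo, hi], else 0.

def band_ratio_threshold(nir, red, lo, hi):
    """nir, red: equal-length per-pixel digital-number lists."""
    out = []
    for n, r in zip(nir, red):
        ratio = n / r if r else 0.0   # guard against divide-by-zero pixels
        out.append(1 if lo <= ratio <= hi else 0)
    return out
```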
Note: Automated electrochemical etching and polishing of silver scanning tunneling microscope tips.
Sasaki, Stephen S; Perdue, Shawn M; Rodriguez Perez, Alejandro; Tallarida, Nicholas; Majors, Julia H; Apkarian, V Ara; Lee, Joonhee
2013-09-01
Fabrication of sharp and smooth Ag tips is crucial in optical scanning probe microscope experiments. To ensure reproducible tip profiles, the polishing process is fully automated using a closed-loop laminar flow system to deliver the electrolytic solution to moving electrodes mounted on a motorized translational stage. The repetitive translational motion is controlled precisely on the μm scale with a stepper motor and screw-thread mechanism. The automated setup allows reproducible control over the tip profile and improves smoothness and sharpness of tips (radius 27 ± 18 nm), as measured by ultrafast field emission.
Exploring the Use of a Test Automation Framework
NASA Technical Reports Server (NTRS)
Cervantes, Alex
2009-01-01
It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the period allotted to them. When problems occur in the implementation phase of a development project, the software delivery date normally slides. As a result, testers either need to work longer hours or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.
NASA Astrophysics Data System (ADS)
Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily
2017-10-01
Field-portable goniometers are created for a wide variety of applications. Many of these applications require specific types of instruments and measurement schemes and must operate in challenging environments. Therefore, designs are based on the requirements that are specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system features a requirement for "target-plane tracking" to ensure that measurements can be collected on sloped surfaces, without compromising angular accuracy. The system also features a second upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes the obscuration due to self-shading to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems while not sacrificing consistency or repeatability in laboratory environments.
NASA Astrophysics Data System (ADS)
Greenwald, Jared
Any good physical theory must resolve current experimental data as well as offer predictions for potential searches in the future. The Standard Model of particle physics, Grand Unified Theories, Minimal Supersymmetric Models and Supergravity are all attempts to provide such a framework. However, they all lack the ability to predict many of the parameters that each of the theories utilizes. String theory may yield a solution to this naturalness (or self-predictiveness) problem as well as offer a unified theory of gravity. Studies in particle physics phenomenology based on perturbative low-energy analysis of various string theories can help determine the candidacy of such models. After a review of principles and problems leading up to our current understanding of the universe, we will discuss some of the best particle physics model building techniques that have been developed using string theory. This will culminate in the introduction of a novel approach to a computational, systematic analysis of the various physical phenomena that arise from these string models. We focus on the necessary assumptions, complexity and open questions that arise while constructing a fully automated flat direction analysis program.
HITCal: a software tool for analysis of video head impulse test responses.
Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás
2015-09-01
The developed software (HITCal) may be a useful tool for the analysis and measurement of saccadic video head impulse test (vHIT) responses; based on the experience obtained during its use, the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. The aim was to develop a software method to analyze and explore vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed, extended head impulse exploration and measurement tools were created, and an automated saccade analysis was developed using an experimental algorithm. For pre-release HITCal laboratory tests, a database of head impulse tests (HITs) was created with data collected retrospectively in three reference centers. This HITs database was evaluated by humans and was also computed with HITCal. The authors successfully built HITCal and it has been released as open-source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccade algorithm implemented in HITCal has good concordance with assessment by human observers (Cohen's kappa coefficient = 0.7).
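The agreement figure quoted (Cohen's kappa = 0.7) is computed from two label sequences, one per rater, as (observed agreement - chance agreement) / (1 - chance agreement). A minimal unweighted sketch of the standard definition (not HITCal's code):

```python
# Cohen's kappa for two equal-length label sequences (e.g. "saccade" /
# "no-saccade" judgments from the algorithm and a human observer).

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # observed
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)      # by chance
             for l in labels)
    return (po - pe) / (1 - pe)
```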
Li, Tim MH; Wong, Paul WC; Lai, Eliza SY; Yip, Paul SF
2013-01-01
Background Internet-based learning programs provide people with massive health care information and self-help guidelines on improving their health. The advent of Web 2.0 and social networks renders significant flexibility to embedding highly interactive components, such as games, to foster learning processes. The effectiveness of game-based learning on social networks has not yet been fully evaluated. Objectives The aim of this study was to assess the effectiveness of a fully automated, Web-based, social network electronic game on enhancing mental health knowledge and problem-solving skills of young people. We investigated potential motivational constructs directly affecting the learning outcome. Gender differences in learning outcome and motivation were also examined. Methods A pre/posttest design was used to evaluate the fully automated Web-based intervention. Participants, recruited from a closed online user group, self-assessed their mental health literacy and motivational constructs before and after completing the game within a 3-week period. The electronic game was designed according to cognitive-behavioral approaches. Completers and intent-to-treat analyses, using multiple imputation for missing data, were performed. Regression analysis with backward selection was employed when examining the relationship between knowledge enhancement and motivational constructs. Results The sample included 73 undergraduates (42 females) for completers analysis. The gaming approach was effective in enhancing young people’s mental health literacy (d=0.65). The finding was also consistent with the intent-to-treat analysis, which included 127 undergraduates (75 females). No gender differences were found in learning outcome (P=.97). Intrinsic goal orientation was the primary factor in learning motivation, whereas test anxiety was successfully alleviated in the game setting. No gender differences were found on any learning motivation subscales (P>.10). 
We also found that participants’ self-efficacy for learning and performance, as well as test anxiety, significantly affected their learning outcomes, whereas other motivational subscales were statistically nonsignificant. Conclusions Electronic games implemented through social networking sites appear to effectively enhance users’ mental health literacy. PMID:23676714
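The effect size d=0.65 reported above is a Cohen's d. A minimal pre/post sketch using the pooled-standard-deviation convention; the study's exact variant (e.g. a paired-design formula) may differ, so this is illustrative only.

```python
# Cohen's d for pre/post scores: difference of means divided by the pooled
# standard deviation of the two score sets.
from statistics import mean, stdev

def cohens_d(pre, post):
    pooled_sd = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
    return (mean(post) - mean(pre)) / pooled_sd
```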
Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.
2017-01-01
Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
Rexhepaj, Elton; Brennan, Donal J; Holloway, Peter; Kay, Elaine W; McCann, Amanda H; Landberg, Goran; Duffy, Michael J; Jirstrom, Karin; Gallagher, William M
2008-01-01
Manual interpretation of immunohistochemistry (IHC) is a subjective, time-consuming and variable process, with an inherent intra-observer and inter-observer variability. Automated image analysis approaches offer the possibility of developing rapid, uniform indicators of IHC staining. In the present article we describe the development of a novel approach for automatically quantifying oestrogen receptor (ER) and progesterone receptor (PR) protein expression assessed by IHC in primary breast cancer. Two cohorts of breast cancer patients (n = 743) were used in the study. Digital images of breast cancer tissue microarrays were captured using the Aperio ScanScope XT slide scanner (Aperio Technologies, Vista, CA, USA). Image analysis algorithms were developed using MatLab 7 (MathWorks, Apple Hill Drive, MA, USA). A fully automated nuclear algorithm was developed to discriminate tumour from normal tissue and to quantify ER and PR expression in both cohorts. Random forest clustering was employed to identify optimum thresholds for survival analysis. The accuracy of the nuclear algorithm was initially confirmed by a histopathologist, who validated the output in 18 representative images. In these 18 samples, an excellent correlation was evident between the results obtained by manual and automated analysis (Spearman's rho = 0.9, P < 0.001). Optimum thresholds for survival analysis were identified using random forest clustering. This revealed 7% positive tumour cells as the optimum threshold for the ER and 5% positive tumour cells for the PR. Moreover, a 7% cutoff level for the ER predicted a better response to tamoxifen than the currently used 10% threshold. Finally, linear regression was employed to demonstrate a more homogeneous pattern of expression for the ER (R = 0.860) than for the PR (R = 0.681). In summary, we present data on the automated quantification of the ER and the PR in 743 primary breast tumours using a novel unsupervised image analysis algorithm. 
This novel approach provides a useful tool for the quantification of biomarkers on tissue specimens, as well as for objective identification of appropriate cutoff thresholds for biomarker positivity. It also offers the potential to identify proteins with a homogeneous pattern of expression.
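The manual-versus-automated agreement reported above (Spearman's rho = 0.9) is a rank correlation. A minimal sketch using the classic formula, without tie correction (a generic implementation, not the study's MatLab code):

```python
# Spearman's rank correlation: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
# where d is the per-pair difference in ranks. Assumes no tied values.

def spearman_rho(x, y):
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```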
Dera, Dimah; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M
2016-07-01
We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and the probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect distinct regions as small as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM method detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need for automated segmentation of clinical MRI.
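The NMF building block of NMF-LSM can be illustrated with the classic Lee-Seung multiplicative updates for V ≈ W·H. This is a generic Frobenius-norm NMF sketch, not the paper's probabilistic formulation, and the function name and parameters are illustrative.

```python
# Generic NMF via Lee-Seung multiplicative updates (Frobenius norm):
# factor a nonnegative matrix V into nonnegative W (n x k) and H (k x m).
import numpy as np

def nmf(V, k, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3     # strictly positive init
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H, W fixed
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W, H fixed
    return W, H
```

On a rank-1 nonnegative matrix the updates recover an essentially exact factorization, which is the behavior the method exploits when decomposing image histograms into region distributions.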
Individual bone structure segmentation and labeling from low-dose chest CT
NASA Astrophysics Data System (ADS)
Liu, Shuang; Xie, Yiting; Reeves, Anthony P.
2017-03-01
The segmentation and labeling of the individual bones serve as the first step to the fully automated measurement of skeletal characteristics and the detection of abnormalities such as skeletal deformities, osteoporosis, and vertebral fractures. Moreover, the identified landmarks on the segmented bone structures can potentially provide relatively reliable location reference to other non-rigid human organs, such as breast, heart and lung, thereby facilitating the corresponding image analysis and registration. A fully automated anatomy-directed framework for the segmentation and labeling of the individual bone structures from low-dose chest CT is presented in this paper. The proposed system consists of four main stages: First, both clavicles are segmented and labeled by fitting a piecewise cylindrical envelope. Second, the sternum is segmented under the spatial constraints provided by the segmented clavicles. Third, all ribs are segmented and labeled based on 3D region growing within the volume of interest defined with reference to the spinal canal centerline and lungs. Fourth, the individual thoracic vertebrae are segmented and labeled by image intensity based analysis in the spatial region constrained by the previously segmented bone structures. The system performance was validated with 1270 low-dose chest CT scans through visual evaluation. Satisfactory performance was obtained in 97.1% of cases for clavicle segmentation and labeling, in 97.3% for sternum segmentation, in 97.2% for rib segmentation, in 94.2% for rib labeling, in 92.4% for vertebra segmentation, and in 89.9% for vertebra labeling.
Automated liver sampling using a gradient dual-echo Dixon-based technique.
Bashir, Mustafa R; Dale, Brian M; Merkle, Elmar M; Boll, Daniel T
2012-05-01
Magnetic resonance spectroscopy of the liver requires input from a physicist or physician at the time of acquisition to ensure proper voxel selection, while in multiecho chemical shift imaging, numerous regions of interest must be manually selected in order to ensure analysis of a representative portion of the liver parenchyma. A fully automated technique could improve workflow by selecting representative portions of the liver prior to human analysis. Complete volumes from three-dimensional gradient dual-echo acquisitions with two-point Dixon reconstruction acquired at 1.5 and 3 T were analyzed in 100 subjects, using an automated liver sampling algorithm, based on ratio pairs calculated from signal intensity image data as fat-only/water-only and log(in-phase/opposed-phase) on a voxel-by-voxel basis. Using different gridding variations of the algorithm, the average correct liver volume samples ranged from 527 to 733 mL. The average percentage of sample located within the liver ranged from 95.4 to 97.1%, whereas the average incorrect volume selected was 16.5-35.4 mL (2.9-4.6%). Average run time was 19.7-79.0 s. The algorithm consistently selected large samples of the hepatic parenchyma with small amounts of erroneous extrahepatic sampling, and run times were feasible for execution on an MRI system console during exam acquisition. Copyright © 2011 Wiley Periodicals, Inc.
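The two per-voxel features named above follow directly from the standard two-point Dixon relations, water ≈ (IP + OP)/2 and fat ≈ (IP - OP)/2, where IP and OP are the in-phase and opposed-phase signal intensities. A minimal sketch (the published algorithm's exact pre-processing is not reproduced here):

```python
# Per-voxel Dixon ratio pair: (fat-only / water-only, log(IP / OP)),
# using the two-point Dixon decomposition of in-phase and opposed-phase
# signal magnitudes. Assumes positive, nonzero inputs.
import math

def dixon_ratio_pair(in_phase, opposed_phase):
    water = (in_phase + opposed_phase) / 2.0
    fat = (in_phase - opposed_phase) / 2.0
    return fat / water, math.log(in_phase / opposed_phase)
```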
Recent development in software and automation tools for high-throughput discovery bioanalysis.
Shou, Wilson Z; Zhang, Jun
2012-05-01
Bioanalysis with LC-MS/MS has been established as the method of choice for quantitative determination of drug candidates in biological matrices in drug discovery and development. The LC-MS/MS bioanalytical support for drug discovery, especially for early discovery, often requires high-throughput (HT) analysis of large numbers of samples (hundreds to thousands per day) generated from many structurally diverse compounds (tens to hundreds per day) with a very quick turnaround time, in order to provide important activity and liability data to move discovery projects forward. Another important consideration for discovery bioanalysis is its fit-for-purpose quality requirement depending on the particular experiments being conducted at this stage, and it is usually not as stringent as those required in bioanalysis supporting drug development. These aforementioned attributes of HT discovery bioanalysis made it an ideal candidate for using software and automation tools to eliminate manual steps, remove bottlenecks, improve efficiency and reduce turnaround time while maintaining adequate quality. In this article we will review various recent developments that facilitate automation of individual bioanalytical procedures, such as sample preparation, MS/MS method development, sample analysis and data review, as well as fully integrated software tools that manage the entire bioanalytical workflow in HT discovery bioanalysis. In addition, software tools supporting the emerging high-resolution accurate MS bioanalytical approach are also discussed.
AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source
NASA Astrophysics Data System (ADS)
Nightingale, J. W.; Dye, S.; Massey, Richard J.
2018-05-01
This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
Park, Yang-Seok; Sunkara, Vijaya; Kim, Yubin; Lee, Won Seok; Han, Ja-Ryoung; Cho, Yoon-Kyoung
2016-04-16
Enzyme-linked immunosorbent assay (ELISA) is a promising method for detecting small amounts of proteins in biological samples. Devices providing a platform for reduced sample volume and assay time as well as full automation are required for potential use in point-of-care diagnostics. Recently, we demonstrated ultrasensitive detection of the serum proteins C-reactive protein (CRP) and cardiac troponin I (cTnI) utilizing a lab-on-a-disc composed of TiO2 nanofibrous (NF) mats. It showed a large dynamic range with femtomolar (fM) detection sensitivity, from a small volume of whole blood, in 30 min. The device consists of several components for blood separation, metering, mixing, and washing that are automated for improved sensitivity from low sample volumes. Here, in the video demonstration, we show the experimental protocols and know-how for the fabrication of the NFs as well as the disc, their integration and operation, in the following order: processes for preparing the TiO2 NF mat; transfer-printing of the TiO2 NF mat onto the disc; surface modification for immune reactions; disc assembly and operation; on-disc detection; and representative results for the immunoassay. Use of this device enables multiplexed analysis with minimal consumption of samples and reagents. Given these advantages, the device should find use in a wide variety of applications and prove beneficial in facilitating the analysis of low-abundance proteins.
Ten years of R&D and full automation in molecular diagnosis.
Greub, Gilbert; Sahli, Roland; Brouillet, René; Jaton, Katia
2016-01-01
A 10-year experience with our automated molecular diagnostic platform, which carries out 91 different real-time PCR assays, is described. Progress and future perspectives in molecular diagnostic microbiology are reviewed: why automation is important; how our platform was implemented; how homemade PCRs were developed; and the advantages/disadvantages of homemade PCRs, including the critical aspects of troubleshooting and the need to further reduce the turnaround time for specific samples, at least in defined clinical settings such as emergencies. The future of molecular diagnosis depends on automation, and in a novel perspective, it is now time to fully acknowledge the true contribution of molecular diagnostics and to reconsider the indications for PCR, by also using these tests as first-line assays.
Automated homogeneous liposome immunoassay systems for anticonvulsant drugs.
Kubotsu, K; Goto, S; Fujita, M; Tuchiya, H; Kida, M; Takano, S; Matsuura, S; Sakurabayashi, I
1992-06-01
We developed automated homogeneous immunoassays, based on the immunolysis of liposomes, for measuring phenytoin, phenobarbital, and carbamazepine in serum. Liposome lysis was detected spectrophotometrically from entrapped glucose-6-phosphate dehydrogenase activity. The procedure was fully automated on a routine automated clinical analyzer. Within-run, between-run, dilution, and recovery tests showed good accuracy and reproducibility. Bilirubin, hemoglobin, triglycerides, and Intrafat did not affect assay results. The results obtained by liposome immunoassays for phenytoin, phenobarbital, and carbamazepine correlated well with those obtained by enzyme-multiplied immunoassay (Syva EMIT) kits (r = 0.995, 0.986, and 0.988, respectively) and fluorescence polarization immunoassay (Abbott TDx) kits (r = 0.990, 0.991, and 0.975, respectively). The proposed method should be useful for monitoring anticonvulsant drug concentrations in blood.
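Method comparisons like the EMIT and TDx correlations above boil down to a Pearson correlation over paired measurements; a minimal sketch on hypothetical phenytoin values (not the study data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired measurement sets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical paired phenytoin results (mg/L): liposome assay vs. comparison kit
liposome = [5.1, 10.2, 14.8, 20.5, 25.1]
emit_kit = [5.0, 10.0, 15.0, 20.0, 25.0]

r = pearson_r(liposome, emit_kit)   # near 1.0 for near-linear agreement
```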
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jung Hwa; Hyung, Seok-Won; Mun, Dong-Gi
2012-08-03
A multi-functional liquid chromatography system that performs 1-dimensional and 2-dimensional (strong cation exchange/reversed-phase liquid chromatography, or SCX/RPLC) separations and online phosphopeptide enrichment using a single binary nano-flow pump has been developed. With a simple operation of a function-selection valve, which is equipped with an SCX column and a TiO2 (titanium dioxide) column, fully automated selection among three different experiment modes was achieved. Because the current system uses essentially the same solvent flow paths, the same trap column, and the same separation column for the reversed-phase separation in the 1D, 2D, and online phosphopeptide enrichment experiments, the elution time information obtained from these experiments is in excellent agreement, which facilitates correlating peptide information across experiments.
3D model assisted fully automated scanning laser Doppler vibrometer measurements
NASA Astrophysics Data System (ADS)
Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve
2017-12-01
In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D time-of-flight camera in combination with a CAD file of the test object to automatically obtain measurements at predefined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.
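Mapping measurement points between the sensor frame and a CAD model is, at its core, a rigid coordinate transform p' = R p + t; a minimal sketch (the rotation and translation values are illustrative, not calibration data from the paper):

```python
import numpy as np

def to_cad_frame(points_xyz, R, t):
    """Rigidly map sensor-frame 3-D points into the CAD frame: p' = R @ p + t."""
    pts = np.asarray(points_xyz, float)
    return pts @ np.asarray(R, float).T + np.asarray(t, float)

# Illustrative extrinsics: 90-degree rotation about z plus a 10-unit x shift
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 0.0])

mapped = to_cad_frame([[1.0, 0.0, 0.0]], Rz, t)   # -> [[10.0, 1.0, 0.0]]
```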
NASA Technical Reports Server (NTRS)
Miller, R. H.; Minsky, M. L.; Smith, D. B. S.
1982-01-01
Potential applications of automation, robotics, and machine intelligence systems (ARAMIS) to space activities, and to their related ground support functions are explored. The specific tasks which will be required by future space projects are identified. ARAMIS options which are candidates for those space project tasks and the relative merits of these options are defined and evaluated. Promising applications of ARAMIS and specific areas for further research are identified. The ARAMIS options defined and researched by the study group span the range from fully human to fully machine, including a number of intermediate options (e.g., humans assisted by computers, and various levels of teleoperation). By including this spectrum, the study searches for the optimum mix of humans and machines for space project tasks.
Development of an Automated Imaging Pipeline for the Analysis of the Zebrafish Larval Kidney
Westhoff, Jens H.; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L.; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen
2013-01-01
The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems. PMID:24324758
Automated Analysis of siRNA Screens of Virus Infected Cells Based on Immunofluorescence Microscopy
NASA Astrophysics Data System (ADS)
Matula, Petr; Kumar, Anil; Wörz, Ilka; Harder, Nathalie; Erfle, Holger; Bartenschlager, Ralf; Eils, Roland; Rohr, Karl
We present an image analysis approach, as part of a high-throughput microscopy screening system based on cell arrays, for the identification of genes involved in Hepatitis C and Dengue virus replication. Our approach comprises cell nucleus segmentation, quantification of the virus replication level in cells, localization of regions with transfected cells, cell classification by infection status, and quality assessment of an experiment. The approach is fully automatic and has been successfully applied to a large number of cell array images from screening experiments. The experimental results show good agreement with the expected behavior of positive as well as negative controls and encourage application to screens from further high-throughput experiments.
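The first stage of such a pipeline, nucleus segmentation, can be sketched generically as thresholding plus connected-component labelling; this is an illustration, not the authors' algorithm:

```python
import numpy as np
from scipy import ndimage

def segment_nuclei(image, threshold=None):
    """Segment bright nuclei: global threshold, then connected-component labelling.
    ndimage.label uses 4-connectivity (cross-shaped structure) by default."""
    img = np.asarray(image, float)
    if threshold is None:
        threshold = img.mean() + img.std()   # crude global threshold
    mask = img > threshold
    labels, n = ndimage.label(mask)
    return labels, n

# Toy image with two bright "nuclei" on a dark background
img = np.zeros((20, 20))
img[2:6, 2:6] = 1.0
img[12:16, 12:16] = 1.0
labels, n = segment_nuclei(img)   # n == 2 separate components
```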
SISSY: An example of a multi-threaded, networked, object-oriented databased application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scipioni, B.; Liu, D.; Song, T.
1993-05-01
The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.
A Survey of Flow Cytometry Data Analysis Methods
Bashashati, Ali; Brinkman, Ryan R.
2009-01-01
Flow cytometry (FCM) is widely used in health research and in treatment for a variety of tasks, such as the diagnosis and monitoring of leukemia and lymphoma patients, providing the counts of helper-T lymphocytes needed to monitor the course and treatment of HIV infection, and the evaluation of peripheral blood hematopoietic stem cell grafts, among many other applications. In practice, FCM data analysis is performed manually, a process that requires an inordinate amount of time and is error-prone, nonreproducible, nonstandardized, and not open to re-evaluation, making it the most limiting aspect of this technology. This paper reviews state-of-the-art FCM data analysis approaches using a framework introduced to report each of the components in a data analysis pipeline. Current challenges and possible future directions in developing fully automated FCM data analysis tools are also outlined. PMID:20049163
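Automated gating, the core task such tools address, can be sketched in its simplest form as splitting a one-dimensional fluorescence distribution into two populations; a minimal Lloyd's k=2 illustration on synthetic data (a generic sketch, not any published gating algorithm):

```python
import numpy as np

def two_means_gate(values, iters=20):
    """Split a 1-D fluorescence distribution into two populations (Lloyd's k=2).
    Returns the midpoint between the converged cluster centres as the gate."""
    v = np.asarray(values, float)
    c_lo, c_hi = v.min(), v.max()          # initialise centres at the extremes
    for _ in range(iters):
        assign_hi = np.abs(v - c_hi) < np.abs(v - c_lo)
        c_lo, c_hi = v[~assign_hi].mean(), v[assign_hi].mean()
    return 0.5 * (c_lo + c_hi)

# Two well-separated synthetic populations (e.g. negative and positive events)
rng = np.random.default_rng(0)
neg = rng.normal(100.0, 10.0, 500)
pos = rng.normal(300.0, 20.0, 500)

gate = two_means_gate(np.concatenate([neg, pos]))
frac_pos = float((pos > gate).mean())      # fraction of positives above the gate
```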
Automated Microfluidic Instrument for Label-Free and High-Throughput Cell Separation.
Zhang, Xinjie; Zhu, Zhixian; Xiang, Nan; Long, Feifei; Ni, Zhonghua
2018-03-20
Microfluidic technologies for cell separation have been reported frequently in recent years. However, a compact microfluidic instrument enabling thoroughly automated cell separation has rarely been reported to date, owing to the difficulty of integrating a macroscale fluidic control system with a microscale microfluidic device. In this work, we propose a novel, automated microfluidic instrument to realize size-based separation of cancer cells in a label-free and high-throughput manner. Briefly, the instrument is equipped with a fully integrated microfluidic device and a set of robust fluid-driving and control units, and the instrument's functions of precise fluid infusion and high-throughput cell separation are guaranteed by a flow regulatory chip and two cell separation chips, which are the key components of the microfluidic device. With optimized control programs, the instrument is successfully applied to automatically sort the human breast adenocarcinoma cell line MCF-7 from 5 mL of diluted human blood with a high recovery ratio of ∼85% within a rapid processing time of ∼23 min. We envision that our microfluidic instrument will be potentially useful in many biomedical applications, especially cell separation, enrichment, and concentration for the purposes of cell culture and analysis.
Roemer, Ewald; Zenzen, Volker; Conroy, Lynda L; Luedemann, Kathrin; Dempsey, Ruth; Schunck, Christian; Sticken, Edgar Trelles
2015-01-01
Total particulate matter (TPM) and the gas-vapor phase (GVP) of mainstream smoke from the Reference Cigarette 3R4F were assayed in the cytokinesis-block in vitro micronucleus (MN) assay and the in vitro chromosome aberration (CA) assay, both using V79-4 Chinese hamster lung fibroblasts exposed for up to 24 h. The Metafer image analysis platform was adapted, resulting in a fully automated evaluation system for the MN assay covering the detection, identification, and reporting of cells with micronuclei, together with determination of the cytokinesis-block proliferation index (CBPI) to quantify treatment-related cytotoxicity. In the CA assay, the same platform was used to identify, map, and retrieve metaphases for subsequent CA evaluation by a trained evaluator. In both assays, TPM and GVP provoked a significant genotoxic effect: up to 6-fold more micronucleated target cells than in the negative control and up to 10-fold increases in aberrant metaphases. Data variability was lower in the automated version of the MN assay than in the non-automated version. It can be estimated that two test substances that differ in their genotoxicity by approximately 30% can be statistically distinguished in the automated MN and CA assays. Time savings, based on man-hours, due to the automation were approximately 70% in the MN assay and 25% in the CA assay. The turnaround time of the evaluation phase could be shortened by 35 and 50%, respectively. Although only cigarette smoke-derived test material was applied, the technical improvements should be of value for other test substances.
Automated Liver Elasticity Calculation for 3D MRE
Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.
2017-01-01
Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue- and edge-contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was −0.8% ± 9.45% and was better than discrepancy with the same reader for 2D MRE (−3.2% ± 10.43%), and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and to be easily performed on the large 3D MRE datasets. PMID:29033488
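The exclusion of low-wave-amplitude regions described above amounts to applying a confidence mask before averaging stiffness; a generic numpy sketch (the threshold and values are hypothetical, not the ALEC algorithm):

```python
import numpy as np

def masked_mean_stiffness(stiffness, wave_amplitude, min_amplitude=0.1):
    """Average stiffness only over voxels whose wave amplitude is high enough
    for a reliable inversion (a simple confidence-mask analogue)."""
    stiff = np.asarray(stiffness, float)
    amp = np.asarray(wave_amplitude, float)
    mask = amp >= min_amplitude
    if not mask.any():
        raise ValueError("no voxels pass the amplitude threshold")
    return float(stiff[mask].mean())

# Toy elastogram: uniform 2.5 kPa tissue, but half the voxels have poor waves
stiffness = np.full((4, 4), 2.5)
amplitude = np.zeros((4, 4))
amplitude[:, :2] = 1.0          # only the left half has usable wave amplitude
stiffness[:, 2:] = 9.9          # spurious values where amplitude is low

mean_kpa = masked_mean_stiffness(stiffness, amplitude)   # unaffected by bad half
```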
Automated Test Requirement Document Generation
1987-11-01
"DIAGNOSTICS BASED ON THE PRINCIPLES OF ARTIFICIAL INTELLIGENCE", 1984 International Test Conference, 1 Oct 1984. Glossary of acronyms: AFSATCOM, Air Force Satellite Communication; AI, Artificial Intelligence; ASIC, Application-Specific Integrated Circuit. Built-In-Test Equipment (BITE) and AI (Artificial Intelligence) expert systems need to be fully applied before a completely automated process can be achieved.
Sauer, Juergen; Chavaillaz, Alain; Wastell, David
2016-06-01
This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.
Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.
Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R
2012-06-01
The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. 
With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
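The uptake measure being validated here is the standardized uptake value, SUV = tissue activity concentration / (injected dose / body weight); a minimal sketch of a region-averaged SUV on a toy volume (all numbers illustrative):

```python
import numpy as np

def mean_suv(activity_bq_per_ml, mask, injected_dose_bq, body_weight_g):
    """Mean SUV over a segmented region.
    SUV = activity concentration / (injected dose / body weight), assuming
    a tissue density of 1 g/mL so that the units cancel."""
    act = np.asarray(activity_bq_per_ml, float)
    region = act[np.asarray(mask, bool)]
    return float(region.mean() / (injected_dose_bq / body_weight_g))

# Toy volume with a one-voxel "segmented structure" of uniform uptake
vol = np.full((3, 3, 3), 5000.0)        # Bq/mL
mask = np.zeros((3, 3, 3), bool)
mask[1, 1, 1] = True

suv = mean_suv(vol, mask, injected_dose_bq=370e6, body_weight_g=70000.0)
```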
Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo
2017-01-01
We introduce a new methodology that combines deep learning and level sets for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems where the visual object of interest presents large shape and appearance variations but the annotated training set is small, which is the case in various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training data, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
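Segmentation challenges of this kind are conventionally scored with overlap metrics such as the Dice coefficient; a minimal sketch (the challenge's exact evaluation protocol may differ):

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between a predicted and a reference binary segmentation."""
    seg = np.asarray(seg, bool)
    ref = np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0                      # both empty: define as perfect overlap
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Two 6x6 squares offset by one pixel: 36 voxels each, 25 in common
pred = np.zeros((10, 10), bool); pred[2:8, 2:8] = True
truth = np.zeros((10, 10), bool); truth[3:9, 3:9] = True

score = dice(pred, truth)   # 2*25 / (36+36) = 50/72
```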
Valen, Anja; Leere Øiestad, Åse Marit; Strand, Dag Helge; Skari, Ragnhild; Berg, Thomas
2017-05-01
Collection of oral fluid (OF) is easy and non-invasive compared with the collection of urine and blood, and interest in OF for drug screening and diagnostic purposes is increasing. A high-throughput ultra-high-performance liquid chromatography-tandem mass spectrometry method for the determination of 21 drugs in OF, using fully automated 96-well plate supported liquid extraction for sample preparation, is presented. The method contains a selection of classic drugs of abuse, including amphetamines, cocaine, cannabis, opioids, and benzodiazepines. The method was fully validated for 200 μL OF/buffer mix using an Intercept OF sampling kit; validation included linearity, sensitivity, precision, accuracy, extraction recovery, matrix effects, stability, and carry-over. Inter-assay precision (RSD) and accuracy (relative error) were <15% and −13 to 5%, respectively, for all compounds at concentrations equal to or higher than the lower limit of quantification. Extraction recoveries were between 58 and 76% (RSD < 8%), except for tetrahydrocannabinol and three 7-amino benzodiazepine metabolites, with recoveries between 23 and 33% (RSD between 51 and 52% and 11 and 25%, respectively). Ion enhancement or ion suppression effects were observed for a few compounds; however, to a large degree they were compensated for by the internal standards used. Deuterium-labelled and 13C-labelled internal standards were used for 8 and 11 of the compounds, respectively. In a comparison between Intercept and Quantisal OF kits, better recoveries and fewer matrix effects were observed for some compounds using Quantisal. The method is sensitive and robust for its purposes and has been used successfully since February 2015 for the analysis of Intercept OF samples from 2600 cases over a 12-month period. Copyright © 2016 John Wiley & Sons, Ltd.
Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.
Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo
2017-11-05
Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows the power consumption of WSN applications and the network stack to be accurately estimated in an automated way.
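A state-based energy model of the kind such power evaluations build on can be sketched as current × voltage × dwell time summed over radio states; the currents and durations below are illustrative, not taken from the paper's models:

```python
def energy_mj(states, voltage=3.0):
    """Energy in millijoules for one duty cycle, given (current_mA, seconds)
    per radio state: mA * V * s = mW * s = mJ."""
    return sum(current_ma * voltage * seconds for current_ma, seconds in states)

# Hypothetical duty cycle for one sensing period: transmit, receive, sleep
cycle = [
    (17.4, 0.05),   # TX: 17.4 mA for 50 ms
    (19.7, 0.05),   # RX: 19.7 mA for 50 ms
    (0.02, 0.90),   # sleep: 20 uA for 900 ms
]

e = energy_mj(cycle)   # total energy per cycle in mJ
```

Summing such per-state terms over an application's schedule is what lets a model trade off reliability (e.g. retransmissions, which add TX/RX time) against battery lifetime.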
Fully automated joint space width measurement and digital X-ray radiogrammetry in early RA.
Platten, Michael; Kisten, Yogan; Kälvesten, Johan; Arnaud, Laurent; Forslind, Kristina; van Vollenhoven, Ronald
2017-01-01
To study fully automated digital joint space width (JSW) and bone mineral density (BMD) in relation to a conventional radiographic scoring method in early rheumatoid arthritis (eRA). Radiographs scored by the modified Sharp van der Heijde score (SHS) in patients with eRA were acquired from the SWEdish FarmacOTherapy study. Fully automated JSW measurements of bilateral metacarpals 2, 3 and 4 were compared with the joint space narrowing (JSN) score in SHS. Multilevel mixed model statistics were applied to calculate the significance of the association between ΔJSW and ΔBMD over 1 year, and the JSW differences between damaged and undamaged joints as evaluated by the JSN. Based on 576 joints of 96 patients with eRA, a significant reduction from baseline to 1 year was observed in the JSW, from 1.69 (±0.19) mm to 1.66 (±0.19) mm (p<0.01), and in BMD, from 0.583 (±0.068) g/cm2 to 0.566 (±0.074) g/cm2 (p<0.01). A significant positive association was observed between ΔJSW and ΔBMD over 1 year (p<0.0001). On an individual joint level, JSWs of undamaged (JSN=0) joints were wider than those of damaged (JSN>0) joints: 1.68 mm (95% CI 1.70 to 1.67) vs 1.54 mm (95% CI 1.63 to 1.46). Similarly, the unadjusted multilevel model showed significant differences in JSW between undamaged (1.68 mm (95% CI 1.72 to 1.64)) and damaged joints (1.63 mm (95% CI 1.68 to 1.58)) (p=0.0048). This difference remained significant in the adjusted model: 1.66 mm (95% CI 1.70 to 1.61) vs 1.62 mm (95% CI 1.68 to 1.56) (p=0.042). Measuring the JSW with this fully automated digital tool may be useful as a quick and observer-independent application for evaluating cartilage damage in eRA. NCT00764725.
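The baseline-to-1-year comparisons above are per-patient paired differences; a minimal sketch using a simple paired t-test on hypothetical JSW values (the paper itself uses multilevel mixed models, which this does not reproduce):

```python
import numpy as np
from scipy import stats

# Hypothetical baseline and 1-year JSW (mm) for five patients
baseline = np.array([1.70, 1.68, 1.72, 1.66, 1.69])
one_year = np.array([1.67, 1.66, 1.68, 1.64, 1.67])

delta = one_year - baseline          # negative = joint space narrowing
mean_change = float(delta.mean())

# Paired t-test on the per-patient differences
t, p = stats.ttest_rel(one_year, baseline)
```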
Flexible automated approach for quantitative liquid handling of complex biological samples.
Palandra, Joe; Weller, David; Hudson, Gary; Li, Jeff; Osgood, Sarah; Hudson, Emily; Zhong, Min; Buchholz, Lisa; Cohen, Lucinda H
2007-11-01
A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and its ability to predict the presence of diabetic retinopathy from the detected microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
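The ROC evaluation reduces to asking how well the detector output ranks diseased above non-diseased images; a minimal rank-based AUC sketch on toy microaneurysm counts (values illustrative, ties counted as half):

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """AUC via the rank-sum interpretation: the probability that a randomly
    chosen positive outranks a randomly chosen negative, ties counted as 0.5."""
    pos = np.asarray(pos_scores, float)
    neg = np.asarray(neg_scores, float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Toy detected-microaneurysm counts: images with vs. without retinopathy
auc = roc_auc([3, 5, 2, 4], [0, 1, 0, 2])   # 15 wins + 1 tie over 16 pairs
```

Re-running such an evaluation at each JPEG quality level is one way to quantify how compression degrades detector performance.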
NASA Technical Reports Server (NTRS)
Kenny, Caitlin; Fern, Lisa
2012-01-01
Continuing demand for the use of Unmanned Aircraft Systems (UAS) has put increasing pressure on operations in civil airspace. The need to fly UAS in the National Airspace System (NAS) in order to perform missions vital to national security and defense, emergency management, and science is increasing at a rapid pace. In order to ensure safe operations in the NAS, operators of unmanned aircraft, like those of manned aircraft, may be required to maintain separation assurance and avoid loss of separation with other aircraft while performing their mission tasks. This experiment investigated the effects of varying levels of automation on UAS operator performance and workload while responding to conflict resolution instructions provided by the Tactical Collision Avoidance System II (TCAS II) during a UAS mission in high-density airspace. The purpose of this study was not to investigate the safety of using TCAS II on UAS, but rather to examine the effect of automation on the ability of operators to respond to traffic collision alerts. Six licensed pilots were recruited to act as UAS operators for this study. Operators were instructed to follow a specified mission flight path while maintaining radio contact with Air Traffic Control and responding to TCAS II resolution advisories. Operators flew four 45-minute experimental missions with four different levels of automation: Manual, Knobs, Management by Exception, and Fully Automated. All missions included TCAS II Resolution Advisories (RAs) that required operator attention and rerouting. Operator compliance and reaction time to RAs were measured, and post-run NASA-TLX ratings were collected to measure workload. Results showed significantly higher compliance rates, faster responses to TCAS II alerts, and fewer preemptive operator actions when higher levels of automation were implemented.
Physical and Temporal ratings of workload were significantly higher in the Manual condition than in the Management by Exception and Fully Automated conditions.
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high level theoretical calculations show the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
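The canonical transition state theory step named above can be illustrated with the Eyring form of the rate constant. The sketch below is a generic illustration of that textbook formula under an assumed free-energy barrier; the function name and inputs are hypothetical, and this is not the AutoTST code:

```python
import math

# Physical constants (SI units)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(T, dG_act):
    """Canonical transition state theory (Eyring) rate constant.

    T      -- temperature in K
    dG_act -- Gibbs free energy of activation in J/mol
    Returns a first-order rate constant in 1/s.
    """
    return (KB * T / H) * math.exp(-dG_act / (R * T))

# Example: an assumed 100 kJ/mol barrier at 298.15 K (~2e-5 per second)
k = tst_rate(298.15, 100e3)
```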
Quintiliani, Lisa M; Turner-McGrievy, Gabrielle M; Migneault, Jeffrey P; Heeren, Timothy; Friedman, Robert H
2014-01-01
Background Health behavior change interventions have focused on obtaining short-term intervention effects; few studies have evaluated mid-term and long-term outcomes, and even fewer have evaluated interventions that are designed to maintain and enhance initial intervention effects. Moreover, behavior theory has not been developed for maintenance or applied to maintenance intervention design to the degree that it has for behavior change initiation. Objective The objective of this paper is to describe a study that compared two theory-based interventions (social cognitive theory [SCT] vs goal systems theory [GST]) designed to maintain previously achieved improvements in fruit and vegetable (F&V) consumption. Methods The interventions used tailored, interactive conversations delivered by a fully automated telephony system (Telephone-Linked Care [TLC]) over a 6-month period. The TLC maintenance intervention based on SCT used a skills-based approach to build self-efficacy. It assessed confidence in and barriers to eating F&V, and provided feedback on how to overcome barriers, plan ahead, and set goals. The TLC maintenance intervention based on GST used a cognitive-based approach. Conversations trained participants in goal management to help them integrate their newly acquired dietary behavior into their hierarchical system of goals. Content included goal facilitation, conflict, shielding, and redundancy, and reflection on personal goals and priorities. To evaluate and compare the two approaches, a sample of adults whose F&V consumption was below public health goal levels was recruited from a large urban area to participate in a fully automated telephony intervention (TLC-EAT) for 3-6 months. Participants who increase their daily intake of F&V by ≥1 serving/day will be eligible for the three-arm randomized controlled trial. A sample of 405 participants will be randomized to one of three arms: (1) an assessment-only control, (2) TLC-SCT, and (3) TLC-GST. The maintenance interventions last 6 months. All 405 participants who qualify for the trial will complete surveys administered by blinded interviewers at baseline (randomization), 6, 12, 18, and 24 months. Results Data analysis is not yet complete, but we hypothesize that (1) TLC-GST > TLC-SCT > control at all follow-up time points for F&V consumption, and (2) intervention effects will be mediated by the theoretical constructs (eg, self-efficacy, goal pursuit, conflict, shielding, and facilitation). Conclusions This study used a novel study design to initiate and then promote the maintenance of dietary behavior change through the use of an evidence-based fully automated telephony intervention. After the first 6 months (the acquisition phase), we will examine whether two telephony interventions built using different underlying behavioral theories were more successful than an assessment-only control group in helping participants maintain their newly acquired health behavior change. Trial Registration Clinicaltrials.gov NCT00148525; http://clinicaltrials.gov/ct2/show/NCT00148525 (Archived by WebCite at http://www.webcitation.org/6TiRriJOs). PMID:25387065
Fully automated laser ray tracing system to measure changes in the crystalline lens GRIN profile.
Qiu, Chen; Maceo Heilman, Bianca; Kaipio, Jari; Donaldson, Paul; Vaghefi, Ehsan
2017-11-01
Measuring the lens gradient refractive index (GRIN) accurately and reliably has proven an extremely challenging technical problem. A fully automated laser ray tracing (LRT) system was built to address this issue. The LRT system captures images of multiple laser projections before and after traversing through an ex vivo lens. These LRT images, combined with accurate measurements of the lens geometry, are used to calculate the lens GRIN profile. Mathematically, this is an ill-conditioned problem; hence, it is essential to apply biologically relevant constraints to produce a feasible solution. The lens GRIN measurements were compared with previously published data. Our GRIN retrieval algorithm produces fast and accurate measurements of the lens GRIN profile. Experiments to study the optics of physiologically perturbed lenses are the future direction of this research.
Ackermann, Uwe; Plougastel, Lucie; Goh, Yit Wooi; Yeoh, Shinn Dee; Scott, Andrew M
2014-12-01
The synthesis of [(18)F]2-fluoroethyl azide and its subsequent click reaction with 5-ethynyl-2'-deoxyuridine (EDU) to form [(18)F]FLETT was performed using an iPhase FlexLab module. The implementation of a vacuum distillation method afforded [(18)F]2-fluoroethyl azide in 87±5.3% radiochemical yield. The use of Cu(CH3CN)4PF6 and TBTA as catalyst enabled us to fully automate the [(18)F]FLETT synthesis without the need for the operator to enter the radiation field. [(18)F]FLETT was produced in higher overall yield (41.3±6.5%) and shorter synthesis time (67 min) than with our previously reported manual method (32.5±2.5% in 130 min). Copyright © 2014 Elsevier Ltd. All rights reserved.
Godfrey, Alexander G; Masquelin, Thierry; Hemmerle, Horst
2013-09-01
This article describes our experiences in creating a fully integrated, globally accessible, automated chemical synthesis laboratory. The goal of the project was to establish a fully integrated automated synthesis solution that was initially focused on minimizing the burden of repetitive, routine, rules-based operations that characterize more established chemistry workflows. The architecture was crafted to allow for the expansion of synthetic capabilities while also providing for a flexible interface that permits the synthesis objective to be introduced and manipulated as needed under the judicious direction of a remote user in real-time. This innovative central synthesis suite is herein described along with some case studies to illustrate the impact such a system is having in expanding drug discovery capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner
2013-06-01
The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win with the DoE software MODDE® (Umetrics AB, Sweden) and therefore enables the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H was transformed for the expression of potential malaria vaccines. The DoE optimization procedure doubled intact-protein secretion productivity compared with initial cultivation results. In a next step, robustness to process parameter variability was demonstrated around the determined optimum. Thereby, a significantly improved pharmaceutical production process was established within seven 24-hour cultivation cycles. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Valcarcel, Alessandra M; Linn, Kristin A; Vandekar, Simon N; Satterthwaite, Theodore D; Muschelli, John; Calabresi, Peter A; Pham, Dzung L; Martin, Melissa Lynne; Shinohara, Russell T
2018-03-08
Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WMLs) in multiple sclerosis. While WMLs have been studied for over two decades using MRI, automated segmentation remains challenging. Although the majority of statistical techniques for the automated segmentation of WMLs are based on single imaging modalities, recent advances have used multimodal techniques for identifying WMLs. Complementary modalities emphasize different tissue properties, which help identify interrelated features of lesions. Method for Inter-Modal Segmentation Analysis (MIMoSA), a fully automatic lesion segmentation algorithm that utilizes novel covariance features from intermodal coupling regression in addition to mean structure to model the probability that a lesion is contained in each voxel, is proposed. MIMoSA was validated by comparison with both expert manual and other automated segmentation methods in two datasets. The first included 98 subjects imaged at Johns Hopkins Hospital, in which bootstrap cross-validation was used to compare the performance of MIMoSA against OASIS and LesionTOADS, two popular automatic segmentation approaches. For a secondary validation, publicly available data from a segmentation challenge were used for performance benchmarking. In the Johns Hopkins study, MIMoSA yielded an average Sørensen-Dice coefficient (DSC) of .57 and partial AUC of .68 calculated with false positive rates up to 1%. This was superior to performance using OASIS and LesionTOADS. The proposed method also performed competitively in the segmentation challenge dataset. MIMoSA resulted in statistically significant improvements in lesion segmentation performance compared with LesionTOADS and OASIS, and performed competitively in an additional validation study. Copyright © 2018 by the American Society of Neuroimaging.
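The Sørensen-Dice coefficient used for validation above has a standard definition, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch of that metric for binary lesion masks (not the MIMoSA implementation) might look like:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Sørensen-Dice coefficient between two binary masks.

    DSC = 2 |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:   # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: 2 of 3 predicted voxels overlap the 3-voxel true lesion
print(dice_coefficient([1, 1, 1, 0], [0, 1, 1, 1]))  # -> 0.666...
```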
Automated interpretation of 3D laserscanned point clouds for plant organ segmentation.
Wahabzada, Mirwaes; Paulus, Stefan; Kersting, Kristian; Mahlein, Anne-Katrin
2015-08-08
Plant organ segmentation from 3D point clouds is a relevant task for plant phenotyping and plant growth observation. Automated solutions are required to increase the efficiency of recent high-throughput plant phenotyping pipelines. However, plant geometrical properties vary with time, among observation scales and different plant types. The main objective of the present research is to develop a fully automated, fast and reliable data driven approach for plant organ segmentation. The automated segmentation of plant organs using unsupervised clustering methods is crucial in cases where the goal is to get fast insights into the data, or where labeled data are unavailable or costly to obtain. For this we propose and compare data driven approaches that are easy to realize and make the use of standard algorithms possible. Since normalized histograms, acquired from 3D point clouds, can be seen as samples from a probability simplex, we propose to map the data from the simplex space into Euclidean space using Aitchison's log-ratio transformation, or into the positive quadrant of the unit sphere using the square root transformation. This, in turn, paves the way to a wide range of commonly used analysis techniques that are based on measuring the similarities between data points using Euclidean distance. We investigate the performance of the resulting approaches in the practical context of grouping 3D point clouds and demonstrate empirically that they lead to clustering results with high accuracy for monocotyledonous and dicotyledonous plant species with diverse shoot architecture. An automated segmentation of 3D point clouds is demonstrated in the present work. Within seconds, first insights into plant data can be derived, even from non-labelled data. This approach is applicable to different plant species with high accuracy. The analysis cascade can be implemented in future high-throughput phenotyping scenarios and will support the evaluation of the performance of different plant genotypes exposed to stress or in different environmental scenarios.
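The two simplex mappings named above have standard forms. A minimal sketch under those standard definitions, using the centered log-ratio as one common variant of the log-ratio family (the paper may use a different variant, and the function names are illustrative, not the authors' code):

```python
import numpy as np

def clr(hist, eps=1e-12):
    """Aitchison centered log-ratio transform of a normalized histogram.

    Maps a point on the probability simplex into Euclidean space, where
    standard distance-based clustering (e.g. k-means) applies.
    """
    h = np.asarray(hist, dtype=float) + eps   # avoid log(0)
    g = np.exp(np.mean(np.log(h)))            # geometric mean
    return np.log(h / g)

def sqrt_map(hist):
    """Square-root transform: maps the simplex onto the positive quadrant
    of the unit sphere, so Euclidean distance between transformed points
    reflects a Hellinger-type similarity between histograms."""
    return np.sqrt(np.asarray(hist, dtype=float))

h = np.array([0.5, 0.3, 0.2])
print(np.allclose(np.linalg.norm(sqrt_map(h)), 1.0))  # unit norm: True
```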
Guo, Jianming; Shang, Er-Xin; Duan, Jin-Ao; Tang, Yuping; Qian, Dawei; Su, Shulan
2010-02-01
In drug metabolism research, the setting up of a complex series of mass spectrometry experiments and the subsequent analysis of the large amounts of data produced are often time-consuming. In this paper, we describe a strategy using ultra-performance liquid chromatography/quadrupole time-of-flight mass spectrometry (UPLC/QTOFMS) with automated data analysis software (MetaboLynx) for fast analysis of the metabolic profile of flavonoids in Abelmoschus manihot. Rat plasma and urine samples collected 1 h and 0-12 h after oral administration of Abelmoschus manihot were analyzed by UPLC/QTOFMS within 15 min. The post-acquisition data were processed using MetaboLynx. With key parameters carefully set, MetaboLynx is able to show the presence of a wide range of metabolites with only a limited requirement for manual intervention and data interpretation time. A total of 16 and 38 metabolites were identified in plasma and urine compared with blank samples. The results indicated that methylation and glucuronidation after deglycosylation were the major metabolic pathways of flavonoid glycosides in Abelmoschus manihot. The present study provided important information about the metabolism of flavonoid glycosides in Abelmoschus manihot which will be helpful for fully understanding the mechanism of action of this herb. Furthermore, this work demonstrated the potential of the UPLC/QTOFMS approach using MetaboLynx for fast and automated identification of metabolites from Chinese herbal medicines. Copyright (c) 2010 John Wiley & Sons, Ltd.
Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Qing; Whaley, Richard Clint; Qasem, Apan
This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.
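The empirical-tuning search described above, which times each parameterized variant and keeps the fastest, can be sketched generically. The kernel and parameter names below are hypothetical toys unrelated to POET's actual scripts:

```python
import time

def tiled_sum(data, tile):
    """Toy parameterized kernel: sum a list in fixed-size tiles."""
    total = 0
    for i in range(0, len(data), tile):
        total += sum(data[i:i + tile])
    return total

def empirical_tune(kernel, data, candidates, repeats=3):
    """Time each candidate parameter setting and keep the fastest,
    mirroring the search step of parameterized empirical tuning
    (illustrative only)."""
    best, best_t = None, float("inf")
    for tile in candidates:
        t0 = time.perf_counter()
        for _ in range(repeats):
            kernel(data, tile)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = tile, elapsed
    return best

data = list(range(100000))
best_tile = empirical_tune(tiled_sum, data, [64, 256, 1024, 4096])
```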
Lupidi, Marco; Coscas, Florence; Cagini, Carlo; Fiore, Tito; Spaccini, Elisa; Fruttini, Daniela; Coscas, Gabriel
2016-09-01
To describe a new automated quantitative technique for displaying and analyzing macular vascular perfusion using optical coherence tomography angiography (OCT-A) and to determine a normative data set, which might be used as reference in identifying progressive changes due to different retinal vascular diseases. Reliability study. A retrospective review of 47 eyes of 47 consecutive healthy subjects imaged with a spectral-domain OCT-A device was performed in a single institution. Full-spectrum amplitude-decorrelation angiography generated OCT angiograms of the retinal superficial and deep capillary plexuses. Fully automated custom-built software was used to provide quantitative data on the foveal avascular zone (FAZ) features and the total vascular and avascular surfaces. A comparative analysis between central macular thickness (and volume) and FAZ metrics was performed. Repeatability and reproducibility were also assessed in order to establish the feasibility and reliability of the method. The comparative analysis between the superficial capillary plexus and the deep capillary plexus revealed a statistically significant difference (P < .05) in FAZ perimeter, surface, and major axis, and no statistically significant difference (P > .05) in total vascular and avascular surfaces. A linear correlation was demonstrated between central macular thickness (and volume) and the FAZ surface. Coefficients of repeatability and reproducibility were less than 0.4, thus demonstrating high intraobserver repeatability and interobserver reproducibility for all the examined data. A quantitative approach on retinal vascular perfusion, which is visible on Spectralis OCT angiography, may offer an objective and reliable method for monitoring disease progression in several retinal vascular diseases. Copyright © 2016 Elsevier Inc. All rights reserved.
Certification-Based Process Analysis
NASA Technical Reports Server (NTRS)
Knight, Russell L.
2013-01-01
Space mission architects are often challenged with knowing which investment in technology infusion will have the highest return. Certification-based analysis (CBA) gives architects and technologists a means to communicate the risks and advantages of infusing technologies at various points in a process. Various alternatives can be compared, and requirements based on supporting streamlining or automation can be derived and levied on candidate technologies. CBA is a technique for analyzing a process and identifying potential areas of improvement. The process and analysis products are used to communicate between technologists and architects. Process means any of the standard representations of a production flow; in this case, any individual steps leading to products, which feed into other steps, until the final product is produced at the end. This sort of process is common for space mission operations, where a set of goals is reduced eventually to a fully vetted command sequence to be sent to the spacecraft. Fully vetting a product is synonymous with certification. For some types of products, this is referred to as verification and validation, and for others it is referred to as checking. Fundamentally, certification is the step in the process where one insures that a product works as intended, and contains no flaws.
Lightning Jump Algorithm Development for the GOES-R Geostationary Lightning Mapper
NASA Technical Reports Server (NTRS)
Schultz, E.; Schultz, C.; Chronis, T.; Stough, S.; Carey, L.; Calhoun, K.; Ortega, K.; Stano, G.; Cecil, D.; Bateman, M.
2014-01-01
Current work on the lightning jump algorithm to be used in GOES-R Geostationary Lightning Mapper (GLM)'s data stream is multifaceted due to the intricate interplay between the storm tracking, GLM proxy data, and the performance of the lightning jump itself. This work outlines the progress of the last year, where analysis and performance of the lightning jump algorithm with automated storm tracking and GLM proxy data were assessed using over 700 storms from North Alabama. The cases analyzed coincide with previous semi-objective work performed using total lightning mapping array (LMA) measurements in Schultz et al. (2011). Analysis shows that key components of the algorithm (flash rate and sigma thresholds) have the greatest influence on the performance of the algorithm when validating using severe storm reports. Automated objective analysis using the GLM proxy data has shown probability of detection (POD) values around 60% with false alarm rates (FAR) around 73% using similar methodology to Schultz et al. (2011). However, when applying verification methods similar to those employed by the National Weather Service, POD values increase slightly (69%) and FAR values decrease (63%). The relationship between storm tracking and lightning jump has also been tested in a real-time framework at NSSL. This system includes fully automated tracking by radar alone, real-time LMA and radar observations and the lightning jump. Results indicate that the POD is strong at 65%. However, the FAR is significantly higher than in Schultz et al. (2011) (50-80% depending on various tracking/lightning jump parameters) when using storm reports for verification. Given known issues with Storm Data, the performance of the real-time jump algorithm is also being tested with high density radar and surface observations from the NSSL Severe Hazards Analysis & Verification Experiment (SHAVE).
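The POD and FAR figures above follow the standard contingency-table definitions used in forecast verification. The sketch below applies those definitions to illustrative counts chosen only to land near the reported ~60% POD and ~73% FAR; it is not the study's data or code:

```python
def pod_far(hits, misses, false_alarms):
    """Probability of detection (POD) and false alarm ratio (FAR) from a
    2x2 verification contingency table (standard forecast-verification
    definitions).

    POD = hits / (hits + misses)
    FAR = false_alarms / (hits + false_alarms)
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Illustrative counts only
pod, far = pod_far(hits=60, misses=40, false_alarms=162)
print(round(pod, 2), round(far, 2))  # 0.6 0.73
```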
Fully automated three-dimensional microscopy system
NASA Astrophysics Data System (ADS)
Kerschmann, Russell L.
2000-04-01
Tissue-scale structures such as vessel networks are imaged at micron resolution with the Virtual Tissue System (VT System). VT System imaging of cubic millimeters of tissue and other material extends the capabilities of conventional volumetric techniques such as confocal microscopy, and allows for the first time the integrated 2D and 3D analysis of important tissue structural relationships. The VT System eliminates the need for glass slide-mounted tissue sections and instead captures images directly from the surface of a block containing a sample. Tissues are en bloc stained with fluorochrome compounds, embedded in an optically conditioned polymer that suppresses image signals from deep within the block, and serially sectioned for imaging. Thousands of fully registered 2D images are automatically captured digitally to completely convert tissue samples into blocks of high-resolution information. The resulting multi-gigabyte data sets constitute the raw material for precision visualization and analysis. Cellular function may be seen in a larger anatomical context. VT System technology makes tissue metrics, accurate cell enumeration and cell cycle analyses possible while preserving full histologic setting.
Automated sizing of large structures by mixed optimization methods
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.; Loendorf, D.
1973-01-01
A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure are demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.
Automated ILA design for synchronous sequential circuits
NASA Technical Reports Server (NTRS)
Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.
1991-01-01
An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. This technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1 micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.
Leb, Victoria; Stöcher, Markus; Valentine-Thon, Elizabeth; Hölzl, Gabriele; Kessler, Harald; Stekel, Herbert; Berg, Jörg
2004-02-01
We report on the development of a fully automated real-time PCR assay for the quantitative detection of hepatitis B virus (HBV) DNA in plasma with EDTA (EDTA plasma). The MagNA Pure LC instrument was used for automated DNA purification and automated preparation of PCR mixtures. Real-time PCR was performed on the LightCycler instrument. An internal amplification control was devised as a PCR competitor and was introduced into the assay at the stage of DNA purification to permit monitoring for sample adequacy. The detection limit of the assay was found to be 200 HBV DNA copies/ml, with a linear dynamic range of 8 orders of magnitude. When samples from the European Union Quality Control Concerted Action HBV Proficiency Panel 1999 were examined, the results were found to be in acceptable agreement with the HBV DNA concentrations of the panel members. In a clinical laboratory evaluation of 123 EDTA plasma samples, a significant correlation was found with the results obtained by the Roche HBV Monitor test on the Cobas Amplicor analyzer within the dynamic range of that system. In conclusion, the newly developed assay has a markedly reduced hands-on time, permits monitoring for sample adequacy, and is suitable for the quantitative detection of HBV DNA in plasma in a routine clinical laboratory.
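Quantitative real-time PCR conventionally estimates template concentration by inverting a log-linear standard curve relating threshold cycle (Ct) to log10 copy number. A generic sketch of that convention with synthetic numbers (an ideal 100%-efficiency dilution series has a slope near -3.32 per decade); this is not the assay's software:

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Fit Ct = slope * log10(copies) + intercept by least squares."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    return slope, intercept

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate copies/ml from a Ct value."""
    return 10 ** ((ct - intercept) / slope)

# Synthetic dilution series spanning several orders of magnitude
logs = np.array([3, 4, 5, 6, 7, 8])   # log10 template copies/ml
cts = 40.0 - 3.32 * logs              # synthetic ideal Ct values
slope, intercept = fit_standard_curve(logs, cts)

# An unknown sample's Ct maps back to an estimated concentration
print(round(quantify(40.0 - 3.32 * 5.5, slope, intercept)))  # -> 316228
```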
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step initial crude images are analyzed for multiple cytological features, statistical analysis is performed and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to reassess those images initially classified as "fuzzy" (an automated iterative feedback-tuning step). Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33 992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system, as well as 40 new hits, 14.9% of the total, originally false negatives. Ninety-six percent of true negatives were properly recognized too. A web-based access to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features, which allows identification of additional phenotypic profiles; thus, further analysis of original crude images is not required.
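A naïve Bayes classifier over binary features can be hand-rolled in a few lines. The sketch below is a generic Bernoulli naïve Bayes with Laplace smoothing on toy data (imagine fingerprint bits concatenated with binarized cytological features); it is not the authors' classifier:

```python
import math

def train_bernoulli_nb(X, y):
    """Train a Bernoulli naive Bayes on binary feature vectors.

    X -- list of 0/1 feature lists (e.g. chemical fingerprint bits
         plus binarized phenotypic features)
    y -- list of 0/1 class labels (e.g. inactive/active)
    Returns per-class log-prior and Laplace-smoothed P(feature=1|class).
    """
    n_feat = len(X[0])
    model = {}
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = math.log(len(rows) / len(X))
        p1 = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
              for j in range(n_feat)]
        model[c] = (prior, p1)
    return model

def predict(model, x):
    """Return the class with the larger posterior log-probability."""
    scores = {}
    for c, (prior, p1) in model.items():
        scores[c] = prior + sum(
            math.log(p if xi else 1 - p) for xi, p in zip(x, p1))
    return max(scores, key=scores.get)

# Toy training set: first feature bit tracks the 'active' class
X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]
y = [1, 1, 0, 0]
model = train_bernoulli_nb(X, y)
print(predict(model, [1, 1, 0]))  # -> 1 (classified as active)
```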
Automated analysis of art object surfaces using time-averaged digital speckle pattern interferometry
NASA Astrophysics Data System (ADS)
Lukomski, Michal; Krzemien, Leszek
2013-05-01
Technical development and practical evaluation of a laboratory built, out-of-plane digital speckle pattern interferometer (DSPI) are reported. The instrument was used for non-invasive, non-contact detection and characterization of early-stage damage, like fracturing and layer separation, of painted objects of art. A fully automated algorithm was developed for recording and analysis of vibrating objects utilizing continuous-wave laser light. The algorithm uses direct, numerical fitting or Hilbert transformation for an independent, quantitative evaluation of the Bessel function at every point of the investigated surface. The procedure does not require phase modulation and thus can be implemented within any, even the simplest, DSPI apparatus. The proposed deformation analysis is fast and computationally inexpensive. Diagnosis of physical state of the surface of a panel painting attributed to Nicolaus Haberschrack (a late-mediaeval painter active in Krakow) from the collection of the National Museum in Krakow is presented as an example of an in situ application of the developed methodology. It has allowed the effectiveness of the deformation analysis to be evaluated for the surface of a real painting (heterogeneous colour and texture) in a conservation studio where vibration level was considerably higher than in the laboratory. It has been established that the methodology, which offers automatic analysis of the interferometric fringe patterns, has a considerable potential to facilitate and render more precise the condition surveys of works of art.
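In time-averaged DSPI, the fringe brightness produced by sinusoidal out-of-plane vibration is proportional to |J0(4πa/λ)|, where a is the local vibration amplitude and λ the laser wavelength, which is why a Bessel-function fit recovers the amplitude at each point. A minimal sketch of that relation (the 532 nm wavelength is an illustrative assumption, and this is not the authors' software):

```python
import math

def j0(x, terms=30):
    """Bessel function of the first kind, order zero (power series)."""
    s = 0.0
    for m in range(terms):
        s += (-1) ** m * (x / 2) ** (2 * m) / math.factorial(m) ** 2
    return s

def fringe_brightness(amplitude, wavelength=532e-9):
    """Time-averaged DSPI fringe brightness for an out-of-plane
    vibration amplitude: proportional to |J0(4*pi*a/lambda)|."""
    return abs(j0(4 * math.pi * amplitude / wavelength))

# Brightness is maximal at rest and falls to its first dark fringe where
# 4*pi*a/lambda hits the first zero of J0 (about 2.405)
a_dark = 2.404826 * 532e-9 / (4 * math.pi)
print(fringe_brightness(0.0))  # -> 1.0 (no vibration, brightest fringe)
```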
Improving Grid Resilience through Informed Decision-making (IGRID)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, Laurie; Stamber, Kevin L.; Jeffers, Robert Fredric
The transformation of the distribution grid from a centralized to decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While changes are largely beneficial, the interface between grid operator and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that for the foreseeable future will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation, patterns that can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.
Advanced automation for in-space vehicle processing
NASA Technical Reports Server (NTRS)
Sklar, Michael; Wegerif, D.
1990-01-01
The primary objective of this 3-year planned study is to assure that the fully evolved Space Station Freedom (SSF) can support automated processing of exploratory mission vehicles. Current study assessments show that required extravehicular activity (EVA) and to some extent intravehicular activity (IVA) manpower requirements for required processing tasks far exceeds the available manpower. Furthermore, many processing tasks are either hazardous operations or they exceed EVA capability. Thus, automation is essential for SSF transportation node functionality. Here, advanced automation represents the replacement of human performed tasks beyond the planned baseline automated tasks. Both physical tasks such as manipulation, assembly and actuation, and cognitive tasks such as visual inspection, monitoring and diagnosis, and task planning are considered. During this first year of activity both the Phobos/Gateway Mars Expedition and Lunar Evolution missions proposed by the Office of Exploration have been evaluated. A methodology for choosing optimal tasks to be automated has been developed. Processing tasks for both missions have been ranked on the basis of automation potential. The underlying concept in evaluating and describing processing tasks has been the use of a common set of 'Primitive' task descriptions. Primitive or standard tasks have been developed both for manual or crew processing and automated machine processing.
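The abstract does not give the scoring formula behind the automation-potential ranking; a minimal sketch of how such a ranking might look, with entirely invented weights and task attributes (hazard, EVA-capability exceedance, crew hours):

```python
def automation_potential(task):
    # Invented weighting: hazardous tasks and tasks beyond EVA capability
    # score highest; heavy crew-time demand also raises the score.
    score = 0.0
    score += 3.0 if task["hazardous"] else 0.0
    score += 3.0 if task["exceeds_eva"] else 0.0
    score += 0.1 * task["crew_hours"]
    return score

def rank_tasks(tasks):
    # Highest automation potential first.
    return sorted(tasks, key=automation_potential, reverse=True)

# Hypothetical primitive tasks, for illustration only.
tasks = [
    {"name": "propellant transfer", "hazardous": True,  "exceeds_eva": True,  "crew_hours": 40},
    {"name": "visual inspection",   "hazardous": False, "exceeds_eva": False, "crew_hours": 10},
    {"name": "truss assembly",      "hazardous": False, "exceeds_eva": True,  "crew_hours": 60},
]
```

Any real methodology would of course weigh feasibility and cost as well; the point is only that a common primitive-task vocabulary makes such scoring mechanical.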
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yunfei; Wood, Eric; Burton, Evan
A shift towards increased levels of driving automation is generally expected to result in improved safety and traffic congestion outcomes. However, little empirical data exists to estimate the impact that automated driving could have on energy consumption and greenhouse gas emissions. In the absence of empirical data on differences between drive cycles from present day vehicles (primarily operated by humans) and future vehicles (partially or fully operated by computers), one approach is to model both situations over identical traffic conditions. Such an exercise requires traffic micro-simulation to not only accurately model vehicle operation under high levels of automation, but also (and potentially more challenging) vehicle operation under present day human drivers. This work seeks to quantify the ability of a commercial traffic micro-simulation program to accurately model real-world drive cycles in vehicles operated primarily by humans in terms of driving speed, acceleration, and simulated fuel economy. Synthetic profiles from models of freeway and arterial facilities near Atlanta, Georgia, are compared to empirical data collected from real-world drivers on the same facilities. Empirical and synthetic drive cycles are then simulated in a powertrain efficiency model to enable comparison on the basis of fuel economy. Synthetic profiles from traffic micro-simulation were found to exhibit low levels of transient behavior relative to the empirical data. Even with these differences, the synthetic and empirical data in this study agree well in terms of driving speed and simulated fuel economy. The differences in transient behavior between simulated and empirical data suggest that larger stochastic contributions in traffic micro-simulation (relative to those present in the traffic micro-simulation tool used in this study) are required to fully capture the arbitrary elements of human driving.
Interestingly, the lack of stochastic contributions from the models of human drivers in this study did not produce a significant discrepancy between fuel economy simulations based on synthetic and empirical data, a finding with implications for the potential energy efficiency gains of automated vehicle technology.
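The "transient behavior" being compared can be summarized by simple statistics of the speed trace; a minimal sketch using RMS acceleration as the metric (an assumed proxy, not necessarily the study's own measure), applied to toy 1 Hz traces:

```python
def rms_acceleration(speeds_mps, dt_s=1.0):
    # Finite-difference accelerations from an evenly sampled speed trace;
    # RMS acceleration is a simple proxy for "transient behavior".
    accels = [(b - a) / dt_s for a, b in zip(speeds_mps, speeds_mps[1:])]
    return (sum(a * a for a in accels) / len(accels)) ** 0.5

# Toy traces: a jittery "empirical" cycle versus a smooth "synthetic" one.
empirical = [0.0, 2.0, 1.0, 4.0, 3.0, 6.0, 5.0, 8.0]
synthetic = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

On such traces the smooth synthetic cycle scores lower, which mirrors the study's observation that micro-simulation under-represents the jitter of human driving even while matching mean speed.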
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Lo, P; Hoffman, J
Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional recon methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture.
Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
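The config-driven batch pattern described in the Methods (one job per dose/reconstruction combination) can be sketched as below; all key names and values are illustrative assumptions, not the pipeline's actual configuration schema:

```python
import itertools

def build_jobs(config):
    # Expand the configuration into one job per dose/reconstruction
    # combination; each job could then be submitted to a batch queue.
    return [{"raw_projections": config["raw_projections"],
             "dose_fraction": dose,
             "reconstruction": recon}
            for dose, recon in itertools.product(config["dose_levels"],
                                                 config["recon_kernels"])]

# Hypothetical configuration for a single raw-projection dataset.
cfg = {"raw_projections": "case001_raw.dat",
       "dose_levels": [1.0, 0.5, 0.25],
       "recon_kernels": ["wFBP_smooth", "wFBP_sharp", "iterative_ICD"]}
jobs = build_jobs(cfg)
```

The Cartesian-product expansion is what makes the reported ~30 minutes per dose/reconstruction combination the natural unit of cluster parallelism.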
Tretzel, Laura; Thomas, Andreas; Piper, Thomas; Hedeland, Mikael; Geyer, Hans; Schänzer, Wilhelm; Thevis, Mario
2016-05-10
Dried blood spots (DBS) represent a sample matrix collected under minimally invasive, straightforward, and robust conditions. DBS specimens have been shown to provide appropriate test material for different analytical disciplines, e.g., preclinical drug development, therapeutic drug monitoring, forensic toxicology, and diagnostic analysis of metabolic disorders in newborns. However, the sample preparation has occasionally been reported as laborious and time consuming. In order to minimize the manual workload and to substantiate the suitability of DBS for high sample throughput, the automation of sample preparation processes is of paramount interest. In the current study, the development and validation of a fully automated DBS extraction method coupled to online solid-phase extraction is presented, using the example of nicotine, its major metabolites nornicotine, cotinine, and trans-3'-hydroxycotinine, and the tobacco alkaloids anabasine and anatabine, based on the rationale that the use of nicotine-containing products for performance-enhancing purposes has been monitored by the World Anti-Doping Agency (WADA) for several years. Automation-derived DBS sample extracts were directed online to liquid chromatography high resolution/high mass accuracy tandem mass spectrometry, and target analytes were determined with support of four deuterated internal standards. Validation of the method yielded precise (CV < 7.5% for intraday and < 12.3% for interday measurements) and linear (r² > 0.998) results. The limit of detection was established at 5 ng mL⁻¹ for all studied compounds, the extraction recovery ranged from 25 to 44%, and no matrix effects were observed. To exemplify the applicability of the DBS online-SPE LC-MS/MS approach for sports drug testing purposes, the method was applied to authentic DBS samples obtained from smokers, snus users, and e-cigarette users.
Statistical evaluation of the obtained results indicated differences in metabolic behavior depending on the route of administration (inhalative versus buccal absorption) in terms of the ratio of nicotine and nornicotine.
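The validation figures quoted above (linearity as r², precision as CV%) reduce to two short calculations; a minimal sketch, with invented calibration data, of how such figures are computed from raw measurements:

```python
def linear_fit(x, y):
    # Ordinary least-squares line plus the coefficient of determination r²,
    # the linearity figure reported in method validation.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - slope * a - intercept) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

def cv_percent(values):
    # Coefficient of variation (%), the intraday/interday precision figure.
    n = len(values)
    m = sum(values) / n
    sd = (sum((v - m) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / m
```

In practice the x values would be spiked concentrations and the y values instrument responses normalized to the deuterated internal standards.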
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) the CMT inversion is fully automated and requires no user interaction, although the details of the process can be visually inspected later in many automatically plotted figures; (ii) the automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion; (iii) a data covariance matrix calculated from pre-event noise yields automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies; (iv) a Bayesian approach is used, so not only the best solution is obtained, but also the posterior probability density function; (v) a space-time grid search, effectively combined with least-squares inversion of the moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, including a comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of the data.
The software package programmed in Python has been designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow, or to process large previously existing earthquake catalogues and data sets.
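The covariance-weighted least-squares step in point (v) has a standard closed form: minimize (d − Gm)ᵀC⁻¹(d − Gm) over the moment-tensor components m. A minimal sketch on a toy, noise-free forward problem (the matrices here are synthetic stand-ins, not ISOLA's Green's functions):

```python
import numpy as np

def weighted_lsq(G, d, C):
    # Solve (Gᵀ C⁻¹ G) m = Gᵀ C⁻¹ d for the moment-tensor components m;
    # C plays the role of the covariance estimated from pre-event noise.
    Cinv = np.linalg.inv(C)
    A = G.T @ Cinv @ G
    b = G.T @ Cinv @ d
    return np.linalg.solve(A, b)

# Toy forward problem: 6 MT components, 40 waveform samples.
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 6))
m_true = np.array([1.0, -0.5, 0.3, 0.0, 0.2, -0.1])
C = np.diag(np.full(40, 0.01))   # assumed stationary noise variances
d = G @ m_true                   # noise-free data: m is recovered exactly
m_est = weighted_lsq(G, d, C)
```

Because this inner solve is linear and cheap, it can be repeated at every node of the space-time grid, which is what makes the grid search affordable compared with purely stochastic sampling.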
Model-Based Infrared Metrology for Advanced Technology Nodes and 300 mm Wafer Processing
NASA Astrophysics Data System (ADS)
Rosenthal, Peter A.; Duran, Carlos; Tower, Josh; Mazurenko, Alex; Mantz, Ulrich; Weidner, Peter; Kasic, Alexander
2005-09-01
The use of infrared spectroscopy for production semiconductor process monitoring has recently evolved from primarily unpatterned (blanket) test-wafer measurements, historically limited to blanket epitaxial, BPSG, and FSG layers, to new applications involving patterned product-wafer measurements and new measurement capabilities. Over the last several years, the semiconductor industry has adopted a new set of materials associated with copper/low-k interconnects, and new structures incorporating exotic materials, including silicon germanium, SOI substrates, and high aspect ratio trenches. The new device architectures and more chemically sophisticated materials have raised new process control and metrology challenges that are not addressed by current measurement technology. To address these challenges we have developed a new infrared metrology tool designed for emerging semiconductor production processes, in a package compatible with modern production and R&D environments. The tool incorporates recent advances in reflectance instrumentation, including highly accurate signal processing, optimized reflectometry optics, and model-based calibration and analysis algorithms. To meet the production requirements of the modern automated fab, the measurement hardware has been integrated with a fully automated 300 mm platform incorporating front opening unified pod (FOUP) interfaces, automated pattern recognition, and high throughput ultra clean robotics. The tool employs a suite of automated dispersion-model analysis algorithms capable of extracting a variety of layer properties from measured spectra. The new tool provides excellent measurement precision, tool matching, and a platform for deploying many new production and development applications. In this paper we will explore the use of model-based infrared analysis as a tool for characterizing novel bottle capacitor structures employed in high density dynamic random access memory (DRAM) chips.
We will explore the capability of the tool for characterizing multiple geometric parameters associated with the manufacturing process that are important to the yield and performance of advanced bottle DRAM devices.
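Model-based reflectance analysis of the kind described above fits a computed spectrum to the measured one. A greatly simplified sketch of such a forward model, assuming a single homogeneous, non-absorbing layer at normal incidence (the Airy formula), whereas the actual tool fits full dispersion models to patterned structures:

```python
import cmath
import math

def film_reflectance(n0, n1, n2, thickness, wavelength):
    # Airy formula for one layer (index n1) between ambient (n0) and
    # substrate (n2) at normal incidence; thickness and wavelength must
    # share the same length unit.
    r01 = (n0 - n1) / (n0 + n1)
    r12 = (n1 - n2) / (n1 + n2)
    beta = 2.0 * math.pi * n1 * thickness / wavelength
    phase = cmath.exp(-2j * beta)
    r = (r01 + r12 * phase) / (1.0 + r01 * r12 * phase)
    return abs(r) ** 2
```

Layer thickness (or, in the DRAM application, a geometric parameter of the bottle capacitor) is then recovered by adjusting the model parameters until the computed spectrum matches the measurement.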
Technical Assessment of the Transette Transit System
DOT National Transportation Integrated Search
1981-10-01
This report describes an assessment of the Transette system located at the Georgia Institute of Technology in Atlanta, Georgia. The Transette system is a unique, fully-automated, engineering prototype transportation test system installed on the campu...
Preprocessing film-copied MRI for studying morphological brain changes.
Pham, Tuan D; Eisenblätter, Uwe; Baune, Bernhard T; Berger, Klaus
2009-06-15
Magnetic resonance imaging (MRI) of the brain is an important data item for studying memory and morbidity in the elderly, as these images can provide useful information through quantitative measures of various regions of interest of the brain. In an effort to fully automate biomedical analysis of the brain that can be combined with genetic data from the same human population, in a setting where the original MRI records are missing, this paper presents two effective methods for addressing this imaging problem. The first method handles the restoration of the film-copied MRI. The second method involves the segmentation of the image data. Experimental results and comparisons with other methods suggest the usefulness of the proposed image analysis methodology.
Toward an Efficient Icing CFD Process Using an Interactive Software Toolkit: SmaggIce 2D
NASA Technical Reports Server (NTRS)
Vickerman, Mary B.; Choo, Yung K.; Schilling, Herbert W.; Baez, Marivell; Braun, Donald C.; Cotton, Barbara J.
2001-01-01
Two-dimensional CFD analysis of iced airfoils can be a labor-intensive task. The software toolkit SmaggIce 2D is being developed to help streamline the CFD process and provide the unique features needed for icing. When complete, it will include a combination of partially automated and fully interactive tools for all aspects of the tasks leading up to the flow analysis: geometry preparation, domain decomposition, block boundary discretization, gridding, and linking with a flow solver. It also includes tools to perform ice-shape characterization, an important aid in determining the relationship between ice characteristics and their effects on aerodynamic performance. Completed tools, work in progress, and planned features of the software toolkit are presented here.
Analysis of short tandem repeat polymorphisms using infrared fluorescence with M13 tailed primers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oetting, W.S.; Wiesner, G.; Laken, S.
Short tandem repeat polymorphisms (STRPs) are becoming increasingly important as markers for linkage analysis due to their large numbers in the human genome and their high degree of polymorphism. Fluorescence-based detection of the STRP pattern using the LI-COR model 4000S automated DNA sequencer eliminates the need for radioactivity and produces a digitized image that can be used for the analysis of the polymorphisms. In an effort to reduce the cost of STRP analysis, we have synthesized primers with a 19 bp extension complementary to the sequence of the M13 primer on the 5′ end of one of the two primers used in the amplification of the STRP, instead of using primers with direct conjugation of the infrared fluorescent dye. Up to 5 primer pairs can be multiplexed together, with the M13 primer-dye conjugate as the sole primer conjugated to the fluorescent dye. Comparisons between primers directly conjugated to the fluor and those having the M13 sequence extension show no difference in the ability to determine the STRP pattern. At present, the entire Weber 4A set of STRP markers is available with the M13 5′ extension. We are currently using this technique for linkage analysis of familial breast cancer and asthma. The combination of STRP analysis with fluorescence detection will allow this technique to be fully automated for allele scoring and linkage analysis.
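The tailing scheme described above attaches the universal extension to one primer of each pair so that a single dye-labeled M13 primer reports every amplicon. A minimal sketch; the actual 19 bp M13-complementary sequence is not quoted in the abstract, so a placeholder of the stated length is used:

```python
# Placeholder: the real 19 bp M13-complementary tail is not given here.
M13_TAIL = "N" * 19

def tail_primer(forward_primer, tail=M13_TAIL):
    # 5' extension on one primer of the pair; the single dye-labeled
    # universal M13 primer then primes every tailed amplicon.
    return tail + forward_primer

def multiplex(pairs, tail=M13_TAIL):
    # Up to five primer pairs share the one labeled universal primer.
    assert len(pairs) <= 5
    return [(tail_primer(f, tail), r) for f, r in pairs]
```

The economics follow directly: only the universal primer carries the costly dye conjugation, while each locus-specific primer is ordinary unlabeled synthesis.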