Science.gov

Sample records for mindboggle automated brain

  1. Mindboggling morphometry of human brains

    PubMed Central

    Bao, Forrest S.; Giard, Joachim; Stavsky, Eliezer; Lee, Noah; Rossa, Brian; Reuter, Martin; Chaibub Neto, Elias

    2017-01-01

    Mindboggle (http://mindboggle.info) is an open source brain morphometry platform that takes in preprocessed T1-weighted MRI data and outputs volume, surface, and tabular data containing label, feature, and shape information for further analysis. In this article, we document the software and demonstrate its use in studies of shape variation in healthy and diseased humans. The number of different shape measures and the size of the populations make this the largest and most detailed shape analysis of human brains ever conducted. Brain image morphometry shows great potential for providing much-needed biological markers for diagnosing, tracking, and predicting progression of mental health disorders. Very few software algorithms provide more than measures of volume and cortical thickness, while more subtle shape measures may provide more sensitive and specific biomarkers. Mindboggle computes a variety of (primarily surface-based) shapes: area, volume, thickness, curvature, depth, Laplace-Beltrami spectra, Zernike moments, etc. We evaluate Mindboggle’s algorithms using the largest set of manually labeled, publicly available brain images in the world and compare them against state-of-the-art algorithms where they exist. All data, code, and results of these evaluations are publicly available. PMID:28231282
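
    As a concrete illustration of one of the shape measures named above, the sketch below computes a low-order Laplacian spectrum for a triangle mesh. It is a minimal stand-in, not Mindboggle's implementation: it uses a uniform graph Laplacian, whereas production Laplace-Beltrami code uses a cotangent (finite-element) discretization, and the function name and inputs are illustrative.

      import numpy as np
      from scipy.sparse import coo_matrix, diags
      from scipy.sparse.linalg import eigsh

      def laplacian_spectrum(vertices, faces, k=10):
          """Smallest k eigenvalues of a uniform graph Laplacian of a mesh."""
          n = len(vertices)
          # Undirected edge list from the triangle faces (both directions).
          e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
          i = np.concatenate([e[:, 0], e[:, 1]])
          j = np.concatenate([e[:, 1], e[:, 0]])
          A = coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n)).tocsr()
          A.data[:] = 1.0  # collapse duplicate edges to weight 1
          L = diags(np.asarray(A.sum(axis=1)).ravel()) - A
          # First eigenvalue is ~0 for a connected mesh; the rest encode shape.
          # which="SM" is slow but adequate for small meshes in a sketch.
          return np.sort(eigsh(L, k=k, which="SM", return_eigenvectors=False))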

  2. 101 Labeled Brain Images and a Consistent Human Cortical Labeling Protocol

    PubMed Central

    Klein, Arno; Tourville, Jason

    2012-01-01

    We introduce the Mindboggle-101 dataset, the largest and most complete set of free, publicly accessible, manually labeled human brain images. To manually label the macroscopic anatomy in magnetic resonance images of 101 healthy participants, we created a new cortical labeling protocol that relies on robust anatomical landmarks and minimal manual edits after initialization with automated labels. The “Desikan–Killiany–Tourville” (DKT) protocol is intended to improve the ease, consistency, and accuracy of labeling human cortical areas. Given how difficult it is to label brains, the Mindboggle-101 dataset is intended to serve as a set of brain atlases for use in labeling other brains, as a normative dataset establishing morphometric variation in a healthy population for comparison against clinical populations, and as a resource for the development, training, testing, and evaluation of automated registration and labeling algorithms. To this end, we also introduce benchmarks for the evaluation of such algorithms by comparing our manual labels with labels automatically generated by probabilistic and multi-atlas registration-based approaches. All data, related software, and updated information are available at http://mindboggle.info/data. PMID:23227001

  3. "BRAIN": Baruch Retrieval of Automated Information for Negotiations.

    ERIC Educational Resources Information Center

    Levenstein, Aaron, Ed.

    1981-01-01

    A data processing program that can be used as a research and collective bargaining aid for colleges is briefly described and the fields of the system are outlined. The system, known as BRAIN (Baruch Retrieval of Automated Information for Negotiations), is designed primarily as an instrument for quantitative and qualitative analysis. BRAIN consists…

  4. Automated in situ brain imaging for mapping the Drosophila connectome.

    PubMed

    Lin, Chi-Wen; Lin, Hsuan-Wen; Chiu, Mei-Tzu; Shih, Yung-Hsin; Wang, Ting-Yuan; Chang, Hsiu-Ming; Chiang, Ann-Shyn

    2015-01-01

    Mapping the connectome, a wiring diagram of the entire brain, requires large-scale imaging of numerous single neurons with diverse morphology. It is a formidable challenge to reassemble these neurons into a virtual brain and correlate their structural networks with neuronal activities, which are measured in different experiments to analyze the flow of information in the brain. Here, we report an in situ brain imaging technique called Fly Head Array Slice Tomography (FHAST), which permits the reconstruction of structural and functional data to generate an integrative connectome in Drosophila. Using FHAST, the head capsules of an array of flies can be opened with a single vibratome sectioning to expose the brains, replacing the painstaking and inconsistent brain dissection process. FHAST can reveal in situ brain neuroanatomy with minimal distortion to neuronal morphology and maintain intact neuronal connections to peripheral sensory organs. Most importantly, it enables the automated 3D imaging of 100 intact fly brains in each experiment. The established head model with in situ brain neuroanatomy allows functional data to be accurately registered and associated with 3D images of single neurons. These integrative data can then be shared, searched, visualized, and analyzed for understanding how brain-wide activities in different neurons within the same circuit function together to control complex behaviors.

  5. Automated deep-phenotyping of the vertebrate brain.

    PubMed

    Allalou, Amin; Wu, Yuelong; Ghannad-Rezaie, Mostafa; Eimon, Peter M; Yanik, Mehmet Fatih

    2017-04-13

    Here, we describe an automated platform suitable for large-scale deep-phenotyping of zebrafish mutant lines, which uses optical projection tomography to rapidly image brain-specific gene expression patterns in 3D at cellular resolution. Registration algorithms and correlation analysis are then used to compare 3D expression patterns, to automatically detect all statistically significant alterations in mutants, and to map them onto a brain atlas. Automated deep-phenotyping of a mutation in the master transcriptional regulator fezf2 not only detects all known phenotypes but also uncovers important novel neural deficits that were overlooked in previous studies. In the telencephalon, we show for the first time that fezf2 mutant zebrafish have significant patterning deficits, particularly in glutamatergic populations. Our findings reveal unexpected parallels between fezf2 function in zebrafish and mice, where mutations cause deficits in glutamatergic neurons of the telencephalon-derived neocortex.

  6. Automated Talairach atlas labels for functional brain mapping.

    PubMed

    Lancaster, J L; Woldorff, M G; Parsons, L M; Liotti, M; Freitas, C S; Rainey, L; Kochunov, P V; Nickerson, D; Mikiten, S A; Fox, P T

    2000-07-01

    An automated coordinate-based system to retrieve brain labels from the 1988 Talairach Atlas, called the Talairach Daemon (TD), was previously introduced [Lancaster et al., 1997]. In the present study, the TD system and its 3-D database of labels for the 1988 Talairach atlas were tested for labeling of functional activation foci. TD system labels were compared with author-designated labels of activation coordinates from over 250 published functional brain-mapping studies and with manual atlas-derived labels from an expert group using a subset of these activation coordinates. Automated labeling by the TD system compared well with authors' labels, with a 70% or greater label match averaged over all locations. Author-label matching improved to greater than 90% within a search range of ±5 mm for most sites. An adaptive grey matter (GM) range-search utility was evaluated using individual activations from the M1 mouth region (30 subjects, 52 sites). It provided an 87% label match to Brodmann area labels (BA 4 & BA 6) within a search range of ±5 mm. Using the adaptive GM range search, the TD system's overall match with authors' labels (90%) was better than that of the expert group (80%). When used in concert with authors' deeper knowledge of an experiment, the TD system provides consistent and comprehensive labels for brain activation foci. Additional suggested applications of the TD system include interactive labeling, anatomical grouping of activation foci, lesion-deficit analysis, and neuroanatomy education.
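
    The range-search idea above lends itself to a small worked example. The sketch below is a hypothetical coordinate-to-label lookup with an expanding cubic search, in the spirit of the TD grey-matter range search; the labels dictionary is a toy stand-in for the TD's 3-D label database.

      import numpy as np

      def lookup_label(labels, xyz, max_range_mm=5):
          """Label at xyz, widening a cubic search up to ±max_range_mm."""
          x, y, z = (int(round(c)) for c in xyz)
          for r in range(max_range_mm + 1):
              hits = [labels.get((x + dx, y + dy, z + dz))
                      for dx in range(-r, r + 1)
                      for dy in range(-r, r + 1)
                      for dz in range(-r, r + 1)]
              hits = [h for h in hits if h is not None]
              if hits:
                  # Majority label within the current search cube.
                  return max(set(hits), key=hits.count), r
          return None, None

      labels = {(-52, -8, 32): "BA 4", (-50, -6, 30): "BA 6"}
      print(lookup_label(labels, (-51.8, -7.9, 31.9)))   # -> ('BA 4', 0)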

  7. Automated regional behavioral analysis for human brain images

    PubMed Central

    Lancaster, Jack L.; Laird, Angela R.; Eickhoff, Simon B.; Martinez, Michael J.; Fox, P. Mickle; Fox, Peter T.

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images. Behavioral and coordinate data were taken from the BrainMap database (http://www.brainmap.org/), which documents over 20 years of published functional brain imaging studies. A brain region of interest (ROI) for behavioral analysis can be defined in functional images, anatomical images or brain atlases, if images are spatially normalized to MNI or Talairach standards. Results of behavioral analysis are presented for each of BrainMap's 51 behavioral sub-domains spanning five behavioral domains (Action, Cognition, Emotion, Interoception, and Perception). For each behavioral sub-domain the fraction of coordinates falling within the ROI was computed and compared with the fraction expected if coordinates for the behavior were not clustered, i.e., uniformly distributed. When the difference between these fractions is large, behavioral association is indicated. A z-score ≥ 3.0 was used to designate statistically significant behavioral association. The left-right symmetry of ~100K activation foci was evaluated by hemisphere, lobe, and by behavioral sub-domain. Results highlighted the classic left-side dominance for language while asymmetry for most sub-domains (~75%) was not statistically significant. Use scenarios were presented for anatomical ROIs from the Harvard-Oxford cortical (HOC) brain atlas, functional ROIs from statistical parametric maps in a TMS-PET study, a task-based fMRI study, and ROIs from the ten “major representative” functional networks in a previously published resting state fMRI study. Statistically significant behavioral findings for these use scenarios were consistent with published behaviors for associated anatomical and functional regions. PMID:22973224
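
    A minimal sketch of the association test just described, under the assumption (mine, for illustration) that the null model is a simple binomial on the expected fraction: compare the observed fraction of a sub-domain's foci inside the ROI with the fraction expected if the foci were uniformly distributed.

      import numpy as np

      def behavioral_z(n_in_roi, n_total, expected_fraction):
          p_obs = n_in_roi / n_total
          se = np.sqrt(expected_fraction * (1 - expected_fraction) / n_total)
          return (p_obs - expected_fraction) / se

      # e.g. 120 of 2,000 language foci in an ROI occupying 2% of brain volume:
      z = behavioral_z(120, 2000, 0.02)
      print(f"z = {z:.1f}; significant = {z >= 3.0}")   # z = 12.8; True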

  8. Automated regional behavioral analysis for human brain images.

    PubMed

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images. Behavioral and coordinate data were taken from the BrainMap database (http://www.brainmap.org/), which documents over 20 years of published functional brain imaging studies. A brain region of interest (ROI) for behavioral analysis can be defined in functional images, anatomical images or brain atlases, if images are spatially normalized to MNI or Talairach standards. Results of behavioral analysis are presented for each of BrainMap's 51 behavioral sub-domains spanning five behavioral domains (Action, Cognition, Emotion, Interoception, and Perception). For each behavioral sub-domain the fraction of coordinates falling within the ROI was computed and compared with the fraction expected if coordinates for the behavior were not clustered, i.e., uniformly distributed. When the difference between these fractions is large, behavioral association is indicated. A z-score ≥ 3.0 was used to designate statistically significant behavioral association. The left-right symmetry of ~100K activation foci was evaluated by hemisphere, lobe, and by behavioral sub-domain. Results highlighted the classic left-side dominance for language while asymmetry for most sub-domains (~75%) was not statistically significant. Use scenarios were presented for anatomical ROIs from the Harvard-Oxford cortical (HOC) brain atlas, functional ROIs from statistical parametric maps in a TMS-PET study, a task-based fMRI study, and ROIs from the ten "major representative" functional networks in a previously published resting state fMRI study. Statistically significant behavioral findings for these use scenarios were consistent with published behaviors for associated anatomical and functional regions.

  9. Automated deep-phenotyping of the vertebrate brain

    PubMed Central

    Allalou, Amin; Wu, Yuelong; Ghannad-Rezaie, Mostafa; Eimon, Peter M; Yanik, Mehmet Fatih

    2017-01-01

    Here, we describe an automated platform suitable for large-scale deep-phenotyping of zebrafish mutant lines, which uses optical projection tomography to rapidly image brain-specific gene expression patterns in 3D at cellular resolution. Registration algorithms and correlation analysis are then used to compare 3D expression patterns, to automatically detect all statistically significant alterations in mutants, and to map them onto a brain atlas. Automated deep-phenotyping of a mutation in the master transcriptional regulator fezf2 not only detects all known phenotypes but also uncovers important novel neural deficits that were overlooked in previous studies. In the telencephalon, we show for the first time that fezf2 mutant zebrafish have significant patterning deficits, particularly in glutamatergic populations. Our findings reveal unexpected parallels between fezf2 function in zebrafish and mice, where mutations cause deficits in glutamatergic neurons of the telencephalon-derived neocortex. DOI: http://dx.doi.org/10.7554/eLife.23379.001 PMID:28406399

  10. Brain MAPS: an automated, accurate and robust brain extraction technique using a template library

    PubMed Central

    Leung, Kelvin K.; Barnes, Josephine; Modat, Marc; Ridgway, Gerard R.; Bartlett, Jonathan W.; Fox, Nick C.; Ourselin, Sébastien

    2011-01-01

    Whole brain extraction is an important pre-processing step in neuro-image analysis. Manual or semi-automated brain delineations are labour-intensive and thus not desirable in large studies, meaning that automated techniques are preferable. The accuracy and robustness of automated methods are crucial because human expertise may be required to correct any sub-optimal results, which can be very time consuming. We compared the accuracy of four automated brain extraction methods: Brain Extraction Tool (BET), Brain Surface Extractor (BSE), Hybrid Watershed Algorithm (HWA) and a Multi-Atlas Propagation and Segmentation (MAPS) technique we have previously developed for hippocampal segmentation. The four methods were applied to extract whole brains from 682 1.5T and 157 3T T1-weighted MR baseline images from the Alzheimer’s Disease Neuroimaging Initiative database. Semi-automated brain segmentations with manual editing and checking were used as the gold standard against which the results were compared. The median Jaccard index of MAPS was higher than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests), and the 1st–99th centile range of the Jaccard index of MAPS was smaller than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests). HWA and MAPS were found to be best at including all brain tissues (median false negative rate ≤ 0.010% for 1.5T scans and ≤ 0.019% for 3T scans, both methods). The median Jaccard index of MAPS was similar in 1.5T and 3T scans, whereas those of BET, BSE and HWA were higher in 1.5T scans than 3T scans (p < 0.05, all tests). We found that the diagnostic group had a small effect on the median Jaccard index of all four methods. In conclusion, MAPS had relatively high accuracy and low variability compared to HWA, BET and BSE in MR scans with and without atrophy. PMID:21195780
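
    The two evaluation metrics used above are easy to state in code; the sketch below computes them for boolean NumPy masks (automated vs. gold-standard segmentations). Names are illustrative.

      import numpy as np

      def jaccard(auto_mask, gold_mask):
          inter = np.logical_and(auto_mask, gold_mask).sum()
          union = np.logical_or(auto_mask, gold_mask).sum()
          return inter / union

      def false_negative_rate(auto_mask, gold_mask):
          # Fraction of gold-standard brain voxels missed by the automated mask.
          missed = np.logical_and(gold_mask, ~auto_mask).sum()
          return missed / gold_mask.sum()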

  11. Automated 3-Dimensional Brain Atlas Fitting to Microelectrode Recordings from Deep Brain Stimulation Surgeries

    PubMed Central

    Luján, J. Luis; Noecker, Angela M.; Butson, Christopher R.; Cooper, Scott E.; Walter, Benjamin L.; Vitek, Jerrold L.; McIntyre, Cameron C.

    2009-01-01

    Objective: Deep brain stimulation (DBS) surgeries commonly rely on brain atlases and microelectrode recordings (MER) to help identify the target location for electrode implantation. We present an automated method for optimally fitting a 3-dimensional brain atlas to intraoperative MER and predicting a target DBS electrode location in stereotactic coordinates for the patient. Methods: We retrospectively fit a 3-dimensional brain atlas to MER points from 10 DBS surgeries targeting the subthalamic nucleus (STN). We used a constrained optimization algorithm to maximize the MER points correctly fitted (i.e., contained) within the appropriate atlas nuclei. We compared our optimization approach to conventional anterior commissure-posterior commissure (AC/PC) scaling, and to manual fits performed by four experts. A theoretical DBS electrode target location in the dorsal STN was customized to each patient as part of the fitting process and compared to the location of the clinically defined therapeutic stimulation contact. Results: The human expert and computer optimization fits achieved significantly better fits than the AC/PC scaling (80, 81, and 41% of correctly fitted MER, respectively). However, the optimization fits were performed in less time than the expert fits and converged to a single solution for each patient, eliminating interexpert variance. Conclusions and Significance: DBS therapeutic outcomes are directly related to electrode implantation accuracy. Our automated fitting techniques may aid in the surgical decision-making process by optimally integrating brain atlas and intraoperative neurophysiological data to provide a visual guide for target identification. PMID:19556832
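
    A hedged sketch of the fitting idea: search over a translation-plus-scale transform that maximizes the number of MER points falling inside the intended atlas nucleus. The published method uses a richer constrained optimization; here Powell's method is applied to a piecewise-constant objective on a voxelized atlas, and all names are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def fit_atlas(mer_points_mm, atlas_labels, voxel_mm, target_label):
          def misfit(params):
              tx, ty, tz, s = params
              pts = mer_points_mm * s + np.array([tx, ty, tz])
              idx = np.round(pts / voxel_mm).astype(int)
              ok = np.all((idx >= 0) & (idx < atlas_labels.shape), axis=1)
              inside = atlas_labels[tuple(idx[ok].T)] == target_label
              return -float(inside.sum())   # maximize points correctly contained

          x0 = np.array([0.0, 0.0, 0.0, 1.0])   # no shift, unit scale
          return minimize(misfit, x0, method="Powell")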

  12. Automated coregistration and statistical analyses of SPECT brain images

    SciTech Connect

    Gong, W.; Devous, M.D.

    1994-05-01

    Statistical analyses of SPECT image data often require highly accurate image coregistration. Several image coregistration algorithms have been developed. The Pellizari algorithm (PA) uses the Powell technique to estimate transformation parameters between the "head" (model) and "hat" (images to be registered). Image normalization and good initial transformation parameters heavily affect the accuracy and speed of convergence of the PA. We have explored various normalization methods and found a simple technique that avoids most artificial edge effects and minimizes blurring of useful edges. We have tested the effects on accuracy and convergence speed of the PA caused by different initial transformation parameters. From these data, a modified PA was integrated into an automated coregistration system for SPECT brain images on the PRISM 3000S under X Windows. The system yields an accuracy of approximately 2 mm between model and registered images, and employs minimal user intervention through a simple graphic user interface. Data are automatically resliced, normalized and coregistered, with the user choosing only the slice range for inclusion and two initial transformation parameters (under computer-aided guidance). Coregistration is accomplished (converges) in approximately 8 min for a 128 × 128 × 128 set of 2 mm³ voxels. The complete process (editing, reslicing, normalization, coregistration) takes about 20 min. We have also developed automated 3-dimensional parametric images ("t", "z", and subtraction images) from coregistered data sets for statistical analyses. Data are compared against a coregistered normal control group (N = 50) distributed in age and gender for matching against subject samples.
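
    The "head and hat" cost that the PA minimizes can be sketched compactly: sample surface points from the image to be registered (the "hat") and minimize their mean distance to the model surface (the "head"). The sketch below uses a distance transform of the head surface and, for brevity, optimizes translation only; it is illustrative rather than a reimplementation.

      import numpy as np
      from scipy.ndimage import binary_erosion, distance_transform_edt
      from scipy.optimize import minimize

      def register(head_mask, hat_points_mm, voxel_mm=2.0):
          # Distance (in voxels) from every voxel to the head surface.
          surface = head_mask & ~binary_erosion(head_mask)
          dist = distance_transform_edt(~surface)

          def cost(shift):
              pts = (hat_points_mm + shift) / voxel_mm
              idx = np.clip(np.round(pts).astype(int), 0,
                            np.array(head_mask.shape) - 1)
              return dist[tuple(idx.T)].mean()

          return minimize(cost, np.zeros(3), method="Powell")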

  13. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach the best performance, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to those of supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerate the translation of rt-fMRI BCIs from research to clinical applications.
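
    The GLM-based decoding step can be sketched in a few lines: regress the ROI-averaged fMRI time course onto one regressor per mental task and decode the encoding period as the task with the largest fitted weight. Real rt-fMRI pipelines add HRF convolution, detrending, and incremental updates, all omitted here; names are illustrative.

      import numpy as np

      def decode_task(roi_timecourse, task_regressors):
          """task_regressors: (n_timepoints, n_tasks) design matrix."""
          X = np.column_stack([task_regressors,
                               np.ones(len(roi_timecourse))])  # + intercept
          beta, *_ = np.linalg.lstsq(X, roi_timecourse, rcond=None)
          return int(np.argmax(beta[:-1]))   # index of the winning mental task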

  14. Fast whole-brain optical tomography capable of automated slice-collection (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Jiang, Tao; Deng, Lei; Long, Beng; Peng, Jie; Luo, Qingming; Gong, Hui

    2016-03-01

    Acquiring brain-wide composite information on neuroanatomical and molecular phenotyping is crucial to understanding brain functions. However, current whole-brain imaging methods based on mechanical sectioning have not achieved brain-wide acquisition of both neuroanatomical and molecular phenotyping, owing to the lack of appropriate whole-brain immunostaining of embedded samples. Here, we present a novel strategy for acquiring brain-wide structural and molecular maps in the same brain, combining whole-brain imaging with subsequent immunostaining of automatically collected slices. We developed a whole-brain imaging system capable of automatically imaging and then collecting imaged tissue slices in order. The system contains three parts: structured illumination microscopy for high-throughput optical sectioning, a vibratome for high-precision sectioning, and a slice-collection device for automated collection of tissue slices. With our system, we could acquire a whole-brain dataset of an agarose-embedded mouse brain at a lateral resolution of 0.33 µm with z-interval sampling of 100 µm in 9 h, and automatically collect the imaged slices in sequence. Subsequently, we performed immunohistochemistry on the collected slices in the routine way. We acquired mouse whole-brain imaging datasets of multiple specific types of neurons, proteins, and gene expression profiles. We believe our method could accelerate systematic analysis of brain anatomical structure together with specific protein or gene expression information, and the understanding of how the brain processes information and generates behavior.

  15. NeuroGPS: automated localization of neurons for brain circuits using L1 minimization model

    NASA Astrophysics Data System (ADS)

    Quan, Tingwei; Zheng, Ting; Yang, Zhongqing; Ding, Wenxiang; Li, Shiwei; Li, Jing; Zhou, Hang; Luo, Qingming; Gong, Hui; Zeng, Shaoqun

    2013-04-01

    Drawing the map of neuronal circuits at microscopic resolution is important for explaining how the brain works. Recent progress in fluorescence labeling and imaging techniques has made it possible to image the whole brain of a rodent such as a mouse at submicron resolution. Given the huge volume of such datasets, automatically tracing and reconstructing the neuronal connections from the image stacks is essential for assembling large-scale circuits. However, the first step, automated localization of the somata across different brain areas, remains a challenge. Here, we addressed this problem by introducing an L1 minimization model. We developed a fully automated system, NeuronGlobalPositionSystem (NeuroGPS), that is robust to the broad diversity of shape, size and density of the neurons in a mouse brain. This method allows locating neurons across different brain areas without human intervention. We believe this method will facilitate the analysis of neuronal circuits for brain function and disease studies.
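
    The role of the L1 penalty is easiest to see in a toy version, shown below for one dimension: model the signal as a sparse combination of soma-like blobs and let the L1 term zero out most coefficients, so the survivors localize the somata. The published model is 3-D and considerably more elaborate; this sketch only conveys the principle.

      import numpy as np
      from sklearn.linear_model import Lasso

      n = 200
      x = np.arange(n)
      blob = lambda c: np.exp(-0.5 * ((x - c) / 3.0) ** 2)   # soma template
      D = np.stack([blob(c) for c in x], axis=1)   # one atom per position
      signal = 2 * blob(50) + blob(140) + 0.05 * np.random.randn(n)

      fit = Lasso(alpha=0.05, positive=True, max_iter=10000).fit(D, signal)
      print("detected soma centres:", np.where(fit.coef_ > 0.1)[0])  # ~50, 140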

  16. NeuroGPS: automated localization of neurons for brain circuits using L1 minimization model.

    PubMed

    Quan, Tingwei; Zheng, Ting; Yang, Zhongqing; Ding, Wenxiang; Li, Shiwei; Li, Jing; Zhou, Hang; Luo, Qingming; Gong, Hui; Zeng, Shaoqun

    2013-01-01

    Drawing the map of neuronal circuits at microscopic resolution is important for explaining how the brain works. Recent progress in fluorescence labeling and imaging techniques has made it possible to image the whole brain of a rodent such as a mouse at submicron resolution. Given the huge volume of such datasets, automatically tracing and reconstructing the neuronal connections from the image stacks is essential for assembling large-scale circuits. However, the first step, automated localization of the somata across different brain areas, remains a challenge. Here, we addressed this problem by introducing an L1 minimization model. We developed a fully automated system, NeuronGlobalPositionSystem (NeuroGPS), that is robust to the broad diversity of shape, size and density of the neurons in a mouse brain. This method allows locating neurons across different brain areas without human intervention. We believe this method will facilitate the analysis of neuronal circuits for brain function and disease studies.

  17. Comparison of a brain-based adaptive system and a manual adaptable system for invoking automation.

    PubMed

    Bailey, Nathan R; Scerbo, Mark W; Freeman, Frederick G; Mikulka, Peter J; Scott, Lorissa A

    2006-01-01

    Two experiments are presented examining adaptive and adaptable methods for invoking automation. Empirical investigations of adaptive automation have focused on methods used to invoke automation or on automation-related performance implications. However, no research has addressed whether performance benefits associated with brain-based systems exceed those in which users have control over task allocations. Participants performed monitoring and resource management tasks as well as a tracking task that shifted between automatic and manual modes. In the first experiment, participants worked with an adaptive system that used their electroencephalographic signals to switch the tracking task between automatic and manual modes. Participants were also divided between high- and low-reliability conditions for the system-monitoring task as well as high- and low-complacency potential. For the second experiment, participants operated an adaptable system that gave them manual control over task allocations. Results indicated increased situation awareness (SA) of gauge instrument settings for individuals high in complacency potential using the adaptive system. In addition, participants who had control over automation performed more poorly on the resource management task and reported higher levels of workload. A comparison between systems also revealed enhanced SA of gauge instrument settings and decreased workload in the adaptive condition. The present results suggest that brain-based adaptive automation systems may enhance perceptual level SA while reducing mental workload relative to systems requiring user-initiated control. Potential applications include automated systems for which operator monitoring performance and high-workload conditions are of concern.

  18. Serial two-photon tomography: an automated method for ex-vivo mouse brain imaging

    PubMed Central

    Ragan, Timothy; Kadiri, Lolahon R.; Venkataraju, Kannan Umadevi; Bahlmann, Karsten; Sutin, Jason; Taranda, Julian; Arganda-Carreras, Ignacio; Kim, Yongsoo; Seung, H. Sebastian

    2011-01-01

    Here we describe an automated method, which we call serial two-photon (STP) tomography, that achieves high-throughput fluorescence imaging of mouse brains by integrating two-photon microscopy and tissue sectioning. STP tomography generates high-resolution datasets that are free of distortions and can be readily warped in 3D, for example, for comparing multiple anatomical tracings. This method opens the door to routine systematic studies of neuroanatomy in mouse models of human brain disorders. PMID:22245809

  19. BrainCAT - a tool for automated and combined functional magnetic resonance imaging and diffusion tensor imaging brain connectivity analysis

    PubMed Central

    Marques, Paulo; Soares, José M.; Alves, Victor; Sousa, Nuno

    2013-01-01

    Multimodal neuroimaging studies have recently become a trend in the neuroimaging field and are certainly a standard for the future. Brain connectivity studies combining functional activation patterns, using resting-state or task-related functional magnetic resonance imaging (fMRI), with diffusion tensor imaging (DTI) tractography are growing in popularity. However, there is a scarcity of solutions for performing optimized, intuitive, and consistent multimodal fMRI/DTI studies. Here we propose a new tool, the brain connectivity analysis tool (BrainCAT), for automated and standardized multimodal analysis of combined fMRI/DTI data, using freely available tools. With a friendly graphical user interface, BrainCAT aims to make data processing easier and faster, implementing a fully automated data processing pipeline and minimizing the need for user intervention, which hopefully will expand the use of combined fMRI/DTI studies. Its validity was tested in an aging study of default mode network (DMN) white matter connectivity. The results evidenced the cingulum bundle as the structural connector of the precuneus/posterior cingulate cortex and the medial frontal cortex, regions of the DMN. Moreover, mean fractional anisotropy (FA) values along the cingulum extracted with BrainCAT showed a strong correlation with FA values from the manual selection of the same bundle. Taken together, these results provide evidence that BrainCAT is suitable for these analyses. PMID:24319419

  20. Automated Recognition of Brain Region Mentions in Neuroscience Literature

    PubMed Central

    French, Leon; Lane, Suzanne; Xu, Lydia; Pavlidis, Paul

    2009-01-01

    The ability to computationally extract mentions of neuroanatomical regions from the literature would assist linking to other entities within and outside of an article. Examples include extracting reports of connectivity or region-specific gene expression. To facilitate text mining of neuroscience literature we have created a corpus of manually annotated brain region mentions. The corpus contains 1,377 abstracts with 18,242 brain region annotations. Interannotator agreement was evaluated for a subset of the documents, and was 90.7% and 96.7% for strict and lenient matching, respectively. We observed a large vocabulary of over 6,000 unique brain region terms and 17,000 words. For automatic extraction of brain region mentions we evaluated simple dictionary methods and complex natural language processing techniques. The dictionary methods based on neuroanatomical lexicons recalled 36% of the mentions with 57% precision. The best performance was achieved using a conditional random field (CRF) with a rich feature set. Features were based on morphological, lexical, syntactic and contextual information. The CRF recalled 76% of mentions at 81% precision; when partial matches are counted, recall and precision increase to 86% and 92%, respectively. We suspect a large amount of error is due to coordinating conjunctions, previously unseen words, and brain regions of less commonly studied organisms. We found context windows, lemmatization and abbreviation expansion to be the most informative techniques. The corpus is freely available at http://www.chibi.ubc.ca/WhiteText/. PMID:19750194
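
    A sketch of the feature design described above, using the third-party sklearn-crfsuite package (an assumption; the authors' toolchain may differ). Each token gets lexical, morphological, and contextual features, and the CRF is trained on BIO-tagged sentences; the tiny training set here is purely illustrative.

      import sklearn_crfsuite

      def token_features(tokens, i):
          t = tokens[i]
          return {
              "lower": t.lower(),            # lexical
              "suffix3": t[-3:],             # morphological
              "is_title": t.istitle(),
              "prev": tokens[i - 1].lower() if i > 0 else "<s>",   # context
              "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
          }

      sents = [["projections", "to", "the", "dentate", "gyrus"]]
      X = [[token_features(s, i) for i in range(len(s))] for s in sents]
      y = [["O", "O", "O", "B-REGION", "I-REGION"]]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
      crf.fit(X, y)
      print(crf.predict(X))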

  21. Automated detection of periventricular veins on 7 T brain MRI

    NASA Astrophysics Data System (ADS)

    Kuijf, Hugo J.; Bouvy, Willem H.; Zwanenburg, Jaco J. M.; Viergever, Max A.; Biessels, Geert Jan; Vincken, Koen L.

    2015-03-01

    Cerebral small vessel disease is common in elderly persons and a leading cause of cognitive decline, dementia, and acute stroke. With the introduction of ultra-high field strength 7.0T MRI, it is possible to visualize small vessels in the brain. In this work, a proof-of-principle study is conducted to assess the feasibility of automatically detecting periventricular veins. Periventricular veins are organized in a fan pattern and drain venous blood from the brain towards the caudate vein of Schlesinger, which is situated along the lateral ventricles. Just outside this vein, a region-of-interest (ROI) through which all periventricular veins must pass is defined. Within this ROI, a combination of the vesselness filter, tubular tracking, and hysteresis thresholding is applied to locate periventricular veins. All detected locations were evaluated by an expert human observer. The results showed a positive predictive value of 88% and a sensitivity of 95% for detecting periventricular veins. The proposed method shows good results in detecting periventricular veins in the brain on 7.0T MR images. Compared with previous works, which used only a 1D or 2D ROI and limited image processing, our work presents a more comprehensive definition of the ROI, advanced image processing techniques to detect periventricular veins, and a quantitative analysis of the performance. The results of this proof-of-principle study are promising and will be used to assess periventricular veins on 7.0T brain MRI.
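
    The detection chain described above maps naturally onto scikit-image building blocks, as the hedged sketch below shows: a vesselness filter followed by hysteresis thresholding, restricted to the ROI. The tubular-tracking step is omitted, and the thresholds and ridge polarity are illustrative rather than the paper's values.

      import numpy as np
      from skimage.filters import frangi, apply_hysteresis_threshold

      def detect_veins(image, roi_mask, low=0.02, high=0.10):
          # Veins are hypointense on these scans, hence dark ("black") ridges.
          vesselness = frangi(image, black_ridges=True)
          vesselness[~roi_mask] = 0        # search only inside the ROI
          return apply_hysteresis_threshold(vesselness, low, high)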

  22. An automated method measures variability in P-glycoprotein and ABCG2 densities across brain regions and brain matter.

    PubMed

    Kannan, Pavitra; Schain, Martin; Kretzschmar, Warren W; Weidner, Lora; Mitsios, Nicholas; Gulyás, Balázs; Blom, Hans; Gottesman, Michael M; Innis, Robert B; Hall, Matthew D; Mulder, Jan

    2017-06-01

    Changes in P-glycoprotein and ABCG2 densities may play a role in amyloid-beta accumulation in Alzheimer's disease. However, previous studies report conflicting results from different brain regions, without correcting for changes in vessel density. We developed an automated method to measure transporter density exclusively within the vascular space, thereby correcting for vessel density. We then examined variability in transporter density across brain regions, matter, and disease using two cohorts of post-mortem brains from Alzheimer's disease patients and age-matched controls. Changes in transporter density were also investigated in capillaries near plaques and at the mRNA level. P-glycoprotein density varied with brain region and matter, whereas ABCG2 density varied with brain matter. In temporal cortex, P-glycoprotein density was 53% lower in Alzheimer's disease samples than in controls, and was reduced by 35% in capillaries near plaque deposits within Alzheimer's disease samples. ABCG2 density was unaffected in Alzheimer's disease. No differences were detected at the transcript level. Our study indicates that region-specific changes in transporter densities can occur globally and locally near amyloid-beta deposits in Alzheimer's disease, providing an explanation for conflicting results in the literature. When differences in region and matter are accounted for, changes in density can be reproducibly measured using our automated method.

  23. Mapping of brain activity by automated volume analysis of immediate early genes

    PubMed Central

    Renier, Nicolas; Adams, Eliza L.; Kirst, Christoph; Wu, Zhuhao; Azevedo, Ricardo; Kohl, Johannes; Autry, Anita E.; Kadiri, Lolahon; Venkataraju, Kannan Umadevi; Zhou, Yu; Wang, Victoria X.; Tang, Cheuk Y.; Olsen, Olav; Dulac, Catherine; Osten, Pavel; Tessier-Lavigne, Marc

    2016-01-01

    Understanding how neural information is processed in physiological and pathological states would benefit from precise detection, localization and quantification of the activity of all neurons across the entire brain, which has not to date been achieved in the mammalian brain. We introduce a pipeline for high speed acquisition of brain activity at cellular resolution through profiling immediate early gene expression using immunostaining and light-sheet fluorescence imaging, followed by automated mapping and analysis of activity by an open-source software program we term ClearMap. We validate the pipeline first by analysis of brain regions activated in response to haloperidol. Next, we report new cortical regions downstream of whisker-evoked sensory processing during active exploration. Lastly, we combine activity mapping with axon tracing to uncover new brain regions differentially activated during parenting behavior. This pipeline is widely applicable to different experimental paradigms, including animal species for which transgenic activity reporters are not readily available. PMID:27238021

  24. Mapping of Brain Activity by Automated Volume Analysis of Immediate Early Genes.

    PubMed

    Renier, Nicolas; Adams, Eliza L; Kirst, Christoph; Wu, Zhuhao; Azevedo, Ricardo; Kohl, Johannes; Autry, Anita E; Kadiri, Lolahon; Umadevi Venkataraju, Kannan; Zhou, Yu; Wang, Victoria X; Tang, Cheuk Y; Olsen, Olav; Dulac, Catherine; Osten, Pavel; Tessier-Lavigne, Marc

    2016-06-16

    Understanding how neural information is processed in physiological and pathological states would benefit from precise detection, localization, and quantification of the activity of all neurons across the entire brain, which has not, to date, been achieved in the mammalian brain. We introduce a pipeline for high-speed acquisition of brain activity at cellular resolution through profiling immediate early gene expression using immunostaining and light-sheet fluorescence imaging, followed by automated mapping and analysis of activity by an open-source software program we term ClearMap. We validate the pipeline first by analysis of brain regions activated in response to haloperidol. Next, we report new cortical regions downstream of whisker-evoked sensory processing during active exploration. Last, we combine activity mapping with axon tracing to uncover new brain regions differentially activated during parenting behavior. This pipeline is widely applicable to different experimental paradigms, including animal species for which transgenic activity reporters are not readily available.

  25. Simple Fully Automated Group Classification on Brain fMRI

    SciTech Connect

    Honorio, J.; Goldstein, R.; Honorio, J.; Samaras, D.; Tomasi, D.; Goldstein, R.Z.

    2010-04-14

    We propose a simple, well-grounded classification technique suited for group classification of brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
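
    The classification rule itself is compact, as the simplified sketch below shows: one scalar feature per experimental condition, a per-feature threshold learned from the training split (standing in for the paper's threshold-split region method), and a majority vote across features.

      import numpy as np

      def majority_vote(features, thresholds):
          """features: (n_subjects, n_features); thresholds: (n_features,)."""
          votes = features > thresholds                    # one vote per feature
          return (votes.mean(axis=1) >= 0.5).astype(int)   # 1 = patient class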

  26. Highly automated computer-aided diagnosis of neurological disorders using functional brain imaging

    NASA Astrophysics Data System (ADS)

    Spetsieris, P. G.; Ma, Y.; Dhawan, V.; Moeller, J. R.; Eidelberg, D.

    2006-03-01

    We have implemented a highly automated analytical method for computer-aided diagnosis (CAD) of neurological disorders using functional brain imaging that is based on the Scaled Subprofile Model (SSM). Accurate diagnosis of functional brain disorders such as Parkinson's disease is often difficult clinically, particularly in early stages. Using principal component analysis (PCA) in conjunction with SSM on brain images of patients and normal controls, we can identify characteristic abnormal network covariance patterns which provide a subject-dependent scalar score that not only discriminates a particular disease but also correlates with independent measures of disease severity. These patterns represent disease-specific brain networks that have been shown to be highly reproducible in distinct groups of patients. Topographic Profile Rating (TPR) is a reverse SSM computational algorithm that can be used to determine subject scores for new patients on a prospective basis. In our implementation, reference values for a full range of patients and controls are automatically accessed for comparison. We also implemented an automated recalibration step to produce reference scores for images generated in a different imaging environment from that used in the initial network derivation. New subjects under the same setting can then be evaluated individually and a simple report is generated indicating the subject's classification. For scores near the normal limits, additional criteria are used to make a definitive diagnosis. With further refinement, automated TPR can be used to efficiently assess disease severity, monitor disease progression and evaluate treatment efficacy.
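
    The scoring step of the SSM/PCA framework can be sketched briefly: log-transform the image, remove the subject mean and the group mean profile, and project the residual onto the previously derived covariance pattern to obtain the scalar subject score. This is a simplified rendering of the published method; variable names are illustrative.

      import numpy as np

      def ssm_score(image_voxels, pattern, group_mean_profile):
          log_img = np.log(image_voxels)
          residual = log_img - log_img.mean() - group_mean_profile
          return residual @ pattern

      # Calibration against stored reference scores, e.g.:
      # z = (score - control_scores.mean()) / control_scores.std()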

  27. Hierarchical approach for automated segmentation of the brain volume from MR images

    NASA Astrophysics Data System (ADS)

    Hsu, Li-Yueh; Loew, Murray H.; Momenan, Reza

    1999-05-01

    Image segmentation is considered one of the essential steps in medical image analysis. Cases such as classification of tissue structures for quantitative analysis, reconstruction of anatomical volumes for visualization, and registration of multi-modality images for complementary study often require segmentation of the brain to accomplish the task. In many clinical applications, parts of this task are performed either manually or interactively. Not only is this process often tedious and time-consuming, it introduces additional external factors of inter- and intra-rater variability. In this paper, we present a 3D automated algorithm for segmenting the brain from various MR images. This algorithm consists of a sequence of pre-determined steps: First, an intensity window for initial separation of the brain volume from the background and non-brain structures is selected using probability curves fitted to the intensity histogram. Next, a 3D isotropic volume is interpolated and an optimal threshold value is determined to construct a binary brain mask. Morphological and connectivity processes are then applied to this 3D mask to eliminate the non-brain structures. Finally, a surface extraction kernel is applied to extract the 3D brain surface. Preliminary results from the same subjects with different pulse sequences are compared with manual segmentation. The automatically segmented brain volumes are compared with the manual results using the correlation coefficient and percentage overlay. The automatically detected surfaces are then measured against the manual contouring in terms of RMS distance. The introduced automatic segmentation algorithm is effective on different sequences of MR data sets without any parameter tuning. It requires no user interaction, so variability introduced by manual tracing or interactive thresholding can be eliminated. Currently, the introduced segmentation algorithm is applied in automated inter- and intra-modality image…

  28. Correlation of automated volumetric analysis of brain MR imaging with cognitive impairment in a natural history study of mucopolysaccharidosis II.

    PubMed

    Fan, Zheng; Styner, M; Muenzer, J; Poe, M; Escolar, M

    2010-08-01

    Reliable markers for predicting neurologic outcome in patients with MPS II are lacking. The purpose of this study is to explore whether quantitative volumetric measurements of brain MR imaging can be used to differentiate between MPS II patients with and without cognitive impairment. This MR imaging study is the first in MPS II patients to use automated/semi-automated methods to quantify brain volumes in a longitudinal design. Sixteen male patients with MPS II in a natural history study had annual brain MR imaging and detailed neurodevelopmental assessment over 2 years. Automated and semi-automated methods were used to determine brain volumes. Linear mixed regression models adjusting for age were used to assess the correlation between the volumetric parameters and cognition. Among the 16 MPS II patients, 10 (22 MR imaging studies) had cognitive impairment whereas the other 6 (11 MR imaging studies) had normal cognition. A decreased brain tissue/ICV ratio (-5%; P < .001) and an increased lateral ventricle/ICV ratio (+4%; P = .029) were found in patients with cognitive impairment compared with patients with normal cognition. These changes were apparent in patients as young as 7 years of age in addition to older patients. Quantitative volumetric measurements of brain MR imaging in MPS II patients can be obtained by using automated and semi-automated segmentation methods. MPS II patients with cognitive impairment have decreased brain tissue volumes, but longer studies with more subjects are required to confirm these results.
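
    The age-adjusted comparison described above can be expressed with a linear mixed model in statsmodels, with repeated scans nested within subjects. The sketch below assumes a hypothetical long-format table; the file and column names are invented for illustration.

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("mps2_volumes.csv")     # hypothetical input table
      df["ratio"] = df["brain_tissue_ml"] / df["icv_ml"]
      model = smf.mixedlm("ratio ~ age + impaired", df,
                          groups=df["subject_id"])
      print(model.fit().summary())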

  29. Nursing benefits of using an automated injection system for ictal brain single photon emission computed tomography.

    PubMed

    Vonhofen, Geraldine; Evangelista, Tonya; Lordeon, Patricia

    2012-04-01

    The traditional method of administering radioactive isotopes to pediatric patients undergoing ictal brain single photon emission computed tomography testing has been by manual injections. This method presents certain challenges for nursing, including time requirements and safety risks. This quality improvement project discusses the implementation of an automated injection system for isotope administration and its impact on staffing, safety, and nursing satisfaction. It was conducted in an epilepsy monitoring unit at a large urban pediatric facility. Results of this project showed a decrease in the number of nurses exposed to radiation and improved nursing satisfaction with the use of the automated injection system. In addition, there was a decrease in the number of nursing hours required during ictal brain single photon emission computed tomography testing.

  30. Automated three-dimensional quantification of myocardial perfusion and brain SPECT.

    PubMed

    Slomka, P J; Radau, P; Hurwitz, G A; Dey, D

    2001-01-01

    To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
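
    The "voxel-by-voxel" comparison underlying both tools reduces to a z-map against the normal database once images are registered and count-normalized; a minimal sketch, with illustrative names:

      import numpy as np

      def z_map(patient, normals):
          """patient: 3-D array; normals: (n_subjects, x, y, z), registered."""
          mu = normals.mean(axis=0)
          sd = normals.std(axis=0, ddof=1)
          return (patient - mu) / np.where(sd > 0, sd, np.inf)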

  31. Automated Robust Image Segmentation: Level Set Method Using Nonnegative Matrix Factorization with Application to Brain MRI.

    PubMed

    Dera, Dimah; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M

    2016-07-01

    We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and the probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect distinct regions as small as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need of automated segmentation of clinical MRI.

  32. An improved automated method to quantitate infarct volume in triphenyltetrazolium stained rat brain sections.

    PubMed

    Regan, Hillary K; Detwiler, Theodore J; Huang, Judy C; Lynch, Joseph J; Regan, Christopher P

    2007-01-01

    The identification of acute neuroprotectants relies heavily on rodent stroke models. It is well known that some of the more common models used can exhibit a relatively high degree of inter-animal variability. This necessitates increasing the sample size per group and running concomitant positive and negative control groups with each study in order to increase the consistency and reproducibility of the model. As such, one aspect of these studies that has become more labor intensive is the measurement of infarct volume post study. Herein, we describe a simple method to determine stroke infarct volume in triphenyltetrazolium (TTC) stained brain sections utilizing an automated set of routines built with standard software. The method was first validated by determining the correlation of infarct volumes derived from manual measurements versus the automated method for the same samples across a wide range of infarcts. This comparison resulted in a significant correlation (r=0.99), indicating that the automated method is valid for assessing infarct volume across a wide range of lesion volumes. Next, the automated infarct analysis tool was used to determine the effect of (+)-MK801, a well-known neuroprotectant, on infarct volume after cerebral ischemia. This study demonstrated a significant reduction in infarct volume in (+)-MK801-treated rats. These data demonstrate a simple, accurate automated routine to measure lesion volume in TTC-stained sections.
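
    The measurement itself is simple once sections are photographed: in TTC-stained tissue, viable regions stain red and infarcted regions stay pale, so a per-pixel colour rule plus the slice thickness yields a volume. The sketch below is a generic version of such a routine; the thresholds are illustrative, not those of the paper.

      import numpy as np

      def infarct_volume_mm3(sections, mm_per_pixel, slice_mm, pale_thresh=0.15):
          """sections: list of float RGB images in [0, 1], one per slice."""
          total_pixels = 0
          for rgb in sections:
              tissue = rgb.sum(axis=2) > 0.2               # exclude background
              redness = rgb[..., 0] - rgb[..., 1]          # red minus green
              infarct = tissue & (redness < pale_thresh)   # pale = infarcted
              total_pixels += infarct.sum()
          return total_pixels * mm_per_pixel ** 2 * slice_mm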

  33. Automated Ischemic Lesion Segmentation in MRI Mouse Brain Data after Transient Middle Cerebral Artery Occlusion

    PubMed Central

    Mulder, Inge A.; Khmelinskii, Artem; Dzyubachyk, Oleh; de Jong, Sebastiaan; Rieff, Nathalie; Wermer, Marieke J. H.; Hoehn, Mathias; Lelieveldt, Boudewijn P. F.; van den Maagdenberg, Arn M. J. M.

    2017-01-01

    Magnetic resonance imaging (MRI) has become increasingly important in ischemic stroke experiments in mice, especially because it enables longitudinal studies. Still, quantitative analysis of MRI data remains challenging, mainly because segmentation of mouse brain lesions in MRI data heavily relies on time-consuming manual tracing and thresholding techniques. Therefore, in the present study, a fully automated approach was developed to analyze longitudinal MRI data for quantification of ischemic lesion volume progression in the mouse brain. We present a level-set-based lesion segmentation algorithm that is built using a minimal set of assumptions and requires only one MRI sequence (T2) as input. To validate our algorithm we used a heterogeneous data set consisting of 121 mouse brain scans of various age groups and time points after infarct induction, obtained using different MRI hardware and acquisition parameters. We evaluated the volumetric accuracy and regional overlap of ischemic lesions segmented by our automated method against the ground truth obtained in a semi-automated fashion that includes a highly time-consuming manual correction step. Our method shows good agreement with human observations and is accurate on heterogeneous data, whilst requiring a much shorter average execution time. The algorithm developed here was compiled into a toolbox and made publicly available, together with all the data sets. PMID:28197090

  34. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results

    PubMed Central

    Lyden, Hannah; Gimbel, Sarah I.; Del Piero, Larissa; Tsai, A. Bryna; Sachs, Matthew E.; Kaplan, Jonas T.; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used. PMID:27656121

  35. Automated long-term tracking of freely moving animal and functional brain imaging based on fiber optic microscopy

    NASA Astrophysics Data System (ADS)

    Cha, Jaepyeong; Cheon, Gyeong Woo; Kang, Jin U.

    2015-03-01

    In this study, we demonstrate an automated data acquisition/analysis platform for both long-term motion tracking and functional brain imaging in freely moving mice. Our system utilizes a fiber-bundle based fluorescence microscope for 24-hour imaging of cellular activities within the brain while also monitoring corresponding animal behaviors using a NIR camera. Synchronized software and automated analysis allow quantification of all animal behaviors and the corresponding brain activity over extended periods of time. Our platform can be used for interrogation of brain activity in different behavioral states and is also well-suited for longitudinal studies of cellular activities in freely moving animals.

  36. Automated whole-brain N-acetylaspartate proton MRS quantification.

    PubMed

    Soher, Brian J; Wu, William E; Tal, Assaf; Storey, Pippa; Zhang, Ke; Babb, James S; Kirov, Ivan I; Lui, Yvonne W; Gonen, Oded

    2014-11-01

    Concentration of the neuronal marker N-acetylaspartate (NAA), a quantitative metric for the health and density of neurons, is currently obtained by integration of the manually defined peak in whole-head proton (¹H) MRS. Our goal was to develop a full spectral modeling approach for the automatic estimation of the whole-brain NAA concentration (WBNAA) and to compare the performance of this approach with a manual frequency-range peak integration approach previously employed. MRI and whole-head ¹H-MRS from 18 healthy young adults were examined. Non-localized, whole-head ¹H-MRS obtained at 3 T yielded the NAA peak area through both manually defined frequency-range integration and the new, full spectral simulation. The NAA peak area was converted into an absolute amount with phantom replacement and normalized for brain volume (segmented from T1-weighted MRI) to yield WBNAA. A paired-sample t test was used to compare the means of the WBNAA paradigms and a likelihood ratio test used to compare their coefficients of variation. While the between-subject WBNAA means were nearly identical (12.8 ± 2.5 mM for integration, 12.8 ± 1.4 mM for spectral modeling), the latter's standard deviation was significantly smaller (by ~50%, p = 0.026). The within-subject variability was 11.7% (±1.3 mM) for integration versus 7.0% (±0.8 mM) for spectral modeling, i.e., a 40% improvement. The (quantifiable) quality of the modeling approach was high, as reflected by Cramér-Rao lower bounds below 0.1% and vanishingly small (experimental − fitted) residuals. Modeling of the whole-head ¹H-MRS increases WBNAA quantification reliability by reducing its variability, its susceptibility to operator bias and baseline roll, and by providing quality-control feedback. Together, these enhance the usefulness of the technique for monitoring the diffuse progression and treatment response of neurological disorders.
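
    The WBNAA arithmetic described above (phantom replacement, then normalization by brain volume) fits in a few lines; all input values below are hypothetical.

      naa_area_subject = 1.9e6   # fitted NAA peak area (arbitrary units)
      naa_area_phantom = 1.0e6   # reference phantom peak area (same units)
      phantom_moles = 1.0e-2     # known NAA amount in the phantom (mol)
      brain_volume_l = 1.35      # brain volume from segmented T1 MRI (litres)

      naa_moles = phantom_moles * naa_area_subject / naa_area_phantom
      wbnaa_mM = 1000 * naa_moles / brain_volume_l
      print(f"WBNAA = {wbnaa_mM:.1f} mM")   # -> WBNAA = 14.1 mM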

  17. Automated monitoring of early neurobehavioral changes in mice following traumatic brain injury

    PubMed Central

    Qu, Wenrui; Liu, Nai-kui; Xie, Xin-min (Simon); Li, Rui; Xu, Xiao-ming

    2016-01-01

    Traumatic brain injury often causes a variety of behavioral and emotional impairments that can develop into chronic disorders. There is therefore a need to identify early symptoms that can aid in predicting outcomes and behavioral endpoints in patients with traumatic brain injury after early interventions. In this study, we used the SmartCage system, an automated quantitative approach, to assess behavioral alterations in mice in their home cages during the early phase of traumatic brain injury. Female C57BL/6 adult mice were subjected to moderate controlled cortical impact (CCI) injury. The mice then received a battery of behavioral assessments including neurological score, locomotor activity, sleep/wake states, and anxiety-like behaviors on days 1, 2, and 7 after CCI. Histological analysis was performed on day 7 after the last assessment. Spontaneous activities on days 1 and 2 after injury were significantly decreased in the CCI group. The average percentage of sleep time spent in both dark and light cycles was significantly higher in the CCI group than in the sham group. For anxiety-like behaviors, the time spent in the light compartment and the number of transitions between the dark/light compartments were significantly reduced in the CCI group compared with the sham group. In addition, the mice subjected to CCI exhibited a preference for staying in the dark compartment of a dark/light cage. The CCI mice showed reduced neurological scores and histological abnormalities, which correlated well with the automated behavioral assessments. Our findings demonstrate that the automated SmartCage system provides sensitive and objective measures of early behavioral changes in mice following traumatic brain injury. PMID:27073377

  18. Automated EEG signal analysis for identification of epilepsy seizures and brain tumour.

    PubMed

    Sharanreddy, M; Kulkarni, P K

    2013-11-01

    Electroencephalography (EEG) is a clinical test which records neuro-electrical activity generated by brain structures. EEG results are used to monitor brain diseases such as epileptic seizures, brain tumours, toxic encephalopathies, infections, and cerebrovascular disorders. Due to the extreme variation in EEG morphologies, manual analysis of the EEG signal is laborious, time consuming, and requires skilled interpreters, who by the nature of the task are prone to subjective judgment and error. Further, manual analysis of EEG results often fails to detect and uncover subtle features. This paper proposes an automated EEG analysis method combining digital signal processing and neural network techniques, which removes the error and subjectivity associated with manual analysis and identifies the existence of epileptic seizures and brain tumours. The system uses a multi-wavelet transform for feature extraction, in which an input EEG signal is decomposed into sub-signals. Irregularities and unpredictable fluctuations present in the decomposed signals are measured using approximate entropy. A feed-forward neural network is used to classify the EEG signal as normal, epilepsy, or brain tumour. The proposed technique was implemented and tested on 500 EEG signals for each class. Results are promising, with classification accuracy of 98% for normal, 93% for epilepsy, and 87% for brain tumour. Along with classification, the paper also highlights the EEG abnormalities associated with brain tumours and epileptic seizures.
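
    As a rough sketch of this kind of pipeline, the fragment below decomposes a signal with a standard discrete wavelet transform (a stand-in for the paper's multi-wavelet transform), measures the approximate entropy of each sub-band, and feeds the features to a feed-forward network; pywt and scikit-learn are assumed, and all data here are synthetic:

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def approximate_entropy(x, m=2, r_factor=0.2):
            # Approximate entropy (Pincus): regularity of a 1-D signal.
            x = np.asarray(x, float)
            r = r_factor * x.std()
            def phi(m):
                win = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
                d = np.max(np.abs(win[:, None, :] - win[None, :, :]), axis=2)
                return np.mean(np.log((d <= r).mean(axis=1)))
            return phi(m) - phi(m + 1)

        def features(sig):
            # Approximate entropy of each wavelet sub-band (db4, 4 levels).
            return [approximate_entropy(c) for c in pywt.wavedec(sig, "db4", level=4)]

        rng = np.random.default_rng(0)
        X = np.array([features(rng.normal(size=256)) for _ in range(40)])
        y = rng.integers(0, 3, size=40)  # 0=normal, 1=epilepsy, 2=tumour (toy labels)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)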

  19. A fully automated algorithm under modified FCM framework for improved brain MR image segmentation.

    PubMed

    Sikka, Karan; Sinha, Nitesh; Singh, Pankaj K; Mishra, Amit K

    2009-09-01

    Automated brain magnetic resonance image (MRI) segmentation is a complex problem, especially if accompanied by quality-depreciating factors such as intensity inhomogeneity and noise. This article presents a new algorithm for automated segmentation of both normal and diseased brain MRI. An entropy-driven homomorphic filtering technique has been employed in this work to remove the bias field. The initial cluster centers are estimated using a proposed algorithm called histogram-based local peak merger using adaptive window. Subsequently, a modified fuzzy c-means (MFCM) technique using neighborhood pixel considerations is applied. Finally, a new technique called neighborhood-based membership ambiguity correction (NMAC) is used for smoothing the boundaries between different tissue classes as well as for removing small pixel-level noise, which appears as misclassified pixels even after the MFCM approach. NMAC leads to much sharper boundaries between tissues and has hence been found to be highly effective in estimating the tissue and tumor areas in a brain MR scan. The algorithm has been validated against MFCM and the FMRIB Software Library using MRI scans from BrainWeb. Results were superior to those achieved with the MFCM technique alone, with the collateral advantages of fully automatic segmentation, faster computation, and faster convergence of the objective function.
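
    For reference, a bare-bones intensity-only fuzzy c-means loop (without the bias-field correction, neighborhood term, or NMAC step that are the paper's contributions) might look like this:

        import numpy as np

        def fuzzy_c_means(intensities, n_clusters=3, m=2.0, n_iter=50, seed=0):
            # Classic FCM on voxel intensities: alternate membership/center updates.
            x = np.asarray(intensities, float).reshape(-1, 1)
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), n_clusters))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                um = u ** m
                centers = (um.T @ x).ravel() / um.sum(axis=0)   # weighted means
                d = np.abs(x - centers[None, :]) + 1e-9         # (N, C) distances
                u = d ** (-2.0 / (m - 1))
                u /= u.sum(axis=1, keepdims=True)               # normalize rows
            return centers, u

        # Hard labels: fuzzy_c_means(mri.ravel())[1].argmax(axis=1).reshape(mri.shape)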

  20. Automated Template-based Brain Localization and Extraction for Fetal Brain MRI Reconstruction.

    PubMed

    Tourbier, Sébastien; Velasco-Annis, Clemente; Taimouri, Vahid; Hagmann, Patric; Meuli, Reto; Warfield, Simon K; Cuadra, Meritxell Bach; Gholipour, Ali

    2017-04-10

    Most fetal brain MRI reconstruction algorithms rely only on brain tissue-relevant voxels of low-resolution (LR) images to enhance the quality of inter-slice motion correction and image reconstruction. Consequently the fetal brain needs to be localized and extracted as a first step, which is usually a laborious and time-consuming manual or semi-automatic task. In this work we propose to use age-matched template images as prior knowledge to automate brain localization and extraction. This has been achieved through a novel automatic brain localization and extraction method based on robust template-to-slice block matching and deformable slice-to-template registration. Our template-based approach has also enabled the reconstruction of fetal brain images in standard radiological anatomical planes in a common coordinate space. We have integrated this approach into our new reconstruction pipeline that involves intensity normalization, inter-slice motion correction, and super-resolution (SR) reconstruction. To this end we have adopted a novel approach based on projection of every slice of the LR brain masks into the template space using a fusion strategy. This has enabled the refinement of brain masks in the LR images at each motion correction iteration. The overall brain localization and extraction algorithm has been shown to produce brain masks that are very close to manually drawn brain masks, with an average Dice overlap measure of 94.5%. We have also demonstrated that adopting a slice-to-template registration and propagation of the brain mask slice-by-slice leads to a significant improvement in brain extraction performance compared to global rigid brain extraction and, consequently, in the quality of the final reconstructed images. Ratings performed by two expert observers show that the proposed pipeline can achieve similar reconstruction quality to a reference reconstruction based on manual slice-by-slice brain extraction. The proposed brain mask refinement and
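
    The Dice overlap used to evaluate the brain masks is straightforward to compute; a minimal version for two binary masks:

        import numpy as np

        def dice(a, b):
            # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())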

  1. Automated fiber tracking of human brain white matter using diffusion tensor imaging.

    PubMed

    Zhang, Weihong; Olivi, Alessandro; Hertig, Samuel J; van Zijl, Peter; Mori, Susumu

    2008-08-15

    Reconstruction of white matter tracts based on diffusion tensor imaging (DTI) is currently widely used in clinical research. This reconstruction allows us to identify the coordinates of specific white matter tracts and to investigate their anatomy. Fiber reconstruction, however, relies on manual identification of anatomical landmarks of a tract of interest, which is based on subjective judgment and is thus a potential source of experimental variability. Here, an automated tract reconstruction approach is introduced. A set of reference regions of interest (rROIs) known to select a tract of interest was marked in our DTI brain atlas. The atlas was then linearly transformed to each subject, and the rROI set was transferred to the subject for tract reconstruction. Agreement between the automated and manual approaches was measured for 11 tracts in 10 healthy volunteers and found to be excellent (kappa > 0.8), remaining high for linear transformation errors of up to 4-5 mm. As a first example, the automated approach was applied to brain tumor patients, and strategies to cope with severe anatomical abnormalities are discussed.
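
    An illustrative (not the authors') implementation of rROI-based tract selection: once the atlas ROIs have been transformed into subject space, keep only the streamlines that pass through every reference ROI. This sketch assumes nibabel-style voxel-to-world affines and streamlines given as (N, 3) arrays of world coordinates:

        import numpy as np
        import nibabel as nib

        def select_tract(streamlines, roi_masks, affine):
            # Keep streamlines intersecting every reference ROI (logical AND rule).
            inv = np.linalg.inv(affine)
            kept = []
            for sl in streamlines:
                vox = np.round(nib.affines.apply_affine(inv, sl)).astype(int)
                vox = np.clip(vox, 0, np.array(roi_masks[0].shape) - 1)
                if all(m[vox[:, 0], vox[:, 1], vox[:, 2]].any() for m in roi_masks):
                    kept.append(sl)
            return kept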

  2. Towards automated detection of depression from brain structural magnetic resonance images.

    PubMed

    Kipli, Kuryati; Kouzani, Abbas Z; Williams, Lana J

    2013-05-01

    Depression is a significant health problem worldwide. Stigma and patient denial, clinical experience, time limitations, and the reliability of psychometrics are barriers to the clinical diagnosis of depression. Thus, an automated system that could detect such abnormalities would assist medical experts in their decision-making process. This paper reviews existing methods for the automated detection of depression from brain structural magnetic resonance images (sMRI). Relevant sources were identified from various databases and online sites using a combination of keywords and terms including depression, major depressive disorder, detection, classification, and MRI databases. Reference lists of chosen articles were further reviewed for associated publications. The paper introduces a generic structure for representing and describing the methods developed for the detection of depression from sMRI of the brain. It consists of a number of components including acquisition and preprocessing, feature extraction, feature selection, and classification. Automated sMRI-based detection methods have the potential to provide an objective measure of depression, hence improving the confidence level in the diagnosis and prognosis of depression.
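
    The generic structure the review describes (preprocessing, feature extraction, feature selection, classification) maps naturally onto a scikit-learn pipeline; a schematic sketch with synthetic placeholder feature vectors:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # X: (n_subjects, n_features) morphometric features extracted from sMRI;
        # y: 1 = depressed, 0 = control. Synthetic stand-ins for illustration.
        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(60, 200)), rng.integers(0, 2, 60)

        pipe = make_pipeline(StandardScaler(),
                             SelectKBest(f_classif, k=20),   # feature selection
                             SVC(kernel="linear"))           # classification
        print(cross_val_score(pipe, X, y, cv=5).mean())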

  3. Leveraging Clinical Imaging Archives for Radiomics: Reliability of Automated Methods for Brain Volume Measurement.

    PubMed

    Adduru, Viraj R; Michael, Andrew M; Helguera, Maria; Baum, Stefi A; Moore, Gregory J

    2017-09-01

    Purpose: To validate the use of thick-section clinically acquired magnetic resonance (MR) imaging data for estimating total brain volume (TBV), gray matter (GM) volume (GMV), and white matter (WM) volume (WMV) by using three widely used automated toolboxes: SPM ( www.fil.ion.ucl.ac.uk/spm/ ), FreeSurfer ( surfer.nmr.mgh.harvard.edu ), and FSL (FMRIB Software Library; Oxford Centre for Functional MR Imaging of the Brain, Oxford, England, https://fsl.fmrib.ox.ac.uk/fsl ). Materials and Methods: MR images from a clinical archive were used and data were deidentified. The three methods were applied to estimate brain volumes from thin-section research-quality brain MR images and routine thick-section clinical MR images acquired from the same 38 patients (age range, 1-71 years; mean age, 22 years; 11 women). By using these automated methods, TBV, GMV, and WMV were estimated. Thin- versus thick-section volume comparisons were made for each method by using intraclass correlation coefficients (ICCs). Results: SPM exhibited excellent ICCs (0.97, 0.85, and 0.83 for TBV, GMV, and WMV, respectively). FSL exhibited ICCs of 0.69, 0.51, and 0.60 for TBV, GMV, and WMV, respectively, lower than those of SPM. FreeSurfer exhibited an ICC of 0.63 for TBV only. Application of SPM's voxel-based morphometry to the modulated images of thin-section images and interpolated thick-section images showed fair to excellent ICCs (0.37-0.98) for the majority of brain regions (88.47% [306,924 of 346,916 voxels] of WM and 80.35% [377,282 of 469,502 voxels] of GM). Conclusion: Thick-section clinical-quality MR images can be reliably used for computing quantitative brain metrics such as TBV, GMV, and WMV by using SPM. © RSNA, 2017. Online supplemental material is available for this article.
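
    For reference, the single-measure two-way random-effects ICC(2,1) underlying this kind of thin-versus-thick comparison can be computed directly from the ANOVA mean squares; a generic formula, not the authors' code:

        import numpy as np

        def icc_2_1(ratings):
            # ICC(2,1) for an (n_subjects, k_methods) array of measurements.
            y = np.asarray(ratings, float)
            n, k = y.shape
            grand = y.mean()
            ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # subjects
            ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # methods
            ss_tot = ((y - grand) ** 2).sum()
            ms_r = ss_rows / (n - 1)
            ms_c = ss_cols / (k - 1)
            ms_e = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        # Usage: icc_2_1(np.column_stack([thin_volumes, thick_volumes]))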

  4. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    PubMed

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  6. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-08-02

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
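
    Assuming the AllenSDK's documented CellTypesCache interface (exact signatures may differ across versions), downloading a cell's electrophysiology and pulling one sweep looks roughly like this:

        from allensdk.core.cell_types_cache import CellTypesCache

        # Local cache of the ABI Cell Types Database; files download on first access.
        ctc = CellTypesCache(manifest_file="cell_types/manifest.json")

        cells = ctc.get_cells()                        # metadata for all cells
        data_set = ctc.get_ephys_data(cells[0]["id"])  # NWB file for one specimen
        sweep = data_set.get_sweep(data_set.get_sweep_numbers()[0])
        print(sweep["sampling_rate"], len(sweep["response"]))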

  7. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    PubMed Central

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

    Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w Multiple Sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling in tissue volume analysis had not yet been performed. Here, we analyzed the percentage error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8 and compared with the same images in which expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, misclassified lesion voxels were the main source of the observed error in GM and WM volume. However, the percentage error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any manual intervention, which is convenient not only in terms of time and economic cost, but also in avoiding the inherent intra-/inter-rater variability of manual annotations. PMID:26740917

  8. Semi-Automated Trajectory Analysis of Deep Ballistic Penetrating Brain Injury

    PubMed Central

    Folio, Les; Solomon, Jeffrey; Biassou, Nadia; Fischer, Tatjana; Dworzak, Jenny; Raymont, Vanessa; Sinaii, Ninet; Wassermann, Eric M.; Grafman, Jordan

    2016-01-01

    Background: Penetrating head injuries (PHIs) are common in combat operations and most have visible wound paths on computed tomography (CT). Objective: We assess agreement between an automated trajectory analysis-based assessment of brain injury and manual tracings of encephalomalacia on CT. Methods: We analyzed 80 head CTs with ballistic PHI from the Institutional Review Board-approved Vietnam head injury registry. Anatomic reports were generated from the spatial coordinates of the projectile entrance and terminal fragment location. These were compared to manual tracings of the regions of encephalomalacia. Dice similarity coefficients, kappa, sensitivities, and specificities were calculated to assess agreement. Times required for case analysis were also compared. Results: Results show high specificity of anatomic regions identified on CT with semiautomated anatomical estimates and manual tracings of tissue damage. Radiologists' and medical students' anatomic region reports were similar (kappa 0.8, t-test p < 0.001). Region-of-probable-injury modeling of involved brain structures was sensitive (0.7) and specific (0.9) compared with manually traced structures. Semiautomated analysis was 9-fold faster than manual tracing. Conclusion: Our region-of-probable-injury spatial model approximates anatomical regions of encephalomalacia from ballistic PHI with time savings over manual methods. Results show the potential of automated anatomical reporting as an adjunct to the current practice of radiologist/neurosurgical review of brain injury by penetrating projectiles. PMID:23707123
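
    One way to approximate the trajectory-based anatomical report, offered as a sketch rather than the authors' method: sample voxels along the straight line from the entrance wound to the terminal fragment and look them up in a labeled atlas aligned to the scan:

        import numpy as np
        import nibabel as nib

        def regions_along_trajectory(entry_mm, terminal_mm, atlas_img, n_samples=200):
            # Sample the entrance-to-fragment line; report traversed atlas labels.
            inv = np.linalg.inv(atlas_img.affine)
            pts = np.linspace(entry_mm, terminal_mm, n_samples)
            vox = np.round(nib.affines.apply_affine(inv, pts)).astype(int)
            labels = atlas_img.get_fdata()[vox[:, 0], vox[:, 1], vox[:, 2]]
            return np.unique(labels[labels > 0]).astype(int)  # nonzero region codes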

  9. New tissue priors for improved automated classification of subcortical brain structures on MRI☆

    PubMed Central

    Lorio, S.; Fresard, S.; Adaszewski, S.; Kherif, F.; Chowdhury, R.; Frackowiak, R.S.; Ashburner, J.; Helms, G.; Weiskopf, N.; Lutti, A.; Draganski, B.

    2016-01-01

    Despite the constant improvement of algorithms for automated brain tissue classification, the accurate delineation of subcortical structures using magnetic resonance imaging (MRI) data remains challenging. The main difficulties arise from the low gray-white matter contrast of iron-rich areas in T1-weighted (T1w) MRI data and from the lack of adequate priors for the basal ganglia and thalamus. The most recent attempts to obtain such priors were based on cohorts of limited size that included subjects in a narrow age range, failing to account for age-related gray-white matter contrast changes. Aiming to improve the anatomical plausibility of automated brain tissue classification from T1w data, we have created new tissue probability maps for subcortical gray matter regions. Supported by atlas-derived spatial information, raters manually labeled subcortical structures in a cohort of healthy subjects using magnetization transfer saturation and R2* MRI maps, which feature optimal gray-white matter contrast in these areas. After assessment of inter-rater variability, the new tissue priors were tested on T1w data within the framework of voxel-based morphometry. The automated detection of gray matter in subcortical areas with our new probability maps was more anatomically plausible than that derived with currently available priors. We provide evidence that the improved delineation compensates for age-related bias in the segmentation of iron-rich subcortical regions. The new tissue priors, allowing robust detection of the basal ganglia and thalamus, have the potential to enhance the sensitivity of voxel-based morphometry in both healthy and diseased brains. PMID:26854557
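
    The core of building such priors, averaging binarized manual labels across co-registered subjects and smoothing the result into a probability map, can be sketched as follows (registration to a common space is assumed to have happened already; this is a generic recipe, not the authors' pipeline):

        import numpy as np
        import nibabel as nib
        from scipy.ndimage import gaussian_filter

        def build_tissue_prior(label_paths, structure_label, sigma_vox=1.5):
            # Average binarized manual labels across subjects, then smooth lightly.
            acc = None
            for path in label_paths:
                mask = (nib.load(path).get_fdata() == structure_label).astype(float)
                acc = mask if acc is None else acc + mask
            prior = acc / len(label_paths)            # voxel-wise label frequency
            return gaussian_filter(prior, sigma_vox)  # probability-map prior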

  10. Comparison of Automated Brain Volume Measures obtained with NeuroQuant and FreeSurfer.

    PubMed

    Ochs, Alfred L; Ross, David E; Zannoni, Megan D; Abildskov, Tracy J; Bigler, Erin D

    2015-01-01

    Our aim was to examine the intermethod reliabilities of, and differences between, FreeSurfer and its FDA-cleared congener NeuroQuant, both fully automated methods for structural brain MRI measurements. MRI scans from 20 normal control subjects, 20 Alzheimer's disease patients, and 20 patients with mild traumatic brain injury were analyzed with NeuroQuant and with FreeSurfer. Intermethod reliability was evaluated: pairwise correlation coefficients, intraclass correlation coefficients, and effect size differences were computed. NeuroQuant versus FreeSurfer measures showed excellent to good intermethod reliability for the 21 regions evaluated (r: .63 to .99/ICC: .62 to .99/ES: -.33 to 2.08) except for the pallidum (r/ICC/ES = .31/.29/-2.2) and cerebellar white matter (r/ICC/ES = .31/.31/.08). Volumes reported by NeuroQuant were generally larger than those reported by FreeSurfer, with the whole-brain parenchyma volume reported by NeuroQuant 6.50% larger than that reported by FreeSurfer. There was no systematic difference in results between the 3 subgroups. NeuroQuant and FreeSurfer showed good to excellent intermethod reliability in volumetric measurements for all brain regions examined, the only exceptions being the pallidum and cerebellar white matter. This finding was robust across normal individuals, patients with Alzheimer's disease, and patients with mild traumatic brain injury. Copyright © 2015 by the American Society of Neuroimaging.

  11. Automated metastatic brain lesion detection: a computer aided diagnostic and clinical research tool

    NASA Astrophysics Data System (ADS)

    Devine, Jeremy; Sahgal, Arjun; Karam, Irene; Martel, Anne L.

    2016-03-01

    The accurate localization of brain metastases in magnetic resonance (MR) images is crucial for patients undergoing stereotactic radiosurgery (SRS) to ensure that all neoplastic foci are targeted. Computer-automated tumor localization and analysis can improve both of these tasks by eliminating inter- and intra-observer variation during the MR image reading process. Lesion localization is accomplished using adaptive thresholding to extract enhancing objects. Each enhancing object is represented as a vector of features which includes information on object size, symmetry, position, shape, and context. These vectors are then used to train a random forest classifier. We trained and tested the image analysis pipeline on 3D axial contrast-enhanced MR images with the intention of localizing brain metastases. In our cross-validation study, at the most effective algorithm operating point, we were able to identify 90% of the lesions at a precision rate of 60%.
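
    Schematically, the candidate-classification step reduces to training a random forest on per-object feature vectors and thresholding its probabilities at a chosen operating point; the feature names and data here are placeholders:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Rows: candidate enhancing objects; columns: size, symmetry, position,
        # shape, context (hypothetical features). y: 1 = true metastasis.
        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)

        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        scores = forest.predict_proba(X)[:, 1]
        detections = scores > 0.35   # operating point trades precision vs. recall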

  12. Robustness of Automated Methods for Brain Volume Measurements across Different MRI Field Strengths

    PubMed Central

    Heinen, Rutger; Bouvy, Willem H.; Mendrik, Adrienne M.; Viergever, Max A.; Biessels, Geert Jan; de Bresser, Jeroen

    2016-01-01

    Introduction: Pooling of multicenter brain imaging data is a trend in studies on ageing-related brain diseases. This poses challenges to MR-based brain segmentation. The performance across different field strengths of three widely used automated methods for brain volume measurements was assessed in the present study. Methods: Ten subjects (mean age: 64 years) were scanned on 1.5T and 3T MRI on the same day. We determined robustness across field strength (i.e., whether measured volumes between 3T and 1.5T scans in the same subjects were similar) for SPM12, FreeSurfer 5.3.0, and FSL 5.0.7. As a frame of reference, 3T MRI scans from 20 additional subjects (mean age: 71 years) were segmented manually to determine the accuracy of the methods (i.e., whether measured volumes corresponded with expert-defined volumes). Results: Total brain volume (TBV) measurements were robust across field strength for FreeSurfer and FSL (mean absolute difference as % of mean volume ≤ 1%), but less so for SPM (4%). Gray matter (GM) and white matter (WM) volume measurements were robust for FreeSurfer (1%; 2%) and FSL (2%; 3%) but less so for SPM (5%; 4%). For intracranial volume (ICV), SPM was more robust (2%) than FSL (3%) and FreeSurfer (9%). TBV measurements were accurate for SPM and FSL, but less so for FreeSurfer. For GM volume, SPM was accurate, but accuracy was lower for FreeSurfer and FSL. For WM volume, FreeSurfer was accurate, but SPM and FSL were less accurate. For ICV, FSL was accurate, while SPM and FreeSurfer were less accurate. Conclusion: Brain volumes and ICV could be measured quite robustly in scans acquired at different field strengths, but the performance of the methods varied depending on the assessed compartment (e.g., TBV or ICV). Selection of an appropriate method in multicenter brain imaging studies therefore depends on the compartment of interest. PMID:27798694

  13. Automated delineation of brain structures in patients undergoing radiotherapy for primary brain tumors: from atlas to dose-volume histograms.

    PubMed

    Conson, Manuel; Cella, Laura; Pacelli, Roberto; Comerci, Marco; Liuzzi, Raffaele; Salvatore, Marco; Quarantelli, Mario

    2014-09-01

    Our aim was to implement and evaluate a magnetic resonance imaging atlas-based automated segmentation (MRI-ABAS) procedure for the definition of cortical and sub-cortical grey matter areas, suitable for dose-distribution analyses in brain tumor patients undergoing radiotherapy (RT). 3T-MRI scans performed before RT in ten brain tumor patients were used. The MRI-ABAS procedure consists of grey matter classification and atlas-based regions-of-interest definition. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was applied to structures manually delineated by four experts to generate the standard reference. Performance was assessed by comparing multiple geometrical metrics (including the Dice Similarity Coefficient, DSC). Dosimetric parameters from dose-volume histograms were also generated and compared. Compared with manual delineation, MRI-ABAS showed excellent reproducibility [median DSC_ABAS = 1.0 (95% CI, 0.97-1.0) vs. DSC_MANUAL = 0.90 (0.73-0.98)], acceptable accuracy [DSC_ABAS = 0.81 (0.68-0.94) vs. DSC_MANUAL = 0.90 (0.76-0.98)], and an overall 90% reduction in delineation time. Dosimetric parameters obtained using MRI-ABAS were comparable with those obtained by manual contouring. The speed, reproducibility, and robustness of the process make MRI-ABAS a valuable tool for investigating radiation dose-volume effects in non-target brain structures, providing additional standardized data without additional time-consuming procedures. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
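
    The dose-volume histograms used for the dosimetric comparison are simple to compute from a dose grid and a structure mask; a cumulative-DVH sketch:

        import numpy as np

        def cumulative_dvh(dose, structure_mask, n_bins=100):
            # % of structure volume receiving at least each dose level.
            d = dose[structure_mask > 0]
            levels = np.linspace(0.0, d.max(), n_bins)
            volume_pct = np.array([(d >= lv).mean() * 100.0 for lv in levels])
            return levels, volume_pct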

  14. An Automated Quiet Sleep Detection Approach in Preterm Infants as a Gateway to Assess Brain Maturation.

    PubMed

    Dereymaeker, Anneleen; Pillay, Kirubin; Vervisch, Jan; Van Huffel, Sabine; Naulaers, Gunnar; Jansen, Katrien; De Vos, Maarten

    2017-09-01

    Sleep state development in preterm neonates can provide crucial information regarding functional brain maturation and give insight into neurological well-being. However, visual labeling of sleep stages from EEG requires expertise and is very time consuming, prompting the need for an automated procedure. We present a robust method for automated detection of preterm sleep from EEG over a wide postmenstrual age (PMA) range, focusing first on Quiet Sleep (QS) as an initial marker for sleep assessment. Our algorithm, CLuster-based Adaptive Sleep Staging (CLASS), detects QS if it remains relatively more discontinuous than non-QS over PMA. CLASS was optimized on a training set of 34 recordings aged 27-42 weeks PMA, and performance was then assessed on a distinct test set of 55 recordings of the same age range. Results were compared to visual QS labeling from two independent raters (with inter-rater agreement [Formula: see text]), using Sensitivity, Specificity, Detection Factor (the proportion of visual QS periods correctly detected by CLASS) and Misclassification Factor (the proportion of CLASS-detected QS periods that are misclassified). CLASS performance proved optimal across recordings at 31-38 weeks (median DF [Formula: see text], median MF 0-0.25, median Sensitivity 0.93-1.0, and median Specificity 0.80-0.91 across this age range), with minimal misclassifications at 35-36 weeks (median MF [Formula: see text]). To illustrate the potential of CLASS in facilitating clinical research, normal maturational trends over PMA were derived from CLASS-estimated QS periods, visual QS estimates, and non-state-specific periods (containing QS and non-QS) in the EEG recording. CLASS QS trends agreed with those from visual QS, with both showing stronger correlations than non-state-specific trends. This highlights the benefit of automated QS detection for exploring brain maturation.
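
    A heavily simplified sketch in the spirit of CLASS (not the published algorithm): score each EEG epoch by its discontinuity, here the fraction of low-amplitude samples, then split the epochs into two clusters and call the more discontinuous cluster QS. The epoch length and amplitude bound are illustrative assumptions:

        import numpy as np
        from sklearn.cluster import KMeans

        def quiet_sleep_epochs(eeg, fs, epoch_s=30, amp_uv=15.0):
            # Discontinuity per epoch = fraction of samples below an amplitude bound.
            n = int(fs * epoch_s)
            epochs = eeg[: len(eeg) // n * n].reshape(-1, n)
            disc = (np.abs(epochs) < amp_uv).mean(axis=1, keepdims=True)
            labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(disc)
            qs_cluster = labels[np.argmax(disc.ravel())]  # more discontinuous side
            return labels == qs_cluster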

  15. Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Tang, Xiaoying

    2017-03-01

    Voxel-based analysis is widely used in the quantitative analysis of structural brain magnetic resonance imaging (MRI) and in automated disease detection, for example of Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with a whole-brain structural segmentation approach, which greatly reduces the dimension of the features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole-brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and diagnostic accuracy was systematically evaluated, in a leave-one-out fashion, on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded strong classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and an area of 0.9891 under the receiver operating characteristic curve.
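
    The volumes-to-diagnosis pipeline maps directly onto scikit-learn; a sketch using the paper's dimensions (54 regional volumes, 3 retained PCs, leave-one-out validation) on synthetic stand-in data:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(51, 54))         # 28 AD + 23 controls, 54 volumes
        y = np.r_[np.ones(28), np.zeros(23)]  # 1 = AD, 0 = healthy (toy labels)

        pipe = make_pipeline(StandardScaler(), PCA(n_components=3), SVC())
        print(cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean())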

  16. Comparison of manual vs. automated multimodality (CT-MRI) image registration for brain tumors

    SciTech Connect

    Sarkar, Abhirup; Santiago, Roberto J.; Smith, Ryan; Kassaee, Alireza. E-mail: Kassaee@xrt.upenn.edu

    2005-03-31

    Computed tomography-magnetic resonance imaging (CT-MRI) registrations are routinely used for target-volume delineation of brain tumors. We clinically use 2 software packages based on manual operation and 1 automated package with 2 different algorithms: chamfer matching using bony structures, and mutual information using intensity patterns. In all registration algorithms, a minimum of 3 pairs of identical anatomical and preferably noncoplanar landmarks is used on each of the 2 image sets. In manual registration, the program registers these points and links the image sets using a 3-dimensional (3D) transformation. In automated registration, the 3 landmarks are used as an initial starting point and further processing is done to complete the registration. Using our registration packages, registration of CT and MRI was performed on 10 patients. We scored the results of each registration set based on the amount of time spent, the accuracy reported by the software, and a final evaluation. We evaluated each software program by measuring the residual error between 'matched' points on the right and left globes and the posterior fossa for fused image slices. In general, manual registration showed greater misalignment between corresponding points than automated registration using intensity matching. This error had no directional dependence and was usually larger for larger structures, with both registration techniques. The automated algorithm based on intensity matching also gave the best registration accuracy, irrespective of whether the initial landmarks were chosen carefully, compared with the bone-matching algorithm. The intensity-matching algorithm required the least user time and provided better accuracy.
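
    The intensity-based criterion at the heart of the best-performing algorithm is mutual information between the overlapping CT and MR intensities; computed from a joint histogram it is only a few lines (a textbook version, not the vendor's implementation):

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            # MI from the joint intensity histogram of two spatially aligned images.
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p = joint / joint.sum()
            px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
            nz = p > 0
            return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    A registration optimizer then searches over rigid transforms for the pose that maximizes this quantity.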

  17. Automated tissue segmentation of MR brain images in the presence of white matter lesions.

    PubMed

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier

    2017-01-01

    Over the last few years, increasing interest in brain tissue volume measurements in clinical settings has led to the development of a large number of automated tissue segmentation methods. However, white matter (WM) lesions are known to reduce the performance of such methods, requiring the lesions to be manually annotated and refilled before segmentation, which is tedious and time-consuming. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity with probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of Multiple Sclerosis (MS) patient images. On both databases, we compare the performance of our method with other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was, at the time of submission, the best-ranked unsupervised intensity-model method of the challenge (7th position) and clearly outperformed other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between images segmented with our method and the same images where manual expert annotations were used to refill lesions on T1-w images before segmentation were lower than or similar to those of the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieves very competitive results on both vascular and MS lesions. A public version of this approach is available to download for the neuroimaging community.
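
    The filling step itself can be sketched simply: replace lesion voxels with intensities drawn from normal-appearing white matter so that the downstream tissue segmentation is not biased. This is an illustrative stand-in, not the toolbox's actual method:

        import numpy as np

        def fill_lesions(t1, lesion_mask, wm_mask, seed=0):
            # Replace lesion voxels with samples from normal-appearing WM intensity.
            rng = np.random.default_rng(seed)
            filled = t1.astype(float).copy()
            wm = t1[(wm_mask > 0) & (lesion_mask == 0)]
            idx = lesion_mask > 0
            filled[idx] = rng.normal(wm.mean(), wm.std(), idx.sum())
            return filled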

  18. A Comparison of a Brain-Based Adaptive System and a Manual Adaptable System for Invoking Automation

    NASA Technical Reports Server (NTRS)

    Bailey, Nathan R.; Scerbo, Mark W.; Freeman, Frederick G.; Mikulka, Peter J.; Scott, Lorissa A.

    2004-01-01

    Two experiments are presented that examine alternative methods for invoking automation. In each experiment, participants were asked to perform simultaneously a monitoring task and a resource management task as well as a tracking task that changed between automatic and manual modes. The monitoring task required participants to detect failures of an automated system to correct aberrant conditions under either high or low system reliability. Performance on each task was assessed as well as situation awareness and subjective workload. In the first experiment, half of the participants worked with a brain-based system that used their EEG signals to switch the tracking task between automatic and manual modes. The remaining participants were yoked to participants from the adaptive condition and received the same schedule of mode switches, but their EEG had no effect on the automation. Within each group, half of the participants were assigned to either the low or high reliability monitoring task. In addition, within each combination of automation invocation and system reliability, participants were separated into high and low complacency potential groups. The results revealed no significant effects of automation invocation on the performance measures; however, the high complacency individuals demonstrated better situation awareness when working with the adaptive automation system. The second experiment was the same as the first with one important exception. Automation was invoked manually. Thus, half of the participants pressed a button to invoke automation for 10 s. The remaining participants were yoked to participants from the adaptable condition and received the same schedule of mode switches, but they had no control over the automation. The results showed that participants who could invoke automation performed more poorly on the resource management task and reported higher levels of subjective workload. Further, those who invoked automation more frequently performed

  19. Automated Outcome Classification of Computed Tomography Imaging Reports for Pediatric Traumatic Brain Injury.

    PubMed

    Yadav, Kabir; Sarioglu, Efsun; Choi, Hyeong Ah; Cartwright, Walter B; Hinds, Pamela S; Chamberlain, James M

    2016-02-01

    The authors have previously demonstrated highly reliable automated classification of free-text computed tomography (CT) imaging reports using a hybrid system that pairs linguistic (natural language processing) and statistical (machine learning) techniques. Previously performed for identifying the outcome of orbital fracture in unprocessed radiology reports from a clinical data repository, the performance has not been replicated for more complex outcomes. Our objective was to validate the automated outcome classification performance of a hybrid natural language processing (NLP) and machine learning system on brain CT imaging reports. The hypothesis was that our system would show similar performance characteristics for identifying pediatric traumatic brain injury (TBI). This was a secondary analysis of a subset of 2,121 CT reports from the Pediatric Emergency Care Applied Research Network (PECARN) TBI study. For that project, radiologists dictated CT reports as free text, which were then deidentified and scanned as PDF documents. Trained data abstractors manually coded each report for TBI outcome. Text was extracted from the PDF files using optical character recognition. The data set was randomly split evenly for training and testing. Training patient reports were used as input to the Medical Language Extraction and Encoding (MedLEE) NLP tool to create structured output containing standardized medical terms and modifiers for negation, certainty, and temporal status. A random subset stratified by site was analyzed using descriptive quantitative content analysis to confirm identification of TBI findings based on the National Institute of Neurological Disorders and Stroke (NINDS) Common Data Elements project. Findings were coded for presence or absence, weighted by frequency of mentions, and past/future/indication modifiers were filtered. After combining with the manual reference standard, a decision tree classifier was created using the data mining tools WEKA 3.7.5 and Salford Predictive Miner 7
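
    A drastically simplified stand-in for the MedLEE-plus-decision-tree pipeline, TF-IDF features feeding a decision tree, shows the overall shape of such a report classifier; it omits the negation, certainty, and temporality handling the real system relies on, and the reports and labels are toy examples:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.pipeline import make_pipeline

        reports = ["small subdural hematoma along the left convexity",
                   "no acute intracranial abnormality",
                   "depressed skull fracture with underlying contusion",
                   "normal study; no hemorrhage or mass effect"]
        tbi = [1, 0, 1, 0]  # manual reference standard (toy)

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            DecisionTreeClassifier(random_state=0)).fit(reports, tbi)
        print(clf.predict(["no hemorrhage, no fracture"]))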

  1. Diffuse damage in pediatric traumatic brain injury: a comparison of automated versus operator-controlled quantification methods.

    PubMed

    Bigler, Erin D; Abildskov, Tracy J; Wilde, Elisabeth A; McCauley, Stephen R; Li, Xiaoqi; Merkley, Tricia L; Fearing, Michael A; Newsome, Mary R; Scheibel, Randall S; Hunter, Jill V; Chu, Zili; Levin, Harvey S

    2010-04-15

    This investigation had two main objectives: 1) to assess the comparability of volumes determined by operator-controlled image quantification with automated image analysis in evaluating atrophic brain changes related to traumatic brain injury (TBI) in children, and 2) to assess the extent of diffuse structural changes throughout the brain as determined by reduced volume of a brain structure or region of interest (ROI). Operator-controlled methods used ANALYZE software for segmentation and tracing routines of pre-defined brain structures and ROIs. For automated image analyses, the open-access FreeSurfer program was used. Sixteen children with moderate-to-severe TBI were compared to individually matched, typically developing control children and the volumes of 18 brain structures and/or ROIs were compared between the two methods. Both methods detected atrophic changes but differed in the magnitude of the atrophic effect with the best agreement in subcortical structures. The volumes of all brain structures/ROIs were smaller in the TBI group regardless of method used; overall effect size differences were minimal for caudate and putamen but moderate to large for all other measures. This is reflective of the diffuse nature of TBI and its widespread impact on structural brain integrity, indicating that both FreeSurfer and operator-controlled methods can reliably assess cross-sectional volumetric changes in pediatric TBI. Copyright 2010 Elsevier Inc. All rights reserved.
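
    The effect sizes used to compare the TBI and control groups are standard Cohen's d values with a pooled standard deviation; for reference:

        import numpy as np

        def cohens_d(group_a, group_b):
            # Effect size for two independent groups (pooled-SD denominator).
            a, b = np.asarray(group_a, float), np.asarray(group_b, float)
            na, nb = len(a), len(b)
            pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                             / (na + nb - 2))
            return (a.mean() - b.mean()) / pooled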

  2. Automated longitudinal registration of high resolution structural MRI brain sub-volumes in non-human primates

    PubMed Central

    Lecoeur, Jérémy; Wang, Feng; Chen, Li Min; Li, Rui; Avison, Malcolm J.; Dawant, Benoit M.

    2011-01-01

    Accurate anatomic co-registration is a prerequisite for identifying structural and functional changes in longitudinal studies of brain plasticity. Current MRI methods permit collection of brain images across multiple scales, ranging from whole brain at relatively low resolution (≥1 mm), to local brain areas at the level of cortical layers and columns (~100 µm) in the same session, allowing detection of subtle structural changes on a similar spatial scale. To measure these changes reliably, high resolution structural and functional images of local brain regions must be registered accurately across imaging sessions. The present study describes a robust fully automated strategy for the registration of high resolution structural images of brain sub-volumes to lower resolution whole brain images collected within a session, and the registration of partially overlapping high resolution MRI sub-volumes (“slabs”) across imaging sessions. In high field (9.4 T) reduced field-of-view high resolution structural imaging studies using a surface coil in an anesthetized non-human primate model, this fully automated coregistration pipeline was robust in the face of significant inhomogeneities in image intensity and tissue contrast arising from the spatially inhomogeneous transmit and receive properties of the surface coil, achieving a registration accuracy of 30 ± 15 µm between sessions. PMID:21920386
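
    Within-session slab-to-whole-brain rigid registration of this kind is commonly expressed with SimpleITK; a generic mutual-information sketch in which the file names and optimizer parameters are placeholders, and which is not the authors' pipeline:

        import SimpleITK as sitk

        fixed = sitk.ReadImage("whole_brain.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("highres_slab.nii.gz", sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                     minStep=1e-4,
                                                     numberOfIterations=200)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))
        reg.SetInterpolator(sitk.sitkLinear)

        transform = reg.Execute(fixed, moving)               # rigid 3D transform
        aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)

    A mutual-information metric is a reasonable default here because the surface-coil intensity inhomogeneity the abstract describes violates the assumptions of simpler correlation-based metrics.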

  3. A simple rapid process for semi-automated brain extraction from magnetic resonance images of the whole mouse head.

    PubMed

    Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L

    2016-01-15

    Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel, timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole-head images in a single step. The method is simple to install and requires minimal user interaction. This method is equally applicable to different types of MR images. Results were evaluated with Dice and Jaccard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation in brain volumes is preserved. A downloadable software package not otherwise available for extraction of brains from whole-head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole-head images, rendering them usable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Robust Automated Constellation-Based Landmark Detection in Human Brain Imaging.

    PubMed

    Ghayoor, Ali; Vaidya, Jatin G; Johnson, Hans J

    2017-04-06

    A robust, fully automated algorithm for identifying an arbitrary number of landmark points in the human brain is described and validated. The proposed method combines statistical shape models with trained brain morphometric measures to estimate midbrain landmark positions reliably and accurately. Gross morphometric constraints provided by automatically identified eye centers and the center of the head mass are shown to provide robust initialization in the presence of large rotations in the initial head orientation. Detection of primary midbrain landmarks is used as the foundation from which an arbitrary set of secondary landmarks in different brain regions is detected, by applying linear model estimation and principal component analysis. This estimation model sequentially uses the knowledge of each additional detected landmark as an improved basis for predicting the next landmark location. The accuracy and robustness of the presented method were evaluated by comparing the automatically generated results to two manual raters on 30 identified landmark points extracted from each of 30 T1-weighted magnetic resonance images. For the landmarks with unambiguous anatomical definitions, the average discrepancy between the algorithm results and each human observer differed by less than 1 mm from the average inter-observer variability when the algorithm was evaluated on imaging data collected from the same site as the model-building data. Similar results were obtained when the same model was applied to a set of heterogeneous image volumes from seven different collection sites representing 3 scanner manufacturers. This method is reliable for general application in large-scale multi-site studies that consist of a variety of imaging data with different orientations, spacings, origins, and field strengths.

  5. A multi-atlas based method for automated anatomical rat brain MRI segmentation and extraction of PET activity.

    PubMed

    Lancelot, Sophie; Roche, Roxane; Slimen, Afifa; Bouillot, Caroline; Levigoureux, Elise; Langlois, Jean-Baptiste; Zimmer, Luc; Costes, Nicolas

    2014-01-01

    Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time consuming and operator dependent. Automated segmentation, as usually performed via single-atlas registration, fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities. High-resolution 7T 2D T2 MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7 of the 12 atlas datasets, i.e., the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: (1) registering individual MRIs to the template and using a single atlas (SA); (2) using the maximum probability atlas (MP); and (3) registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes, and fusing them in individual MRI space (propagation and fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures). Only the SA method showed volume bias. Dice indices were significantly different between methods (PF > MP > SA). PET regional measures were more accurate with multi-atlas methods than with the SA method. Multi-atlas methods outperform SA for automated anatomical brain segmentation and PET measure extraction. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid, reproducible VOI analyses.
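
    The fusion step in the propagation-and-fusion (PF) variant can be as simple as a voxel-wise majority vote over the propagated label volumes; a plain-NumPy sketch of that idea (the paper's actual fusion strategy may differ):

        import numpy as np

        def majority_vote(label_volumes):
            # Voxel-wise majority vote over co-registered integer label volumes.
            stack = np.stack(label_volumes).astype(int)   # (n_atlases, x, y, z)
            n_labels = stack.max() + 1
            votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
            for lab in range(n_labels):
                votes[lab] = (stack == lab).sum(axis=0)
            return votes.argmax(axis=0)                   # fused label volume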

  6. Neuronal loss as evidenced by automated quantification of neuronal density following moderate and severe traumatic brain injury in rats.

    PubMed

    Balança, Baptiste; Bapteste, Lionel; Lieutaud, Thomas; Ressnikoff, Denis; Guy, Rainui; Bezin, Laurent; Marinesco, Stéphane

    2016-01-01

    Traumatic brain injury causes widespread neurological lesions that can be reproduced in animals with the lateral fluid percussion (LFP) model. The characterization of the pattern of neuronal death generated in this model remains unclear, involving both cortical and subcortical brain regions. Here, 7 days after moderate (3 atmospheres absolute [ATA]) or severe (3.8 ATA) LFP, we estimated neuronal loss by using immunohistochemistry together with a computer-assisted automated method for quantifying neuronal density in brain sections. Neuronal counts were performed ipsilateral to the impact, in the parietal cortex ventral to the site of percussion, in the temporal cortex, in the dorsal thalamus, and in the hippocampus. These results were compared with the counts observed at similar areas in sham animals. We found that neuronal density was severely decreased in the temporal cortex (-60%), in the dorsal thalamus (-63%), and in area CA3 of the hippocampus (-36%) of injured animals compared with controls but was not significantly modified in the cortices located immediately ventral to the impact. Total cellular density increased in brain structures displaying neuronal death, suggesting the presence of gliosis. The increase in the severity of LFP did not change the pattern of neuronal injury. This automated method simplified the study of neuronal loss following traumatic brain injury and allowed the identification of a pattern of neuronal loss that spreads from the dorsal thalamus to the temporal cortex, with the most severe lesions being in brain structures remote from the site of impact.
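
    Automated density counting of this kind usually reduces to thresholding the stained image, labeling connected components, and dividing by the sampled area; a minimal SciPy version, offered as an illustration rather than the authors' software (threshold and size cutoff are assumptions):

        import numpy as np
        from scipy import ndimage

        def neuronal_density(stain_img, threshold, min_size_px, area_mm2):
            # Count above-threshold connected components and normalize by area.
            binary = stain_img > threshold
            labeled, n = ndimage.label(binary)
            sizes = ndimage.sum(binary, labeled, index=range(1, n + 1))
            n_cells = int((np.asarray(sizes) >= min_size_px).sum())  # drop debris
            return n_cells / area_mm2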

  7. Automated Classification to Predict the Progression of Alzheimer's Disease Using Whole-Brain Volumetry and DTI

    PubMed Central

    Jung, Won Beom; Lee, Young Min; Kim, Young Hoon

    2015-01-01

    Objective: This study proposes an automated diagnostic method to classify patients with Alzheimer's disease (AD) of degenerative etiology using magnetic resonance imaging (MRI) markers. Methods: Twenty-seven patients with subjective memory impairment (SMI), 18 patients with mild cognitive impairment (MCI), and 27 patients with AD participated. MRI protocols included three-dimensional brain structural imaging and diffusion tensor imaging to assess cortical thickness, subcortical volume, and white matter integrity. Recursive feature elimination based on a support vector machine (SVM) was conducted to determine the regions and imaging parameters most relevant for classification, and a factor analysis of the top-ranked factors was then performed. Subjects were classified using a nonlinear SVM. Results: Medial temporal regions in AD patients were predominantly detected with cortical thinning and volume atrophy compared with SMI and MCI patients. Damage to white matter integrity was also evidenced by decreased fractional anisotropy and increased mean diffusivity (MD) across the three groups. Microscopic damage in the subcortical gray matter was reflected in increased MD. Classification accuracies between pairs of groups (SMI vs. MCI, MCI vs. AD, SMI vs. AD) and among all three groups were 84.4% (±13.8), 86.9% (±10.5), 96.3% (±4.6), and 70.5% (±11.5), respectively. Conclusion: This proposed method may be a potential tool for diagnosing AD pathology under the current clinical criteria. PMID:25670951
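
    SVM-based recursive feature elimination of this kind is available off the shelf in scikit-learn; a sketch on placeholder thickness/volume/DTI features:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE

        rng = np.random.default_rng(0)
        X = rng.normal(size=(72, 120))   # 72 subjects x 120 imaging features (toy)
        y = rng.integers(0, 2, 72)       # e.g., 0 = MCI, 1 = AD

        # A linear SVM supplies the per-feature weights RFE prunes on each round.
        selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=5).fit(X, y)
        top_features = np.flatnonzero(selector.support_)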

  8. PyDBS: an automated image processing workflow for deep brain stimulation surgery.

    PubMed

    D'Albis, Tiziano; Haegelen, Claire; Essert, Caroline; Fernández-Vidal, Sara; Lalys, Florent; Jannin, Pierre

    2015-02-01

    Deep brain stimulation (DBS) is a surgical procedure for treating motor-related neurological disorders. DBS clinical efficacy hinges on precise surgical planning and accurate electrode placement, which in turn call upon several image processing and visualization tasks, such as image registration, image segmentation, image fusion, and 3D visualization. These tasks are often performed by a heterogeneous set of software tools, which adopt differing formats and geometrical conventions and require patient-specific parameterization or interactive tuning. To overcome these issues, we introduce in this article PyDBS, a fully integrated and automated image processing workflow for DBS surgery. PyDBS consists of three image processing pipelines and three visualization modules assisting clinicians through the entire DBS surgical workflow, from the preoperative planning of electrode trajectories to the postoperative assessment of electrode placement. The system's robustness, speed, and accuracy were assessed by means of a retrospective validation, based on 92 clinical cases. The complete PyDBS workflow achieved satisfactory results in 92% of tested cases, with a median processing time of 28 min per patient. The results obtained are compatible with the adoption of PyDBS in clinical practice.

  9. Automated Determination of Axonal Orientation in the Deep White Matter of the Human Brain

    PubMed Central

    Bartsch, Hauke; Maechler, Paul

    2012-01-01

    The widespread utilization of diffusion-weighted imaging in the clinical neurosciences to assess white-matter (WM) integrity and architecture calls for robust validation strategies applied to data acquired with noninvasive imaging. However, the pathology and detailed fiber architecture of WM tissue can only be observed postmortem. With these considerations in mind, we designed an automated method for the determination of axonal orientation in high-resolution microscope images. The algorithm was tested on tissue stained using a silver impregnation technique optimized to resolve axonal fibers against very low levels of background. The orientation of individual nerve fibers was detected using spatial filtering and a template-matching algorithm, and the results are displayed as color-coded overlays. Quantitative models of WM fiber architecture at the microscopic level can lead to improved interpretation of low-resolution neuroimaging data and to more accurate mapping of fiber pathways in the human brain. PMID:23030312
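
    For illustration, local fiber orientation can also be estimated from spatial derivatives via the image structure tensor, one common spatial-filtering approach; the record's own template-matching algorithm is not reproduced here. The sketch below is self-contained and uses a synthetic stripe image.

        # Illustrative local-orientation estimation via the image structure
        # tensor (a spatial-filtering approach, not the record's algorithm).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def local_orientation(img, sigma=2.0):
            gy, gx = np.gradient(img.astype(float))
            # Smoothed structure-tensor components.
            jxx = gaussian_filter(gx * gx, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            jyy = gaussian_filter(gy * gy, sigma)
            # Orientation of the dominant gradient direction, in radians.
            return 0.5 * np.arctan2(2 * jxy, jxx - jyy)

        # Synthetic test: diagonal stripes should yield ~45 degree orientation.
        yy, xx = np.mgrid[0:128, 0:128]
        stripes = np.sin((xx + yy) / 4.0)
        theta = local_orientation(stripes)
        print(np.degrees(theta[64, 64]))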

  10. Automated midline shift and intracranial pressure estimation based on brain CT images.

    PubMed

    Chen, Wenan; Belle, Ashwin; Cockrell, Charles; Ward, Kevin R; Najarian, Kayvan

    2013-04-13

    In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first computed from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP, such as texture information and blood amount, are extracted from the CT scans and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend for or against invasive ICP monitoring.
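
    A toy version of the first component's core idea, finding the ideal midline as the axis that maximizes left-right intensity symmetry, is sketched below with NumPy on a synthetic slice. The real system additionally exploits skull anatomy and ventricle shape matching.

        # Toy ideal-midline estimation: pick the column about which an axial
        # slice is most left-right symmetric.
        import numpy as np

        def best_midline(slice2d, search=20):
            h, w = slice2d.shape
            best, best_col = -np.inf, w // 2
            for col in range(w // 2 - search, w // 2 + search + 1):
                half = min(col, w - col)
                left = slice2d[:, col - half:col]
                right = slice2d[:, col:col + half][:, ::-1]  # mirror right half
                score = -np.mean((left - right) ** 2)        # symmetry score
                if score > best:
                    best, best_col = score, col
            return best_col

        rng = np.random.default_rng(1)
        fake_ct = rng.normal(size=(256, 256))
        fake_ct += fake_ct[:, ::-1]  # enforce symmetry about the true midline
        print(best_midline(fake_ct))  # expect ~128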

  11. Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images

    PubMed Central

    Cockrell, Charles; Ward, Kevin R.; Najarian, Kayvan

    2013-01-01

    In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first computed from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP, such as texture information and blood amount, are extracted from the CT scans and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend for or against invasive ICP monitoring. PMID:23604268

  12. SU-D-BRD-06: Automated Population-Based Planning for Whole Brain Radiation Therapy

    SciTech Connect

    Schreibmann, E; Fox, T; Crocker, I; Shu, H

    2014-06-01

    Purpose: Treatment planning for whole brain radiation treatment is technically a simple process, but in practice it consumes valuable clinical time on repetitive and tedious tasks. This report presents a method that automatically segments the relevant target and normal tissues and creates a treatment plan within a few minutes after patient simulation. Methods: Segmentation is performed automatically through morphological operations on the soft tissue. The treatment plan is generated by searching a database of previous cases for patients with similar anatomy. In this search, each database case is ranked in terms of similarity using a customized metric designed for sensitivity by including only geometrical changes that affect the dose distribution. The best-matching database case is automatically modified to replace the relevant patient information and isocenter position while maintaining the original beam and MLC settings. Results: Fifteen patients were used to validate the method. In each of these cases the anatomy was accurately segmented, with mean Dice coefficients of 0.970 ± 0.008 for the brain, 0.846 ± 0.009 for the eyes, and 0.672 ± 0.111 for the lens, as compared with clinical segmentations. Each case was then matched against a database of 70 validated treatment plans, and the best-matching plan (termed the auto-plan) was compared retrospectively with the clinical plan in terms of brain coverage and maximum doses to critical structures. Maximum doses were reduced by up to 20.809 Gy for the left eye (mean 3.533 Gy), by 13.352 Gy (mean 1.311 Gy) for the right eye, and by 27.471 Gy (mean 4.856 Gy) and 25.218 Gy (mean 6.315 Gy) for the left and right lens, respectively. Time from simulation to auto-plan was 3-4 minutes. Conclusion: Automated database-based matching is an alternative to classical treatment planning that improves quality while providing a cost-effective solution, by modifying previously validated plans to match a current patient's anatomy.
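
    The retrieval step can be illustrated as a weighted nearest-neighbor search over geometric features. In this sketch the features and weights are invented stand-ins for the paper's customized, dose-sensitive similarity metric.

        # Sketch of population-based plan retrieval: summarize each database
        # case by a few geometric features and reuse the closest case's plan.
        import numpy as np

        # columns: skull length, skull width, vertex-to-isocenter distance (cm)
        database = np.array([
            [19.0, 15.1, 9.8],
            [20.4, 15.8, 10.5],
            [18.2, 14.6, 9.1],
        ])
        weights = np.array([1.0, 1.0, 2.0])  # emphasize dose-relevant geometry

        def best_match(new_case):
            d = np.sqrt((((database - new_case) * weights) ** 2).sum(axis=1))
            return int(np.argmin(d)), d.min()

        idx, dist = best_match(np.array([19.8, 15.5, 10.2]))
        print(f"reuse plan from database case {idx} (distance {dist:.2f})")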

  13. Using Automated Morphometry to Detect Associations Between ERP Latency and Structural Brain MRI in Normal Adults

    PubMed Central

    Cardenas, Valerie A.; Chao, Linda L.; Blumenfeld, Rob; Song, Enmin; Meyerhoff, Dieter J.; Weiner, Michael W.; Studholme, Colin

    2008-01-01

    Despite the clinical significance of event-related potential (ERP) latency abnormalities, little attention has focused on the anatomic substrate of latency variability. Volume conduction models do not identify the anatomy responsible for delayed neural transmission between neural sources. To explore the anatomic substrate of ERP latency variability in normal adults using automated measures derived from magnetic resonance imaging (MRI), ERPs were recorded in the visual three-stimulus oddball task in 59 healthy participants. Latencies of the P3a and P3b components were measured at the vertex. Measures of local anatomic size in the brain were estimated from structural MRI, using tissue segmentation and deformation morphometry. A general linear model was fitted relating latency to measures of local anatomic size, covarying for intracranial vault volume. Longer P3b latencies were related to contractions in thalamus extending superiorly into the corpus callosum, white matter (WM) anterior to the central sulcus on the left and right, left temporal WM, the right anterior limb of the internal capsule extending into the lenticular nucleus, and larger cerebrospinal fluid volumes. There was no evidence for a relationship between gray matter (GM) volumes and P3b latency. Longer P3a latencies were related to contractions in left temporal WM, and left parietal GM and WM near the interhemispheric fissure. P3b latency variability is related chiefly to WM, thalamus, and lenticular nucleus, whereas P3a latency variability is not related as strongly to anatomy. These results imply that the WM connectivity between generators influences P3b latency more than the generators themselves do. PMID:15834860

  14. Subtle In-Scanner Motion Biases Automated Measurement of Brain Anatomy From In Vivo MRI

    PubMed Central

    Alexander-Bloch, Aaron; Clasen, Liv; Stockman, Michael; Ronan, Lisa; Lalonde, Francois; Giedd, Jay; Raznahan, Armin

    2016-01-01

    While the potential for small amounts of motion in functional magnetic resonance imaging (fMRI) scans to bias the results of functional neuroimaging studies is well appreciated, the impact of in-scanner motion on morphological analysis of structural MRI is relatively under-studied. Even among “good quality” structural scans, there may be systematic effects of motion on measures of brain morphometry. In the present study, the subjects’ tendency to move during fMRI scans, acquired in the same scanning sessions as their structural scans, yielded a reliable, continuous estimate of in-scanner motion. Using this approach within a sample of 127 children, adolescents, and young adults, significant relationships were found between this measure and estimates of cortical gray matter volume and mean curvature, as well as trend-level relationships with cortical thickness. Specifically, cortical volume and thickness decreased with greater motion, and mean curvature increased. These effects of subtle motion were anatomically heterogeneous, were present across different automated imaging pipelines, showed convergent validity with effects of frank motion assessed in a separate sample of 274 scans, and could be demonstrated in both pediatric and adult populations. Thus, using different motion assays in two large non-overlapping sets of structural MRI scans, convergent evidence showed that in-scanner motion—even at levels which do not manifest in visible motion artifact—can lead to systematic and regionally specific biases in anatomical estimation. These findings have special relevance to structural neuroimaging in developmental and clinical datasets, and inform ongoing efforts to optimize neuroanatomical analysis of existing and future structural MRI datasets in non-sedated humans. PMID:27004471

  15. FreeSurfer-Initiated Fully-Automated Subcortical Brain Segmentation in MRI Using Large Deformation Diffeomorphic Metric Mapping

    PubMed Central

    Khan, Ali R.; Wang, Lei

    2010-01-01

    Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic-based FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM) based label propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation, resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the automatically generated segmentations are inherently smooth, so subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of schizophrenia subjects. PMID:18455931

  16. Automated segmentation of ventricles from serial brain MRI for the quantification of volumetric changes associated with communicating hydrocephalus in patients with brain tumor

    NASA Astrophysics Data System (ADS)

    Pura, John A.; Hamilton, Allison M.; Vargish, Geoffrey A.; Butman, John A.; Linguraru, Marius George

    2011-03-01

    Accurate ventricle volume estimates could improve the understanding and diagnosis of postoperative communicating hydrocephalus. For this category of patients, associated changes in ventricle volume can be difficult to identify, particularly over short time intervals. We present an automated segmentation algorithm that evaluates ventricle size from serial brain MRI examinations. The technique combines serial T1-weighted images to increase SNR and segments the mean image to generate a ventricle template. After pre-processing, the segmentation is initiated by a fuzzy c-means clustering algorithm to find the seeds used in a combination of fast marching methods and geodesic active contours. Finally, the ventricle template is propagated onto the serial data via non-linear registration. Serial volume estimates were obtained in an automated, robust, and accurate manner from difficult data.
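
    As a rough illustration of the segmentation core, the sketch below evolves a morphological geodesic active contour on an edge map in scikit-image, starting from a disk-shaped seed (the paper instead derives seeds from fuzzy c-means clustering and works on the mean serial image).

        # Geodesic-active-contour sketch with scikit-image; the seed disk and
        # test image are arbitrary stand-ins for clustering-derived seeds.
        import numpy as np
        from skimage import data, img_as_float
        from skimage.segmentation import (inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour)

        img = img_as_float(data.camera())      # stand-in for a mean T1 image
        edges = inverse_gaussian_gradient(img) # low values near boundaries

        # Disk-shaped initial level set around an assumed ventricle seed.
        init = np.zeros(img.shape, dtype=np.int8)
        rr, cc = np.ogrid[:img.shape[0], :img.shape[1]]
        init[(rr - 100) ** 2 + (cc - 200) ** 2 < 30 ** 2] = 1

        seg = morphological_geodesic_active_contour(edges, 120,
                                                    init_level_set=init,
                                                    smoothing=2, balloon=1)
        print("segmented pixels:", int(seg.sum()))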

  17. CUDA-based acceleration and BPN-assisted automation of bilateral filtering for brain MR image restoration.

    PubMed

    Chang, Herng-Hua; Chang, Yu-Ning

    2017-04-01

    Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Because of the lack of a theoretical basis for the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to variation in the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing on a graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with compute unified device architecture (CUDA) functionality is specifically utilized to exploit thread usage and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are then acquired based on the sequential forward floating selection (SFFS) scheme. The selected features are introduced into a back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images, with various scenarios of noise levels and anatomic structures, were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached a factor of 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average
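
    For reference, a plain NumPy (CPU) bilateral filter is sketched below; this is the computation the paper offloads to CUDA, and its three tunable parameters (window size, spatial sigma, range sigma) are the quantities the BPN is trained to predict. The parameter values used here are arbitrary.

        # Reference (CPU) bilateral filter in plain NumPy; the GPU version
        # parallelizes exactly this per-pixel weighted average.
        import numpy as np

        def bilateral(img, half=3, sigma_s=2.0, sigma_r=0.1):
            pad = np.pad(img.astype(float), half, mode="reflect")
            yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
            spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))
            out = np.zeros_like(img, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    patch = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
                    rng_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
                    w = spatial * rng_w
                    out[i, j] = (w * patch).sum() / w.sum()
            return out

        noisy = np.random.default_rng(2).normal(0.5, 0.05, size=(64, 64))
        print(bilateral(noisy).std() < noisy.std())  # smoothing reduces variance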

  18. Precise Anatomic Localization of Accumulated Lipids in Mfp2 Deficient Murine Brains Through Automated Registration of SIMS Images to the Allen Brain Atlas

    NASA Astrophysics Data System (ADS)

    Škrášková, Karolina; Khmelinskii, Artem; Abdelmoula, Walid M.; De Munter, Stephanie; Baes, Myriam; McDonnell, Liam; Dijkstra, Jouke; Heeren, Ron M. A.

    2015-06-01

    Mass spectrometry imaging (MSI) is a powerful tool for the molecular characterization of specific tissue regions. Histochemical staining provides anatomic information complementary to MSI data, and the combination of both modalities has been proven beneficial. However, direct comparison of histology-based and mass spectrometry-based molecular images can become problematic because of potential tissue damage or changes caused by different sample preparation. Curated atlases such as the Allen Brain Atlas (ABA) offer a collection of highly detailed and standardized anatomic information. Direct comparison of MSI brain data to the ABA allows conclusions to be drawn on the precise anatomic localization of the molecular signal. Here we applied secondary ion mass spectrometry (SIMS) imaging at high spatial resolution to study brains of knock-out mouse models with impaired peroxisomal β-oxidation. The murine models lacked D-multifunctional protein (MFP2), which is involved in the degradation of very long chain fatty acids. SIMS imaging revealed deposits of fatty acids within distinct brain regions. Manual comparison of the MSI data with the histologic stains did not allow an unequivocal anatomic identification of the fatty acid-rich regions. We therefore employed an automated pipeline for co-registration of the SIMS data to the ABA. The registration enabled precise anatomic annotation of the brain structures containing the revealed lipid deposits, allowing a deeper insight into the pathology of Mfp2-deficient mouse models.

  19. Automated detection of cerebral microbleeds in patients with Traumatic Brain Injury.

    PubMed

    van den Heuvel, T L A; van der Eerden, A W; Manniesing, R; Ghafoorian, M; Tan, T; Andriessen, T M J C; Vande Vyvere, T; van den Hauwe, L; Ter Haar Romeny, B M; Goraj, B M; Platel, B

    2016-01-01

    In this paper a Computer Aided Detection (CAD) system is presented to automatically detect Cerebral Microbleeds (CMBs) in patients with Traumatic Brain Injury (TBI). The presence of CMBs is believed to have clinical prognostic value in TBI patients. To study the contribution of CMBs to patient outcome, accurate detection of CMBs is required. Manual detection of CMBs in TBI patients is a time-consuming task that is prone to errors, because CMBs are easily overlooked and are difficult to distinguish from blood vessels. This study included 33 TBI patients. Because of the laborious nature of manually annotating CMBs, only one trained expert manually annotated the CMBs in all 33 patients; a subset of ten TBI patients was annotated by six experts. Our CAD system makes use of both Susceptibility Weighted Imaging (SWI) and T1-weighted magnetic resonance images to detect CMBs. After pre-processing these images, a two-step approach was used for automated detection of CMBs. In the first step, each voxel was characterized by twelve features based on the dark and spherical nature of CMBs, and a random forest classifier was used to identify CMB candidate locations. In the second step, segmentations were made from each identified candidate location. Subsequently an object-based classifier was used to remove false positive detections of the voxel classifier, by considering seven object-based features that discriminate between spherical objects (CMBs) and elongated objects (blood vessels). A guided user interface was designed for fast evaluation of the CAD system result; during this process, an expert checked each CMB detected by the CAD system. A Fleiss' kappa value of only 0.24 showed that the inter-observer variability for the TBI patients in this study was very large. An expert using the guided user interface reached an average sensitivity of 93%, which was significantly higher (p = 0.03) than the average sensitivity of 77% (sd 12.4%) achieved by the six experts through manual detection.
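
    The two-step design can be sketched as follows with scikit-learn: a random forest flags candidate voxels from local features, after which an object-level check discards elongated, vessel-like candidates. All features here are synthetic stand-ins for the paper's twelve voxel and seven object descriptors.

        # Sketch of the two-step detection idea: voxel-level random forest,
        # then object-level rejection of elongated (vessel-like) candidates.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)

        # Step 1: voxel classifier on synthetic local features.
        X_vox = rng.normal(size=(5000, 12))
        y_vox = (X_vox[:, 0] + 0.5 * X_vox[:, 3] > 1.5).astype(int)
        voxel_rf = RandomForestClassifier(n_estimators=100).fit(X_vox, y_vox)

        # Step 2: object-level rejection of elongated candidates.
        def keep_candidate(axis_lengths, max_elongation=2.0):
            longest, shortest = max(axis_lengths), min(axis_lengths)
            return longest / shortest < max_elongation  # spheres pass

        probs = voxel_rf.predict_proba(rng.normal(size=(4, 12)))[:, 1]
        print(probs, keep_candidate((4.1, 3.8, 3.5)),
              keep_candidate((12.0, 2.5, 2.2)))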

  20. Technical aspects and evaluation methodology for the application of two automated brain MRI tumor segmentation methods in radiation therapy planning.

    PubMed

    Beyer, Gloria P; Velthuizen, Robert P; Murtagh, F Reed; Pearlman, James L

    2006-11-01

    The purpose of this study was to design the steps necessary to create a tumor volume outline from the results of two automated multispectral magnetic resonance imaging segmentation methods and to integrate these contours into radiation therapy treatment planning. Algorithms were developed to create a closed, smooth contour encompassing the tumor pixels resulting from two automated segmentation methods: k-nearest neighbors and knowledge-guided. These included an automatic three-dimensional (3D) expansion of the results to compensate for their undersegmentation and match the extended contouring technique used in practice by radiation oncologists. Each resulting radiation treatment plan generated from the automated segmentations and from the outlining by two radiation oncologists for 11 brain tumor patients was compared against the volume and treatment plan from an expert radiation oncologist who served as the control. As part of this analysis, a quantitative and qualitative evaluation mechanism was developed to aid in this comparison. It was found that the expert physician reference volume was irradiated with the same level of conformity when using the plans generated from the contours of the segmentation methods. In addition, any uncertainty in the identification of the actual gross tumor volume by the segmentation methods, as identified by previous research in this area, had small effects on the generated 3D radiation therapy treatment plans, due to the averaging process in the generation of margins used in defining a planning target volume.

  1. Implementation of talairach atlas based automated brain segmentation for radiation therapy dosimetry.

    PubMed

    Popple, R A; Griffith, H R; Sawrie, S M; Fiveash, J B; Brezovich, I A

    2006-02-01

    Radiotherapy for brain cancer inevitably results in irradiation of uninvolved brain. While it has been demonstrated that irradiation of the brain can result in cognitive deficits, dose-volume relationships are not well established, and there is little work correlating a particular cognitive deficit with the dose received by the region of the brain responsible for that cognitive function. One obstacle to such studies is that identification of brain anatomy is both labor intensive and dependent on the individual performing the segmentation. Automatic segmentation has the potential to be both efficient and consistent. Brains2 is a software package developed by the University of Iowa for MRI volumetric studies. It utilizes MR images, the Talairach atlas, and an artificial neural network (ANN) to segment brain images into substructures in a standardized manner. We have developed a software package, Brains2DICOM, that converts the regions of interest identified by Brains2 into a DICOM radiotherapy structure set. The structure set can be imported into a treatment planning system for dosimetry. We demonstrated the utility of Brains2DICOM using a test case, a 34-year-old man with diffuse astrocytoma treated with three-dimensional conformal radiotherapy. Brains2 successfully applied the Talairach atlas to identify the right and left frontal, parietal, temporal, occipital, subcortical, and cerebellum regions. Brains2 was not successful in applying the ANN to identify small structures, such as the hippocampus and caudate; further work is necessary to revise the ANN or to develop new methods for identifying small structures in the presence of disease and radiation-induced changes. The segmented regions of interest were transferred to our commercial treatment planning system using DICOM, and dose-volume histograms were constructed. This method will facilitate the acquisition of data necessary for the development of normal tissue complication probability (NTCP) models.

  2. Semi-automated registration-based anatomical labelling, voxel based morphometry and cortical thickness mapping of the mouse brain.

    PubMed

    Pagani, Marco; Damiano, Mario; Galbusera, Alberto; Tsaftaris, Sotirios A; Gozzi, Alessandro

    2016-07-15

    Morphoanatomical MRI methods have recently begun to be applied in the mouse. However, substantial differences in the anatomical organisation of the human and rodent brain prevent a straightforward extension of clinical neuroimaging tools to mouse brain imaging. As a result, the vast majority of published approaches rely on tailored routines that address single morphoanatomical readouts and typically lack a sufficiently detailed description of the complex workflow required to process images and quantify structural alterations. Here we provide a detailed description of semi-automated registration-based procedures for voxel based morphometry, cortical thickness estimation and automated anatomical labelling of the mouse brain. The approach relies on the sequential use of advanced image processing tools offered by ANTs, a flexible open source toolkit freely available to the scientific community. To illustrate our procedures, we describe their application to quantify morphological alterations in socially-impaired BTBR mice with respect to normosocial C57BL/6J controls, a comparison recently described by us and other research groups. We show that the approach can reliably detect both focal and large-scale grey matter alterations using complementary readouts. No detailed operational workflows for mouse imaging are available for direct comparison with our methods; however, empirical assessment of the mapped inter-strain differences is in good agreement with the findings of other groups using analogous approaches. The detailed operational workflows described here are expected to help the implementation of rodent morphoanatomical methods by non-expert users, and ultimately promote the use of these tools across the preclinical neuroimaging community.
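
    A minimal sketch of the registration-based labelling step, written here with the ANTsPy bindings (the antspyx package) rather than the ANTs command-line tools the paper drives directly; all file names are hypothetical.

        # ANTsPy sketch: warp a template to a subject and carry atlas labels
        # along. File names are hypothetical; antspyx is assumed installed.
        import ants

        subject = ants.image_read("subject_T2.nii.gz")
        template = ants.image_read("template_T2.nii.gz")
        atlas_labels = ants.image_read("template_labels.nii.gz")

        # SyN = affine plus deformable registration, as commonly used with ANTs.
        reg = ants.registration(fixed=subject, moving=template,
                                type_of_transform="SyN")

        # Propagate labels with nearest-neighbour interpolation so they stay integral.
        labels_in_subject = ants.apply_transforms(
            fixed=subject, moving=atlas_labels,
            transformlist=reg["fwdtransforms"],
            interpolator="nearestNeighbor")
        ants.image_write(labels_in_subject, "subject_labels.nii.gz")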

  3. Brain-Wide Mapping of Axonal Connections: Workflow for Automated Detection and Spatial Analysis of Labeling in Microscopic Sections.

    PubMed

    Papp, Eszter A; Leergaard, Trygve B; Csucs, Gergely; Bjaalie, Jan G

    2016-01-01

    Axonal tracing techniques are powerful tools for exploring the structural organization of neuronal connections. Tracers such as biotinylated dextran amine (BDA) and Phaseolus vulgaris leucoagglutinin (Pha-L) allow brain-wide mapping of connections through analysis of large series of histological section images. We present a workflow for efficient collection and analysis of tract-tracing datasets with a focus on newly developed modules for image processing and assignment of anatomical location to tracing data. New functionality includes automatic detection of neuronal labeling in large image series, alignment of images to a volumetric brain atlas, and analytical tools for measuring the position and extent of labeling. To evaluate the workflow, we used high-resolution microscopic images from axonal tracing experiments in which different parts of the rat primary somatosensory cortex had been injected with BDA or Pha-L. Parameters from a set of representative images were used to automate detection of labeling in image series covering the entire brain, resulting in binary maps of the distribution of labeling. For high to medium labeling densities, automatic detection was found to provide reliable results when compared to manual analysis, whereas weak labeling required manual curation for optimal detection. To identify brain regions corresponding to labeled areas, section images were aligned to the Waxholm Space (WHS) atlas of the Sprague Dawley rat brain (v2) by custom-angle slicing of the MRI template to match individual sections. Based on the alignment, WHS coordinates were obtained for labeled elements and transformed to stereotaxic coordinates. The new workflow modules increase the efficiency and reliability of labeling detection in large series of images from histological sections, and enable anchoring to anatomical atlases for further spatial analysis and comparison with other data.

  4. Brain-Wide Mapping of Axonal Connections: Workflow for Automated Detection and Spatial Analysis of Labeling in Microscopic Sections

    PubMed Central

    Papp, Eszter A.; Leergaard, Trygve B.; Csucs, Gergely; Bjaalie, Jan G.

    2016-01-01

    Axonal tracing techniques are powerful tools for exploring the structural organization of neuronal connections. Tracers such as biotinylated dextran amine (BDA) and Phaseolus vulgaris leucoagglutinin (Pha-L) allow brain-wide mapping of connections through analysis of large series of histological section images. We present a workflow for efficient collection and analysis of tract-tracing datasets with a focus on newly developed modules for image processing and assignment of anatomical location to tracing data. New functionality includes automatic detection of neuronal labeling in large image series, alignment of images to a volumetric brain atlas, and analytical tools for measuring the position and extent of labeling. To evaluate the workflow, we used high-resolution microscopic images from axonal tracing experiments in which different parts of the rat primary somatosensory cortex had been injected with BDA or Pha-L. Parameters from a set of representative images were used to automate detection of labeling in image series covering the entire brain, resulting in binary maps of the distribution of labeling. For high to medium labeling densities, automatic detection was found to provide reliable results when compared to manual analysis, whereas weak labeling required manual curation for optimal detection. To identify brain regions corresponding to labeled areas, section images were aligned to the Waxholm Space (WHS) atlas of the Sprague Dawley rat brain (v2) by custom-angle slicing of the MRI template to match individual sections. Based on the alignment, WHS coordinates were obtained for labeled elements and transformed to stereotaxic coordinates. The new workflow modules increase the efficiency and reliability of labeling detection in large series of images from histological sections, and enable anchoring to anatomical atlases for further spatial analysis and comparison with other data. PMID:27148038

  5. Automated reference region extraction and population-based input function for brain [11C]TMSX PET image analyses

    PubMed Central

    Rissanen, Eero; Tuisku, Jouni; Luoto, Pauliina; Arponen, Eveliina; Johansson, Jarkko; Oikonen, Vesa; Parkkola, Riitta; Airas, Laura; Rinne, Juha O

    2015-01-01

    [11C]TMSX ([7-N-methyl-11C]-(E)-8-(3,4,5-trimethoxystyryl)-1,3,7-trimethylxanthine) is a selective adenosine A2A receptor (A2AR) radioligand. In the central nervous system (CNS), A2AR are linked to dopamine D2 receptor function in the striatum, but they are also important modulators of inflammation. The gold standard for kinetic modeling of brain [11C]TMSX positron emission tomography (PET) is to obtain the arterial input function via arterial blood sampling. However, this method is laborious, prone to errors, and unpleasant for study subjects. The aim of this work was to evaluate alternative input function acquisition methods for brain [11C]TMSX PET imaging. First, a noninvasive, automated method for the extraction of a gray matter reference region using supervised clustering (SCgm) was developed. Second, a method for obtaining a population-based arterial input function (PBIF) was implemented. These methods were created using data from 28 study subjects (7 healthy controls, 12 multiple sclerosis patients, and 9 patients with Parkinson's disease). The results with PBIF correlated well with the original plasma input, and SCgm yielded results similar to those obtained with the cerebellum as a reference region. The clustering method for extracting a reference region and the population-based approach for acquiring input for dynamic [11C]TMSX brain PET image analyses appear to be feasible and robust methods that can be applied in patients with CNS pathology. PMID:25370856
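
    The reference-region idea can be illustrated by clustering voxel time-activity curves (TACs) and keeping the cluster that best matches an assumed gray-matter kinetic shape. The sketch below uses plain k-means on synthetic TACs; the published SCgm method uses supervised clustering against predefined kinetic classes, so this is only the general idea.

        # Toy reference-region extraction: cluster TACs, keep the cluster
        # whose centroid best matches an assumed gray-matter kinetic shape.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        t = np.linspace(0, 60, 20)                      # minutes, 20 frames
        gm_kinetic = np.exp(-t / 30)                    # assumed reference shape

        # Synthetic TACs: two tissue classes plus noise.
        tacs = np.vstack([np.exp(-t / 30) + rng.normal(0, 0.05, (300, 20)),
                          np.exp(-t / 10) + rng.normal(0, 0.05, (300, 20))])

        km = KMeans(n_clusters=2, n_init=10).fit(tacs)
        # Pick the cluster whose centroid correlates best with the target shape.
        corr = [np.corrcoef(c, gm_kinetic)[0, 1] for c in km.cluster_centers_]
        ref_voxels = np.flatnonzero(km.labels_ == int(np.argmax(corr)))
        print(len(ref_voxels), "voxels in the extracted reference region")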

  6. Semi-automated 3D segmentation of major tracts in the rat brain: comparing DTI with standard histological methods.

    PubMed

    Gyengesi, Erika; Calabrese, Evan; Sherrier, Matthew C; Johnson, G Allan; Paxinos, George; Watson, Charles

    2014-03-01

    Researchers working with rodent models of neurological disease often require an accurate map of the anatomical organization of the white matter of the rodent brain. With the increasing popularity of small animal MRI techniques, including diffusion tensor imaging (DTI), there is considerable interest in rapid segmentation methods of neurological structures for quantitative comparisons. DTI-derived tractography allows simple and rapid segmentation of major white matter tracts, but the anatomic accuracy of these computer-generated fibers is open to question and has not been rigorously evaluated in the rat brain. In this study, we examine the anatomic accuracy of tractography-based segmentation in the adult rat brain. We analysed 12 major white matter pathways using semi-automated tractography-based segmentation alongside manual segmentation of Gallyas silver-stained histology sections. We applied four fiber-tracking algorithms to the DTI data: two integration methods and two deflection methods. In many cases, tractography-based segmentation closely matched histology-based segmentation; however, different tractography algorithms produced dramatically different results. The results suggest that certain white matter pathways are more amenable to tractography-based segmentation than others. We believe that these data will help researchers decide whether it is appropriate to use tractography-based segmentation of white matter structures for quantitative DTI-based analysis of neurologic disease models.

  7. Control of a Wheelchair in an Indoor Environment Based on a Brain-Computer Interface and Automated Navigation.

    PubMed

    Zhang, Rui; Li, Yuanqing; Yan, Yongyong; Zhang, Hao; Wu, Shaoyu; Yu, Tianyou; Gu, Zhenghui

    2016-01-01

    The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram signals is unreliable and generates a significant mental burden for the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system.

  8. Quantification of Human Brain Metabolites from in Vivo 1H NMR Magnitude Spectra Using Automated Artificial Neural Network Analysis

    NASA Astrophysics Data System (ADS)

    Hiltunen, Yrjö; Kaartinen, Jouni; Pulkkinen, Juhani; Häkkinen, Anna-Maija; Lundbom, Nina; Kauppinen, Risto A.

    2002-01-01

    Long echo time (TE = 270 ms) in vivo proton NMR spectra resembling human brain metabolite patterns were simulated for lineshape fitting (LF) and quantitative artificial neural network (ANN) analyses. A set of experimental in vivo 1H NMR spectra were first analyzed by the LF method to match the signal-to-noise ratios and linewidths of simulated spectra to those in the experimental data. The performance of the constructed ANNs was compared for the peak area determinations of choline-containing compounds (Cho), total creatine (Cr), and N-acetyl aspartate (NAA) signals, using both manually phase-corrected and magnitude spectra as inputs. The peak area data from ANN and LF analyses of simulated spectra yielded high correlation coefficients, demonstrating that the peak areas quantified with the ANN gave results similar to LF analysis. Thus, a fully automated ANN method based on magnitude spectra has demonstrated potential for quantification of in vivo metabolites from long echo time spectroscopic imaging.
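
    A small regression network trained on simulated magnitude spectra captures the gist of the approach. In the sketch below, scikit-learn's MLP stands in for the original ANN, and the Lorentzian line positions and widths are arbitrary stand-ins for the Cho, Cr, and NAA resonances.

        # Train a small network to regress peak areas from simulated
        # magnitude spectra; everything here is synthetic.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        x = np.linspace(0, 1, 256)                  # normalized chemical shift
        centers = [0.3, 0.45, 0.7]                  # "Cho", "Cr", "NAA"

        def spectrum(areas, width=0.01, noise=0.02):
            s = sum(a * width**2 / ((x - c)**2 + width**2)
                    for a, c in zip(areas, centers))
            return np.abs(s + rng.normal(0, noise, x.size))  # magnitude

        Y = rng.uniform(0.5, 2.0, size=(2000, 3))   # true peak areas
        X = np.array([spectrum(a) for a in Y])

        net = MLPRegressor(hidden_layer_sizes=(64,),
                           max_iter=500).fit(X[:1800], Y[:1800])
        print("R^2 on held-out spectra:", round(net.score(X[1800:], Y[1800:]), 3))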

  9. A rat brain MRI template with digital stereotaxic atlas of fine anatomical delineations in Paxinos space and its automated application in voxel-wise analysis.

    PubMed

    Nie, Binbin; Chen, Kewei; Zhao, Shujun; Liu, Junhua; Gu, Xiaochun; Yao, Qunli; Hui, Jiaojie; Zhang, Zhijun; Teng, Gaojun; Zhao, Chunjie; Shan, Baoci

    2013-06-01

    This study constructs a rat brain T2-weighted magnetic resonance imaging template, including the olfactory bulb, and a compatible digital atlas. The atlas contains 624 carefully delineated brain structures based on the newest (2005) edition of the rat brain atlas by Paxinos and Watson. An automated procedure, implemented as an SPM toolbox, was introduced for spatially normalizing individual rat brains, conducting statistical analysis, and visually localizing the results in the atlas coordinate space. The brain template/atlas and the procedure were evaluated using functional images from rats with right-side middle cerebral artery occlusion (MCAO) and normal controls. The results show that the brain region with significant signal decline in the MCAO rats was consistent with the occlusion position.
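
    The voxel-wise analysis that such a toolbox automates can be sketched as a per-voxel two-sample t-test over spatially normalized volumes, as below with nibabel and SciPy. File names are hypothetical and multiple-comparison correction is omitted.

        # Schematic voxel-wise group comparison on normalized volumes.
        import nibabel as nib
        import numpy as np
        from scipy import stats

        mcao = [nib.load(f"mcao_{i}_norm.nii.gz").get_fdata() for i in range(8)]
        ctrl = [nib.load(f"ctrl_{i}_norm.nii.gz").get_fdata() for i in range(8)]

        # Two-sample t-test at every voxel across the subject axis.
        t_map, p_map = stats.ttest_ind(np.stack(mcao), np.stack(ctrl), axis=0)

        # Save the t-map in template space for overlay on the digital atlas.
        affine = nib.load("mcao_0_norm.nii.gz").affine
        nib.save(nib.Nifti1Image(t_map.astype(np.float32), affine), "tmap.nii.gz")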

  10. Automated analysis of seizure semiology and brain electrical activity in presurgery evaluation of epilepsy: A focused survey.

    PubMed

    Ahmedt-Aristizabal, David; Fookes, Clinton; Dionisio, Sasha; Nguyen, Kien; Cunha, João Paulo S; Sridharan, Sridha

    2017-10-09

    Epilepsy is one of the most prevalent neurological disorders, affecting approximately 50 million people worldwide, and with almost 30-40% of patients with partial epilepsy nonresponsive to medication, epilepsy surgery is widely accepted as an effective therapeutic option. Presurgical evaluation has advanced significantly using noninvasive techniques based on video monitoring, neuroimaging, and electrophysiological and neuropsychological tests; however, certain clinical settings call for invasive intracranial recordings such as stereoelectroencephalography (SEEG), aiming to accurately map the eloquent brain networks involved during a seizure. Most current presurgical evaluation procedures focus on semiautomatic techniques, where surgical diagnosis relies heavily on neurologists' experience and their time-consuming subjective interpretation of semiology (the manifestations of epilepsy) and its correlation with the brain's electrical activity. Because surgical misdiagnosis reaches a rate of 30%, and more than one-third of all epilepsies are poorly understood, there is keen interest in improving diagnostic precision using computer-based methodologies that in the past few years have shown near-human performance. Among them, deep learning has excelled in many biological and medical applications, but has advanced insufficiently in epilepsy evaluation and automated understanding of the neural bases of semiology. In this paper, we systematically review automatic applications in epilepsy for human motion analysis, brain electrical activity, and anatomoelectroclinical correlation to attribute anatomical localization of the epileptogenic network to distinctive epilepsy patterns. Notably, recent advances in deep learning techniques are investigated in the context of epilepsy to address the challenges exhibited by traditional machine learning techniques. Finally, we discuss and propose future research on epilepsy surgery assessment.

  11. Automated fetal brain segmentation from 2D MRI slices for motion correction.

    PubMed

    Keraudren, K; Kuklisova-Murgasova, M; Kyriakopoulou, V; Malamateniou, C; Rutherford, M A; Kainz, B; Hajnal, J V; Rueckert, D

    2014-11-01

    Motion correction is a key element for imaging the fetal brain in-utero using Magnetic Resonance Imaging (MRI). Maternal breathing can introduce motion, but a larger effect is frequently due to fetal movement within the womb. Consequently, imaging is frequently performed slice-by-slice using single shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction methods (SVR). For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion of the fetal head. This has hitherto been a manual or semi-automatic procedure. We propose an automatic method to localize and segment the brain of the fetus when the image data is acquired as stacks of 2D slices with anatomy misaligned due to fetal motion. We combine this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The fetal brain localization process uses Maximally Stable Extremal Regions (MSER), which are classified using a Bag-of-Words model with Scale-Invariant Feature Transform (SIFT) features. The segmentation process is a patch-based propagation of the MSER regions selected during detection, combined with a Conditional Random Field (CRF). The gestational age (GA) is used to incorporate prior knowledge about the size and volume of the fetal brain into the detection and segmentation process. The method was tested in a ten-fold cross-validation experiment on 66 datasets of healthy fetuses whose GA ranged from 22 to 39 weeks. In 85% of the tested cases, our proposed method produced a motion corrected volume of a relevant quality for clinical diagnosis, thus removing the need for manually delineating the contours of the brain before motion correction. Our method automatically generated as a side-product a segmentation of the reconstructed fetal brain with a mean Dice score of 93%, which can be used for further processing.
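
    The localization step can be illustrated with OpenCV's MSER detector plus a size prior, as sketched below; the pixel-area bounds stand in for the gestational-age-derived brain-size prior, and the input file name is hypothetical.

        # MSER-based candidate detection on one 2D slice, filtered by a
        # hypothetical size prior. Requires opencv-python.
        import cv2

        slice_img = cv2.imread("fetal_stack_slice.png", cv2.IMREAD_GRAYSCALE)

        mser = cv2.MSER_create()
        regions, _ = mser.detectRegions(slice_img)

        MIN_AREA, MAX_AREA = 500, 20000  # made-up GA-dependent bounds
        candidates = [r for r in regions if MIN_AREA < len(r) < MAX_AREA]
        print(f"{len(candidates)} brain candidates out of {len(regions)} regions")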

  12. DCS-SVM: a novel semi-automated method for human brain MR image segmentation.

    PubMed

    Ahmadvand, Ali; Daliri, Mohammad Reza; Hajiali, Mohammadtaghi

    2016-12-08

    In this paper, a novel method is proposed that appropriately segments magnetic resonance (MR) brain images into three main tissues. It extends our previous work, in which we suggested a combination of multiple classifiers (CMC)-based method named dynamic classifier selection-dynamic local training local Tanimoto index (DCS-DLTLTI) for MR brain image segmentation into three main cerebral tissues. That idea is used here to develop a novel method that incorporates more complex and accurate classifiers, such as the support vector machine (SVM), into the ensemble. This is challenging because CMC-based methods are time consuming, especially on huge datasets like three-dimensional (3D) brain MR images. Moreover, although SVM is a powerful method for modeling datasets with complex feature spaces, it too has a huge computational cost on big datasets, especially those with strong interclass variability and more than two classes, such as 3D brain images; SVM therefore cannot be used directly within DCS-DLTLTI. We thus propose a novel approach named "DCS-SVM" that brings SVM into DCS-DLTLTI to improve the accuracy of the segmentation results. The proposed method is applied to the well-known datasets of the Internet Brain Segmentation Repository (IBSR), and promising results are obtained.

  13. An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data.

    PubMed

    Schirner, Michael; Rothmeier, Simon; Jirsa, Viktor K; McIntosh, Anthony Randal; Ritter, Petra

    2015-08-15

    Large amounts of multimodal neuroimaging data are acquired every year worldwide. In order to extract high-dimensional information for computational neuroscience applications, standardized data fusion and efficient reduction into integrative data structures are required. Such self-consistent multimodal data sets can be used for computational brain modeling to constrain models with individual measurable features of the brain, as done with The Virtual Brain (TVB). TVB is a simulation platform that uses empirical structural and functional data to build full brain models of individual humans. For convenient model construction, we developed a processing pipeline for structural, functional and diffusion-weighted magnetic resonance imaging (MRI) and optionally electroencephalography (EEG) data. The pipeline combines several state-of-the-art neuroinformatics tools to generate subject-specific cortical and subcortical parcellations, surface tessellations, structural and functional connectomes, lead field matrices, electrical source activity estimates, and region-wise aggregated blood oxygen level dependent (BOLD) functional MRI (fMRI) time-series. The output files of the pipeline can be directly uploaded to TVB to create and simulate individualized large-scale network models that incorporate intra- and intercortical interaction on the basis of cortical surface triangulations and white matter tractography. We detail the pitfalls of the individual processing streams and discuss ways of validation. With the pipeline we also introduce novel ways of estimating the transmission strengths of fiber tracts in whole-brain structural connectivity (SC) networks and compare the outcomes of different tractography or parcellation approaches. We tested the functionality of the pipeline on 50 multimodal data sets. In order to quantify the robustness of the connectome extraction part of the pipeline, we computed several metrics of its rescan reliability and compared them with those of other approaches.
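
    The connectome-aggregation step common to such pipelines can be sketched as counting streamline endpoints per pair of parcellation labels; the snippet below is a toy illustration with random endpoints, not the pipeline's actual code.

        # Toy region-wise structural connectome: count streamlines whose
        # endpoints fall in each pair of parcellation labels.
        import numpy as np

        n_regions = 5
        rng = np.random.default_rng(6)

        # Each streamline reduced to the parcel labels of its two endpoints.
        endpoints = rng.integers(0, n_regions, size=(10000, 2))

        sc = np.zeros((n_regions, n_regions))
        for a, b in endpoints:
            if a != b:               # ignore within-region streamlines
                sc[a, b] += 1
                sc[b, a] += 1

        sc /= sc.max()               # simple "transmission strength" scaling
        print(np.round(sc, 2))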

  14. Automated identification of brain tumors from single MR images based on segmentation with refined patient-specific priors

    PubMed Central

    Sanjuán, Ana; Price, Cathy J.; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K.; Geva, Sharon; Leff, Alex P.; Yousry, Tarek A.; Seghier, Mohamed L.

    2013-01-01

    Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit “extra prior” for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information of the tumor location and extent. We therefore propose that ALI is run iteratively so that the output of Step B is used as a patient-specific prior in Step A. We test this procedure on real T1-weighted images from 18 patients, and the results were validated in comparison to two independent observers' manual tracings. The automated procedure identified the tumors successfully with an excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic. PMID:24381535

  15. Automated identification of brain tumors from single MR images based on segmentation with refined patient-specific priors.

    PubMed

    Sanjuán, Ana; Price, Cathy J; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K; Geva, Sharon; Leff, Alex P; Yousry, Tarek A; Seghier, Mohamed L

    2013-01-01

    Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit "extra prior" for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information of the tumor location and extent. We therefore propose that ALI is run iteratively so that the output of Step B is used as a patient-specific prior in Step A. We test this procedure on real T1-weighted images from 18 patients, and the results were validated in comparison to two independent observers' manual tracings. The automated procedure identified the tumors successfully with an excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic.

  16. Brain tumor target volume determination for radiation therapy treatment planning through the use of automated MRI segmentation

    NASA Astrophysics Data System (ADS)

    Mazzara, Gloria Patrika

    Radiation therapy seeks to effectively irradiate the tumor cells while minimizing the dose to adjacent normal cells. Prior research found that the low success rates for treating brain tumors would be improved with higher radiation doses to the tumor area. This is feasible only if the target volume can be precisely identified. However, the definition of tumor volume is still based on time-intensive, highly subjective manual outlining by radiation oncologists. In this study the effectiveness of two automated Magnetic Resonance Imaging (MRI) segmentation methods, k-Nearest Neighbors (kNN) and Knowledge-Guided (KG), in determining the Gross Tumor Volume (GTV) of brain tumors for use in radiation therapy was assessed. Three criteria were applied: accuracy of the contours; quality of the resulting treatment plan in terms of dose to the tumor; and a novel treatment plan evaluation technique based on post-treatment images. The kNN method was able to segment all cases, while the KG method was limited to enhancing tumors and gliomas with clear enhancing edges. Various software applications were developed to create a closed, smooth contour that encompassed the tumor pixels from the segmentations and to integrate these results into the treatment planning software. A novel, probabilistic measurement of accuracy was introduced to compare the agreement of the segmentation methods with the weighted average physician volume. Both computer methods under-segmented the tumor volume when compared with the physicians but performed within the variability of manual contouring (28% +/- 12% inter-operator variability). Computer segmentations were modified vertically to compensate for their under-segmentation. When comparing radiation treatment plans designed from physician-defined tumor volumes with treatment plans developed from the modified segmentation results, the reference target volume was irradiated within the same level of conformity. The plans were also analyzed on the basis of post-treatment images.

  17. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues such as brain using magnetic resonance (MR) images, and colorization of the gray scale image, have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels of the source MR image and the provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image, as described. PMID:22479421
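
    A bare-bones fuzzy c-means loop on voxel intensities, sketched below in NumPy, conveys the clustering style of three-class (CSF/GM/WM) segmentation; the paper's method adds its own auto centroid selection and prior anatomical knowledge.

        # Bare fuzzy c-means on 1D intensities; synthetic three-class data.
        import numpy as np

        def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
            rng = np.random.default_rng(7)
            centers = rng.choice(x, size=c)
            for _ in range(iters):
                d = np.abs(x[:, None] - centers[None, :]) + 1e-9
                u = 1.0 / (d ** (2 / (m - 1)))          # membership weights
                u /= u.sum(axis=1, keepdims=True)
                centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
            return centers, u

        # Synthetic T2 intensities: three tissue classes.
        rng = np.random.default_rng(8)
        intensities = np.concatenate(
            [rng.normal(mu, 5, 2000) for mu in (60, 110, 160)])
        centers, memberships = fuzzy_cmeans(intensities)
        print(np.sort(np.round(centers, 1)))  # expect roughly [60, 110, 160]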

  18. Automated detection of brain atrophy patterns based on MRI for the prediction of Alzheimer's disease

    PubMed Central

    Plant, Claudia; Teipel, Stefan J.; Oswald, Annahita; Böhm, Christian; Meindl, Thomas; Mourao-Miranda, Janaina; Bokde, Arun W.; Hampel, Harald; Ewers, Michael

    2010-01-01

    Subjects with mild cognitive impairment (MCI) have an increased risk to develop Alzheimer's disease (AD). Voxel-based MRI studies have demonstrated that widely distributed cortical and subcortical brain areas show atrophic changes in MCI, preceding the onset of AD-type dementia. Here we developed a novel data mining framework in combination with three different classifiers including support vector machine (SVM), Bayes statistics, and voting feature intervals (VFI) to derive a quantitative index of pattern matching for the prediction of the conversion from MCI to AD. MRI was collected in 32 AD patients, 24 MCI subjects and 18 healthy controls (HC). Nine out of 24 MCI subjects converted to AD after an average follow-up interval of 2.5 years. Using feature selection algorithms, brain regions showing the highest accuracy for the discrimination between AD and HC were identified, reaching a classification accuracy of up to 92%. The extracted AD clusters were used as a search region to extract those brain areas that are predictive of conversion to AD within MCI subjects. The most predictive brain areas included the anterior cingulate gyrus and orbitofrontal cortex. The best prediction accuracy, which was cross-validated via train-and-test, was 75% for the prediction of the conversion from MCI to AD. The present results suggest that novel multivariate methods of pattern matching reach a clinically relevant accuracy for the a priori prediction of the progression from MCI to AD. PMID:19961938

  19. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification

    PubMed Central

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, the fast discrete wavelet transform (DWT), principal component analysis (PCA), and the least squares support vector machine (LS-SVM) are used as basic components. First, the fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensionality of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. Finally, an advanced classification technique based on the LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, the LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel, and k-fold stratified cross-validation is applied to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieved a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% at the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the
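
    As a rough, hedged sketch of the DWT-PCA-SVM pipeline described above, the snippet below wires approximation-band wavelet features into PCA and an RBF-kernel classifier with stratified k-fold cross-validation. A standard soft-margin SVC stands in for the paper's LS-SVM, and the images and labels are synthetic placeholders.

```python
# Hedged sketch of a DWT -> PCA -> RBF-kernel SVM pipeline; SVC is an
# assumption standing in for the LS-SVM used in the paper.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dwt_features(img, wavelet="haar", level=2):
    """Low-frequency sub-band of a 2-level 2D DWT as a feature vector."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

rng = np.random.default_rng(1)
images = rng.normal(size=(40, 64, 64))    # 40 synthetic "MR slices"
y = rng.integers(0, 2, size=40)           # toy normal/abnormal labels
X = np.array([dwt_features(im) for im in images])

clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", gamma="scale"))
print(cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5)).mean())
```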

  20. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, the fast discrete wavelet transform (DWT), principal component analysis (PCA), and the least squares support vector machine (LS-SVM) are used as basic components. First, the fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensionality of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. Finally, an advanced classification technique based on the LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, the LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel, and k-fold stratified cross-validation is applied to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieved a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% at the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the

  1. Injection parameters affect cell viability and implant volumes in automated cell delivery for the brain.

    PubMed

    Kondziolka, Douglas; Gobbel, Glenn T; Fellows-Mayle, Wendy; Chang, Yue-Fang; Uram, Martin

    2011-01-01

    The technique of central nervous system cell implantation can affect the outcome of preclinical or clinical studies. Our goal was to evaluate the impact of various injection parameters that may be of consequence during the delivery of solute-suspended cells. These parameters included (1) the type and concentration of cells used for implantation, (2) the rate at which cells are injected (flow rate), (3) the acceleration of the delivery device, (4) the period of time between cell loading and injection into the CNS (delay), and (5) the length and gauge of the needle used to deliver the cells. Neural progenitor cells (NPCs) and bone marrow stromal cells (BMSCs) were injected using an automated device. These parameters were assessed in relation to their effect on the volume of cells injected and on cell viability. Longer and thinner cannulae and higher cell concentrations were detrimental to cell delivery. Devices and techniques that optimize these parameters should be of benefit.

  2. Fully automated segmentation of the pons and midbrain using human T1 MR brain images.

    PubMed

    Nigro, Salvatore; Cerasa, Antonio; Zito, Giancarlo; Perrotta, Paolo; Chiaravalloti, Francesco; Donzuso, Giulia; Fera, Francesco; Bilotta, Eleonora; Pantano, Pietro; Quattrone, Aldo

    2014-01-01

    This paper describes a novel method to automatically segment the human brainstem into midbrain and pons, called LABS: Landmark-based Automated Brainstem Segmentation. LABS processes high-resolution structural magnetic resonance images (MRIs) according to a revised landmark-based approach integrated with a thresholding method, without manual interaction. This method was first tested on morphological T1-weighted MRIs of 30 healthy subjects. Its reliability was further confirmed by including neurological patients (with Alzheimer's disease) from the ADNI repository, in whom volumetric loss within the brainstem had previously been described. Segmentation accuracies were evaluated against expert-drawn manual delineations. To evaluate the quality of LABS segmentation we used volumetric, spatial overlap and distance-based metrics. The comparison of the quantitative measurements provided by LABS against manual segmentations revealed excellent results in healthy controls for both the midbrain (Dice measures higher than 0.9; volume ratio around 1; Hausdorff distance around 3) and the pons (Dice measures around 0.93; volume ratio ranging from 1.024 to 1.05; Hausdorff distance around 2). Similar performance was observed for AD patients for segmentation of the pons (Dice measures higher than 0.93; volume ratio ranging from 0.97 to 0.98; Hausdorff distance ranging from 1.07 to 1.33), while LABS performed less well for the midbrain (Dice measures ranging from 0.86 to 0.88; volume ratio around 0.95; Hausdorff distance ranging from 1.71 to 2.15). Our study represents the first attempt to validate a new fully automated method for in vivo segmentation of two anatomically complex brainstem subregions. We believe that our method may represent a useful tool for future applications in clinical practice.
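
    Two of the metrics quoted above, Dice and volume ratio, are simple to state; the sketch below computes them from binary masks. The toy masks are invented, and the Hausdorff distance, which needs surface point sets, is omitted here.

```python
# Minimal sketch: Dice coefficient and volume ratio between an automated
# mask and a manual reference mask (toy data, not study masks).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_ratio(auto, manual):
    return auto.sum() / float(manual.sum())

auto = np.zeros((10, 10, 10), bool); auto[2:8, 2:8, 2:8] = True
manual = np.zeros_like(auto);        manual[3:8, 2:8, 2:8] = True
print(dice(auto, manual), volume_ratio(auto, manual))
```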

  3. Fully Automated Segmentation of the Pons and Midbrain Using Human T1 MR Brain Images

    PubMed Central

    Nigro, Salvatore; Cerasa, Antonio; Zito, Giancarlo; Perrotta, Paolo; Chiaravalloti, Francesco; Donzuso, Giulia; Fera, Francesco; Bilotta, Eleonora; Pantano, Pietro; Quattrone, Aldo

    2014-01-01

    Purpose This paper describes a novel method to automatically segment the human brainstem into midbrain and pons, called LABS: Landmark-based Automated Brainstem Segmentation. LABS processes high-resolution structural magnetic resonance images (MRIs) according to a revised landmark-based approach integrated with a thresholding method, without manual interaction. Methods This method was first tested on morphological T1-weighted MRIs of 30 healthy subjects. Its reliability was further confirmed by including neurological patients (with Alzheimer's disease) from the ADNI repository, in whom volumetric loss within the brainstem had previously been described. Segmentation accuracies were evaluated against expert-drawn manual delineations. To evaluate the quality of LABS segmentation we used volumetric, spatial overlap and distance-based metrics. Results The comparison of the quantitative measurements provided by LABS against manual segmentations revealed excellent results in healthy controls for both the midbrain (Dice measures higher than 0.9; volume ratio around 1; Hausdorff distance around 3) and the pons (Dice measures around 0.93; volume ratio ranging from 1.024 to 1.05; Hausdorff distance around 2). Similar performance was observed for AD patients for segmentation of the pons (Dice measures higher than 0.93; volume ratio ranging from 0.97 to 0.98; Hausdorff distance ranging from 1.07 to 1.33), while LABS performed less well for the midbrain (Dice measures ranging from 0.86 to 0.88; volume ratio around 0.95; Hausdorff distance ranging from 1.71 to 2.15). Conclusions Our study represents the first attempt to validate a new fully automated method for in vivo segmentation of two anatomically complex brainstem subregions. We believe that our method may represent a useful tool for future applications in clinical practice. PMID:24489664

  4. Refining an Automated Transcranial Doppler System for the Detection of Vasospasm after Traumatic Brain Injury

    DTIC Science & Technology

    2014-09-01

    Subject terms: traumatic brain injury, ultrasound, transcranial Doppler, vasospasm. The report describes refinement of "Presto", an automated transcranial Doppler system that the developers have submitted to the FDA via a 510(k) for approval. The system is built on a novel, proprietary ultrasound platform and detects vasospasm within ultrasound-derived maps of blood flow speed captured by the device, providing a view of each of the major cerebral arteries.

  5. A Natural Language Processing-based Model to Automate MRI Brain Protocol Selection and Prioritization.

    PubMed

    Brown, Andrew D; Marotta, Thomas R

    2017-02-01

    Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm-random forest, support vector machine, or k-nearest neighbor-to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
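
    As a hedged illustration of one of the three classifier variants described above, the sketch below trains a random forest on TF-IDF features from free-text indications to predict a protocol label. The tiny corpus, the labels, and the featurization are illustrative assumptions; the paper's exact feature engineering is not reproduced.

```python
# Hedged sketch: free-text clinical indications -> TF-IDF -> random forest.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

indications = [
    "new onset seizures, rule out mass",            # invented examples
    "chronic headache, worsening over months",
    "follow-up of known glioma post resection",
    "acute stroke symptoms, left-sided weakness",
]
protocols = ["tumor", "headache", "tumor", "stroke"]  # hypothetical labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(indications, protocols)
print(model.predict(["right-sided weakness, suspected stroke"]))
```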

  6. Automated detection and quantification of residual brain tumor using an interactive computer-aided detection scheme

    NASA Astrophysics Data System (ADS)

    Gaffney, Kevin P.; Aghaei, Faranak; Battiste, James; Zheng, Bin

    2017-03-01

    Detection of residual brain tumor is important for evaluating the efficacy of brain cancer surgery, determining the optimal strategy for further radiation therapy if needed, and assessing the ultimate prognosis of the patient. Brain MR is a commonly used imaging modality for this task. In order to distinguish between residual tumor and surgery-induced scar tissue, two sets of MRI scans are conducted pre- and post-gadolinium contrast injection; the residual tumors are enhanced only in the post-contrast-injection images. However, subjectively reading and quantifying this type of brain MR image makes it difficult to detect real residual tumor regions and measure the total volume of the residual tumor. To help address this clinical difficulty, we developed and tested a new interactive computer-aided detection scheme, which consists of three consecutive image processing steps: 1) segmentation of the intracranial region, 2) image registration and subtraction, and 3) tumor segmentation and refinement. The scheme also includes a specially designed and implemented graphical user interface (GUI) platform. When using this scheme, the two sets of pre- and post-contrast-injection images are first automatically processed to detect and quantify residual tumor volume. A user can then visually examine the segmentation results and conveniently guide the scheme to correct any detection or segmentation errors if needed. The scheme has been repeatedly tested using five cases. Given the high performance and robustness observed in these tests, the scheme is now ready for clinical studies to help clinicians investigate the association between this quantitative image marker and patient outcomes.
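
    The registration-and-subtraction step lends itself to a compact illustration. Below is a minimal sketch that assumes the pre- and post-contrast volumes are already co-registered and simply thresholds the intensity difference to estimate a residual volume; the arrays, threshold, and voxel size are synthetic placeholders, not the scheme's actual processing.

```python
# Hedged sketch of subtraction-based residual-tumor quantification,
# assuming co-registered pre/post-contrast volumes (synthetic data).
import numpy as np

rng = np.random.default_rng(9)
pre = rng.normal(100.0, 5.0, size=(64, 64, 32))
post = pre + rng.normal(0.0, 5.0, size=pre.shape)
post[30:36, 30:36, 10:14] += 40.0        # synthetic enhancing residual

difference = post - pre                  # registration assumed done
residual_mask = difference > 20.0        # illustrative enhancement cutoff
voxel_volume_mm3 = 1.0                   # assumed 1 mm isotropic voxels
print(residual_mask.sum() * voxel_volume_mm3, "mm^3 residual volume")
```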

  7. Automated Neuropsychological Assessment Metrics (ANAM) Traumatic Brain Injury (TBI): Human Factors Assessment

    DTIC Science & Technology

    2011-07-01

    ARL-TN-0440, July 2011. Authors include Valerie J. Rice, Cory Overby, Angela Jeter, Petra E. Alfred, Gary L. Boykin, Carita DeVilbiss, and Raymond Bateman.

  8. An automated and fast approach to detect single-trial visual evoked potentials with application to brain-computer interface.

    PubMed

    Tu, Yiheng; Hung, Yeung Sam; Hu, Li; Huang, Gan; Hu, Yong; Zhang, Zhiguo

    2014-12-01

    This study aims (1) to develop an automated and fast approach for detecting visual evoked potentials (VEPs) in single trials and (2) to apply the single-trial VEP detection approach in designing a real-time, high-performance brain-computer interface (BCI) system. The single-trial VEP detection approach uses common spatial pattern (CSP) as a spatial filter and wavelet filtering (WF) as a temporal-spectral filter to jointly enhance the signal-to-noise ratio (SNR) of single-trial VEPs. The performance of the joint spatial-temporal-spectral filtering approach was assessed in a four-command VEP-based BCI system. The offline classification accuracy of the BCI system was significantly improved from 67.6±12.5% (raw data) to 97.3±2.1% (data filtered by CSP and WF). The proposed approach was successfully implemented in an online BCI system, in which subjects could make 20 decisions per minute with a classification accuracy of 90%. The proposed single-trial detection approach is able to obtain robust and reliable VEP waveforms in an automatic and fast way, and it is applicable to VEP-based online BCI systems. This approach provides a real-time, automated solution for single-trial detection of evoked potentials or event-related potentials (EPs/ERPs) in various paradigms, which could benefit many applications such as BCI and intraoperative monitoring. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
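
    The CSP step above has a standard two-class closed form; a hedged sketch is given below, using the textbook generalized eigendecomposition of the class covariance matrices rather than the authors' exact implementation. Trial arrays are synthetic and the wavelet-filtering stage is omitted.

```python
# Hedged sketch: common spatial pattern (CSP) filters from two classes of
# EEG trials, each of shape (n_trials, n_channels, n_samples).
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    Ca = np.mean([np.cov(t) for t in trials_a], axis=0)
    Cb = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Solve Ca w = lambda (Ca + Cb) w; eigenvalues come back ascending.
    _, vecs = eigh(Ca, Ca + Cb)
    # Filters from both ends of the spectrum discriminate the two classes.
    return np.concatenate([vecs[:, :n_filters], vecs[:, -n_filters:]], axis=1).T

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 8, 256))        # 20 trials, 8 channels, 256 samples
B = rng.normal(size=(20, 8, 256))
W = csp_filters(A, B)
filtered_trial = W @ A[0]                # spatially filtered single trial
```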

  9. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images

    PubMed Central

    Serag, Ahmed; Macnaught, Gillian; Denison, Fiona C.; Reynolds, Rebecca M.; Semple, Scott I.; Boardman, James P.

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles to expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of the fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of each containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved complete brain localization in 96% of cases on a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest-based regression methods, and the proposed method showed superior performance. We also demonstrated the application of the proposed method to the optimization of fetal motion correction and showed how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development. PMID:28251155
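
    The sliding-window search described above reduces to scoring every candidate sub-volume and keeping the argmax. The sketch below shows that skeleton only; a mean-intensity score is an assumed placeholder for the paper's 3D HOG descriptor plus trained classifier.

```python
# Hedged sketch of exhaustive sliding-window localization in a 3D volume.
import numpy as np

def best_window(volume, win=(16, 16, 8), stride=4):
    best_score, best_corner = -np.inf, None
    zs, ys, xs = volume.shape
    for z in range(0, zs - win[0] + 1, stride):
        for y in range(0, ys - win[1] + 1, stride):
            for x in range(0, xs - win[2] + 1, stride):
                patch = volume[z:z + win[0], y:y + win[1], x:x + win[2]]
                score = patch.mean()     # placeholder for 3D HOG + classifier
                if score > best_score:
                    best_score, best_corner = score, (z, y, x)
    return best_corner, best_score

vol = np.random.default_rng(3).normal(size=(48, 48, 24))
print(best_window(vol))
```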

  10. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images.

    PubMed

    Serag, Ahmed; Macnaught, Gillian; Denison, Fiona C; Reynolds, Rebecca M; Semple, Scott I; Boardman, James P

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles to expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of the fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of each containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved complete brain localization in 96% of cases on a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest-based regression methods, and the proposed method showed superior performance. We also demonstrated the application of the proposed method to the optimization of fetal motion correction and showed how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.

  11. Semi-automated image processing system for micro- to macro-scale analysis of immunohistopathology: application to ischemic brain tissue.

    PubMed

    Wu, Chunyan; Zhao, Weizhao; Lin, Baowan; Ginsberg, Myron D

    2005-04-01

    Immunochemical staining techniques are commonly used to assess neuronal, astrocytic and microglial alterations in experimental neuroscience research, and in particular, are applied to tissues from animals subjected to ischemic stroke. Immunoreactivity of brain sections can be measured from digitized immunohistology slides so that quantitative assessment can be carried out by computer-assisted analysis. Conventional methods of analyzing immunohistology are based on image classification techniques applied to a specific anatomic location at high magnification. Such micro-scale localized image analysis limits one for further correlative studies with other imaging modalities on whole brain sections, which are of particular interest in experimental stroke research. This report presents a semi-automated image analysis method that performs convolution-based image classification on micro-scale images, extracts numerical data representing positive immunoreactivity from the processed micro-scale images and creates a corresponding quantitative macro-scale image. The present method utilizes several image-processing techniques to cope with variances in intensity distribution, as well as artifacts caused by light scattering or heterogeneity of antigen expression, which are commonly encountered in immunohistology. Micro-scale images are composed by a tiling function in a mosaic manner. Image classification is accomplished by the K-means clustering method at the relatively low-magnification micro-scale level in order to increase computation efficiency. The quantitative macro-scale image is suitable for correlative analysis with other imaging modalities. This method was applied to different immunostaining antibodies, such as endothelial barrier antigen (EBA), lectin, and glial fibrillary acidic protein (GFAP), on histology slides from animals subjected to middle cerebral artery occlusion by the intraluminal suture method. Reliability tests show that the results obtained from

  12. Automated Differential Diagnosis of Early Parkinsonism Using Metabolic Brain Networks: A Validation Study.

    PubMed

    Tripathi, Madhavi; Tang, Chris C; Feigin, Andrew; De Lucia, Ivana; Nazem, Amir; Dhawan, Vijay; Eidelberg, David

    2016-01-01

    indeterminate APS. Automated pattern-based image classification can improve the diagnostic accuracy in patients with parkinsonism, even at early disease stages. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  13. Automated Protein Localization of Blood Brain Barrier Vasculature in Brightfield IHC Images

    PubMed Central

    Keenan, Brendan T.; Pack, Allan I.; Shackleford, James A.

    2016-01-01

    In this paper, we present an objective method for localization of proteins in blood brain barrier (BBB) vasculature using standard immunohistochemistry (IHC) techniques and bright-field microscopy. Images from the hippocampal region at the BBB are acquired using bright-field microscopy and subjected to our segmentation pipeline which is designed to automatically identify and segment microvessels containing the protein glucose transporter 1 (GLUT1). Gabor filtering and k-means clustering are employed to isolate potential vascular structures within cryosectioned slabs of the hippocampus, which are subsequently subjected to feature extraction followed by classification via decision forest. The false positive rate (FPR) of microvessel classification is characterized using synthetic and non-synthetic IHC image data for image entropies ranging between 3 and 8 bits. The average FPR for synthetic and non-synthetic IHC image data was found to be 5.48% and 5.04%, respectively. PMID:26828723

  14. Automated segmentation of the corpus callosum in midsagittal brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Lee, Chulhee; Huh, Shin; Ketter, Terence A.; Unser, Michael A.

    2000-04-01

    We propose a new algorithm to find the corpus callosum automatically from midsagittal brain MR (magnetic resonance) images using the statistical characteristics and shape information of the corpus callosum. We first extract regions satisfying the statistical characteristics (gray level distributions) of the corpus callosum that have relatively high intensity values. Then we try to find a region matching the shape information of the corpus callosum. In order to match the shape information, we propose a new directed window region growing algorithm instead of using conventional contour matching. An innovative feature of the algorithm is that we adaptively relax the statistical requirement until we find a region matching the shape information. After the initial segmentation, a directed border path pruning algorithm is proposed in order to remove some undesired artifacts, especially on the top of the corpus callosum. The proposed algorithm was applied to over 120 images and provided promising results.
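
    Plain intensity-based region growing, which the directed-window variant above builds on, is compact enough to sketch. The snippet below grows a 4-connected 2D region from a seed while intensities stay within a band; the shape matching, window direction, and adaptive relaxation of the paper are not reproduced, and the seed and thresholds are toy values.

```python
# Hedged sketch: basic intensity-band region growing (2D, 4-connected).
import numpy as np
from collections import deque

def region_grow(image, seed, low, high):
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if grown[y, x] or not (low <= image[y, x] <= high):
            continue
        grown[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                queue.append((ny, nx))
    return grown

img = np.random.default_rng(4).normal(100.0, 10.0, size=(64, 64))
mask = region_grow(img, seed=(32, 32), low=90.0, high=120.0)
```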

  15. Automated MR image processing and analysis of malignant brain tumors: enabling technology for data mining

    NASA Astrophysics Data System (ADS)

    Dube, Shishir; Corso, Jason J.; Cloughesy, Timothy F.; El-Saden, Suzie; Yuille, Alan L.; Sinha, Usha

    2007-11-01

    Glioblastoma multiforme (GBM) is a malignant brain cancer with poor patient prognosis (i.e. time to survival, time to tumor progression). A number of clinical trials are underway evaluating novel therapeutic strategies and magnetic resonance imaging is the most routinely performed procedure for accurate serial monitoring of patients. The electronic availability of the comprehensive data collected as part of the clinical trials provides an unprecedented opportunity to discover new relationships in complex diseases such as GBM. However, imaging data, which is the most accurate non-invasive assessment of GBMs, is not directly amenable for data mining. The focus of this chapter is on image analysis techniques including image spatial and intensity standardization, novel methods for robust tumor and edema segmentation, and quantification of tumor intensity, texture, and shape characteristics. The chapter concludes with an application of discovering the relationship between these quantitative image-derived features and time to survival in GBM patients; the data is part of a comprehensive large electronically accessible archive at UCLA (UCLA Neuro-oncology database).

  16. Automated assessment of symptom severity changes during deep brain stimulation (DBS) therapy for Parkinson's disease.

    PubMed

    Angeles, Paolo; Tai, Yen; Pavese, Nicola; Wilson, Samuel; Vaidyanathan, Ravi

    2017-07-01

    Deep brain stimulation (DBS) is currently used as a treatment for the symptoms of Parkinson's disease (PD). Tracking symptom severity progression and deciding the optimal stimulation parameters for people with PD is extremely difficult. This study presents a sensor system that can quantify the three cardinal motor symptoms of PD: rigidity, bradykinesia and tremor. The first phase of the study assesses whether data recorded from the system during physical examinations can be correlated with clinicians' severity scores using supervised machine learning (ML) models. The second phase assesses whether the sensor system can distinguish differences before and after DBS optimisation by a clinician when Unified Parkinson's Disease Rating Scale (UPDRS) scores did not change. An average accuracy of 90.9% was achieved by the best ML models in the first phase, when correlating sensor data with clinicians' scores. In the second phase of the study, the sensor system was able to pick up discernible differences before and after DBS optimisation sessions in instances where UPDRS scores did not change.

  17. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interference may significantly degrade the performance of ICA-based brain-computer interface (BCI) systems. In this study, we propose a new algorithmic framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimization strategy can effectively improve the stability, practicality and classification performance of ICA-based MIBCI. The study also revealed that rational use of the ICA method may be crucial to building a practical ICA-based MIBCI system.

  18. Brain parenchymal fraction in an age-stratified healthy population - determined by MRI using manual segmentation and three automated segmentation methods.

    PubMed

    Vågberg, Mattias; Ambarki, Khalid; Lindqvist, Thomas; Birgander, Richard; Svenningsson, Anders

    2016-12-01

    Brain atrophy is a prominent feature in many neurodegenerative diseases, such as multiple sclerosis, but age-related decrease of brain volume occurs regardless of pathological neurodegeneration. Changes in brain volume can be described by use of the brain parenchymal fraction (BPF), most often defined as the ratio of total brain parenchyma to total intracranial space. The BPF is of interest both in research and in clinical practice. To be able to properly interpret this variable, the normal range of BPF must be known. The objective of this study is to present normal values for BPF, stratified by age, and compare manual BPF measurement to three automated methods. The BPFs of 106 healthy individuals aged 21 to 85 years were determined by the automated segmentation methods SyMap, VBM8 and SPM12. In a subgroup of 54 randomly selected individuals, the BPF was also determined by manual segmentation. The median (IQR) BPFs of the whole study population were 0.857 (0.064), 0.819 (0.028) and 0.784 (0.073) determined by SyMap, VBM8 and SPM12, respectively. The BPF decreased with increasing age. The correlation coefficients between manual segmentation and SyMap, VBM8 and SPM12 were 0.93 (P<0.001), 0.77 (P<0.001) and 0.56 (P<0.001), respectively. There was a clear relationship between increasing age and decreasing BPF. Knowledge of the range of normal BPF in relation to age group will help in the interpretation of BPF data. The automated segmentation methods displayed varying degrees of similarity to the manual reference, with SyMap being the most similar. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
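
    The BPF definition above is a one-line ratio once segmentation volumes are in hand; the sketch below computes it from hypothetical voxel counts.

```python
# Minimal sketch of the brain parenchymal fraction: parenchyma (GM + WM)
# divided by total intracranial volume (GM + WM + CSF). Counts are invented.
def brain_parenchymal_fraction(gm_voxels, wm_voxels, csf_voxels):
    parenchyma = gm_voxels + wm_voxels
    return parenchyma / (parenchyma + csf_voxels)

print(brain_parenchymal_fraction(gm_voxels=600_000,
                                 wm_voxels=500_000,
                                 csf_voxels=250_000))   # about 0.815
```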

  19. An automated approach towards detecting complex behaviours in deep brain oscillations.

    PubMed

    Mace, Michael; Yousif, Nada; Naushahi, Mohammad; Abdullah-Al-Mamun, Khondaker; Wang, Shouyan; Nandi, Dipankar; Vaidyanathan, Ravi

    2014-03-15

    Extracting event-related potentials (ERPs) from neurological rhythms is of fundamental importance in neuroscience research. Standard ERP techniques typically require the associated ERP waveform to have low variance, be shape and latency invariant and require many repeated trials. Additionally, the non-ERP part of the signal needs to be sampled from an uncorrelated Gaussian process. This limits methods of analysis to quantifying simple behaviours and movements only when multi-trial data-sets are available. We introduce a method for automatically detecting events associated with complex or large-scale behaviours, where the ERP need not conform to the aforementioned requirements. The algorithm is based on the calculation of a detection contour and adaptive threshold. These are combined using logical operations to produce a binary signal indicating the presence (or absence) of an event with the associated detection parameters tuned using a multi-objective genetic algorithm. To validate the proposed methodology, deep brain signals were recorded from implanted electrodes in patients with Parkinson's disease as they participated in a large movement-based behavioural paradigm. The experiment involved bilateral recordings of local field potentials from the sub-thalamic nucleus (STN) and pedunculopontine nucleus (PPN) during an orientation task. After tuning, the algorithm is able to extract events achieving training set sensitivities and specificities of [87.5 ± 6.5, 76.7 ± 12.8, 90.0 ± 4.1] and [92.6 ± 6.3, 86.0 ± 9.0, 29.8 ± 12.3] (mean ± 1 std) for the three subjects, averaged across the four neural sites. Furthermore, the methodology has the potential for utility in real-time applications as only a single-trial ERP is required.
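
    The detection-contour-plus-adaptive-threshold idea can be sketched compactly: smooth an envelope of the signal, compare it with a slowly varying baseline, and emit a binary event indicator. The window lengths and scale factor below are illustrative, not the tuned values produced by the paper's multi-objective genetic algorithm.

```python
# Hedged sketch: envelope vs. adaptive (running-mean) threshold detection.
import numpy as np

def detect_events(signal, env_win=32, thr_win=512, k=2.0):
    env = np.convolve(np.abs(signal), np.ones(env_win) / env_win, mode="same")
    baseline = np.convolve(env, np.ones(thr_win) / thr_win, mode="same")
    return env > k * baseline              # binary presence/absence signal

rng = np.random.default_rng(5)
lfp = rng.normal(size=4096)
lfp[2000:2100] += 5.0                      # inject a synthetic "event"
events = detect_events(lfp)
print(events[2000:2100].mean())            # fraction of the event flagged
```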

  20. Rat brain digital stereotaxic white matter atlas with fine tract delineation in Paxinos space and its automated applications in DTI data analysis.

    PubMed

    Liang, Shengxiang; Wu, Shang; Huang, Qi; Duan, Shaofeng; Liu, Hua; Li, Yuxiao; Zhao, Shujun; Nie, Binbin; Shan, Baoci

    2017-11-01

    To automatically analyze diffusion tensor images of the rat brain via both voxel-based and ROI-based approaches, we constructed a new white matter atlas of the rat brain with fine tract delineation in the Paxinos and Watson space. Unlike previous studies, we constructed a digital atlas image from the latest edition of the Paxinos and Watson atlas. This atlas contains 111 carefully delineated white matter fibers. A white matter network of the rat brain based on anatomy was constructed by locating the intersections of all these tracts and recording the nuclei on the pathway of each white matter tract. Moreover, a compatible rat brain template from DTI images was created and standardized into the atlas space. To evaluate the automated application of the atlas in DTI data analysis, a group of rats with right-side middle cerebral artery occlusion (MCAO) and a group without were enrolled in this study. The voxel-based analysis shows that the brain region with significant signal decline in the MCAO rats was consistent with the occlusion position. We constructed a stereotaxic white matter atlas of the rat brain with fine tract delineation and a compatible template for the analysis of DTI images of the rat brain. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Differentiating shunt-responsive normal pressure hydrocephalus from Alzheimer disease and normal aging: pilot study using automated MRI brain tissue segmentation.

    PubMed

    Serulle, Yafell; Rusinek, Henry; Kirov, Ivan I; Milch, Hannah; Fieremans, Els; Baxter, Alexander B; McMenamy, John; Jain, Rajan; Wisoff, Jeffrey; Golomb, James; Gonen, Oded; George, Ajax E

    2014-10-01

    Evidence suggests that normal pressure hydrocephalus (NPH) is underdiagnosed in day-to-day radiologic practice, and differentiating NPH from cerebral atrophy due to other neurodegenerative diseases and normal aging remains a challenge. To better characterize NPH, we test the hypothesis that a prediction model based on automated MRI brain tissue segmentation can help differentiate shunt-responsive NPH patients from those with cerebral atrophy due to Alzheimer disease (AD) and normal aging. Brain segmentation into gray and white matter (GM, WM) and intracranial cerebrospinal fluid was derived from pre-shunt T1-weighted MRI of 15 shunt-responsive NPH patients (9 men, 72.6 ± 8.0 years old), 17 AD patients (10 men, 72.1 ± 11.0 years old) chosen as representative of cerebral atrophy in this age group, and 18 matched healthy elderly controls (HC, 7 men, 69.7 ± 7.0 years old). A multinomial prediction model was generated based on brain tissue volume distributions. A GM decrease of 33% relative to HC characterized AD (P < 0.005). High preoperative ventricular volume and near-normal GM volume characterized NPH. A multinomial regression model based on gender, GM volume and ventricular volume had 96.3% accuracy in differentiating NPH from AD and HC. In conclusion, automated MRI brain tissue segmentation differentiates shunt-responsive NPH with high accuracy from atrophy due to AD and normal aging. This method may improve the diagnosis of NPH and our ability to distinguish normal from pathologic aging.
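
    As a hedged sketch of the model class above (a multinomial model over gender, GM volume, and ventricular volume), the snippet below fits scikit-learn's multinomial logistic regression to fabricated feature values; none of these numbers are study data.

```python
# Hedged sketch: multinomial logistic regression over tissue volumes
# (NPH vs. AD vs. HC); features and labels are fabricated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
# columns: gender (0/1), gray matter volume (L), ventricular volume (L)
X = np.column_stack([rng.integers(0, 2, 50),
                     rng.normal(0.60, 0.10, 50),
                     rng.normal(0.05, 0.02, 50)])
y = rng.choice(["NPH", "AD", "HC"], size=50)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict([[1, 0.62, 0.09]]))
```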

  2. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    PubMed Central

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries. PMID:24808857

  3. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury.

    PubMed

    Fong, Kenneth N K; Chow, Kathy Y Y; Chan, Bianca C H; Lam, Kino C K; Lee, Jeff C K; Li, Teresa H Y; Yan, Elaine W H; Wong, Asta T Y

    2010-04-30

    This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. A rehabilitation hospital and university-based teaching facilities were used as the setting. A total of 24 persons in the community with acquired brain injury (ABI)--14 in Part I and 10 in Part II--made up the participants in the study. In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group. We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real-life practice in persons with ABI.

  4. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles.

    PubMed

    Barker, Jocelyn; Hoogi, Assaf; Depeursinge, Adrien; Rubin, Daniel L

    2016-05-01

    Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p < 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p < 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically

  5. Automated method to compute Evans index for diagnosis of idiopathic normal pressure hydrocephalus on brain CT images

    NASA Astrophysics Data System (ADS)

    Takahashi, Noriyuki; Kinoshita, Toshibumi; Ohmura, Tomomi; Matsuyama, Eri; Toyoshima, Hideto

    2017-03-01

    The early diagnosis of idiopathic normal pressure hydrocephalus (iNPH), considered a treatable dementia, is important. iNPH causes enlargement of the lateral ventricles (LVs). The degree of LV enlargement on CT or MR images is evaluated using a diagnostic imaging criterion, the Evans index, defined as the ratio of the maximal width of the frontal horns (FH) of the LVs to the maximal width of the inner skull (IS). The Evans index is the most commonly used parameter for the evaluation of ventricular enlargement. However, manual measurement of the Evans index is a time-consuming process. In this study, we present an automated method to compute the Evans index on brain CT images. The algorithm consists of five major steps: standardization of the CT data to an atlas, extraction of the FH and IS regions, a search for the outermost points of the bilateral FH regions, determination of the maximal widths of both the FH and the IS, and calculation of the Evans index. Standardization to the atlas was performed using linear affine transformation and non-linear warping techniques. The FH regions were segmented using a three-dimensional region growing technique. This scheme was applied to CT scans from 44 subjects, including 13 iNPH patients. The average difference in the Evans index between the proposed method and manual measurement was 0.01 (1.6%), and the correlation coefficient between these measurements was 0.98. This computerized method may therefore have the potential to accurately compute the Evans index for the diagnosis of iNPH on CT images.
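
    The Evans index itself is a simple ratio once the two masks are available; the sketch below computes it from synthetic binary masks, taking the maximal left-right extent of each mask as its width. The mask construction and geometry are illustrative only.

```python
# Minimal sketch of the Evans index: maximal frontal-horn width divided by
# maximal inner-skull width, from binary masks (toy masks below).
import numpy as np

def max_width(mask):
    cols = np.where(mask.any(axis=0))[0]   # occupied columns (left-right)
    return cols.max() - cols.min() + 1 if cols.size else 0

def evans_index(frontal_horns, inner_skull):
    return max_width(frontal_horns) / max_width(inner_skull)

fh = np.zeros((128, 128), bool); fh[40:60, 50:85] = True
skull = np.zeros_like(fh);       skull[10:118, 14:114] = True
print(evans_index(fh, skull))              # 35 / 100 = 0.35
```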

  6. Identifying relevant biomarkers of brain injury from structural MRI: Validation using automated approaches in children with unilateral cerebral palsy.

    PubMed

    Pagnozzi, Alex M; Dowson, Nicholas; Doecke, James; Fiori, Simona; Bradley, Andrew P; Boyd, Roslyn N; Rose, Stephen

    2017-01-01

    Previous studies have proposed that the early elucidation of brain injury from structural Magnetic Resonance Images (sMRI) is critical for the clinical assessment of children with cerebral palsy (CP). Although distinct aetiologies, including cortical maldevelopments, white and grey matter lesions and ventricular enlargement, have been categorised, these injuries are commonly only assessed in a qualitative fashion. As a result, sMRI remains relatively underexploited for clinical assessments, despite its widespread use. In this study, several automated and validated techniques to automatically quantify these three classes of injury were generated in a large cohort of children (n = 139) aged 5-17, including 95 children diagnosed with unilateral CP. Using a feature selection approach on a training data set (n = 97) to find severity of injury biomarkers predictive of clinical function (motor, cognitive, communicative and visual function), cortical shape and regional lesion burden were most often chosen associated with clinical function. Validating the best models on the unseen test data (n = 42), correlation values ranged between 0.545 and 0.795 (p<0.008), indicating significant associations with clinical function. The measured prevalence of injury, including ventricular enlargement (70%), white and grey matter lesions (55%) and cortical malformations (30%), were similar to the prevalence observed in other cohorts of children with unilateral CP. These findings support the early characterisation of injury from sMRI into previously defined aetiologies as part of standard clinical assessment. Furthermore, the strong and significant association between quantifications of injury observed on structural MRI and multiple clinical scores accord with empirically established structure-function relationships.

  7. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  8. A Method for Automated Classification of Parkinson’s Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI

    PubMed Central

    Banerjee, Monami; Okun, Michael S.; Vaillancourt, David E.; Vemuri, Baba C.

    2016-01-01

    Parkinson’s disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques for the detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing the progression of many neurological diseases, including PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may possibly be applied to larger population sizes and other parkinsonian syndromes in the near future.

  9. Automated voxel classification used with atlas-guided diffuse optical tomography for assessment of functional brain networks in young and older adults.

    PubMed

    Li, Lin; Cazzell, Mary; Babawale, Olajide; Liu, Hanli

    2016-10-01

    Atlas-guided diffuse optical tomography (atlas-DOT) is a computational means to image changes in cortical hemodynamic signals during human brain activities. Graph theory analysis (GTA) is a network analysis tool commonly used in functional neuroimaging to study brain networks. Atlas-DOT has not been analyzed with GTA to derive large-scale brain connectivity/networks based on near-infrared spectroscopy (NIRS) measurements. We introduced an automated voxel classification (AVC) method that facilitated the use of GTA with atlas-DOT images by grouping unequal-sized finite element voxels into anatomically meaningful regions of interest within the human brain. The overall approach included volume segmentation, AVC, and cross-correlation. To demonstrate the usefulness of AVC, we applied reproducibility analysis to resting-state functional connectivity measurements conducted from 15 young adults in a two-week period. We also quantified and compared changes in several brain network metrics between young and older adults, which were in agreement with those reported by a previous positron emission tomography study. Overall, this study demonstrated that AVC is a useful means for facilitating integration or combination of atlas-DOT with GTA and thus for quantifying NIRS-based, voxel-wise resting-state functional brain networks.
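
    The graph-theory step generalizes across such studies: build an ROI-by-ROI correlation matrix, threshold it into a graph, and read off network metrics. The sketch below does exactly that with synthetic time courses; the threshold and metrics are common choices, not necessarily the ones used in this paper.

```python
# Hedged sketch: cross-correlation of ROI time courses -> binary graph ->
# network metrics via networkx (synthetic data, illustrative threshold).
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
roi_timecourses = rng.normal(size=(20, 300))   # 20 ROIs, 300 time points
corr = np.corrcoef(roi_timecourses)

adjacency = (np.abs(corr) > 0.3) & ~np.eye(20, dtype=bool)
G = nx.from_numpy_array(adjacency.astype(int))

print("mean clustering:", nx.average_clustering(G))
print("global efficiency:", nx.global_efficiency(G))
```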

  10. Automated quantification of 18F-flutemetamol PET activity for categorizing scans as negative or positive for brain amyloid: concordance with visual image reads.

    PubMed

    Thurfjell, Lennart; Lilja, Johan; Lundqvist, Roger; Buckley, Chris; Smith, Adrian; Vandenberghe, Rik; Sherwin, Paul

    2014-10-01

    Clinical trials of the PET amyloid imaging agent (18)F-flutemetamol have used visual assessment to classify PET scans as negative or positive for brain amyloid. However, quantification provides additional information about regional and global tracer uptake and may have utility for image assessment over time and across different centers. Using postmortem brain neuritic plaque density data as a truth standard to derive a standardized uptake value ratio (SUVR) threshold, we assessed a fully automated quantification method comparing visual and quantitative scan categorizations. We also compared the histopathology-derived SUVR threshold with one derived from healthy controls. Data from 345 consenting subjects enrolled in 8 prior clinical trials of (18)F-flutemetamol injection were used. We grouped subjects into 3 cohorts: an autopsy cohort (n = 68) comprising terminally ill patients with postmortem confirmation of brain amyloid status; a test cohort (n = 172) comprising 33 patients with clinically probable Alzheimer disease, 80 patients with mild cognitive impairment, and 59 healthy volunteers; and a healthy cohort of 105 volunteers, used to define a reference range for SUVR. Visual image categorizations for comparison were from a previous study. A fully automated PET-only quantification method was used to compute regional neocortical SUVRs that were combined into a single composite SUVR. An SUVR threshold for classifying scans as positive or negative was derived by ranking the PET scans from the autopsy cohort based on their composite SUVR and comparing data with the standard of truth based on postmortem brain amyloid status for subjects in the autopsy cohort. The derived threshold was used to categorize the 172 scans in the test cohort as negative or positive, and results were compared with categorization using visual assessment. Different reference and composite region definitions were assessed. Threshold levels were also compared with corresponding thresholds
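
    The composite SUVR computation described above reduces to region means and a ratio; the sketch below shows that arithmetic on synthetic data. The masks, values, and positivity cutoff are illustrative assumptions, not the study's derived threshold.

```python
# Hedged sketch: composite SUVR = mean uptake over target regions divided
# by mean uptake in a reference region, then thresholded (toy data).
import numpy as np

def composite_suvr(pet, target_masks, reference_mask):
    target_mean = np.mean([pet[m].mean() for m in target_masks])
    return target_mean / pet[reference_mask].mean()

rng = np.random.default_rng(8)
pet = rng.normal(1.0, 0.1, size=(32, 32, 32))
targets = []
for z, y, x in ((5, 5, 5), (15, 5, 5), (5, 15, 5)):
    m = np.zeros(pet.shape, bool)
    m[z:z + 5, y:y + 5, x:x + 5] = True
    targets.append(m)
reference = np.zeros(pet.shape, bool); reference[25:30, 25:30, 25:30] = True

suvr = composite_suvr(pet, targets, reference)
print("positive" if suvr > 1.4 else "negative")   # hypothetical cutoff
```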

  11. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury

    PubMed Central

    2010-01-01

    Objective This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Design Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. Setting A rehabilitation hospital and university-based teaching facilities were used as the setting. Participants A total of 24 persons in the community with acquired brain injury (ABI) - 14 in Part I and 10 in Part II - made up the participants in the study. Interventions In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Outcome Measures Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. Results The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group. Conclusion We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real-life practice in persons with ABI.

  12. Accuracy and Reliability of Automated Gray Matter Segmentation Pathways on Real and Simulated Structural Magnetic Resonance Images of the Human Brain

    PubMed Central

    Eggert, Lucas D.; Sommer, Jens; Jansen, Andreas; Kircher, Tilo; Konrad, Carsten

    2012-01-01

    Automated gray matter segmentation of magnetic resonance imaging data is essential for morphometric analyses of the brain, particularly when large sample sizes are investigated. However, although detection of small structural brain differences may fundamentally depend on the method used, both accuracy and reliability of different automated segmentation algorithms have rarely been compared. Here, performance of the segmentation algorithms provided by SPM8, VBM8, FSL and FreeSurfer was quantified on simulated and real magnetic resonance imaging data. First, accuracy was assessed by comparing segmentations of 20 simulated and 18 real T1 images with corresponding ground truth images. Second, reliability was determined in ten T1 images from the same subject and in ten T1 images of different subjects scanned twice. Third, the impact of preprocessing steps on segmentation accuracy was investigated. VBM8 showed a very high accuracy and a very high reliability. FSL achieved the highest accuracy but demonstrated poor reliability, while FreeSurfer showed the lowest accuracy but high reliability. A universally valid recommendation on how to implement morphometric analyses is not warranted due to the vast number of scanning and analysis parameters. However, our analysis suggests that researchers can optimize their individual processing procedures with respect to final segmentation quality, and it exemplifies adequate performance criteria. PMID:23028771
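    Comparing a segmentation against a ground-truth image implies a voxel-overlap score. The abstract does not name the metric, so as an illustration the sketch below computes the Dice similarity coefficient, one of the most common choices for comparing a binary gray-matter mask with ground truth.

```python
import numpy as np

def dice(segmentation, ground_truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = none."""
    a = np.asarray(segmentation, dtype=bool)
    b = np.asarray(ground_truth, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1D example standing in for a 3D gray-matter mask:
seg = np.array([0, 1, 1, 1, 0, 1], dtype=bool)
gt  = np.array([0, 1, 1, 0, 0, 1], dtype=bool)
print(f"Dice = {dice(seg, gt):.2f}")  # 0.86
```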

  13. Accuracy and reliability of automated gray matter segmentation pathways on real and simulated structural magnetic resonance images of the human brain.

    PubMed

    Eggert, Lucas D; Sommer, Jens; Jansen, Andreas; Kircher, Tilo; Konrad, Carsten

    2012-01-01

    Automated gray matter segmentation of magnetic resonance imaging data is essential for morphometric analyses of the brain, particularly when large sample sizes are investigated. However, although detection of small structural brain differences may fundamentally depend on the method used, both accuracy and reliability of different automated segmentation algorithms have rarely been compared. Here, performance of the segmentation algorithms provided by SPM8, VBM8, FSL and FreeSurfer was quantified on simulated and real magnetic resonance imaging data. First, accuracy was assessed by comparing segmentations of 20 simulated and 18 real T1 images with corresponding ground truth images. Second, reliability was determined in ten T1 images from the same subject and in ten T1 images of different subjects scanned twice. Third, the impact of preprocessing steps on segmentation accuracy was investigated. VBM8 showed a very high accuracy and a very high reliability. FSL achieved the highest accuracy but demonstrated poor reliability, while FreeSurfer showed the lowest accuracy but high reliability. A universally valid recommendation on how to implement morphometric analyses is not warranted due to the vast number of scanning and analysis parameters. However, our analysis suggests that researchers can optimize their individual processing procedures with respect to final segmentation quality, and it exemplifies adequate performance criteria.

  14. Brain MRI lesion load quantification in multiple sclerosis: a comparison between automated multispectral and semi-automated thresholding computer-assisted techniques.

    PubMed

    Achiron, Anat; Gicquel, Sebastien; Miron, Shmuel; Faibel, Meir

    2002-12-01

    Brain magnetic resonance imaging (MRI) lesion volume measurement is an advantageous tool for assessing disease burden in multiple sclerosis (MS). We evaluated two computer-assisted techniques: the MSA multispectral automatic technique, based on Bayesian classification of brain tissue, and the NIH Image analysis technique, based on local (lesion-by-lesion) thresholding, to establish reliability and repeatability values for each technique. Brain MRIs were obtained for 30 clinically definite relapsing-remitting MS patients using a 2.0 Tesla MR scanner with contiguous, 3 mm thick axial T1-, T2- and PD-weighted modalities. Digital (DICOM 3) images were analyzed independently by three observers; each analyzed the images twice, using the two different techniques (360 analyses in total). Accuracy of lesion load measurements using phantom images of known volumes was significantly better for the MSA multispectral technique (p < 0.001). The mean intra- and inter-observer variances were, respectively, 0.04 +/- 0.4 (range 0.04-0.13) and 0.09 +/- 0.6 (range 0.01-0.26) for the MSA multispectral analysis technique, and 0.24 +/- 2.27 (range 0.23-0.72) and 0.33 +/- 3.8 (range 0.47-1.36) for the NIH thresholding technique. These data show that the MSA multispectral technique is significantly more accurate in lesion volume measurement, with better within- and between-observer agreement, and that its lesion load measurements are not influenced by increased disease burden. Measurements by the MSA multispectral technique were also faster, decreasing analysis time by 43%. The MSA multispectral technique is a promising tool for evaluating MS patients. Non-biased recognition and delineation algorithms enable high accuracy, low intra- and inter-observer variances, and fast assessment of MS-related lesion load.

  15. Effect of pretreatment with a tyrosine kinase inhibitor (PP1) on brain oedema and neurological function in an automated cortical cryoinjury model in mice.

    PubMed

    Turel, Mazda K; Moorthy, Ranjith K; Sam, Gift Ajay; Samuel, Prasanna; Murthy, Muthukumar; Babu, K Srinivas; Rajshekhar, Vedantam

    2013-04-01

    Cerebral oedema is a significant cause of morbidity in neurosurgical practice. To our knowledge, there is no ideal drug for prevention or treatment of brain oedema. Based on the current understanding of the pathogenesis of brain oedema, tyrosine kinase inhibitors could have a role in reducing it, but preclinical studies are needed to assess their effectiveness. We evaluated the role of pretreatment with 4-amino-5-(4-methylphenyl)-7-(t-butyl)pyrazolo(3,4-d)pyrimidine (PP1), an Src tyrosine kinase inhibitor, in reducing cerebral oedema and preserving neurological function measured 24 hours after an automated cortical cryoinjury in mice. Sixteen adult male Swiss albino mice were subjected to an automated cortical cryoinjury using a dry ice-acetone mixture. The experimental group (n=8) received an intraperitoneal injection of PP1 dissolved in dimethyl sulfoxide (DMSO) at a dose of 1.5 mg/kg body weight 45 minutes prior to the injury. The control group (n=8) received an intraperitoneal injection of DMSO alone. A further eight mice underwent sham injury. The animals were evaluated using the neurological severity score (NSS) at 24 hours post-injury, after which the animals were sacrificed and their brains removed, weighed, dehydrated for 48 hours and weighed again. The percentage of brain water content was calculated as: {[(wet weight - dry weight)/wet weight] × 100}. The mean (standard deviation, SD) NSS was 11.7 (1.8) in the experimental group and 10.5 (1.3) in the control group (p=0.15). The mean (SD) percentage water content of the brain was 78.6% (1.3%) in the experimental group and 77.2% (1.1%) in the control group (p=0.03). The percentage water content was significantly higher in both the experimental and control groups than in the sham injury group. The immediate pre-injury administration of PP1 neither reduced cerebral oedema (water content %) nor preserved neurological function (NSS) when compared to a control group in this model of cortical cryoinjury.
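    The water-content formula quoted above is a one-liner; the sketch below applies it to hypothetical wet/dry weights (illustrative values only) to show the arithmetic behind the reported percentages.

```python
def brain_water_percent(wet_weight_g, dry_weight_g):
    """Percentage brain water content as defined in the study:
    [(wet weight - dry weight) / wet weight] * 100."""
    return (wet_weight_g - dry_weight_g) / wet_weight_g * 100.0

# Worked example with hypothetical weights (values are illustrative only):
wet, dry = 0.450, 0.097  # grams
print(f"water content: {brain_water_percent(wet, dry):.1f}%")  # ~78.4%
```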

  16. Sensitivity analysis and automation for intraoperative implementation of the atlas-based method for brain shift correction

    NASA Astrophysics Data System (ADS)

    Chen, Ishita; Simpson, Amber L.; Sun, Kay; Thompson, Reid C.; Miga, Michael I.

    2013-03-01

    The use of biomechanical models to correct the misregistration due to deformation in image-guided neurosurgical systems has been a growing area of investigation. In previous work, an atlas-based inverse model was developed to account for soft-tissue deformations during image-guided surgery. Central to that methodology is a considerable amount of pre-computation and planning. The goal of this work is to evaluate techniques that could potentially reduce that burden. Distinct from previous manual techniques, an automated segmentation technique is described for the cerebrum and dural septa. The shift correction results using this automated segmentation method were compared to those using the manual methods. In addition, the extent and distribution of the surgical parameters associated with the deformation atlas were investigated by a sensitivity analysis using simulation experiments and clinical data. The shift correction results did not change significantly using the automated method (correction of 73 +/- 13%) as compared to the semi-automated method from previous work (correction of 76 +/- 13%). The results of the sensitivity analysis show that the atlas could be constructed with coarser sampling (a six-fold reduction) without substantial degradation in the shift reconstruction, decreasing preoperative computation time from 13.1 +/- 3.5 hours to 2.2 +/- 0.6 hours. The automated segmentation technique and the findings of the sensitivity study significantly reduce pre-operative computation time, improving the utility of the atlas-based method. The work in this paper suggests that the atlas-based technique can become a "time of surgery" setup procedure rather than a pre-operative computing strategy.

  17. Human brain atlas for automated region of interest selection in quantitative susceptibility mapping: application to determine iron content in deep gray matter structures.

    PubMed

    Lim, Issel Anne L; Faria, Andreia V; Li, Xu; Hsu, Johnny T C; Airan, Raag D; Mori, Susumu; van Zijl, Peter C M

    2013-11-15

    The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a "deep gray matter parcellation map" (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established "white matter parcellation map" (WMPM) from the same subject's T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the "Everything Parcellation Map in Eve Space," also known as the "EvePM." It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting "almost perfect" agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron concentrations in gray
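    For readers unfamiliar with the agreement statistic used here, the following sketch computes Cohen's kappa for two label assignments (e.g., automated vs. manual parcellation of one structure); the voxel labels are toy values.

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters corrected for chance:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    categories = np.union1d(a, b)
    p_o = np.mean(a == b)
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_o - p_e) / (1.0 - p_e)

# Toy voxel labels from "automated" vs "manual" segmentation of one structure:
auto   = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 1])
manual = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 1])
print(f"kappa = {cohens_kappa(auto, manual):.2f}")  # ~0.78
```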

  18. Human brain atlas for automated region of interest selection in quantitative susceptibility mapping: application to determine iron content in deep gray matter structures

    PubMed Central

    Lim, Issel Anne L.; Faria, Andreia V.; Li, Xu; Hsu, Johnny T.C.; Airan, Raag D.; Mori, Susumu; van Zijl, Peter C. M.

    2013-01-01

    The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a “deep gray matter parcellation map” (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established “white matter parcellation map” (WMPM) from the same subject’s T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the “Everything Parcellation Map in Eve Space,” also known as the “EvePM.” It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting “almost perfect” agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron

  19. Shape-based multifeature brain parcellation

    NASA Astrophysics Data System (ADS)

    Nadeem, Saad; Kaufman, Arie

    2016-03-01

    We present a novel approach to parcellating the brain cortex, that is, delineating the boundaries of its anatomical features (folds, gyri, sulci). Our approach is based on extracting the 3D brain cortical surface mesh from magnetic resonance (MR) images, computing shape measures (area, mean curvature, geodesic depth, and travel depth) for this mesh, and delineating the anatomical feature boundaries using these measures. We use angle- and area-preserving mapping of the cortical surface mesh to a simpler topology (disk or rectangle) to aid in the visualization and delineation of these boundaries. Contrary to commonly used generic 2D brain image atlas-based approaches, we use 3D surface mesh data extracted from a given brain MR imaging dataset and its specific shape measures for the parcellation. Our method does not require any non-linear registration of a given brain dataset to a generic atlas and hence does away with the structural-similarity assumption critical to atlas-based approaches. We evaluate our approach using Mindboggle manually labeled brain datasets and achieve the following accuracies: 72.4% for gyri, 78.5% for major sulci, and 98.4% for folds. These results warrant further investigation of this approach as an alternative or as an initialization to atlas-based approaches.

  20. Automated Quantification of Human Brain Metabolites by Artificial Neural Network Analysis from in Vivo Single-Voxel 1H NMR Spectra

    NASA Astrophysics Data System (ADS)

    Kaartinen, Jouni; Mierisová, Šarka; Oja, Joni M. E.; Usenius, Jukka-Pekka; Kauppinen, Risto A.; Hiltunen, Yrjö

    1998-09-01

    A real-time, automated method of quantifying metabolites from in vivo NMR spectra using an artificial neural network (ANN) analysis is presented. Spectral training and test sets for the ANN were simulated, containing peaks at chemical shift ranges resembling long-echo-time proton NMR spectra of the human brain. The performance of the constructed ANN was compared with an established lineshape fitting (LF) analysis using both simulated and experimental spectral data as inputs. The correspondence between the ANN and LF analyses showed correlation coefficients on the order of 0.915-0.997 for spectra with large variations in both signal-to-noise ratio and peak areas. Water-suppressed 1H NMR spectra from 24 healthy subjects were collected, and choline-containing compounds (Cho), total creatine (Cr), and N-acetyl aspartate (NAA) were quantified with both methods. The ANN quantified these spectra with an accuracy similar to LF analysis (correlation coefficients of 0.915-0.951). These results show that LF and ANN are equally good quantifiers; however, ANN analyses are more easily automated than LF analyses.
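    As an illustration of the simulate-then-train idea, the sketch below builds a toy training set of Gaussian peaks at roughly the NAA, Cr, and Cho positions and fits a small multilayer perceptron to recover peak amplitudes. The network architecture, peak model, and noise level are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
ppm = np.linspace(1.5, 3.5, 256)           # chemical-shift axis (ppm)
centers = np.array([2.02, 3.03, 3.22])     # approx. NAA, Cr, Cho positions

def simulate_spectrum(amplitudes, lw=0.04, noise=0.02):
    """Sum of Gaussian peaks with given amplitudes plus white noise,
    loosely mimicking a long-echo-time 1H brain spectrum."""
    s = sum(a * np.exp(-0.5 * ((ppm - c) / lw) ** 2) for a, c in zip(amplitudes, centers))
    return s + rng.normal(0, noise, ppm.size)

# Simulated training set: random amplitudes -> spectra (inputs), amplitudes (targets).
Y = rng.uniform(0.2, 2.0, size=(500, 3))
X = np.array([simulate_spectrum(y) for y in Y])

ann = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X, Y)
true = np.array([1.2, 0.8, 0.5])
print("true:", true, "estimated:", ann.predict([simulate_spectrum(true)])[0].round(2))
```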

  1. Adaptive Automation Triggered by EEG-Based Mental Workload Index: A Passive Brain-Computer Interface Application in Realistic Air Traffic Control Environment

    PubMed Central

    Aricò, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Bonelli, Stefano; Golfetti, Alessia; Pozzi, Simone; Imbert, Jean-Paul; Granger, Géraud; Benhacene, Raïlane; Babiloni, Fabio

    2016-01-01

    Adaptive Automation (AA) is a promising approach to keep the task workload demand within appropriate levels, avoiding both under- and overload conditions and hence enhancing the overall performance and safety of the human-machine system. The main issue in the use of AA is how to trigger the AA solutions without affecting the operative task. In this regard, passive Brain-Computer Interface (pBCI) systems are a good candidate for activating automation, since they are able to gather information about the covert behavior (e.g., mental workload) of a subject by analyzing neurophysiological signals (i.e., brain activity) without interfering with the ongoing operational activity. We proposed a pBCI system able to trigger AA solutions integrated in a realistic Air Traffic Management (ATM) research simulator developed and hosted at ENAC (École Nationale de l'Aviation Civile of Toulouse, France). Twelve Air Traffic Controller (ATCO) students were involved in the experiment and were asked to perform ATM scenarios with and without the support of the AA solutions. Results demonstrated the effectiveness of the proposed pBCI system, since it enabled the AA mostly during the high-demanding conditions (i.e., overload situations), inducing a reduction of the mental workload under which the ATCOs were operating. On the contrary, as desired, the AA was not activated when the workload level was under the threshold, preventing too-low-demanding conditions that could drive the operator's workload toward potentially dangerous underload. PMID:27833542
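    The triggering logic can be sketched as a threshold rule on a neurophysiological workload index. The hysteresis band below is an illustrative assumption (it avoids rapid toggling), and the synthetic index stands in for whatever EEG workload measure the study actually calibrated per subject.

```python
import numpy as np

def adaptive_automation_step(index, on_thresh, off_thresh, automation_on):
    """Hysteresis around a calibrated workload threshold: switch automation on
    above on_thresh, off below off_thresh, otherwise keep the current state,
    so the system neither chatters nor drives the operator into underload."""
    if index >= on_thresh:
        return True
    if index <= off_thresh:
        return False
    return automation_on

# Simulated stream of EEG workload-index values (e.g., a theta/alpha ratio,
# here synthetic) drifting around a calibrated threshold of 1.0.
rng = np.random.default_rng(1)
indices = 1.0 + 0.4 * np.sin(np.linspace(0, 6, 12)) + rng.normal(0, 0.05, 12)

automation_on = False
for t, idx in enumerate(indices):
    automation_on = adaptive_automation_step(idx, 1.05, 0.95, automation_on)
    print(f"t={t:2d}  index={idx:.2f}  automation={'ON' if automation_on else 'off'}")
```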

  2. Adaptive Automation Triggered by EEG-Based Mental Workload Index: A Passive Brain-Computer Interface Application in Realistic Air Traffic Control Environment.

    PubMed

    Aricò, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Bonelli, Stefano; Golfetti, Alessia; Pozzi, Simone; Imbert, Jean-Paul; Granger, Géraud; Benhacene, Raïlane; Babiloni, Fabio

    2016-01-01

    Adaptive Automation (AA) is a promising approach to keep the task workload demand within appropriate levels, avoiding both under- and overload conditions and hence enhancing the overall performance and safety of the human-machine system. The main issue in the use of AA is how to trigger the AA solutions without affecting the operative task. In this regard, passive Brain-Computer Interface (pBCI) systems are a good candidate for activating automation, since they are able to gather information about the covert behavior (e.g., mental workload) of a subject by analyzing neurophysiological signals (i.e., brain activity) without interfering with the ongoing operational activity. We proposed a pBCI system able to trigger AA solutions integrated in a realistic Air Traffic Management (ATM) research simulator developed and hosted at ENAC (École Nationale de l'Aviation Civile of Toulouse, France). Twelve Air Traffic Controller (ATCO) students were involved in the experiment and were asked to perform ATM scenarios with and without the support of the AA solutions. Results demonstrated the effectiveness of the proposed pBCI system, since it enabled the AA mostly during the high-demanding conditions (i.e., overload situations), inducing a reduction of the mental workload under which the ATCOs were operating. On the contrary, as desired, the AA was not activated when the workload level was under the threshold, preventing too-low-demanding conditions that could drive the operator's workload toward potentially dangerous underload.

  3. Quantitative mapping of hemodynamics in the lung, brain, and dorsal window chamber-grown tumors using a novel, automated algorithm.

    PubMed

    Fontanella, Andrew N; Schroeder, Thies; Hochman, Daryl W; Chen, Raymond E; Hanna, Gabi; Haglund, Michael M; Rajaram, Narasimhan; Frees, Amy E; Secomb, Timothy W; Palmer, Gregory M; Dewhirst, Mark W

    2013-11-01

    Hemodynamic properties of vascular beds are of great interest in a variety of clinical and laboratory settings. However, there presently exists no automated, accurate, technically simple method for generating blood velocity maps of complex microvessel networks. Here, we present a novel algorithm that addresses the problem of acquiring quantitative maps by applying pixel-by-pixel cross-correlation to video data. Temporal signals at every spatial coordinate are compared with signals at neighboring points, generating a series of correlation maps from which speed and direction are calculated. User-assisted definition of vessel geometries is not required, and sequential data are analyzed automatically, without user bias. Velocity measurements were validated against the dual-slit method and against in vitro capillary flow with known velocities. The algorithm was tested in three different biological models in order to demonstrate its versatility. The hemodynamic maps presented here demonstrate an accurate, quantitative method of analyzing dynamic vascular systems. © 2013 John Wiley & Sons Ltd.
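    A minimal version of the pixel-by-pixel cross-correlation idea: for two pixels along a flow path, the lag that maximizes their temporal cross-correlation, combined with their spatial separation, yields speed. The synthetic one-dimensional "vessel" below is a stand-in for real video data, and the neighbor spacing is an arbitrary illustrative choice.

```python
import numpy as np

def lag_between(sig_a, sig_b):
    """Time lag (in frames) at which sig_b best matches sig_a, found as the
    argmax of the full normalized cross-correlation (positive = b lags a)."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    xc = np.correlate(b, a, mode="full")
    return np.argmax(xc) - (len(a) - 1)

# Synthetic video of a 1D "vessel": an intensity fluctuation drifts 2 px/frame.
n_frames, n_px, speed_px_per_frame = 200, 64, 2.0
t = np.arange(n_frames)
base = np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t + 1.0)   # temporal fluctuation
video = np.empty((n_frames, n_px))
for x in range(n_px):
    delay = x / speed_px_per_frame                       # frames for flow to reach x
    video[:, x] = np.interp(t - delay, t, base)

# Pixel-by-pixel: lag between neighbors separated by dx pixels -> speed.
dx = 8
lags = np.array([lag_between(video[:, x], video[:, x + dx]) for x in range(0, n_px - dx, dx)])
print("estimated speed (px/frame):", (dx / lags).round(2))
```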

  4. Quantitative mapping of hemodynamics in the lung, brain, and dorsal window chamber-grown tumors using a novel, automated algorithm

    PubMed Central

    Fontanella, Andrew N.; Schroeder, Thies; Hochman, Daryl W.; Chen, Raymond E.; Hanna, Gabi; Haglund, Michael M.; Secomb, Timothy W.; Palmer, Gregory M.; Dewhirst, Mark W.

    2013-01-01

    Hemodynamic properties of vascular beds are of great interest in a variety of clinical and laboratory settings. However, there presently exists no automated, accurate, technically simple method for generating blood velocity maps of complex microvessel networks. Here we present a novel algorithm that addresses this problem by applying pixel-by-pixel cross-correlation to video data. Temporal signals at every spatial coordinate are compared with signals at neighboring points, generating a series of correlation maps from which speed and direction are calculated. User-assisted definition of vessel geometries is not required, and sequential data are analyzed automatically, without user bias. Velocity measurements are validated against the dual-slit method and against capillary flow with known velocities. The algorithm is tested in three different biological models. Along with simultaneously acquired hemoglobin saturation and vascular geometry information, the hemodynamic maps presented here demonstrate an accurate, quantitative method of analyzing dynamic vascular systems. PMID:23781901

  5. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics

    PubMed Central

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K.; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and integrity of the brain under certain conditions (alcohol, drugs, etc.). However, these images are difficult for biomedical researchers with limited image processing experience to analyze. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a dataset with this pipeline and demonstrate how it can be used to find differences between populations. PMID:23964234
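    Schematically, such a pipeline is a fixed sequence of stages threaded over the input images. The runner below is a sketch only: the stage functions are hypothetical stand-ins (the Midas server invokes real registration and segmentation tools), and the data flow between stages is simplified.

```python
from typing import Callable, List, Tuple

Step = Callable[[List[str]], List[str]]

def run_pipeline(paths: List[str], stages: List[Tuple[str, Step]]) -> List[str]:
    """Thread a list of image paths (or derived artifacts) through the
    pipeline stages in the fixed order described in the abstract."""
    for name, step in stages:
        print(f"stage: {name}")
        paths = step(paths)
    return paths

# Hypothetical stand-ins for the real tools; each just renames its inputs.
stages: List[Tuple[str, Step]] = [
    ("rigid registration",       lambda ps: [p + ".reg" for p in ps]),
    ("skull stripping",          lambda ps: [p + ".brain" for p in ps]),
    ("average computation",      lambda ps: ["average.nii"]),
    ("average parcellation",     lambda ps: ["average_labels.nii"]),
    ("parcellation propagation", lambda ps: ["rat01_labels.nii", "rat02_labels.nii"]),
    ("region-based statistics",  lambda ps: ["region_stats.csv"]),
]
print(run_pipeline(["rat01.nii", "rat02.nii"], stages))
```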

  6. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics.

    PubMed

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and integrity of the brain under certain conditions (alcohol, drugs, etc.). However, these images are difficult for biomedical researchers with limited image processing experience to analyze. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a dataset with this pipeline and demonstrate how it can be used to find differences between populations.

  7. Automated Processing of Dynamic Contrast-Enhanced MRI: Correlation of Advanced Pharmacokinetic Metrics with Tumor Grade in Pediatric Brain Tumors.

    PubMed

    Vajapeyam, S; Stamoulis, C; Ricci, K; Kieran, M; Poussaint, T Young

    2017-01-01

    Pharmacokinetic parameters from dynamic contrast-enhanced MR imaging have proved useful for differentiating brain tumor grades in adults. In this study, we retrospectively reviewed dynamic contrast-enhanced perfusion data from children with newly diagnosed brain tumors and analyzed the pharmacokinetic parameters correlating with tumor grade. Dynamic contrast-enhanced MR imaging data from 38 patients were analyzed by using commercially available software. Subjects were categorized into 2 groups based on pathologic analyses consisting of low-grade (World Health Organization I and II) and high-grade (World Health Organization III and IV) tumors. Pharmacokinetic parameters were compared between the 2 groups by using linear regression models. For parameters that were statistically distinct between the 2 groups, sensitivity and specificity were also estimated. Eighteen tumors were classified as low-grade, and 20, as high-grade. Transfer constant from the blood plasma into the extracellular extravascular space (K(trans)), rate constant from extracellular extravascular space back into blood plasma (Kep), and extracellular extravascular volume fraction (Ve) were all significantly correlated with tumor grade; high-grade tumors showed higher K(trans), higher Kep, and lower Ve. Although all 3 parameters had high specificity (range, 82%-100%), Kep had the highest specificity for both grades. Optimal sensitivity was achieved for Ve, with a combined sensitivity of 76% (compared with 71% for K(trans) and Kep). Pharmacokinetic parameters derived from dynamic contrast-enhanced MR imaging can effectively discriminate low- and high-grade pediatric brain tumors. © 2017 by American Journal of Neuroradiology.
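    The three parameters reported here come from fitting a compartmental model to the tissue concentration curve. The sketch below simulates the standard Tofts model, a common basis for such analyses (the commercial software used in the study may implement a variant), and shows how higher K(trans) with lower Ve produces the enhancement pattern associated with high-grade tumors. The input function and parameter values are hypothetical.

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Standard Tofts model:
    Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau,
    with kep = Ktrans / Ve (rate constant back into plasma)."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

# Hypothetical arterial input function and two illustrative tissue curves:
t = np.linspace(0, 5, 300)                  # minutes
cp = 5.0 * t * np.exp(-1.5 * t)             # simple gamma-variate AIF
low  = tofts_concentration(t, cp, ktrans=0.05, ve=0.30)   # "low-grade"-like
high = tofts_concentration(t, cp, ktrans=0.25, ve=0.15)   # "high-grade"-like
print(f"peak tissue concentration, low-grade-like:  {low.max():.3f} (a.u.)")
print(f"peak tissue concentration, high-grade-like: {high.max():.3f} (a.u.)")
```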

  8. Development of a Highly Automated and Multiplexed Targeted Proteome Pipeline and Assay for 112 Rat Brain Synaptic Proteins

    PubMed Central

    Colangelo, Christopher M.; Ivosev, Gordana; Chung, Lisa; Abbott, Thomas; Shifman, Mark; Sakaue, Fumika; Cox, David; Kitchen, Rob R.; Burton, Lyle; Tate, Stephen A; Gulcicek, Erol; Bonner, Ron; Rinehart, Jesse; Nairn, Angus C.; Williams, Kenneth R.

    2015-01-01

    We present a comprehensive workflow for large-scale (>1000 transitions/run) label-free LC-MRM proteome assays. Innovations include automated MRM transition selection, intelligent retention time scheduling (xMRM) that improves Signal/Noise by >2-fold, and automatic peak modeling. Improvements to data analysis include a novel Q/C metric, Normalized Group Area Ratio (NGAR), MLR normalization, weighted regression analysis, and data dissemination through the Yale Protein Expression Database. As a proof of principle we developed a robust 90 minute LC-MRM assay for Mouse/Rat Post-Synaptic Density (PSD) fractions which resulted in the routine quantification of 337 peptides from 112 proteins based on 15 observations per protein. Parallel analyses with stable isotope dilution peptide standards (SIS) demonstrate very high correlation in retention time (1.0) and protein fold change (0.94) between the label-free and SIS analyses. Overall, our first method achieved a technical CV of 11.4% with >97.5% of the 1697 transitions being quantified without user intervention, resulting in a highly efficient, robust, and single-injection LC-MRM assay. PMID:25476245

  9. Development of a highly automated and multiplexed targeted proteome pipeline and assay for 112 rat brain synaptic proteins.

    PubMed

    Colangelo, Christopher M; Ivosev, Gordana; Chung, Lisa; Abbott, Thomas; Shifman, Mark; Sakaue, Fumika; Cox, David; Kitchen, Robert R; Burton, Lyle; Tate, Stephen A; Gulcicek, Erol; Bonner, Ron; Rinehart, Jesse; Nairn, Angus C; Williams, Kenneth R

    2015-04-01

    We present a comprehensive workflow for large-scale (>1000 transitions/run) label-free LC-MRM proteome assays. Innovations include automated MRM transition selection, intelligent retention time scheduling that improves S/N by twofold, and automatic peak modeling. Improvements to data analysis include a novel Q/C metric, normalized group area ratio, MLR normalization, weighted regression analysis, and data dissemination through the Yale protein expression database. As a proof of principle we developed a robust 90 min LC-MRM assay for mouse/rat postsynaptic density fractions which resulted in the routine quantification of 337 peptides from 112 proteins based on 15 observations per protein. Parallel analyses with stable isotope dilution peptide standards (SIS) demonstrate very high correlation in retention time (1.0) and protein fold change (0.94) between the label-free and SIS analyses. Overall, our method achieved a technical CV of 11.4% with >97.5% of the 1697 transitions being quantified without user intervention, resulting in a highly efficient, robust, and single-injection LC-MRM assay. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Automated cGMP-compliant radiosynthesis of [(18) F]-(E)-PSS232 for brain PET imaging of metabotropic glutamate receptor subtype 5.

    PubMed

    Park, Jun Young; Son, Jeongmin; Yun, Mijin; Ametamey, Simon M; Chun, Joong-Hyun

    2017-09-25

    (E)-3-(Pyridin-2-yl ethynyl)cyclohex-2-enone O-(3-(2-[(18) F]-fluoroethoxy)propyl) oxime ([(18) F]-(E)-PSS232, [(18) F]2a) is a recently developed radiotracer that can be used to visualize metabotropic glutamate receptor subtype 5 (mGlu5 ) in vivo. The mGlu5 has become an attractive therapeutic and diagnostic target owing to its role in many neuropsychiatric disorders. Several carbon-11- and fluorine-18-labelled radiotracers have been developed to measure mGlu5 receptor occupancy in the human brain. The radiotracer [(18) F]2a, which is used as an analogue for [(11) C]ABP688 ([(11) C]1) and has a longer physical half-life, is a selective radiotracer that exhibits high binding affinity for mGlu5 . Herein, we report the fully automated radiosynthesis of [(18) F]2a using a commercial GE TRACERlab(TM) FX-FN synthesizer for routine production and distribution to nearby satellite clinics. Nucleophilic substitution of the corresponding mesylate precursor with cyclotron-produced [(18) F]fluoride ion at 100 °C in dimethyl sulfoxide (DMSO), followed by high-performance liquid chromatography (HPLC) purification and formulation, readily provided [(18) F]2a with a radiochemical yield of 40 ± 2% (decay corrected, n = 5) at the end of synthesis. Radiochemical purity for the [(18) F]-(E)-conformer was greater than 95%. Molar activity was determined to be 63.6 ± 9.6 GBq/μmol (n = 5), and the overall synthesis time was 70 min. This article is protected by copyright. All rights reserved.

  11. Feasibility of estimation of brain volume and 2-deoxy-2-(18)F-fluoro-D-glucose metabolism using a novel automated image analysis method: application in Alzheimer's disease.

    PubMed

    Musiek, Erik S; Saboury, Babak; Mishra, Shipra; Chen, Yufen; Reddin, Janet S; Newberg, Andrew B; Udupa, Jayaram K; Detre, John A; Hofheinz, Frank; Torigian, Drew; Alavi, Abass

    2012-01-01

    The development of clinically applicable quantitative methods for the analysis of brain fluorine-18 fluorodeoxyglucose positron emission tomography ((18)F-FDG-PET) images is a major area of research in many neurologic diseases, particularly Alzheimer's disease (AD). Region of interest visualization, evaluation, and image registration (ROVER) is a novel commercially available software package which provides automated, partial-volume-corrected measures of volume and glucose uptake from (18)F-FDG PET data. We performed a pilot study of ROVER analysis of brain (18)F-FDG PET images for the first time in a small cohort of patients with AD and controls. Brain (18)F-FDG-PET and volumetric magnetic resonance imaging (MRI) were performed on 14 AD patients and 18 age-matched controls. Images were subjected to ROVER analysis and to voxel-based analysis using SPM5. Volumes by ROVER were 35% lower than MRI volumes in AD patients (as hypometabolic regions were excluded in ROVER-derived volume measurement), while average ROVER- and MRI-derived cortical volumes were nearly identical in the control population. ROVER-derived whole-brain volumes and whole-brain metabolic volumetric products (MVP) were significantly lower in AD and accurately distinguished AD patients from controls (areas under the receiver operating characteristic curve 0.89 and 0.86, respectively). This diagnostic accuracy was similar to that of the voxel-based analyses. ROVER analysis of (18)F-FDG-PET images provides a unique index of metabolically active brain volume and, as a proof of concept, can accurately distinguish between AD patients and controls. In conclusion, our findings suggest that ROVER may serve as a useful quantitative adjunct to visual or regional assessment and aid analysis of whole-brain metabolism in AD and other neurologic and psychiatric diseases.

  12. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to remain competitive. However, there is a new trend toward decreasing the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to the high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality for reasons related to plant location, such as inadequate worker skills and motivation. Hence, the automation strategy should be formulated on the basis of an analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility, in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  13. Automation pilot

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An important concept of the Action Information Management System (AIMS) approach is to evaluate office automation technology in the context of hands-on use by technical program managers, including the human acceptance difficulties which may accompany the transition to a significantly changing work environment. The improved productivity and communications which result from application of office automation technology are already well established for general office environments, but benefits unique to NASA are anticipated and these will be explored in detail.

  14. Office automation.

    PubMed

    Arenson, R L

    1986-03-01

    By now, the term "office automation" should have more meaning for those readers who are not intimately familiar with the subject. Not all of the preceding material pertains to every department or practice, but certainly, word processing and simple telephone management are key items. The size and complexity of the organization will dictate the usefulness of electronic mail and calendar management, and the individual radiologist's personal needs and habits will determine the usefulness of the home computer. Perhaps the most important ingredient for success in the office automation arena relates to the ability to integrate information from various systems in a simple and flexible manner. Unfortunately, this is perhaps the one area that most office automation systems have ignored or handled poorly. In the personal computer world, there has been much emphasis recently on integration of packages such as spreadsheet, database management, word processing, graphics, time management, and communications. This same philosophy of integration has been applied to a few office automation systems, but these are generally vendor-specific and do not allow for a mixture of foreign subsystems. During the next few years, it is likely that a few vendors will emerge as dominant in this integrated office automation field and will stress simplicity and flexibility as major components.

  15. Habitat automation

    NASA Technical Reports Server (NTRS)

    Swab, Rodney E.

    1992-01-01

    A habitat, on either the surface of the Moon or Mars, will be designed and built with the proven technologies of that day. These technologies will be mature and readily available to the habitat designer. We believe an acceleration of the normal pace of automation would allow a habitat to be safer and more easily maintained than would be the case otherwise. This document examines the operation of a habitat and describes elements of that operation which may benefit from an increased use of automation. Research topics within the automation realm are then defined and discussed with respect to the role they can have in the design of the habitat. Problems associated with the integration of advanced technologies into real-world projects at NASA are also addressed.

  16. Automating Finance

    ERIC Educational Resources Information Center

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  17. Comparison of Automated and Manual Recording of Brief Episodes of Intracranial Hypertension and Cerebral Hypoperfusion and Their Association with Outcome After Severe Traumatic Brain Injury

    DTIC Science & Technology

    2017-03-01

    Peter Hu, PhD; Yao Li, MS; Shiming Yang, PhD; Catriona [author list truncated]. Contract number FA8650-13-2-6D15. (No abstract is available for this record; the source contains only report documentation page and table-of-contents fragments.)

  18. Assessment of the Molecular Expression and Structure of Gangliosides in Brain Metastasis of Lung Adenocarcinoma by an Advanced Approach Based on Fully Automated Chip-Nanoelectrospray Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Zamfir, Alina D.; Serb, Alina; Vukelić, Željka; Flangea, Corina; Schiopu, Catalin; Fabris, Dragana; Kalanj-Bognar, Svjetlana; Capitan, Florina; Sisu, Eugen

    2011-12-01

    Gangliosides (GGs), sialic acid-containing glycosphingolipids, are known to be involved in the invasive/metastatic behavior of brain tumor cells. Development of modern methods for determination of the variations in GG expression and structure during neoplastic cell transformation is a priority in the field of biomedical analysis. In this context, we report here on the first optimization and application of chip-based nanoelectrospray (NanoMate robot) mass spectrometry (MS) for the investigation of gangliosides in a secondary brain tumor. In our work a native GG mixture extracted and purified from a brain metastasis of lung adenocarcinoma was screened by a NanoMate robot coupled to a quadrupole time-of-flight MS. A native GG mixture from age-matched healthy brain tissue, sampled and analyzed under identical conditions, served as a control. Comparative MS analysis demonstrated an evident dissimilarity in GG expression in the two tissue types. Brain metastasis is characterized by many species having a reduced N-acetylneuraminic acid (Neu5Ac) content that are, however, modified by fucosylation or O-acetylation, such as Fuc-GM4, Fuc-GM3, di-O-Ac-GM1, and O-Ac-GM3. In contrast, healthy brain tissue is dominated by longer structures exhibiting mono- to hexasialylated sugar chains. Significant differences in ceramide composition were also discovered. By tandem MS using collision-induced dissociation at low energies, the brain metastasis-associated GD3 (d18:1/18:0) species, as well as an uncommon Fuc-GM1 (d18:1/18:0) detected in the normal brain tissue, could be structurally characterized. The novel protocol was able to provide reliable compositional and structural characterization at a high analysis pace and with sensitivity in the fmol range.

  19. The Brain Is Faster than the Hand in Split-Second Intentions to Respond to an Impending Hazard: A Simulation of Neuroadaptive Automation to Speed Recovery to Perturbation in Flight Attitude.

    PubMed

    Callan, Daniel E; Terzibas, Cengiz; Cassel, Daniel B; Sato, Masa-Aki; Parasuraman, Raja

    2016-01-01

    The goal of this research is to test the potential for neuroadaptive automation to improve response speed to a hazardous event by using a brain-computer interface (BCI) to decode perceptual-motor intention. Seven participants underwent four experimental sessions while brain activity was measured with magnetoencephalography. The first three sessions used a simple constrained task in which the participant was to pull back on the control stick to recover from a perturbation in attitude in one condition, and to passively observe the perturbation in the other condition. The fourth session consisted of having to recover from a perturbation in attitude while piloting the plane through the Grand Canyon, constantly maneuvering to track over the river below. Independent component analysis was used on the first two sessions to extract artifacts and find an event-related component associated with the onset of the perturbation. These two sessions were used to train a decoder to classify trials in which the participant recovered from the perturbation (motor intention) vs. just passively viewing the perturbation. The BCI-decoder was tested on the third session of the same simple task and was found to significantly distinguish motor intention trials from passive viewing trials (mean = 69.8%). The same BCI-decoder was then used to test the fourth session on the complex task. The BCI-decoder significantly classified perturbation from no-perturbation trials (73.3%), with a significant time savings of 72.3 ms (response time reduced from 425.0 ms for manual control to 352.7 ms for the BCI-decoder). The BCI-decoder model of the best subject was shown to generalize for both performance and time savings to the other subjects. The results of our off-line open-loop simulation demonstrate that BCI-based neuroadaptive automation has the potential to decode motor intention faster than manual control in response to a hazardous perturbation in flight attitude while ignoring ongoing motor and visual induced activity
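    A schematic of the session-wise train/test decoding: features summarizing a perturbation-locked component are used to classify motor-intention vs. passive-viewing trials. The Gaussian features and the linear discriminant classifier below are illustrative assumptions; the study's actual feature extraction (ICA on MEG) and decoder may differ.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)

# Simulated single-trial features (e.g., amplitudes of a perturbation-locked
# component across sensors) standing in for the MEG-derived features.
n_trials, n_features = 120, 20
motor_intention = rng.normal(0.6, 1.0, (n_trials, n_features))   # recover trials
passive_viewing = rng.normal(0.0, 1.0, (n_trials, n_features))   # observe trials

X = np.vstack([motor_intention, passive_viewing])
y = np.r_[np.ones(n_trials), np.zeros(n_trials)]

# Train on "sessions 1-2", test on a held-out split standing in for session 3.
idx = rng.permutation(len(y))
train, test = idx[:180], idx[180:]
decoder = LinearDiscriminantAnalysis().fit(X[train], y[train])
print(f"decoding accuracy: {decoder.score(X[test], y[test]):.1%}")  # chance = 50%
```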

  20. Utility of real-time prospective motion correction (PROMO) on 3D T1-weighted imaging in automated brain structure measurements

    PubMed Central

    Watanabe, Keita; Kakeda, Shingo; Igata, Natsuki; Watanabe, Rieko; Narimatsu, Hidekuni; Nozaki, Atsushi; Dan Rettmann; Abe, Osamu; Korogi, Yukunori

    2016-01-01

    PROspective MOtion correction (PROMO) can prevent motion artefacts. The aim of this study was to determine whether brain structure measurements of motion-corrected images with PROMO were reliable and equivalent to conventional images without motion artefacts. The following T1-weighted images were obtained in healthy subjects: (A) resting scans with and without PROMO and (B) two types of motion scans (“side-to-side” and “nodding” motions) with and without PROMO. The total gray matter volumes and cortical thicknesses were significantly decreased in motion scans without PROMO as compared to the resting scans without PROMO (p < 0.05). Conversely, Bland–Altman analysis indicated no bias between motion scans with PROMO, which have good image quality, and resting scans without PROMO. In addition, there was no bias between resting scans with and without PROMO. The use of PROMO facilitated more reliable brain structure measurements in subjects moving during data acquisition. PMID:27917950
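    The Bland–Altman analysis used here reduces to a bias (mean difference) and 95% limits of agreement. The sketch below computes both for hypothetical per-subject cortical-thickness means (values illustrative only).

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland-Altman statistics for two measurement methods: mean difference
    (bias) and 95% limits of agreement (bias ± 1.96 SD of differences)."""
    diffs = np.asarray(measure_a, dtype=float) - np.asarray(measure_b, dtype=float)
    bias = diffs.mean()
    loa = 1.96 * diffs.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# Hypothetical mean cortical thickness (mm): motion scans with PROMO vs
# resting scans without PROMO, for ten subjects (values illustrative only).
promo_motion = np.array([2.51, 2.48, 2.55, 2.60, 2.47, 2.53, 2.49, 2.58, 2.52, 2.50])
rest_no_promo = np.array([2.50, 2.49, 2.54, 2.61, 2.48, 2.52, 2.50, 2.57, 2.53, 2.49])
bias, (lo, hi) = bland_altman(promo_motion, rest_no_promo)
print(f"bias = {bias:+.3f} mm, 95% limits of agreement = [{lo:+.3f}, {hi:+.3f}] mm")
```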

  1. Utility of real-time prospective motion correction (PROMO) on 3D T1-weighted imaging in automated brain structure measurements

    NASA Astrophysics Data System (ADS)

    Watanabe, Keita; Kakeda, Shingo; Igata, Natsuki; Watanabe, Rieko; Narimatsu, Hidekuni; Nozaki, Atsushi; Dan Rettmann; Abe, Osamu; Korogi, Yukunori

    2016-12-01

    PROspective MOtion correction (PROMO) can prevent motion artefacts. The aim of this study was to determine whether brain structure measurements of motion-corrected images with PROMO were reliable and equivalent to conventional images without motion artefacts. The following T1-weighted images were obtained in healthy subjects: (A) resting scans with and without PROMO and (B) two types of motion scans (“side-to-side” and “nodding” motions) with and without PROMO. The total gray matter volumes and cortical thicknesses were significantly decreased in motion scans without PROMO as compared to the resting scans without PROMO (p < 0.05). Conversely, Bland-Altman analysis indicated no bias between motion scans with PROMO, which have good image quality, and resting scans without PROMO. In addition, there was no bias between resting scans with and without PROMO. The use of PROMO facilitated more reliable brain structure measurements in subjects moving during data acquisition.

  2. Development of multiplex immunohistochemistry and in situ hybridization using colloidal quantum dots for semi-automated neuronal expression mapping in brain

    NASA Astrophysics Data System (ADS)

    Chan, PokMan; Lin, Gang; Yuen, Tony; Roysam, Badrinath; Sealfon, Stuart C.

    2006-02-01

    The number of different subtypes of neurons, which form the basic component of the mammalian brain, has not been determined. Histological study is typically limited to the simultaneous detection of very few markers, in part because of the spectral overlap and quenching properties of organic fluorophores. The photostability and narrow emission spectra of non-organic colloidal quantum-dot fluorophores (QDs) make them desirable candidates for multiplex immunohistochemistry (IHC) and for fluorescent in situ hybridization (FISH). IHC is used to study specific protein epitopes and FISH to study the expression of specific mRNA transcripts. In order to investigate the patterns of coexpression of multiple specific protein and nucleic acid targets within cells in complex tissues, such as brain, we have developed protocols for the multiplex use of different QDs and organic fluorophores for combined IHC and FISH. We developed a method for direct QD labeling of modified oligonucleotide probes through streptavidin and biotin interactions and validated this technique in mouse brainstem sections. The reproducible histological results obtained with this protocol allow the use of high throughput computer image analysis to quantify the cellular and subcellular spatial pattern of expression of all markers studied. This approach is being utilized to generate a multiplex co-expression map of neuronal subtypes in mouse brain regions.

  3. Evaluation of brain perfusion in specific Brodmann areas in Frontotemporal dementia and Alzheimer disease using automated 3-D voxel based analysis

    NASA Astrophysics Data System (ADS)

    Valotassiou, V.; Papatriantafyllou, J.; Sifakis, N.; Karageorgiou, C.; Tsougos, I.; Tzavara, C.; Zerva, C.; Georgoulias, P.

    2009-05-01

    Introduction. Brain perfusion studies with single-photon emission computed tomography (SPECT) have been applied in demented patients to provide better discrimination between frontotemporal dementia (FTD) and Alzheimer's disease (AD). Aim. To assess the perfusion of specific Brodmann (Br) areas of the brain cortex in FTD and AD patients, using the NeuroGam processing program to provide 3D voxel-by-voxel cerebral SPECT analysis. Material and methods. We studied 34 consecutive patients. We used the established criteria for the diagnosis of dementia and the specific established criteria for the diagnosis of FTD and AD. All the patients had a neuropsychological evaluation with a battery of tests including the mini-mental state examination (MMSE). Twenty-six patients (16 males, 10 females, mean age 68.76±6.51 years, education 11.81±4.25 years, MMSE 16.69±9.89) received the diagnosis of FTD and 8 patients (all females, mean age 71.25±10.48 years, education 10±4.6 years, MMSE 12.5±3.89) the diagnosis of AD. All the patients underwent a brain SPECT. We applied the NeuroGam software for the evaluation of brain perfusion in specific Br areas in the left (L) and right (R) hemispheres. Results. Statistically significant hypoperfusion in FTD compared to AD patients was found in the following Br areas: 11L (p<0.0001), 11R, 20L, 20R, 32L, 38L, 38R, 44L (p<0.001), 32R, 36L, 36R, 45L, 45R, 47R (p<0.01), and 9L, 21L, 39R, 44R, 46R, 47L (p<0.05). In contrast, AD patients presented significant (p<0.05) hypoperfusion in the 7R and 39R Br areas. Conclusion. The NeuroGam processing program for brain perfusion SPECT could enhance accuracy in the differential diagnosis between AD and FTD patients.

  4. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration

    PubMed Central

    Klein, Arno; Andersson, Jesper; Ardekani, Babak A.; Ashburner, John; Avants, Brian; Chiang, Ming-Chang; Christensen, Gary E.; Collins, D. Louis; Gee, James; Hellier, Pierre; Song, Joo Hyun; Jenkinson, Mark; Lepage, Claude; Rueckert, Daniel; Thompson, Paul; Vercauteren, Tom; Woods, Roger P.; Mann, J. John; Parsey, Ramin V.

    2009-01-01

    All fields of neuroscience that employ brain imaging need to communicate their results with reference to anatomical regions. In particular, comparative morphometry and group analysis of functional and physiological data require coregistration of brains to establish correspondences across brain structures. It is well established that linear registration of one brain to another is inadequate for aligning brain structures, so numerous algorithms have emerged to nonlinearly register brains to one another. This study is the largest evaluation of nonlinear deformation algorithms applied to brain image registration ever conducted. Fourteen algorithms from laboratories around the world are evaluated using 8 different error measures. More than 45,000 registrations between 80 manually labeled brains were performed by algorithms including: AIR, ANIMAL, ART, Diffeomorphic Demons, FNIRT, IRTK, JRD-fluid, ROMEO, SICLE, SyN, and four different SPM5 algorithms (“SPM2-type” and regular Normalization, Unified Segmentation, and the DARTEL Toolbox). All of these registrations were preceded by linear registration between the same image pairs using FLIRT. One of the most significant findings of this study is that the relative performances of the registration methods under comparison appear to be little affected by the choice of subject population, labeling protocol, and type of overlap measure. This is important because it suggests that the findings are generalizable to new subject populations that are labeled or evaluated using different labeling protocols. Furthermore, we ranked the 14 methods according to three completely independent analyses (permutation tests, one-way ANOVA tests, and indifference-zone ranking) and derived three almost identical top rankings of the methods. ART, SyN, IRTK, and SPM's DARTEL Toolbox gave the best results according to overlap and distance measures, with ART and SyN delivering the most consistently high accuracy across subjects and label sets
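    One of the overlap measures commonly used in this kind of registration evaluation is target overlap per anatomical label, averaged over labels. The sketch below is an illustrative implementation (the study used eight different error measures, not necessarily this exact one).

```python
import numpy as np

def target_overlap(source_labels, target_labels, label):
    """Target overlap for one anatomical label: |S ∩ T| / |T|, where S is the
    warped source label and T the corresponding target label."""
    s = source_labels == label
    t = target_labels == label
    return np.logical_and(s, t).sum() / t.sum()

def mean_target_overlap(source_labels, target_labels):
    """Average target overlap over all non-background labels."""
    labels = np.setdiff1d(np.unique(target_labels), [0])   # 0 = background
    return np.mean([target_overlap(source_labels, target_labels, lab) for lab in labels])

# Toy 1D "label images" standing in for registered 3D label volumes:
warped = np.array([0, 1, 1, 2, 2, 2, 3, 3, 0])
truth  = np.array([0, 1, 1, 1, 2, 2, 3, 3, 0])
print(f"mean target overlap = {mean_target_overlap(warped, truth):.2f}")  # 0.89
```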

  5. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration.

    PubMed

    Klein, Arno; Andersson, Jesper; Ardekani, Babak A; Ashburner, John; Avants, Brian; Chiang, Ming-Chang; Christensen, Gary E; Collins, D Louis; Gee, James; Hellier, Pierre; Song, Joo Hyun; Jenkinson, Mark; Lepage, Claude; Rueckert, Daniel; Thompson, Paul; Vercauteren, Tom; Woods, Roger P; Mann, J John; Parsey, Ramin V

    2009-07-01

    All fields of neuroscience that employ brain imaging need to communicate their results with reference to anatomical regions. In particular, comparative morphometry and group analysis of functional and physiological data require coregistration of brains to establish correspondences across brain structures. It is well established that linear registration of one brain to another is inadequate for aligning brain structures, so numerous algorithms have emerged to nonlinearly register brains to one another. This study is the largest evaluation of nonlinear deformation algorithms applied to brain image registration ever conducted. Fourteen algorithms from laboratories around the world are evaluated using 8 different error measures. More than 45,000 registrations between 80 manually labeled brains were performed by algorithms including: AIR, ANIMAL, ART, Diffeomorphic Demons, FNIRT, IRTK, JRD-fluid, ROMEO, SICLE, SyN, and four different SPM5 algorithms ("SPM2-type" and regular Normalization, Unified Segmentation, and the DARTEL Toolbox). All of these registrations were preceded by linear registration between the same image pairs using FLIRT. One of the most significant findings of this study is that the relative performances of the registration methods under comparison appear to be little affected by the choice of subject population, labeling protocol, and type of overlap measure. This is important because it suggests that the findings are generalizable to new subject populations that are labeled or evaluated using different labeling protocols. Furthermore, we ranked the 14 methods according to three completely independent analyses (permutation tests, one-way ANOVA tests, and indifference-zone ranking) and derived three almost identical top rankings of the methods. ART, SyN, IRTK, and SPM's DARTEL Toolbox gave the best results according to overlap and distance measures, with ART and SyN delivering the most consistently high accuracy across subjects and label sets. Updates

  6. The Brain Is Faster than the Hand in Split-Second Intentions to Respond to an Impending Hazard: A Simulation of Neuroadaptive Automation to Speed Recovery to Perturbation in Flight Attitude

    PubMed Central

    Callan, Daniel E.; Terzibas, Cengiz; Cassel, Daniel B.; Sato, Masa-aki; Parasuraman, Raja

    2016-01-01

    The goal of this research is to test the potential for neuroadaptive automation to improve response speed to a hazardous event by using a brain-computer interface (BCI) to decode perceptual-motor intention. Seven participants underwent four experimental sessions while measuring brain activity with magnetoencephalograpy. The first three sessions were of a simple constrained task in which the participant was to pull back on the control stick to recover from a perturbation in attitude in one condition and to passively observe the perturbation in the other condition. The fourth session consisted of having to recover from a perturbation in attitude while piloting the plane through the Grand Canyon constantly maneuvering to track over the river below. Independent component analysis was used on the first two sessions to extract artifacts and find an event related component associated with the onset of the perturbation. These two sessions were used to train a decoder to classify trials in which the participant recovered from the perturbation (motor intention) vs. just passively viewing the perturbation. The BCI-decoder was tested on the third session of the same simple task and found to be able to significantly distinguish motor intention trials from passive viewing trials (mean = 69.8%). The same BCI-decoder was then used to test the fourth session on the complex task. The BCI-decoder significantly classified perturbation from no perturbation trials (73.3%) with a significant time savings of 72.3 ms (Original response time of 425.0–352.7 ms for BCI-decoder). The BCI-decoder model of the best subject was shown to generalize for both performance and time savings to the other subjects. The results of our off-line open loop simulation demonstrate that BCI based neuroadaptive automation has the potential to decode motor intention faster than manual control in response to a hazardous perturbation in flight attitude while ignoring ongoing motor and visual induced activity
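
    As a rough illustration of the decoding step described above (not the authors' actual pipeline, which used independent component analysis on MEG data), the sketch below trains a linear classifier to separate "motor intention" trials from "passive viewing" trials. The feature matrix and labels are hypothetical stand-ins, and logistic regression is simply one common choice of trial classifier.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical features: one row per trial (e.g., sensor amplitudes
        # averaged over a post-perturbation window), one column per channel.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 32))
        y = rng.integers(0, 2, size=120)  # 1 = motor intention, 0 = passive viewing

        clf = LogisticRegression(max_iter=1000)
        scores = cross_val_score(clf, X, y, cv=5)
        print("mean decoding accuracy:", scores.mean())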

  7. Automated cognome construction and semi-automated hypothesis generation.

    PubMed

    Voytek, Jessica B; Voytek, Bradley

    2012-06-30

    Modern neuroscientific research stands on the shoulders of countless giants. PubMed alone contains more than 21 million peer-reviewed articles with 40-50,000 more published every month. Understanding the human brain, cognition, and disease will require integrating facts from dozens of scientific fields spread amongst millions of studies locked away in static documents, making any such integration daunting, at best. The future of scientific progress will be aided by bridging the gap between the millions of published research articles and modern databases such as the Allen brain atlas (ABA). To that end, we have analyzed the text of over 3.5 million scientific abstracts to find associations between neuroscientific concepts. From the literature alone, we show that we can blindly and algorithmically extract a "cognome": relationships between brain structure, function, and disease. We demonstrate the potential of data-mining and cross-platform data-integration with the ABA by introducing two methods for semi-automated hypothesis generation. By analyzing statistical "holes" and discrepancies in the literature we can find understudied or overlooked research paths. That is, we have added a layer of semi-automation to a part of the scientific process itself. This is an important step toward fundamentally incorporating data-mining algorithms into the scientific method in a manner that is generalizable to any scientific or medical field. Copyright © 2012 Elsevier B.V. All rights reserved.
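
    The "cognome" extraction described above rests on counting how often pairs of neuroscientific terms co-occur across abstracts. A minimal sketch of that idea follows, with a hypothetical term list and toy abstracts; the actual study analyzed over 3.5 million abstracts with more sophisticated statistics.

        from collections import Counter
        from itertools import combinations

        terms = ["hippocampus", "memory", "dopamine", "parkinson"]
        abstracts = [
            "the hippocampus supports episodic memory consolidation",
            "dopamine loss in parkinson disease alters motor control",
            "hippocampus volume predicts memory decline with age",
        ]

        # Count how often each pair of terms appears in the same abstract.
        cooccur = Counter()
        for text in abstracts:
            present = {t for t in terms if t in text.lower()}
            for pair in combinations(sorted(present), 2):
                cooccur[pair] += 1

        print(cooccur.most_common())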

  8. Path Planning for Semi-automated Simulated Robotic Neurosurgery.

    PubMed

    Hu, Danying; Gong, Yuanzheng; Hannaford, Blake; Seibel, Eric J

    2015-01-01

    This paper considers the semi-automated robotic surgical procedure for removing the brain tumor margins, where the manual operation is a tedious and time-consuming task for surgeons. We present robust path planning methods for robotic ablation of tumor residues in various shapes, which are represented in point-clouds instead of analytical geometry. Along with the path plans, corresponding metrics are also delivered to the surgeon for selecting the optimal candidate in the automated robotic ablation. The selected path plan is then executed and tested on the RAVEN™ II surgical robot platform as part of the semi-automated robotic brain tumor ablation surgery in a simulated tissue phantom.
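
    Planning an ablation path over a point cloud amounts to ordering the residue points into a trajectory the tool can follow, and scoring candidate paths with metrics such as total length. The sketch below shows one naive strategy, a greedy nearest-neighbor tour, over a hypothetical point cloud; the paper's planners and metrics are more elaborate.

        import numpy as np

        def nearest_neighbor_path(points, start=0):
            """Greedily order 3-D points into a tool path."""
            remaining = list(range(len(points)))
            path = [remaining.pop(start)]
            while remaining:
                last = points[path[-1]]
                dists = [np.linalg.norm(points[i] - last) for i in remaining]
                path.append(remaining.pop(int(np.argmin(dists))))
            return path

        # Hypothetical tumor-residue point cloud.
        rng = np.random.default_rng(2)
        cloud = rng.normal(size=(50, 3))
        order = nearest_neighbor_path(cloud)
        length = sum(np.linalg.norm(cloud[a] - cloud[b])
                     for a, b in zip(order, order[1:]))
        print("path length (one candidate selection metric):", length)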

  9. Path Planning for Semi-automated Simulated Robotic Neurosurgery

    PubMed Central

    Hu, Danying; Gong, Yuanzheng; Hannaford, Blake; Seibel, Eric J.

    2015-01-01

    This paper considers the semi-automated robotic surgical procedure for removing the brain tumor margins, where the manual operation is a tedious and time-consuming task for surgeons. We present robust path planning methods for robotic ablation of tumor residues in various shapes, which are represented in point-clouds instead of analytical geometry. Along with the path plans, corresponding metrics are also delivered to the surgeon for selecting the optimal candidate in the automated robotic ablation. The selected path plan is then executed and tested on RAVEN™ II surgical robot platform as part of the semi-automated robotic brain tumor ablation surgery in a simulated tissue phantom. PMID:26705501

  10. Autonomy and Automation

    NASA Technical Reports Server (NTRS)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  11. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  12. Automation in Clinical Microbiology

    PubMed Central

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  13. Automation of industrial bioprocesses.

    PubMed

    Beyeler, W; DaPra, E; Schneider, K

    2000-01-01

    The dramatic development of new electronic devices within the last 25 years has had a substantial influence on the control and automation of industrial bioprocesses. Within this short period of time the method of controlling industrial bioprocesses has changed completely. In this paper, the authors will use a practical approach focusing on the industrial applications of automation systems. From the early attempts to use computers for the automation of biotechnological processes up to the modern process automation systems some milestones are highlighted. Special attention is given to the influence of Standards and Guidelines on the development of automation systems.

  14. Shoe-String Automation

    SciTech Connect

    Duncan, M.L.

    2001-07-30

    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  15. Automated DNA Sequencing System

    SciTech Connect

    Armstrong, G.A.; Ekkebus, C.P.; Hauser, L.J.; Kress, R.L.; Mural, R.J.

    1999-04-25

    Oak Ridge National Laboratory (ORNL) is developing a core DNA sequencing facility to support biological research endeavors at ORNL and to conduct basic sequencing automation research. This facility is novel because its development is based on existing standard biology laboratory equipment; thus, the development process is of interest to the many small laboratories trying to use automation to control costs and increase throughput. Before automation, biology laboratory personnel purified DNA, completed cycle sequencing, and prepared 96-well sample plates with commercially available hardware designed specifically for each step in the process. Following purification and thermal cycling, an automated sequencing machine was used for the sequencing. A technician handled all movement of the 96-well sample plates between machines. To automate the process, ORNL is adding a CRS Robotics A-465 arm, an ABI 377 sequencing machine, an automated centrifuge, an automated refrigerator, and possibly an automated SpeedVac. The entire system will be integrated with one central controller that will direct each machine and the robot. The goal of this system is to completely automate the sequencing procedure from bacterial cell samples through ready-to-be-sequenced DNA and ultimately to completed sequence. The system will be flexible and will accommodate chemistries other than those of existing automated sequencing lines. The system will be expanded in the future to include colony picking and/or actual sequencing. This discrete-event DNA sequencing system will demonstrate that smaller sequencing labs can achieve cost-effective automation as the laboratory grows.

  16. Management Planning for Workplace Automation.

    ERIC Educational Resources Information Center

    McDole, Thomas L.

    Several factors must be considered when implementing office automation. Included among these are whether or not to automate at all, the effects of automation on employees, requirements imposed by automation on the physical environment, effects of automation on the total organization, and effects on clientele. The reasons behind the success or…

  17. Laboratory Automation and Middleware.

    PubMed

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although total laboratory automation has not yet been implemented, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory.

  18. Complacency and Automation Bias in the Use of Imperfect Automation.

    PubMed

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  19. Comparison of automated and manual segmentation of hippocampus MR images

    NASA Astrophysics Data System (ADS)

    Haller, John W.; Christensen, Gary E.; Miller, Michael I.; Joshi, Sarang C.; Gado, Mokhtar; Csernansky, John G.; Vannier, Michael W.

    1995-05-01

    The precision and accuracy of area estimates from magnetic resonance (MR) brain images using manual and automated segmentation methods are determined. Areas of the human hippocampus were measured to compare a new automatic method of segmentation with regions of interest drawn by an expert. MR images of nine normal subjects and nine schizophrenic patients were acquired with a 1.5-T unit (Siemens Medical Systems, Inc., Iselin, New Jersey). From each individual MPRAGE 3-D volume image, a single comparable 2-D slice (matrix = 256 × 256) was chosen corresponding to the same coronal slice of the hippocampus. The hippocampus was first manually segmented, then segmented using high-dimensional transformations of a digital brain atlas to individual brain MR images. The repeatability of a trained rater was assessed by comparing two measurements from each individual subject. Variability was also compared within and between subject groups of schizophrenics and normal subjects. Finally, the precision and accuracy of automated segmentation of hippocampal areas were determined by comparing automated measurements to manual segmentation measurements made by the trained rater on MR and brain slice images. The results demonstrate the high repeatability of area measurement from MR images of the human hippocampus. Automated segmentation using high-dimensional transformations from a digital brain atlas provides repeatability superior to that of manual segmentation. Furthermore, the validity of automated measurements was demonstrated by a high correlation with manual segmentation measurements made by a trained rater. Quantitative morphometry of brain substructures (e.g., the hippocampus) is feasible by use of a high-dimensional transformation of a digital brain atlas to an individual MR image. This method automates the search for neuromorphological correlates of schizophrenia by a new, mathematically robust method with unprecedented sensitivity to small local and regional differences.
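
    A hypothetical sketch of the kind of agreement analysis described here, comparing automated and manual area measurements for the same subjects (the study also assessed rater repeatability and group differences); all numbers below are simulated.

        import numpy as np

        # Hypothetical hippocampal areas (mm^2) for 18 subjects.
        rng = np.random.default_rng(3)
        manual = rng.normal(450.0, 40.0, size=18)
        automated = manual + rng.normal(0.0, 10.0, size=18)  # close agreement

        r = np.corrcoef(manual, automated)[0, 1]
        bias = (automated - manual).mean()
        print(f"correlation r = {r:.3f}, mean bias = {bias:.1f} mm^2")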

  20. Automated Cognome Construction and Semi-automated Hypothesis Generation

    PubMed Central

    Voytek, Jessica B.; Voytek, Bradley

    2012-01-01

    Modern neuroscientific research stands on the shoulders of countless giants. PubMed alone contains more than 21 million peer-reviewed articles with 40–50,000 more published every month. Understanding the human brain, cognition, and disease will require integrating facts from dozens of scientific fields spread amongst millions of studies locked away in static documents, making any such integration daunting, at best. The future of scientific progress will be aided by bridging the gap between the millions of published research articles and modern databases such as the Allen Brain Atlas (ABA). To that end, we have analyzed the text of over 3.5 million scientific abstracts to find associations between neuroscientific concepts. From the literature alone, we show that we can blindly and algorithmically extract a “cognome”: relationships between brain structure, function, and disease. We demonstrate the potential of data-mining and cross-platform data-integration with the ABA by introducing two methods for semiautomated hypothesis generation. By analyzing statistical “holes” and discrepancies in the literature we can find understudied or overlooked research paths. That is, we have added a layer of semi-automation to a part of the scientific process itself. This is an important step toward fundamentally incorporating data-mining algorithms into the scientific method in a manner that is generalizable to any scientific or medical field. PMID:22584238

  1. Automating checks of plan check automation.

    PubMed

    Halabi, Tarek; Lu, Hsiao-Ming

    2014-07-08

    While a few physicists have designed new plan check automation solutions for their clinics, fewer, if any, managed to adapt existing solutions. As complex and varied as the systems they check, these programs must gain the full confidence of those who would run them on countless patient plans. The present automation effort, planCheck, therefore focuses on versatility and ease of implementation and verification. To demonstrate this, we apply planCheck to proton gantry, stereotactic proton gantry, stereotactic proton fixed beam (STAR), and IMRT treatments.
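
    planCheck's emphasis on versatility and ease of verification suggests a rule-based design in which each check is a small independent function run against a plan. The sketch below is a generic illustration of that pattern, not planCheck's actual code; the plan fields and the 1% tolerance are hypothetical.

        from typing import Callable, Dict, List

        Plan = Dict[str, float]
        Check = Callable[[Plan], str]

        def check_prescription_dose(plan: Plan) -> str:
            # Hypothetical tolerance: planned dose within 1% of prescription.
            tol = 0.01 * plan["prescribed_dose"]
            ok = abs(plan["planned_dose"] - plan["prescribed_dose"]) <= tol
            return "PASS" if ok else "FAIL: dose mismatch"

        def run_checks(plan: Plan, checks: List[Check]) -> None:
            for check in checks:
                print(check.__name__, "->", check(plan))

        run_checks({"prescribed_dose": 60.0, "planned_dose": 60.2},
                   [check_prescription_dose])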

  2. Automation and Cataloging.

    ERIC Educational Resources Information Center

    Furuta, Kenneth; And Others

    1990-01-01

    These three articles address issues in library cataloging that are affected by automation: (1) the impact of automation and bibliographic utilities on professional catalogers; (2) the effect of the LASS microcomputer software on the cost of authority work in cataloging at the University of Arizona; and (3) online subject heading and classification…

  3. Order Division Automated System.

    ERIC Educational Resources Information Center

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  4. More Benefits of Automation.

    ERIC Educational Resources Information Center

    Getz, Malcolm

    1988-01-01

    Describes a study that measured the benefits of an automated catalog and automated circulation system from the library user's point of view in terms of the value of time saved. Topics discussed include patterns of use, access time, availability of information, search behaviors, and the effectiveness of the measures used. (seven references)…

  5. Work and Programmable Automation.

    ERIC Educational Resources Information Center

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  6. The Automated Office.

    ERIC Educational Resources Information Center

    Naclerio, Nick

    1979-01-01

    Clerical personnel may be able to climb career ladders as a result of office automation and expanded job opportunities in the word processing area. Suggests opportunities in an automated office system and lists books and periodicals on word processing for counselors and teachers. (MF)

  7. Planning for Office Automation.

    ERIC Educational Resources Information Center

    Sherron, Gene T.

    1982-01-01

    The steps taken toward office automation by the University of Maryland are described. Office automation is defined and some types of word processing systems are described. Policies developed in the writing of a campus plan are listed, followed by a section on procedures adopted to implement the plan. (Author/MLW)

  8. WANTED: Fully Automated Indexing.

    ERIC Educational Resources Information Center

    Purcell, Royal

    1991-01-01

    Discussion of indexing focuses on the possibilities of fully automated indexing. Topics discussed include controlled indexing languages such as subject heading lists and thesauri, free indexing languages, natural indexing languages, computer-aided indexing, expert systems, and the need for greater creativity to further advance automated indexing.…

  9. Advances in inspection automation

    NASA Astrophysics Data System (ADS)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone, or impossible for humans to perform can now be automated using a form of drag-and-drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures, and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations, and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  10. Automation in Immunohematology

    PubMed Central

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-01-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid-phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents, and processes, and the archiving of results, are further major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process. PMID:22988378

  11. Systematic review automation technologies.

    PubMed

    Tsafnat, Guy; Glasziou, Paul; Choong, Miew Keen; Dunn, Adam; Galgani, Filippo; Coiera, Enrico

    2014-07-09

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise, and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying, and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.
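
    The envisioned end-to-end workflow reads naturally as a pipeline of task stages. A schematic sketch under that reading follows; every function body here is a hypothetical placeholder, since the review describes the tasks rather than an implementation, and the pooling step is a plain fixed-effect inverse-variance average.

        def retrieve_trials(query):           # search registries/databases
            return [{"id": "trial-1"}]        # hypothetical record

        def appraise(trial):                  # e.g., risk-of-bias assessment
            trial["risk_of_bias"] = "low"
            return trial

        def extract_outcomes(trial):          # pull effect sizes
            return {"id": trial["id"], "effect": 0.3, "se": 0.1}

        def meta_analyze(outcomes):           # inverse-variance weighted mean
            weights = [1 / o["se"] ** 2 for o in outcomes]
            pooled = sum(w * o["effect"]
                         for w, o in zip(weights, outcomes)) / sum(weights)
            return pooled

        trials = [appraise(t) for t in retrieve_trials("example query")]
        print("pooled effect:",
              meta_analyze([extract_outcomes(t) for t in trials]))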

  12. Systematic review automation technologies

    PubMed Central

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise, and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying, and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  13. Automation synthesis modules review.

    PubMed

    Boschi, S; Lodi, F; Malizia, C; Cicoria, G; Marengo, M

    2013-06-01

    The introduction of (68)Ga-labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived (68)Ge/(68)Ga generator has been at the basis of the development of (68)Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues, and the need for careful radioprotection of the operators have pushed for extensive automation of the production process. The development of automated systems for (68)Ga radiochemistry, different engineering and software strategies, and post-processing of the eluate are discussed, along with the impact of regulations on automation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Managing laboratory automation

    PubMed Central

    Saboe, Thomas J.

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A big-picture, or continuum, view is presented, and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are offered. PMID:18925018

  15. Xenon International Automated Control

    SciTech Connect

    2016-08-05

    The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.

  16. Automating the Media Center.

    ERIC Educational Resources Information Center

    Holloway, Mary A.

    1988-01-01

    Discusses the need to develop more efficient information retrieval skills by the use of new technology. Lists four stages used in automating the media center. Describes North Carolina's pilot programs. Proposes benefits and looks at the media center's future. (MVL)

  17. Planning for Office Automation.

    ERIC Educational Resources Information Center

    Mick, Colin K.

    1983-01-01

    Outlines a practical approach to planning for office automation termed the "Focused Process Approach" (the "what" phase, "how" phase, "doing" phase) which is a synthesis of the problem-solving and participatory planning approaches. Thirteen references are provided. (EJS)

  18. Automated decision stations

    NASA Technical Reports Server (NTRS)

    Tischendorf, Mark

    1990-01-01

    This paper discusses the combination of software robots and expert systems to automate everyday business tasks: tasks which require people to interact repetitively with multiple system screens as well as multiple systems.

  19. Automated Status Notification System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA Lewis Research Center's Automated Status Notification System (ASNS) was born out of need. To prevent "hacker attacks," Lewis' telephone system needed to monitor communications activities 24 hr a day, 7 days a week. With decreasing staff resources, this continuous monitoring had to be automated. By utilizing existing communications hardware, a UNIX workstation, and NAWK (a pattern scanning and processing language), we implemented a continuous monitoring system.
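
    The NAWK-based monitoring described here boils down to scanning a stream of log lines for suspicious patterns and raising a notification. A minimal Python analogue follows; the patterns and log lines are hypothetical stand-ins, not the system's actual rules.

        import re

        # Hypothetical patterns for suspicious telephone-system activity.
        patterns = [re.compile(r"failed login"),
                    re.compile(r"trunk access denied")]

        def monitor(lines):
            for line in lines:
                if any(p.search(line) for p in patterns):
                    print("ALERT:", line)  # stand-in for paging/notification

        monitor([
            "12:00:01 call completed",
            "12:00:02 failed login on port 7",
        ])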

  20. Automated Microfluidics for Genomics

    DTIC Science & Technology

    2001-10-25

    Abstract--The Genomation Laboratory at the University of Washington is developing an automated fluid handling system called "Acapella" to prepare... Photonic Systems, Inc. (Redmond, WA), an automated submicroliter fluid sample preparation system called ACAPELLA is being developed. Reactions such... technology include minimal residual disease quantification and sample preparation for DNA. Preliminary work on the ACAPELLA is presented in [4][5]. This

  1. Automated Pilot Advisory System

    NASA Technical Reports Server (NTRS)

    Parks, J. L., Jr.; Haidt, J. G.

    1981-01-01

    An Automated Pilot Advisory System (APAS) was developed and operationally tested to demonstrate the concept that low-cost automated systems can provide air traffic and aviation weather advisory information at high-density uncontrolled airports. The system was designed to enhance the "see and be seen" rule of flight, and pilots who used the system preferred it over the self-announcement system presently used at uncontrolled airports.

  2. Automating Index Preparation

    DTIC Science & Technology

    1987-03-23

    Automating Index Preparation. Pehong Chen, Michael A. Harrison. Computer Science Division, University of California, Berkeley, CA 94720. March 23, 1987... Abstract: Index preparation is a tedious and time-consuming task. In this paper we indicate how the indexing process can be automated in a way which... identified and analyzed. Specifically, we describe a framework for placing index commands in the document and a general purpose index processor which

  3. Automated Lattice Perturbation Theory

    SciTech Connect

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  4. Metrology automation reliability

    NASA Astrophysics Data System (ADS)

    Chain, Elizabeth E.

    1996-09-01

    At Motorola's MOS-12 facility automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data is uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme from job file generation to measurement and data collection. Data analysis and comparison to part specification limits is also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle and film thickness measurements.

  5. Automated Groundwater Screening

    SciTech Connect

    Taylor, Glenn A.; Collard, Leonard, B.

    2005-10-31

    The Automated Intruder Analysis has been extended to include an Automated Ground Water Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides which have significant daughters. The extension has the promise of reducing the number of radionuclides which must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing rigorous calculation of the screening factor thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application.

  6. Elements of EAF automation processes

    NASA Astrophysics Data System (ADS)

    Ioana, A.; Constantin, N.; Dragna, E. C.

    2017-01-01

    Our article presents elements of Electric Arc Furnace (EAF) automation. We present and analyze two automation schemes in detail: the scheme of the electrical EAF automation system and the scheme of the thermal EAF automation system. Applying these automation schemes results in a significant reduction in the specific consumption of electrical energy by the Electric Arc Furnace, increased furnace productivity, improved quality of the steel produced, and increased durability of the structural elements of the Electric Arc Furnace.

  7. Quantitative analysis of brain pathology based on MRI and brain atlases--applications for cerebral palsy.

    PubMed

    Faria, Andreia V; Hoon, Alexander; Stashinko, Elaine; Li, Xin; Jiang, Hangyi; Mashayekh, Ameneh; Akhter, Kazi; Hsu, John; Oishi, Kenichi; Zhang, Jiangyang; Miller, Michael I; van Zijl, Peter C M; Mori, Susumu

    2011-02-01

    We have developed a new method to provide a comprehensive quantitative analysis of brain anatomy in cerebral palsy patients, which makes use of two techniques: diffusion tensor imaging and automated 3D whole brain segmentation based on our brain atlas and a nonlinear normalization technique (large-deformation diffeomorphic metric mapping). This method was applied to 13 patients and normal controls. The reliability of the automated segmentation revealed close agreement with the manual segmentation. We illustrate some potential applications for individual characterization and group comparison. This technique also provides a framework for determining the impact of various neuroanatomic features on brain functions.

  8. Brain size, sex, and the aging brain.

    PubMed

    Jäncke, Lutz; Mérillat, Susan; Liem, Franziskus; Hänggi, Jürgen

    2015-01-01

    This study was conducted to examine the statistical influence of brain size on cortical, subcortical, and cerebellar compartmental volumes. This brain size influence was especially studied to delineate interactions with Sex and Age. Here, we studied 856 healthy subjects, of which 533 are classified as young and 323 as old. Using an automated segmentation procedure, cortical (gray and white matter [GM and WM], including the corpus callosum), cerebellar (GM and WM), and subcortical (thalamus, putamen, pallidum, caudatus, hippocampus, amygdala, and accumbens) volumes were measured and subjected to statistical analyses. These analyses revealed that brain size and age exert substantial statistical influences on nearly all compartmental volumes. Analyzing the raw compartmental volumes replicated the frequently reported Sex differences in compartmental volumes, with men showing larger volumes. However, when statistically controlling for brain size, Sex differences and Sex × Age interactions practically disappear. Thus, brain size is more important than Sex in explaining interindividual differences in compartmental volumes. The influence of brain size is discussed in the context of an allometric scaling of the compartmental volumes. © 2014 Wiley Periodicals, Inc.
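
    "Statistically controlling for brain size" here means modeling compartmental volume with brain size as a covariate before testing Sex effects. A hypothetical sketch with simulated data follows, using ordinary least squares via numpy; the study's actual models were more involved, and all numbers below are made up.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200
        brain_size = rng.normal(1200.0, 100.0, size=n)    # cm^3, hypothetical
        sex = rng.integers(0, 2, size=n)                  # 0 = female, 1 = male
        volume = 0.05 * brain_size + rng.normal(0, 2, n)  # scales with brain size

        # Design matrix: intercept, sex, brain size.
        X = np.column_stack([np.ones(n), sex, brain_size])
        beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
        print("sex effect after adjusting for brain size:", beta[1])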

  9. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Imamura, M. S.; Moser, R. L.; Veatch, M.

    1983-01-01

    Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined, and automation functions between the power subsystem, central spacecraft computer, and ground flight-support personnel are partitioned. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except for anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, a need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential to meeting the 1987 technology readiness date for the space station.

  10. Automated telescope scheduling

    NASA Astrophysics Data System (ADS)

    Johnston, Mark D.

    1988-08-01

    With the ever increasing level of automation of astronomical telescopes the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling groundbased telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.
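
    Automated scheduling of the kind described here can be illustrated with a toy greedy scheduler that assigns observation requests to time slots in priority order. The actual HST work used far richer constraint-based AI techniques; the request names, priorities, and slot model below are hypothetical.

        # Hypothetical observation requests: (name, priority, duration in slots).
        requests = [("galaxy-A", 3, 2), ("nova-B", 5, 1), ("survey-C", 1, 3)]
        n_slots = 5
        schedule = [None] * n_slots

        # Greedy: place higher-priority requests first, earliest fit wins.
        for name, _, dur in sorted(requests, key=lambda r: -r[1]):
            for start in range(n_slots - dur + 1):
                if all(schedule[s] is None for s in range(start, start + dur)):
                    for s in range(start, start + dur):
                        schedule[s] = name
                    break

        # Prints ['nova-B', 'galaxy-A', 'galaxy-A', None, None];
        # survey-C needs 3 consecutive free slots and does not fit.
        print(schedule)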

  11. Automated telescope scheduling

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.

    1988-01-01

    With the ever increasing level of automation of astronomical telescopes the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling groundbased telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.

  12. Fully automated protein purification

    PubMed Central

    Camper, DeMarco V.; Viola, Ronald E.

    2009-01-01

    Obtaining highly purified proteins is essential to begin investigating their functional and structural properties. The steps that are typically involved in purifying proteins can include an initial capture, intermediate purification, and a final polishing step. Completing these steps can take several days and require frequent attention to ensure success. Our goal was to design automated protocols that will allow the purification of proteins with minimal operator intervention. Separate methods have been produced and tested that automate the sample loading, column washing, sample elution and peak collection steps for ion-exchange, metal affinity, hydrophobic interaction and gel filtration chromatography. These individual methods are designed to be coupled and run sequentially in any order to achieve a flexible and fully automated protein purification protocol. PMID:19595984

  13. Automated gas chromatography

    DOEpatents

    Mowry, Curtis D.; Blair, Dianna S.; Rodacy, Philip J.; Reber, Stephen D.

    1999-01-01

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute.

  14. Automation in optics manufacturing

    NASA Astrophysics Data System (ADS)

    Pollicove, Harvey M.; Moore, Duncan T.

    1991-01-01

    The optics industry has not followed the lead of the machining and electronics industries in applying advances in computer aided engineering (CAE), computer assisted manufacturing (CAM), automation or quality management techniques. Automation based on computer integrated manufacturing (CIM) and flexible machining systems (FMS) has been widely implemented in these industries. Optics continues to rely on standalone equipment that preserves the highly skilled, labor intensive optical fabrication systems developed in the 1940's. This paper describes development initiatives at the Center for Optics Manufacturing that will create computer integrated manufacturing technology and support processes for the optical industry.

  15. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Engineering software development was automated using an expert system (rule-based) approach. The use of this technology offers benefits not available from current software development and maintenance methodologies. A workstation was built with a library or program data base with methods for browsing the designs stored; a system for graphical specification of designs including a capability for hierarchical refinement and definition in a graphical design system; and an automated code generation capability in FORTRAN. The workstation was then used in a demonstration with examples from an attitude control subsystem design for the space station. Documentation and recommendations are presented.

  16. Automating the CMS DAQ

    SciTech Connect

    Bauer, G.; et al.

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  17. Automated Library System Specifications.

    DTIC Science & Technology

    1986-06-01

    AD-A78 95. Automated Library System Specifications (U). Prepared by Mary B. Bonnett, Army Library Management Office, Office of the Assistant Chief of Staff for Information Management, Alexandria, VA. June 1986. Unclassified. F/G 9/2.

  18. Automated HMC assembly

    SciTech Connect

    Blazek, R.J.

    1993-08-01

    An automated gold wire bonder was characterized for bonding 1-mil gold wires to gallium-arsenide (GaAs) monolithic microwave integrated circuits (MMICs) which are used in microwave radar transmitter-receiver (T/R) modules. Acceptable gold wire bond test results were obtained for the fragile, 5-mil-thick GaAs MMICs with gold-metallized bond pads; and average wire bond pull strengths, shear strengths, and failure modes were determined. An automated aluminum wire bonder was modified to be used as a gold wire bonder so that a wedge bond capability was available for GaAs MMICs in addition to the gold ball bond capability.

  19. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is in the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on object-oriented language (Flavors).

  20. Altering users' acceptance of automation through prior automation exposure.

    PubMed

    Bekier, Marek; Molesworth, Brett R C

    2016-08-22

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints imbedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed after exposure to a task with automation assistance, user acceptance of high(er) levels of automation ('tipping point') decreased; suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. 'tipping point') is constant or can be manipulated. The results revealed after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased; suggesting it is possible to alter automation acceptance.

  1. Brain herniation

    MedlinePlus

    ... herniation; Uncal herniation; Subfalcine herniation; Tonsillar herniation; Herniation - brain ... Brain herniation occurs when something inside the skull produces pressure that moves brain tissues. This is most ...

  2. Human Factors In Aircraft Automation

    NASA Technical Reports Server (NTRS)

    Billings, Charles

    1995-01-01

    Report presents survey of state of art in human factors in automation of aircraft operation. Presents examination of aircraft automation and effects on flight crews in relation to human error and aircraft accidents.

  3. Progress Toward Adaptive Integration and Optimization of Automated and Neural Processing Systems: Establishing Neural and Behavioral Benchmarks of Optimized Performance

    DTIC Science & Technology

    2014-11-01

    integrates measures of behavior and brain activity with automated information processing and display algorithms. It leverages basic science research... attentional tasks based on brain activity, work done at Science Applications International Corporation using pattern classification algorithms to detect... threats based on brain activity, and work done at the US Army Research Laboratory aimed at understanding the cognitive constraints on performance in crew

  4. Automated Management Of Documents

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1995-01-01

    Report presents main technical issues involved in computer-integrated documentation. Problems associated with automation of management and maintenance of documents analyzed from perspectives of artificial intelligence and human factors. Technologies that may prove useful in computer-integrated documentation reviewed: these include conventional approaches to indexing and retrieval of information, use of hypertext, and knowledge-based artificial-intelligence systems.

  5. Guide to Library Automation.

    ERIC Educational Resources Information Center

    Toohill, Barbara G.

    Directed toward librarians and library administrators who wish to procure automated systems or services for their libraries, this guide offers practical suggestions, advice, and methods for determining requirements, estimating costs and benefits, writing specifications, procuring systems, negotiating contracts, and installing systems. The advice…

  7. Microcontroller for automation application

    NASA Technical Reports Server (NTRS)

    Cooper, H. W.

    1975-01-01

    The description of a microcontroller currently being developed for automation application was given. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  8. Building Automation Systems.

    ERIC Educational Resources Information Center

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  9. Automated Estimating System (AES)

    SciTech Connect

    Holder, D.A.

    1989-09-01

    This document describes Version 3.1 of the Automated Estimating System, a personal computer-based software package designed to aid in the creation, updating, and reporting of project cost estimates for the Estimating and Scheduling Department of the Martin Marietta Energy Systems Engineering Division. Version 3.1 of the Automated Estimating System is capable of running in a multiuser environment across a token ring network. The token ring network makes possible services and applications that will more fully integrate all aspects of information processing, provides a central area for large data bases to reside, and allows access to the data base by multiple users. Version 3.1 of the Automated Estimating System also has been enhanced to include an Assembly pricing data base that may be used to retrieve cost data into an estimate. A WBS Title File program has also been included in Version 3.1. The WBS Title File program allows for the creation of a WBS title file that has been integrated with the Automated Estimating System to provide WBS titles in update mode and in reports. This provides for consistency in WBS titles and provides the capability to display WBS titles on reports generated at a higher WBS level.

  10. Automated Essay Scoring

    ERIC Educational Resources Information Center

    Dikli, Semire

    2006-01-01

    The impacts of computers on writing have been widely studied for three decades. Even basic computers functions, i.e. word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…

  11. Automated Microbial Genome Annotation

    SciTech Connect

    Land, Miriam

    2009-05-29

    Miriam Land of the DOE Joint Genome Institute at Oak Ridge National Laboratory gives a talk on the current state and future challenges of moving toward automated microbial genome annotation at the "Sequencing, Finishing, Analysis in the Future" meeting in Santa Fe, NM

  13. Automated Tendering and Purchasing.

    ERIC Educational Resources Information Center

    DeZorzi, James M.

    1980-01-01

    The Middlesex County Board of Education in Hyde Park (Ontario) has developed an automated tendering/purchasing system for ordering standard items that has reduced by 80 percent the time required for tendering, evaluating, awarding, and ordering items. (Author/MLF)

  14. Automated conflict resolution issues

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey S.

    1991-01-01

    A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.

  15. ATC automation concepts

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1990-01-01

    Information on the design of human-centered tools for terminal area air traffic control (ATC) is given in viewgraph form. Topics covered include payoffs and products, guidelines, ATC as a team process, automation tools for ATC, and the traffic management advisor.

  16. Automated Administrative Data Bases

    NASA Technical Reports Server (NTRS)

    Marrie, M. D.; Jarrett, J. R.; Reising, S. A.; Hodge, J. E.

    1984-01-01

    Improved productivity and more effective response to information requirements for internal management, NASA Centers, and Headquarters resulted from using automated techniques. Modules developed to provide information on manpower, RTOPS, full time equivalency, and physical space reduced duplication, increased communication, and saved time. There is potential for greater savings by sharing and integrating with those who have the same requirements.

  17. Automating Small Libraries.

    ERIC Educational Resources Information Center

    Swan, James

    1996-01-01

    Presents a four-phase plan for small libraries strategizing for automation: inventory and weeding, data conversion, implementation, and enhancements. Other topics include selecting a system, MARC records, compatibility, ease of use, industry standards, searching capabilities, support services, system security, screen displays, circulation modules,…

  18. Automated Lumber Processing

    Treesearch

    Powsiri Klinkhachorn; J. Moody; Philip A. Araman

    1995-01-01

    For the past few decades, researchers have devoted time and effort to apply automation and modern computer technologies towards improving the productivity of traditional industries. To be competitive, one must streamline operations and minimize production costs, while maintaining an acceptable margin of profit. This paper describes the effort of one such endeavor...

  19. Personnel Department Automation.

    ERIC Educational Resources Information Center

    Wilkinson, David

    In 1989, the Austin Independent School District's Office of Research and Evaluation was directed to monitor the automation of personnel information and processes in the district's Department of Personnel. Earlier, a study committee appointed by the Superintendent during the 1988-89 school year identified issues related to Personnel Department…

  20. Automated Accounting. Instructor Guide.

    ERIC Educational Resources Information Center

    Moses, Duane R.

    This curriculum guide was developed to assist business instructors using Dac Easy Accounting College Edition Version 2.0 software in their accounting programs. The module consists of four units containing assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting. The first…

  1. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  2. Validating Automated Speaking Tests

    ERIC Educational Resources Information Center

    Bernstein, Jared; Van Moere, Alistair; Cheng, Jian

    2010-01-01

    This paper presents evidence that supports the valid use of scores from fully automatic tests of spoken language ability to indicate a person's effectiveness in spoken communication. The paper reviews the constructs, scoring, and the concurrent validity evidence of "facility-in-L2" tests, a family of automated spoken language tests in Spanish,…

  3. Automated EEG acquisition

    NASA Technical Reports Server (NTRS)

    Frost, J. D., Jr.; Hillman, C. E., Jr.

    1977-01-01

    An automated, self-contained portable device can be used by technicians with minimal training. Data acquired from a patient at a remote site are transmitted to a centralized interpretation center using conventional telephone equipment. There, diagnostic information is analyzed, and the results are relayed back to the remote site.

  4. Automated Inadvertent Intruder Application

    SciTech Connect

    Koffman, Larry D.; Lee, Patricia L.; Cook, James R.; Wilhite, Elmer L.

    2008-01-15

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and

  5. Automating spectral measurements

    NASA Astrophysics Data System (ADS)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
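
    The ActiveX pattern described above can be exercised from any COM-capable client. The following minimal sketch (assuming Windows with the pywin32 package installed) drives Excel as an automation server in the same way a DAQ application could be driven or could itself drive Excel; the Excel COM calls shown are real, while the spectrophotometer ProgID and its method are hypothetical, since the paper does not name them.

```python
# Minimal COM automation sketch (Windows with the pywin32 package assumed).
# Excel is used only to illustrate the automation client/server pattern;
# "FTG.SpectroApp" below is a hypothetical ProgID, not a documented one.
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = True
wb = excel.Workbooks.Add()
ws = wb.Worksheets(1)

# A DAQ automation client could deposit scan results into the sheet:
ws.Cells(1, 1).Value = "Wavelength (nm)"
ws.Cells(1, 2).Value = "Transmittance (%)"

# Driving the spectrophotometer application would follow the same pattern:
# daq = win32com.client.Dispatch("FTG.SpectroApp")   # hypothetical ProgID
# daq.RunScan(400, 700)                              # hypothetical method
```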

  6. Automated optical assembly

    NASA Astrophysics Data System (ADS)

    Bala, John L.

    1995-08-01

    Automation and polymer science represent fundamental new technologies which can be directed toward realizing the goal of establishing a domestic, world-class, commercial optics business. The use of innovative optical designs based on precision polymer optics will enable the US to play a vital role in the next generation of commercial optical products. The cost savings inherent in the use of optical-grade polymers outweigh almost every advantage of using glass in high-volume situations. Optical designers must gain experience with combined refractive/diffractive designs and broaden their knowledge of polymer technology beyond a cursory intellectual exercise. Implementation of a fully automated assembly system, combined with the use of polymer optics, constitutes the type of integrated manufacturing process that will enable the US to compete successfully with the low-cost labor employed in the Far East, as well as to produce an equivalent product.

  7. Automated macromolecular crystallization screening

    DOEpatents

    Segelke, Brent W.; Rupp, Bernhard; Krupka, Heike I.

    2005-03-01

    An automated macromolecular crystallization screening system wherein a multiplicity of reagent mixes are produced. A multiplicity of analysis plates is produced utilizing the reagent mixes combined with a sample. The analysis plates are incubated to promote growth of crystals. Images of the crystals are made. The images are analyzed with regard to suitability of the crystals for analysis by x-ray crystallography. A design of reagent mixes is produced based upon the expected suitability of the crystals for analysis by x-ray crystallography. A second multiplicity of mixes of the reagent components is produced utilizing the design and a second multiplicity of reagent mixes is used for a second round of automated macromolecular crystallization screening. In one embodiment the multiplicity of reagent mixes are produced by a random selection of reagent components.
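
    The patent's two-round loop of random screening, automated image scoring, and redesign can be sketched as follows. This is a toy illustration, not the patented implementation: the component names, the scoring stand-in, and the 0.7 score cutoff are all assumptions.

```python
import random

# Candidate components; real screens draw from much larger chemical libraries.
COMPONENTS = ["PEG 4000", "ammonium sulfate", "sodium citrate",
              "MgCl2", "Tris pH 8.5", "HEPES pH 7.5"]

def random_screen(n_mixes, k=2):
    """Round 1: reagent mixes chosen by random selection of components."""
    return [random.sample(COMPONENTS, k) for _ in range(n_mixes)]

def score_crystals(mix):
    """Stand-in for the automated image analysis that rates crystal quality."""
    return random.random()

def design_round_two(scored, n_mixes, k=2):
    """Round 2: bias the design toward components seen in high-scoring mixes."""
    hits = [mix for mix, score in scored if score >= 0.7]
    pool = sorted({c for mix in hits for c in mix}) or COMPONENTS
    return [random.sample(pool, min(k, len(pool))) for _ in range(n_mixes)]

round1 = random_screen(96)
scored = [(mix, score_crystals(mix)) for mix in round1]
round2 = design_round_two(scored, 96)
print("round 2 example mix:", round2[0])
```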

  8. The automated command transmission

    NASA Astrophysics Data System (ADS)

    Inoue, Y.; Satoh, S.

    A technique for automated command transmission (ACT) to geostationary satellites is presented. The system is intended to ease the command center workload. The ACT system determines the relation of commands to on-board units, connects telemetry with on-board units, defines the control path on the spacecraft, identifies the correspondence of back-up units to primary units, and ascertains sunlight or eclipse conditions. The system also holds the addresses of the satellite and command decoders, the ID and content of the mission command sequence, group and inhibit codes, and a listing of all available commands, and it restricts the data to a valid command sequence. Telemetry supplies data for automated problem correction. All other mission operations are terminated during system recovery data processing after a crash. The ACT system is intended for use with the GMS spacecraft.

  9. Automated gas chromatography

    DOEpatents

    Mowry, C.D.; Blair, D.S.; Rodacy, P.J.; Reber, S.D.

    1999-07-13

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute. 7 figs.
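
    A monitoring loop of the kind the patent describes, in which each cycle diverts, vaporizes, separates, and detects, then reports concentrations to process control, might look like the sketch below. The compound readings, alarm threshold, and function names are illustrative placeholders, not the patented apparatus.

```python
import time

ALARM_PPM = 1.0   # the patent reports sub-ppm detection in under a minute

def sample_and_separate():
    """Stand-in for the divert -> vaporize -> GC-separate -> detect cycle."""
    return {"benzene": 0.4, "toluene": 1.6}   # illustrative readings in ppm

def monitor(cycles, poll_seconds=60):
    for _ in range(cycles):
        for compound, ppm in sample_and_separate().items():
            if ppm >= ALARM_PPM:
                print(f"notify process control: {compound} at {ppm:.2f} ppm")
        time.sleep(poll_seconds)

monitor(cycles=1, poll_seconds=0)
```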

  10. Automated Pollution Control

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Patterned after the Cassini Resource Exchange (CRE), Sholtz and Associates established the Automated Credit Exchange (ACE), an Internet-based system that automates the auctioning of "pollution credits" in Southern California. An early challenge of the Jet Propulsion Laboratory's Cassini mission was allocating the spacecraft's resources. The CRE was developed to support this decision-making process. The system removes the need for the science instrument manager to know the individual instruments' requirements for spacecraft resources. Instead, by utilizing principles of exchange, the CRE induces the instrument teams to reveal their requirements. In doing so, they arrive at an efficient allocation of spacecraft resources by trading among themselves. A Southern California RECLAIM air pollution credit trading market has been set up using the same bartering methods utilized in the Cassini mission, helping companies keep pollution and costs down.

  11. Automated assembly in space

    NASA Technical Reports Server (NTRS)

    Srivastava, Sandanand; Dwivedi, Suren N.; Soon, Toh Teck; Bandi, Reddy; Banerjee, Soumen; Hughes, Cecilia

    1989-01-01

    The installation of robots and their use for assembly in space will create an exciting and promising future for the U.S. Space Program. Assembly in space is complicated and error-prone, and it is not feasible unless the various parts and modules are suitably designed for automation. Guidelines are developed for part design and for easy precision assembly. Major design problems associated with automated assembly are considered, and solutions to these problems are evaluated in the guidelines format. Methods for gripping and for part feeding are developed with regard to the absence of gravity in space. Guidelines for part orientation, adjustments, compliances, and various assembly constructions are discussed. Design modifications of various fasteners and fastening methods are also investigated.

  12. Terminal automation system maintenance

    SciTech Connect

    Coffelt, D.; Hewitt, J.

    1997-01-01

    Nothing has improved petroleum product loading in recent years more than terminal automation systems. The presence of terminal automation systems (TAS) at loading racks has increased operational efficiency and safety and enhanced their accounting and management capabilities. However, like all finite systems, they occasionally malfunction or fail. Proper servicing and maintenance can minimize this. And in the unlikely event a TAS breakdown does occur, prompt and effective troubleshooting can reduce its impact on terminal productivity. To accommodate around-the-clock loading at racks, increasingly unattended by terminal personnel, TAS maintenance, servicing and troubleshooting has become increasingly demanding. It has also become increasingly important. After 15 years of trial and error at petroleum and petrochemical storage and transfer terminals, a number of successful troubleshooting programs have been developed. These include 24-hour "help hotlines," internal (terminal company) and external (supplier) support staff, and "layered" support. These programs are described.

  13. Automated campaign system

    NASA Astrophysics Data System (ADS)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases to acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, driving through production and fulfillment, and evaluating results is currently performed by experienced, highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user could easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted is how the complexity of running a targeted campaign is hidden from the user through technology, all while providing the benefits of a professionally managed campaign.

  14. Automated chemiluminescence immunoassay measurements

    NASA Astrophysics Data System (ADS)

    Khalil, Omar S.; Mattingly, G. P.; Genger, K.; Mackowiak, J.; Butler, J.; Pepe, C.; Zurek, T. F.; Abunimeh, N.

    1993-06-01

    Chemiluminescence (CL) detection offers potential for high-sensitivity immunoassays (CLIAs). Several approaches have been attempted to automate CL measurements, including the use of photographic film, clear microtitration plates, and magnetic separation. We describe a photon-counting detection apparatus that performs CLIA measurements. The CL detector moves toward a disposable reaction vessel to create a light-tight seal and then triggers and integrates a CL signal. Capture uses antibody-coated polystyrene microparticles. A porous matrix, which is part of a disposable reaction tray, entraps the microparticle-captured reaction product. The CL signal emanating from the immune complex immobilized by the porous matrix is detected. The detection system is part of a fully automated immunoassay analyzer. Methods of achieving high sensitivities are discussed.

  15. Automated Chromosome Breakage Assessment

    NASA Technical Reports Server (NTRS)

    Castleman, Kenneth

    1985-01-01

    An automated karyotyping machine was built at JPL in 1972. It does computerized karyotyping, but it has some hardware limitations. The image processing hardware that was available at a reasonable price in 1972 was marginal, at best, for this job. In the meantime, NASA has developed an interest in longer term spaceflights and an interest in using chromosome breakage studies as a dosimeter for radiation or perhaps other damage that might occur to the tissues. This uses circulating lymphocytes as a physiological dosimeter looking for chromosome breakage on long-term spaceflights. For that reason, we have reactivated the automated karyotyping work at JPL. An update on that work, and a description of where it appears to be headed is presented.

  17. The automation of science.

    PubMed

    King, Ross D; Rowland, Jem; Oliver, Stephen G; Young, Michael; Aubrey, Wayne; Byrne, Emma; Liakata, Maria; Markham, Magdalena; Pir, Pinar; Soldatova, Larisa N; Sparkes, Andrew; Whelan, Kenneth E; Clare, Amanda

    2009-04-03

    The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist "Adam," which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge.
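
    Adam's closed loop of hypothesis generation, experiment selection, and automated testing can be caricatured in a few lines. This sketch only illustrates the hypothetico-deductive cycle the abstract describes, not Adam's actual logic; the gene names, growth predictions, and assay are invented.

```python
import random

# Each hypothesis predicts whether a knockout strain grows on the test medium.
hypotheses = {
    "gene G1 encodes the enzyme": {"G1": False, "G2": True, "G3": True},
    "gene G2 encodes the enzyme": {"G1": True, "G2": False, "G3": True},
    "gene G3 encodes the enzyme": {"G1": True, "G2": True, "G3": False},
}

def run_experiment(knockout):
    """Stand-in for the robotic assay; in this toy world G2 is the true gene."""
    return knockout != "G2"   # the strain grows unless the true gene is deleted

candidates = dict(hypotheses)
while len(candidates) > 1:
    knockout = random.choice(["G1", "G2", "G3"])     # choose an experiment
    grew = run_experiment(knockout)                  # run it via lab automation
    # Deduction: discard hypotheses whose prediction contradicts the result.
    candidates = {h: p for h, p in candidates.items() if p[knockout] == grew}
print("surviving hypothesis:", list(candidates)[0])
```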

  18. Automated Assembly Center (AAC)

    NASA Technical Reports Server (NTRS)

    Stauffer, Robert J.

    1993-01-01

    The objectives of this project are as follows: to integrate advanced assembly and assembly support technology under a comprehensive architecture; to implement automated assembly technologies in the production of high-visibility DOD weapon systems; and to document the improved cost, quality, and lead time. This will enhance the production of DOD weapon systems by utilizing the latest commercially available technologies combined into a flexible system that will be able to readily incorporate new technologies as they emerge. Automated assembly encompasses the following areas: product data, process planning, information management policies and framework, three schema architecture, open systems communications, intelligent robots, flexible multi-ability end effectors, knowledge-based/expert systems, intelligent workstations, intelligent sensor systems, and PDES/PDDI data standards.

  19. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  20. Automated Microbial Metabolism Laboratory

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Development of the automated microbial metabolism laboratory (AMML) concept is reported. The focus of effort of AMML was on the advanced labeled release experiment. Labeled substrates, inhibitors, and temperatures were investigated to establish a comparative biochemical profile. Profiles at three time intervals on soil and pure cultures of bacteria isolated from soil were prepared to establish a complete library. The development of a strategy for the return of a soil sample from Mars is also reported.

  1. Automated RTOP Management System

    NASA Technical Reports Server (NTRS)

    Hayes, P.

    1984-01-01

    The structure of NASA's Office of Aeronautics and Space Technology electronic information system network from 1983 to 1985 is illustrated. The RTOP automated system takes advantage of existing hardware, software, and expertise, and provides: (1) computerized cover sheet and resources forms; (2) electronic signature and transmission; (3) a data-based information system; (4) graphics; (5) intercenter communications; (6) management information; and (7) text editing. The system is coordinated with Headquarters efforts in codes R, E, and T.

  2. Automated RSO Stability Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, T.

    2016-09-01

    A methodology for assessing the attitude stability of a Resident Space Object (RSO) using visual magnitude data is presented and then scaled to run in an automated fashion across the entire satellite catalog. Results obtained by applying the methodology to the Commercial Space Operations Center (COMSpOC) catalog are presented and summarized, identifying objects whose stability has changed. We also examine the timeline for detecting the transition from stable to unstable attitude.
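
    One plausible way to turn visual magnitude data into a stability flag, consistent with the idea in the abstract though not necessarily the paper's actual method, is to look for strong periodicity in the light curve: a tumbling object's reflected brightness oscillates as it rotates, while a stable object's magnitude residuals look like noise.

```python
import numpy as np

def periodicity_ratio(magnitudes):
    """Fraction of light-curve power in the strongest non-DC frequency bin.
    Values near 1 suggest periodic brightness variation, i.e. tumbling."""
    resid = np.asarray(magnitudes, float)
    resid = resid - resid.mean()
    power = np.abs(np.fft.rfft(resid)) ** 2
    return power[1:].max() / (power[1:].sum() + 1e-12)

t = np.linspace(0.0, 600.0, 512)      # ten minutes of synthetic observations
rng = np.random.default_rng(0)
stable = 8.0 + 0.05 * rng.standard_normal(t.size)
tumbling = (8.0 + 0.8 * np.sin(2 * np.pi * t / 45.0)
            + 0.05 * rng.standard_normal(t.size))
print(f"stable object:   {periodicity_ratio(stable):.2f}")    # low ratio
print(f"tumbling object: {periodicity_ratio(tumbling):.2f}")  # much higher
```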

  3. Automated Nitrocellulose Analysis

    DTIC Science & Technology

    1978-12-01

    is acceptable. (4) As would be expected from the theory of osmosis, a high saline content in the dialysis recipient stream (countersolution) is of... Keywords: analysis; automated analysis; dialysis; glyceryl... The automated method, based on a Technicon AutoAnalyzer, involves aspiration of a stirred nitrocellulose suspension, dialysis against 9 percent saline, and hydrolysis with 5N sodium

  4. Cavendish Balance Automation

    NASA Technical Reports Server (NTRS)

    Thompson, Bryan

    2000-01-01

    This is the final report for a project carried out to modify a manual commercial Cavendish balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf, manually operated Cavendish balance to allow automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of the effects of superconducting materials on the local gravitational field strength, to determine if the strength of gravitational fields can be reduced. A Cavendish balance was chosen because it is a fairly simple piece of equipment for measuring the gravitational constant, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) all the components necessary to hold and automate the Cavendish balance in a cryostat were designed; engineering drawings were made of custom parts to be fabricated, and other off-the-shelf parts were procured; (2) software was written in LabView to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing; (3) software was written to take the data collected from the Cavendish balance and reduce it to give a value for the gravitational constant; (4) the components of the system were assembled and fitted to a cryostat, along with the LabView hardware including the control computer, stepper motor driver, data collection boards, and necessary cabling; and (5) the system was operated for a number of periods, and the data were collected and reduced to give an average value for the gravitational constant.

  5. Automated Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Pahle, Joseph; Brown, Nelson

    2015-01-01

    This presentation is an overview of the Automated Cooperative Trajectories project. An introduction to the phenomena of wake vortices is given, along with a summary of past research into the possibility of extracting energy from the wake by flying close parallel trajectories. Challenges and barriers to adoption of civilian automatic wake surfing technology are identified. A hardware-in-the-loop simulation is described that will support future research. Finally, a roadmap for future research and technology transition is proposed.

  6. Automation in biological crystallization.

    PubMed

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  7. Automation in biological crystallization

    PubMed Central

    Shaw Stewart, Patrick; Mueller-Dieckmann, Jochen

    2014-01-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given. PMID:24915074

  8. Autonomy, Automation, and Systems

    NASA Astrophysics Data System (ADS)

    Turner, Philip R.

    1987-02-01

    Aerospace industry interest in autonomy and automation, given fresh impetus by the national goal of establishing a Space Station, is becoming a major item of research and technology development. The promise of new technology arising from research in Artificial Intelligence (AI) has focused much attention on its potential in autonomy and automation. These technologies can improve performance in autonomous control functions that involve planning, scheduling, and fault diagnosis of complex systems. There are, however, many aspects of system and subsystem design in an autonomous system that impact AI applications, but do not directly involve AI technology. Development of a system control architecture, establishment of an operating system within the design, providing command and sensory data collection features appropriate to automated operation, and the use of design analysis tools to support system engineering are specific examples of major design issues. Aspects such as these must also receive attention and technology development support if we are to implement complex autonomous systems within the realistic limitations of mass, power, cost, and available flight-qualified technology that are all-important to a flight project.

  9. Automating existing stations

    SciTech Connect

    Little, J.E.

    1986-09-01

    The task was to automate 20 major compressor stations along ANR Pipeline Co.'s Southeastern and Southwestern pipelines in as many months. Meeting this schedule required standardized hardware and software design. Working with Bristol Babcock Co., ANR came up with an off-the-shelf station automation package suitable for a variety of compressor stations. The project involved 148 engines totaling 488,880 hp in the 20 stations. ANR Pipeline developed software for these engines and compressors, including horsepower prediction and efficiency. The system places processor "intelligence" at each station and engine to monitor and control operations. The station processor receives commands from the company's gas dispatch center at Detroit and informs dispatchers of alarms, conditions, and the decisions it makes. The automation system is controlled by the Detroit center through a central communications network. Operating orders from the center are sent to the station processor, which obeys them using the most efficient means of operation at the station's disposal. In a malfunction, a control and communications backup system takes over. Commands and information are transmitted directly between the center and the individual compressor stations. Stations receive their orders based on throughput, with suction and discharge pressure overrides. Additionally, a discharge temperature override protects pipeline coatings.
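
    The dispatch rule sketched below illustrates throughput orders with suction and discharge pressure overrides. The pressure limits, backoff factor, and names are assumptions for illustration, not ANR's actual control logic.

```python
# Station control rule sketch: follow the dispatched throughput order unless
# a suction or discharge pressure override demands backing off.
MIN_SUCTION_PSI = 550.0      # assumed protection limit
MAX_DISCHARGE_PSI = 950.0    # assumed protection limit

def station_setpoint(ordered_throughput, suction_psi, discharge_psi, current):
    if suction_psi < MIN_SUCTION_PSI:
        return min(current, ordered_throughput) * 0.95   # protect suction side
    if discharge_psi > MAX_DISCHARGE_PSI:
        return min(current, ordered_throughput) * 0.95   # high-discharge backoff
    return ordered_throughput                            # follow dispatch order

# Low suction pressure forces a 5% backoff despite the dispatched order:
print(station_setpoint(100.0, suction_psi=540.0, discharge_psi=900.0,
                       current=100.0))
```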

  10. Automation of optical tweezers

    NASA Astrophysics Data System (ADS)

    Hsieh, Tseng-Ming; Chang, Bo-Jui; Hsu, Long

    2000-07-01

    Optical tweezers are a newly developed instrument that makes possible the manipulation of microscopic particles under a microscope. In this paper, we present the automation of an optical tweezers system consisting of a modified optical tweezers, equipped with two motorized actuators to deflect a 1 W argon laser beam, and a computer control system including a joystick. The trapping of a single bead and of a group of Lactobacillus acidophilus is shown separately. With the aid of the joystick and two auxiliary cursors superimposed on the real-time image of a trapped bead, we demonstrate the simple and convenient operation of the automated optical tweezers. By steering the joystick and then pressing a button on it, we assign a new location for the trapped bead to move to. The increment of the motion, 0.04 µm for a 20X objective, is negligible. With a fast computer for image processing, the manipulation of the trapped bead is smooth and accurate. The automation of the optical tweezers is also programmable. This technique may be applied to accelerate DNA hybridization in a gene chip. The combination of the modified optical tweezers with the computer control system provides a tool for the precise manipulation of microparticles in many scientific fields.
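
    A joystick-to-actuator mapping consistent with the description above might look like the following sketch. The 0.04 µm step increment for a 20X objective comes from the abstract; the dead-band and steps-per-update limit are assumptions.

```python
STEP_UM = 0.04     # trap displacement per actuator step at 20X (from abstract)
DEADBAND = 0.1     # ignore small joystick noise (assumed)

def joystick_to_steps(jx, jy, max_steps_per_update=50):
    """Map normalized joystick deflection in [-1, 1] to actuator step counts."""
    def axis(v):
        if abs(v) < DEADBAND:
            return 0
        return int(round(v * max_steps_per_update))
    return axis(jx), axis(jy)

sx, sy = joystick_to_steps(0.5, -0.8)
print(f"move trap by ({sx * STEP_UM:.2f}, {sy * STEP_UM:.2f}) um "
      f"-> {sx}, {sy} actuator steps")
```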

  11. Janice VanCleave's Electricity: Mind-Boggling Experiments You Can Turn into Science Fair Projects.

    ERIC Educational Resources Information Center

    VanCleave, Janice

    This book is designed to provide guidance and ideas for science projects to help students learn more about science as they search for answers to specific problems. The 20 topics on electricity in this book suggest many possible problems to solve. Each topic has one detailed experiment followed by a section that provides additional questions about…

  13. AUTOMATED INADVERTENT INTRUDER APPLICATION

    SciTech Connect

    Koffman, L; Patricia Lee, P; Jim Cook, J; Elmer Wilhite, E

    2007-05-29

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and

  14. A semi-automated measuring system of brain diffusion and perfusion magnetic resonance imaging abnormalities in patients with multiple sclerosis based on the integration of coregistration and tissue segmentation procedures.

    PubMed

    Revenaz, Alfredo; Ruggeri, Massimiliano; Laganà, Marcella; Bergsland, Niels; Groppo, Elisabetta; Rovaris, Marco; Fainardi, Enrico

    2016-01-14

    Diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) abnormalities in patients with multiple sclerosis (MS) are currently measured by a complex combination of separate procedures. The purpose of this study was therefore to provide a reliable method for reducing analysis complexity and obtaining reproducible results. We implemented a semi-automated measuring system in which different well-known software components for magnetic resonance imaging (MRI) analysis are integrated to obtain reliable measurements of DWI and PWI disturbances in MS. We generated the Diffusion/Perfusion Project (DPP) Suite, in which a series of external software programs are managed and hierarchically integrated by in-house Matlab software to perform the following processes: 1) image pre-processing, including imaging data anonymization and conversion from DICOM to Nifti format; 2) co-registration of 2D and 3D non-enhanced and Gd-enhanced T1-weighted images in fluid-attenuated inversion recovery (FLAIR) space; 3) lesion segmentation and classification, in which FLAIR lesions are first segmented and then categorized according to their presumed evolution; 4) co-registration of the segmented FLAIR lesions in T1 space to obtain the FLAIR lesion mask in T1 space; 5) normal-appearing tissue segmentation, in which the T1 lesion mask is used to segment basal ganglia/thalami, normal-appearing grey matter (NAGM) and normal-appearing white matter (NAWM); 6) DWI and PWI map generation; 7) co-registration of basal ganglia/thalami, NAGM, NAWM, DWI and PWI maps in the previously segmented FLAIR space; 8) data analysis. All these steps are automatic, except for lesion segmentation and classification. We developed a promising method to limit misclassifications and user errors, providing clinical researchers with a practical and reproducible tool to measure DWI and PWI changes in MS.
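
    The eight-step flow could be orchestrated roughly as below. Every function is a hypothetical stand-in for one of the external tools the Matlab code wraps; only the ordering of the steps is taken from the abstract.

```python
def _tool(name):
    """Hypothetical stand-in for one of the wrapped external programs."""
    def step(*inputs):
        print(f"[DPP] {name}")
        return f"<{name} output>"
    return step

preprocess      = _tool("anonymize + DICOM->Nifti conversion")        # step 1
coregister_t1   = _tool("co-register T1 images into FLAIR space")     # step 2
segment_lesions = _tool("segment + classify FLAIR lesions (manual)")  # step 3
mask_to_t1      = _tool("co-register FLAIR lesion mask into T1")      # step 4
segment_tissue  = _tool("segment basal ganglia/thalami, NAGM, NAWM")  # step 5
make_maps       = _tool("generate DWI and PWI maps")                  # step 6
maps_to_flair   = _tool("co-register all maps into FLAIR space")      # step 7
analyze         = _tool("data analysis")                              # step 8

images = preprocess("raw DICOM directory")
t1     = coregister_t1(images)
mask   = mask_to_t1(segment_lesions(images))
tissue = segment_tissue(t1, mask)
maps   = make_maps(images)
analyze(maps_to_flair(tissue, maps))
```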

  15. Automated Proactive Fault Isolation: A Key to Automated Commissioning

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2007-07-31

    In this paper, we present a generic model for automated continuous commissioning and then examine in detail one of its processes, proactive testing for fault isolation, which is key to automating commissioning. The automated commissioning process uses passive observation-based fault detection and diagnostic techniques, followed by automated proactive testing for fault isolation, automated fault evaluation, and automated reconfiguration of controls, together keeping equipment continuously controlled and running as intended. Only when hard failures occur or a physical replacement is required does the process require human intervention, and then sufficient information is provided by the automated commissioning system to target manual maintenance where it is needed. We then focus on fault isolation by presenting detailed logic that can be used to automatically isolate faults in valves, a common component in HVAC systems, as an example of how automated proactive fault isolation can be accomplished. We conclude the paper with a discussion of how this approach to isolating faults can be applied to other common HVAC components and their automated commissioning, and a summary of the paper's key conclusions.
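
    As a sketch of what proactive testing for valve fault isolation can look like, the following drives a valve through test positions and compares the observed coil response against expectations. The thresholds, sensor choice, and fault labels are assumptions for illustration, not the paper's published logic.

```python
def proactive_valve_test(command_valve, read_coil_delta_t):
    """Stroke the valve through test positions and compare observed effects."""
    observations = []
    for position in (0.0, 0.5, 1.0):             # closed, half open, fully open
        command_valve(position)                   # proactive test command
        observations.append(read_coil_delta_t())  # coil air-side delta-T, deg C
    closed_dt, _, open_dt = observations
    if abs(open_dt - closed_dt) < 1.0:    # stroking the valve had no effect
        return "fault isolated: valve stuck or actuator failed"
    if closed_dt > 2.0:                   # heat transfer while commanded closed
        return "fault isolated: valve leaking through"
    return "valve responds normally; fault lies elsewhere"

# Toy sensors: the valve is stuck, so delta-T never changes with position.
print(proactive_valve_test(lambda pos: None, lambda: 2.5))
```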

  16. Semiautomated volumetry of the cerebrum, cerebellum-brain stem, and temporal lobe on brain magnetic resonance images.

    PubMed

    Hayashi, Norio; Sanada, Shigeru; Suzuki, Masayuki; Matsuura, Yukihiro; Kawahara, Kazuhiro; Tsujii, Hideo; Yamamoto, Tomoyuki; Matsui, Osamu

    2008-02-01

    The aim of this study was to develop an automated method of segmenting the cerebrum, cerebellum-brain stem, and temporal lobe simultaneously on magnetic resonance (MR) images. We obtained T1-weighted MR images from 10 normal subjects and 19 patients with brain atrophy. To perform automated volumetry from MR images, we performed the following three steps: (1) segmentation of the brain region; (2) separation between the cerebrum and the cerebellum-brain stem; and (3) segmentation of the temporal lobe. Evaluation was based on the correctly recognized region (CRR) (i.e., the region recognized by both the automated and manual methods). The mean CRRs of the normal and atrophic brains were 98.2% and 97.9% for the cerebrum, 87.9% and 88.5% for the cerebellum-brain stem, and 76.9% and 85.8% for the temporal lobe, respectively. We introduce an automated volumetric method for the cerebrum, cerebellum-brain stem, and temporal lobe on brain MR images. Our method can be applied to not only the normal brain but also the atrophic brain.
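
    The CRR evaluation can be computed directly from binary masks. In this sketch the CRR is expressed as the percentage of the manually segmented region that the automated method also recognized; the abstract does not state the normalization, so that choice is an assumption.

```python
import numpy as np

def correctly_recognized_region(auto_mask, manual_mask):
    """CRR sketch: percentage of the manual region that the automated
    segmentation also recognized (normalization assumed, not stated)."""
    auto = np.asarray(auto_mask, bool)
    manual = np.asarray(manual_mask, bool)
    return 100.0 * (auto & manual).sum() / manual.sum()

manual = np.zeros((64, 64), bool)
manual[16:48, 16:48] = True        # hand-traced cerebrum on one slice
auto = np.zeros((64, 64), bool)
auto[18:48, 16:48] = True          # automated result, slightly undersized
print(f"CRR = {correctly_recognized_region(auto, manual):.1f}%")
```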

  17. Differentiation of sCJD and vCJD forms by automated analysis of basal ganglia intensity distribution in multisequence MRI of the brain--definition and evaluation of new MRI-based ratios.

    PubMed

    Linguraru, Marius George; Ayache, Nicholas; Bardinet, Eric; Ballester, Miguel Angel González; Galanaud, Damien; Haïk, Stéphane; Faucheux, Baptiste; Hauw, Jean-Jacques; Cozzone, Patrick; Dormont, Didier; Brandel, Jean-Philippe

    2006-08-01

    We present a method for the analysis of basal ganglia (including the thalamus) for accurate detection of human spongiform encephalopathy in multisequence magnetic resonance imaging (MRI) of the brain. One common feature of most forms of prion protein diseases is the appearance of hyperintensities in the deep grey matter area of the brain in T2-weighted magnetic resonance (MR) images. We employ T1, T2, and Flair-T2 MR sequences for the detection of intensity deviations in the internal nuclei. First, the MR data are registered to a probabilistic atlas and normalized in intensity. Then smoothing is applied with edge enhancement. The segmentation of hyperintensities is performed using a model of the human visual system. For more accurate results, a priori anatomical data from a segmented atlas are employed to refine the registration and remove false positives. The results are robust over the patient data and in accordance with the clinical ground truth. Our method further allows the quantification of intensity distributions in basal ganglia. The caudate nuclei are highlighted as main areas of diagnosis of sporadic Creutzfeldt-Jakob Disease (sCJD), in agreement with the histological data. The algorithm permitted the classification of the intensities of abnormal signals in sCJD patient FLAIR images with a higher hypersignal in caudate nuclei (10/10) and putamen (6/10) than in thalami. Defining normalized MRI measures of the intensity relations between the internal grey nuclei of patients, we robustly differentiate sCJD and variant CJD (vCJD) patients, in an attempt to create an automatic classification tool of human spongiform encephalopathies.
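
    The normalized intensity-ratio idea can be illustrated as follows: the mean intensity in each nucleus is divided by the thalamic mean, and a ratio well above 1 flags a hypersignal. The 1.05 cutoff and the synthetic data are assumptions; the paper defines its own MRI-based ratios.

```python
import numpy as np

def nucleus_ratios(volume, masks):
    """Mean intensity of each nucleus relative to the thalamus."""
    thal = volume[masks["thalamus"]].mean()
    return {name: volume[mask].mean() / thal
            for name, mask in masks.items() if name != "thalamus"}

rng = np.random.default_rng(0)
flair = rng.normal(100.0, 5.0, size=(32, 32, 32))   # toy normalized volume
masks = {name: np.zeros(flair.shape, bool)
         for name in ("caudate", "putamen", "thalamus")}
masks["caudate"][8:12, 8:12, 8:12] = True
masks["putamen"][20:24, 8:12, 8:12] = True
masks["thalamus"][14:18, 20:24, 14:18] = True
flair[masks["caudate"]] += 12.0        # simulate an sCJD-like caudate signal

for name, r in nucleus_ratios(flair, masks).items():
    flag = " -> hyperintense" if r > 1.05 else ""   # assumed cutoff
    print(f"{name}/thalamus = {r:.3f}{flag}")
```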

  18. Automation in organizations: Eternal conflict

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  19. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    NASA Astrophysics Data System (ADS)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain extraction (classification of brain and non-brain tissue) in MRI brain images is a crucial pre-processing step for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground-truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) with default parameters to 0.9509 (0.0504) in the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in the definition of the brain mask.
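
    A grid search over extraction parameters against the stated objective can be sketched as follows. The parameter names, the fake extraction routine, and the surface-voxel approximation are assumptions; only the surface-area-to-volume objective is taken from the abstract.

```python
import numpy as np
from itertools import product

def cost(mask):
    """Surrogate objective: surface-area-to-volume ratio of a binary mask,
    with surface voxels approximated as mask voxels having a non-mask
    6-neighbor."""
    m = np.asarray(mask, bool)
    pad = np.pad(m, 1)
    interior = (pad[2:, 1:-1, 1:-1] & pad[:-2, 1:-1, 1:-1] &
                pad[1:-1, 2:, 1:-1] & pad[1:-1, :-2, 1:-1] &
                pad[1:-1, 1:-1, 2:] & pad[1:-1, 1:-1, :-2]) & m
    return (m & ~interior).sum() / max(m.sum(), 1)

def optimize(extract, grid):
    """Pick the parameter combination whose extraction maximizes the cost."""
    best = max(product(*grid.values()),
               key=lambda values: cost(extract(dict(zip(grid, values)))))
    return dict(zip(grid, best))

def fake_bse(params):
    """Stand-in for BSE: a ball whose radius shrinks as erosion increases."""
    r = 10 - 2 * params["erosion"]
    z, y, x = np.ogrid[-16:16, -16:16, -16:16]
    return x**2 + y**2 + z**2 <= r**2

grid = {"edge_sigma": [0.6, 0.75], "erosion": [1, 2]}   # assumed names/values
print("selected parameters:", optimize(fake_bse, grid))
```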

  20. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-14

    ... Automated Commercial Environment (ACE) Simplified Entry: Modification of Participant Selection Criteria and... (NCAP) test concerning the simplified entry functionality in the Automated Commercial Environment (ACE...) National Customs Automation Program (NCAP) test concerning Automated Commercial Environment (ACE...

  1. White matter hyperintensities segmentation: a new semi-automated method.

    PubMed

    Iorio, Mariangela; Spalletta, Gianfranco; Chiapponi, Chiara; Luccichenti, Giacomo; Cacciari, Claudia; Orfei, Maria D; Caltagirone, Carlo; Piras, Fabrizio

    2013-01-01

    White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of FLAIR images. Thirty patients with mild cognitive impairment and variable WMH load were enrolled. The semi-automated WMH segmentation included removal of non-brain tissue, spatial normalization, removal of the cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of a WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t-tests and similarity was evaluated using a linear regression model and the Dice similarity coefficient (DSC). The volumes of the manual and semi-automated segmentations did not differ statistically (t-value = -1.79, DF = 29, p = 0.839 for rater 1; t-value = 1.113, DF = 29, p = 0.2749 for rater 2), were highly correlated (R² = 0.921, F(1,29) = 155.54, p < 0.0001 for rater 1; R² = 0.935, F(1,29) = 402.709, p < 0.0001 for rater 2), and showed a very strong spatial similarity (mean DSC = 0.78 for rater 1 and 0.77 for rater 2). In conclusion, our semi-automated method to measure the load of WMH is highly reliable and could represent a good tool that could be easily implemented in routine neuroimaging analyses to map the clinical consequences of WMH.
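
    The histogram-based segmentation plus DSC evaluation can be sketched in a few lines. The z-score threshold below is an assumption standing in for the paper's histogram segmentation, and the data are synthetic.

```python
import numpy as np

def segment_wmh(flair, z=2.5):
    """Flag voxels in the upper tail of the in-brain FLAIR intensity
    histogram as probable WMH (the z cutoff is assumed, not the paper's)."""
    brain = flair[flair > 0]
    return flair > brain.mean() + z * brain.std()

def dsc(a, b):
    """Dice similarity coefficient between two binary segmentations."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(1)
flair = np.abs(rng.normal(100.0, 10.0, size=(64, 64)))
flair[20:24, 20:30] = 160.0          # synthetic hyperintense lesion
auto = segment_wmh(flair)
manual = np.zeros_like(auto)
manual[20:24, 20:30] = True          # rater's manual lesion tracing
print(f"DSC = {dsc(auto, manual):.2f}")
```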

  2. World-wide distribution automation systems

    SciTech Connect

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include distribution management systems; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; and computer modeling of the system.

  3. Automating CPM-GOMS

    NASA Technical Reports Server (NTRS)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2002-01-01

    CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the approaches available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail depending on the predictions required. Although GOMS has proven useful in HCI, tools to support the
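
    The PERT-chart view of interleaved operators amounts to a longest-path computation over the operator dependency graph, as in this sketch. The operators, their durations, and the dependencies are illustrative, not Apex's internals.

```python
# Illustrative CPM-style schedule: "move-mouse" and "saccade" proceed in
# parallel after "cognize"; "click" waits for both to finish.
durations = {"perceive": 100, "cognize": 50, "move-mouse": 300,
             "saccade": 30, "click": 100}                      # milliseconds
depends_on = {"cognize": ["perceive"],
              "move-mouse": ["cognize"],
              "saccade": ["cognize"],
              "click": ["move-mouse", "saccade"]}

def finish_times(durations, depends_on):
    """Earliest finish time per operator = longest dependency path into it."""
    memo = {}
    def finish(op):
        if op not in memo:
            start = max((finish(d) for d in depends_on.get(op, [])), default=0)
            memo[op] = start + durations[op]
        return memo[op]
    return {op: finish(op) for op in durations}

times = finish_times(durations, depends_on)
print("operator finish times (ms):", times)
print("predicted task time (ms):", max(times.values()))
```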

  5. Automation for optics manufacturing

    NASA Astrophysics Data System (ADS)

    Pollicove, Harvey M.; Moore, Duncan T.

    1990-11-01

    The optics industry has not followed the lead of the machining and electronics industries in applying advances in computer-aided engineering (CAE), computer-assisted manufacturing (CAM), automation, or quality management techniques. Automation based on computer-integrated manufacturing (CIM) and flexible machining systems (FMS) has been widely implemented in those industries. Optics continues to rely on standalone equipment that preserves the highly skilled, labor-intensive optical fabrication systems developed in the 1940s. This paper describes development initiatives at the Center for Optics Manufacturing that will create computer-integrated manufacturing technology and support processes for the optical industry.

  6. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    2000-01-01

    An automated propellant blending apparatus and method that uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation is discussed. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.
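
    The control idea lends itself to a short sketch: interpolate the empirically established precipitation-versus-ratio profile and slow the countersolvent feed near the cloud point. The profile numbers, cloud-point ratio, and feed rates below are invented for illustration; the patented system's actual parameters are established empirically for each composition.

    ```python
    import numpy as np

    # Empirical profile: fraction of binder precipitated vs. the
    # countersolvent-to-solvent ratio (illustrative numbers only).
    ratio_pts = np.array([0.0, 0.2, 0.4, 0.5, 0.6, 0.8])
    precip_pts = np.array([0.0, 0.0, 0.05, 0.30, 0.80, 1.0])
    CLOUD_POINT = 0.45   # ratio at which precipitation begins (assumed)

    def feed_rate(ratio, fast=10.0, slow=0.5):
        """Countersolvent feed rate (mL/min): fast far from the cloud
        point, slow near it so binder precipitation stays controlled."""
        frac = float(np.interp(ratio, ratio_pts, precip_pts))
        near_cloud = abs(ratio - CLOUD_POINT) < 0.10 or 0.01 < frac < 0.95
        return slow if near_cloud else fast

    ratio, solvent_vol, added = 0.0, 1000.0, 0.0
    dt = 0.1                                  # minutes per control step
    while ratio < 0.8:                        # run to the target end ratio
        added += feed_rate(ratio) * dt
        ratio = added / solvent_vol
    print("countersolvent added: %.0f mL" % added)
    ```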

  7. [Automated anesthesia record system].

    PubMed

    Zhu, Tao; Liu, Jin

    2005-12-01

    Based on a client/server architecture, automated anesthesia record system software running under the Windows operating system on a network has been developed and programmed with Microsoft Visual C++ 6.0, Visual Basic 6.0, and SQL Server. The system handles the patient's information throughout anesthesia. It collects and integrates data from several kinds of medical equipment, such as monitors, infusion pumps, and anesthesia machines, automatically and in real time. The system then generates the anesthesia sheets automatically. The record system makes the anesthesia record more accurate and complete and can raise the anesthesiologist's working efficiency.

  8. Automated fiber pigtailing machine

    DOEpatents

    Strand, Oliver T.; Lowry, Mark E.

    1999-01-01

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems.

  9. Automated fiber pigtailing machine

    DOEpatents

    Strand, O.T.; Lowry, M.E.

    1999-01-05

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems. 26 figs.

  10. Automated wire preparation system

    NASA Astrophysics Data System (ADS)

    McCulley, Deborah J.

    The first step toward an automated wire harness facility for the aerospace industry has been taken by introducing the Wire Vektor 2000 into the wire harness preparation area. An overview of the Wire Vektor 2000 is given, including the facilities for wire cutting, marking, and transporting, for wire end processing, and for system control. Production integration of the Wire Vektor 2000 system is addressed, considering the hardware/software debug system and the system throughput. The manufacturing changes that have to be made in implementing the Wire Vektor 2000 are discussed.

  11. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    1999-01-01

    An automated propellant blending apparatus and method uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.

  12. Brain abscess

    MedlinePlus

    ... with certain heart disorders, may receive antibiotics before dental or other procedures to help reduce the risk of infection. Alternative Names Abscess - brain; Cerebral abscess; CNS abscess Patient Instructions Brain surgery - discharge Images Amebic brain abscess ...

  13. Brain components

    MedlinePlus Videos and Cool Tools

    The brain is composed of more than a thousand billion neurons. Specific groups of them, working in concert, provide ... of information. The 3 major components of the brain are the cerebrum, cerebellum, and brain stem. The ...

  14. Brain surgery

    MedlinePlus

    Craniotomy; Surgery - brain; Neurosurgery; Craniectomy; Stereotactic craniotomy; Stereotactic brain biopsy; Endoscopic craniotomy ... cut depends on where the problem in the brain is located. The surgeon creates a hole in ...

  15. Brain Malformations

    MedlinePlus

    Most brain malformations begin long before a baby is born. Something damages the developing nervous system or causes it ... medicines, infections, or radiation during pregnancy interferes with brain development. Parts of the brain may be missing, ...

  16. AUTOMATION FOR THE SYNTHESIS AND APPLICATION OF PET RADIOPHARMACEUTICALS.

    SciTech Connect

    Alexoff, D.L.

    2001-09-21

    The development of automated systems supporting the production and application of PET radiopharmaceuticals has been an important focus of researchers since the first successes of using carbon-11 (Comar et al., 1979) and fluorine-18 (Reivich et al., 1979) labeled compounds to visualize functional activity of the human brain. These initial successes in imaging the human brain soon led to applications in the human heart (Schelbert et al., 1980), and radiochemists quickly began to see the importance of automation to support PET studies in humans (Lambrecht, 1982; Langstrom et al., 1983). Driven by the necessity of controlling processes that emit high fluxes of 511 keV photons, and by the tedium of repetitive syntheses for carrying out these human PET investigations, academic and government scientists have designed, developed, and tested many useful and novel automated systems in the past twenty years. These systems, originally designed primarily by radiochemists, not only carry out effectively the tasks they were designed for, but also demonstrate significant engineering innovation in the field of laboratory automation.

  17. Automated System Marketplace 1995: The Changing Face of Automation.

    ERIC Educational Resources Information Center

    Barry, Jeff; And Others

    1995-01-01

    Discusses trends in the automated system marketplace with specific attention to online vendors and their customers: academic, public, school, and special libraries. Presents vendor profiles; tables and charts on computer systems and sales; and sidebars that include a vendor source list and the differing views on procuring an automated library…

  18. Automated Gamma Knife dose planning

    NASA Astrophysics Data System (ADS)

    Leichtman, Gregg S.; Aita, Anthony L.; Goldman, H. W.

    1998-06-01

    The Gamma Knife (Elekta Instruments, Inc., Atlanta, GA), a neurosurgical, highly focused radiation delivery device, is used to eradicate deep-seated anomalous tissue within the human brain by delivering a lethal dose of radiation to target tissue. This dose is the accumulated result of delivering sequential 'shots' of radiation to the target, where each shot is approximately 3D Gaussian in shape. The size and intensity of each shot can be adjusted by varying the time of radiation exposure and by using one of four collimator sizes ranging from 4-18 mm. Current dose planning requires that the dose plan be developed manually to cover the target, and only the target, with a desired minimum radiation intensity using a minimum number of shots. This is a laborious and subjective process which typically leads to suboptimal conformal target coverage by the dose. We have used adaptive simulated annealing/quenching followed by Nelder-Mead simplex optimization to automate the selection and placement of Gaussian-based 'shots' to form a simulated dose plan. To make the computation of the problem tractable, the algorithm, based upon contouring and polygon clipping, takes a 2 1/2-D approach to defining the cost function. Several experiments have been performed in which the optimizers were given the freedom to vary the number of shots and the weight, collimator size, and 3D location of each shot. To date, the best results have been obtained by forcing the optimizers to use a fixed number of unweighted shots, with each optimizer free to vary the 3D location and collimator size of each shot. Our preliminary results indicate that this technology will radically decrease planning time while significantly increasing the accuracy of conformal target coverage and reproducibility over current manual methods.
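
    The two-stage optimization maps naturally onto SciPy's optimizers: a global annealing pass over shot parameters followed by Nelder-Mead refinement. The toy sketch below works in 2-D with a circular target, a fixed number of fixed-width unweighted Gaussian shots, and a smooth coverage/sparing cost; the authors' actual cost function is contour- and polygon-clipping based and also varies collimator size.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing, minimize

    # Toy 2-D target: a disc of radius 10 mm sampled on a grid.
    xs, ys = np.meshgrid(np.linspace(-15, 15, 61), np.linspace(-15, 15, 61))
    target = (xs**2 + ys**2) <= 10.0**2

    N_SHOTS, SIGMA = 3, 4.0   # fixed shot count and width (illustrative)

    def dose(params):
        """Sum of unweighted Gaussian 'shots' centred at the given points."""
        d = np.zeros_like(xs)
        for cx, cy in params.reshape(N_SHOTS, 2):
            d += np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * SIGMA**2))
        return d

    def cost(params):
        """Penalise underdose inside the target and overdose outside it."""
        d = dose(params)
        under = np.sum(np.clip(0.5 - d[target], 0, None))
        over = np.sum(np.clip(d[~target] - 0.5, 0, None))
        return under + 2.0 * over             # weight healthy-tissue sparing

    bounds = [(-12.0, 12.0)] * (2 * N_SHOTS)
    coarse = dual_annealing(cost, bounds, maxiter=100, seed=0)  # global pass
    fine = minimize(cost, coarse.x, method="Nelder-Mead")       # refinement
    print("shot centres (mm):\n", fine.x.reshape(N_SHOTS, 2).round(1))
    ```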

  19. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT ...

    EPA Pesticide Factsheets

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of the Soil and Water Assessment Tool (SWAT) and KINEmatic Runoff and EROSion (KINEROS2) hydrologic models. The application of these two models allows AGWA to conduct hydrologic modeling and watershed assessments at multiple temporal and spatial scales. AGWA’s current outputs are runoff (volumes and peaks) and sediment yield, plus nitrogen and phosphorus with the SWAT model. AGWA uses commonly available GIS data layers to fully parameterize, execute, and visualize results from both models. Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM) based on the individual model requirements. The watershed model elements are then intersected with soils and land cover data layers to derive the requisite model input parameters. The chosen model is then executed, and the results are imported back into AGWA for visualization. This allows managers to identify potential problem areas where additional monitoring can be undertaken or mitigation activities can be focused. AGWA also has tools to apply an array of best management practices. There are currently two versions of AGWA available: AGWA 1.5 for

  20. Space station advanced automation

    NASA Technical Reports Server (NTRS)

    Woods, Donald

    1990-01-01

    In the development of a safe, productive, and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology which will allow efficient operation at a reasonable cost. The Space Station Freedom's (SSF) systems are very complex and interdependent. The use of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. Using AA technology for the augmentation of system management functions requires a development model consisting of well-defined phases: evaluation, development, integration, and maintenance. The evaluation phase will consider system management functions against traditional solutions, implementation techniques, and requirements; the end result of this phase should be a well-developed concept along with a feasibility analysis. In the development phase the AA system will be developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way by which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase the KBS software must be integrated with conventional software, and verified and validated. The Verification and Validation (V and V) techniques applicable to these KBS are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase will be aided by having well designed and documented KBS software.

  1. Agile automated vision

    NASA Astrophysics Data System (ADS)

    Fandrich, Juergen; Schmitt, Lorenz A.

    1994-11-01

    The microelectronic industry is a protagonist in driving automated vision to new paradigms. Today semiconductor manufacturers use vision systems quite frequently in their fabs in the front-end process. In fact, the process depends on reliable image processing systems. In the back-end process, where ICs are assembled and packaged, vision systems are today only partly used. But in the next years automated vision will become compulsory for the back-end process as well. Vision will be fully integrated into every IC package production machine to increase yields and reduce costs. Modern high-speed material processing requires dedicated and efficient concepts in image processing. But the integration of various equipment in a production plant leads to a need for unified handling of data flow and interfaces. Only agile vision systems can meet these competing demands: fast, reliable, adaptable, scalable and comprehensive. A powerful hardware platform is an indispensable requirement for the use of advanced and reliable, but computationally intensive, image processing algorithms. The massively parallel SIMD hardware product LANTERN/VME supplies a powerful platform for existing and new functionality. LANTERN/VME is used with a new optical sensor for IC package lead inspection. This is done in 3D, including horizontal and coplanarity inspection. The appropriate software is designed for lead inspection, alignment and control tasks in IC package production and handling equipment, like Trim&Form, Tape&Reel and Pick&Place machines.

  2. Automated office blood pressure.

    PubMed

    Myers, Martin G; Godwin, Marshall

    2012-05-01

    Manual blood pressure (BP) is gradually disappearing from clinical practice, with the mercury sphygmomanometer now considered to be an environmental hazard. Manual BP is also subject to measurement error on the part of the physician/nurse and to patient-related anxiety, which can result in poor-quality BP measurements and office-induced (white coat) hypertension. Automated office (AO) BP with devices such as the BpTRU (BpTRU Medical Devices, Coquitlam, BC) has already replaced conventional manual BP in many primary care practices in Canada and has also attracted interest in other countries where research studies using AOBP have been undertaken. The basic principles of AOBP include multiple readings taken with a fully automated recorder with the patient resting alone in a quiet room. When these principles are followed, office-induced hypertension is eliminated and AOBP exhibits a much stronger correlation with the awake ambulatory BP as compared with routine manual BP measurements. Unlike routine manual BP, AOBP correlates as well with left ventricular mass as does the awake ambulatory BP. AOBP also simplifies the definition of hypertension in that the cut point for a normal AOBP (< 135/85 mm Hg) is the same as for the awake ambulatory BP and home BP. This article summarizes the currently available evidence supporting the use of AOBP in routine clinical practice and proposes an algorithm in which AOBP replaces manual BP for the diagnosis and management of hypertension.
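
    Operationally, the measurement logic reduces to averaging several unattended readings and applying the 135/85 mm Hg cut point. A trivial sketch of that arithmetic, assuming the common device convention of discarding the first reading (conventions vary by recorder):

    ```python
    def aobp_assessment(readings):
        """readings: (systolic, diastolic) pairs in mm Hg taken by a fully
        automated recorder with the patient resting alone. Discarding the
        first reading mimics a common device convention (assumption)."""
        usable = readings[1:] if len(readings) > 1 else readings
        sys_bp = sum(r[0] for r in usable) / len(usable)
        dia_bp = sum(r[1] for r in usable) / len(usable)
        elevated = sys_bp >= 135 or dia_bp >= 85  # normal AOBP < 135/85 mm Hg
        return round(sys_bp), round(dia_bp), elevated

    print(aobp_assessment([(148, 92), (136, 86), (132, 84), (134, 85)]))
    ```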

  3. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  4. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
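
    The compiler analogy can be made concrete: a machine-independent gate program is lowered to trap-level instructions, with shuttle operations generated to route ions between zones. The toy sketch below invents a three-zone layout and an instruction set purely for illustration; it is not the chapter's actual instruction language.

    ```python
    # Toy lowering of a machine-independent program to trap instructions.
    # The zone layout and instruction names are invented for illustration.
    ZONES = {"load": 0, "gate": 1, "store": 2}
    position = {"q0": "load", "q1": "store"}   # current zone of each ion

    def route(ion, dest, out):
        """Emit shuttle instructions moving an ion to the destination zone."""
        a, b = ZONES[position[ion]], ZONES[dest]
        step = 1 if b > a else -1
        for z in range(a, b, step):
            out.append("SHUTTLE %s %d -> %d" % (ion, z, z + step))
        position[ion] = dest

    def compile_program(gates):
        """Compile gate operations into shuttles plus gate instructions."""
        out = []
        for op, *ions in gates:
            for ion in ions:              # all operands meet in the gate zone
                if position[ion] != "gate":
                    route(ion, "gate", out)
            out.append("%s %s" % (op, " ".join(ions)))
        return out

    for ins in compile_program([("X", "q0"), ("CNOT", "q0", "q1")]):
        print(ins)
    ```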

  5. A Demonstration of Automated DNA Sequencing.

    ERIC Educational Resources Information Center

    Latourelle, Sandra; Seidel-Rogol, Bonnie

    1998-01-01

    Details a simulation that employs a paper-and-pencil model to demonstrate the principles behind automated DNA sequencing. Discusses the advantages of automated sequencing as well as the chemistry of automated DNA sequencing. (DDR)

  6. Closed-loop, ultraprecise, automated craniotomies

    PubMed Central

    Pak, Nikita; Siegle, Joshua H.; Kinney, Justin P.; Denman, Daniel J.; Blanche, Timothy J.

    2015-01-01

    A large array of neuroscientific techniques, including in vivo electrophysiology, two-photon imaging, optogenetics, lesions, and microdialysis, require access to the brain through the skull. Ideally, the necessary craniotomies could be performed in a repeatable and automated fashion, without damaging the underlying brain tissue. Here we report that when drilling through the skull a stereotypical increase in conductance can be observed when the drill bit passes through the skull base. We present an architecture for a robotic device that can perform this algorithm, along with two implementations—one based on homebuilt hardware and one based on commercially available hardware—that can automatically detect such changes and create large numbers of precise craniotomies, even in a single skull. We also show that this technique can be adapted to automatically drill cranial windows several millimeters in diameter. Such robots will not only be useful for helping neuroscientists perform both small and large craniotomies more reliably but can also be used to create precisely aligned arrays of craniotomies with stereotaxic registration to standard brain atlases that would be difficult to drill by hand. PMID:25855700
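
    The stopping rule, a stereotyped conductance increase as the bit passes the skull base, can be sketched as a threshold-over-baseline detector driving the drill's feed axis. The hardware callbacks, window length, and threshold factor below are hypothetical stand-ins; the published implementations' filtering and motion control differ.

    ```python
    import statistics

    def drill_until_breakthrough(read_conductance, step_down, retract,
                                 window=20, factor=1.5, max_steps=2000):
        """Advance in small steps and stop when conductance jumps well above
        its running baseline, signalling that the bit has passed the skull
        base. All hardware callbacks are hypothetical stand-ins."""
        history = []
        for _ in range(max_steps):            # bound travel as a safety net
            g = read_conductance()
            if len(history) >= window:
                baseline = statistics.median(history[-window:])
                if g > factor * baseline:     # the stereotypical increase
                    retract()                 # stop before reaching the brain
                    return True
            history.append(g)
            step_down(0.01)                   # advance 10 micrometres
        retract()
        return False
    ```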

  7. Closed-loop, ultraprecise, automated craniotomies.

    PubMed

    Pak, Nikita; Siegle, Joshua H; Kinney, Justin P; Denman, Daniel J; Blanche, Timothy J; Boyden, Edward S

    2015-06-01

    A large array of neuroscientific techniques, including in vivo electrophysiology, two-photon imaging, optogenetics, lesions, and microdialysis, require access to the brain through the skull. Ideally, the necessary craniotomies could be performed in a repeatable and automated fashion, without damaging the underlying brain tissue. Here we report that when drilling through the skull a stereotypical increase in conductance can be observed when the drill bit passes through the skull base. We present an architecture for a robotic device that can perform this algorithm, along with two implementations--one based on homebuilt hardware and one based on commercially available hardware--that can automatically detect such changes and create large numbers of precise craniotomies, even in a single skull. We also show that this technique can be adapted to automatically drill cranial windows several millimeters in diameter. Such robots will not only be useful for helping neuroscientists perform both small and large craniotomies more reliably but can also be used to create precisely aligned arrays of craniotomies with stereotaxic registration to standard brain atlases that would be difficult to drill by hand. Copyright © 2015 the American Physiological Society.

  8. Automated Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry; Riedesel, Joel; Myers, Chris; Miller, William; Jones, Ellen F.; Freeman, Kenneth; Walsh, Richard; Walls, Bryan K.; Weeks, David J.; Bechtel, Robert T.

    1992-01-01

    Autonomous power-distribution system includes power-control equipment and automation equipment. System automatically schedules connection of power to loads and reconfigures itself when it detects fault. Potential terrestrial applications include optimization of consumption of power in homes, power supplies for autonomous land vehicles and vessels, and power supplies for automated industrial processes.

  9. Robotics/Automated Systems Technicians.

    ERIC Educational Resources Information Center

    Doty, Charles R.

    Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…

  10. Migration monitoring with automated technology

    Treesearch

    Rhonda L. Millikin

    2005-01-01

    Automated technology can supplement ground-based methods of migration monitoring by providing: (1) unbiased and automated sampling; (2) independent validation of current methods; (3) a larger sample area for landscape-level analysis of habitat selection for stopover, and (4) an opportunity to study flight behavior. In particular, radar-acoustic sensor fusion can...

  11. Automated Test-Form Generation

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…

  12. Opening up Library Automation Software

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    Throughout the history of library automation, the author has seen a steady advancement toward more open systems. In the early days of library automation, when proprietary systems dominated, the need for standards was paramount since other means of inter-operability and data exchange weren't possible. Today's focus on Application Programming…

  13. Translation: Aids, Robots, and Automation.

    ERIC Educational Resources Information Center

    Andreyewsky, Alexander

    1981-01-01

    Examines electronic aids to translation both as ways to automate it and as an approach to solve problems resulting from shortage of qualified translators. Describes the limitations of robotic MT (Machine Translation) systems, viewing MAT (Machine-Aided Translation) as the only practical solution and the best vehicle for further automation. (MES)

  14. Automated Circulation. SPEC Kit 43.

    ERIC Educational Resources Information Center

    Association of Research Libraries, Washington, DC. Office of Management Studies.

    Of the 64 libraries responding to a 1978 Association of Research Libraries (ARL) survey, 37 indicated that they used automated circulation systems; half of these were commercial systems, and most were batch-process or combination batch process and online. Nearly all libraries without automated systems cited lack of funding as the reason for not…

  15. Classification of Automated Search Traffic

    NASA Astrophysics Data System (ADS)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third-party systems interact with search engines for a variety of reasons, such as monitoring a web site’s rank, augmenting online games, or possibly maliciously altering click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, and behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
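
    As a sketch of the classification stage, per-session features of the two kinds described here (physical plausibility of the interaction and behavioral patterns) can feed a standard binary classifier. The features, synthetic data, and labels below are invented for illustration; the paper's actual feature set and classifiers are richer.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Illustrative per-session features: physical-model (can a human act
    # this fast?) and behavioral (query volume, click-through) signals.
    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.gamma(2.0, 2.0, n),   # queries per minute
        rng.beta(2.0, 5.0, n),    # click-through rate
        rng.gamma(3.0, 1.0, n),   # mean inter-query interval, seconds
    ])
    # Synthetic label: "bot-like" sessions query fast and rarely click.
    y = ((X[:, 0] > 6.0) & (X[:, 1] < 0.2)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("held-out accuracy: %.2f" % clf.score(X_te, y_te))
    ```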

  16. Automated design of aerospace structures

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Mccomb, H. G.

    1974-01-01

    The current state-of-the-art in structural analysis of aerospace vehicles is characterized, automated design technology is discussed, and an indication is given of the future direction of research in analysis and automated design. Representative computer programs for analysis typical of those in routine use in vehicle design activities are described, and results are shown for some selected analysis problems. Recent and planned advances in analysis capability are indicated. Techniques used to automate the more routine aspects of structural design are discussed, and some recently developed automated design computer programs are described. Finally, discussion is presented of early accomplishments in interdisciplinary automated design systems, and some indication of the future thrust of research in this field is given.

  17. Automated Desalting Apparatus

    NASA Technical Reports Server (NTRS)

    Spencer, Maegan K.; Liu, De-Ling; Kanik, Isik; Beegle, Luther

    2010-01-01

    Because salt and metals can mask the signature of a variety of organic molecules (like amino acids) in any given sample, an automated system to purify complex field samples has been created for the analytical techniques of electrospray ionization/mass spectroscopy (ESI/MS), capillary electrophoresis (CE), and biological assays where unique identification requires at least some processing of complex samples. This development allows for automated sample preparation in the laboratory and analysis of complex samples in the field with multiple types of analytical instruments. Rather than using tedious, exacting protocols for desalting samples by hand, this innovation, called the Automated Sample Processing System (ASPS), takes analytes that have been extracted through high-temperature solvent extraction and introduces them into the desalting column. After 20 minutes, the eluent is produced. This clear liquid can then be directly analyzed by the techniques listed above. The current apparatus, including the computer and power supplies, is sturdy, has an approximate mass of 10 kg and a volume of about 20 × 20 × 20 cm, and is undergoing further miniaturization. This system currently targets amino acids. For these molecules, a slurry of 1 g cation exchange resin in deionized water is packed into a column of the apparatus. Initial generation of the resin is done by sequentially flowing 2.3 bed volumes of 2N NaOH and 2N HCl (1 mL each) to rinse the resin, followed by 0.5 mL of deionized water. This makes the pH of the resin near neutral and eliminates cross-sample contamination. Afterward, 2.3 mL of extracted sample is loaded into the column onto the top of the resin bed. Because the column is packed tightly, the sample can be applied without disturbing the resin bed. This is a vital step needed to ensure that the analytes adhere to the resin. After the sample is drained, oxalic acid (1 mL, pH 1.6-1.8, adjusted with NH4OH) is pumped into the column. Oxalic acid works as a

  18. Automating the analytical laboratory via the Chemical Analysis Automation paradigm

    SciTech Connect

    Hollen, R.; Rzeszutko, C.

    1997-10-01

    To address the need for standardization within the analytical chemistry laboratories of the nation, the Chemical Analysis Automation (CAA) program within the US Department of Energy, Office of Science and Technology's Robotic Technology Development Program is developing laboratory sample analysis systems that will automate the environmental chemical laboratories. The current laboratory automation paradigm consists of islands-of-automation that do not integrate into a system architecture. Thus, today the chemist must perform most aspects of environmental analysis manually using instrumentation that generally cannot communicate with other devices in the laboratory. CAA is working towards a standardized and modular approach to laboratory automation based upon the Standard Analysis Method (SAM) architecture. Each SAM system automates a complete chemical method. The building block of a SAM is known as the Standard Laboratory Module (SLM). The SLM, either hardware or software, automates a subprotocol of an analysis method and can operate as a standalone or as a unit within a SAM. The CAA concept allows the chemist to easily assemble an automated analysis system, from sample extraction through data interpretation, using standardized SLMs without the worry of hardware or software incompatibility or the necessity of generating complicated control programs. A Task Sequence Controller (TSC) software program schedules and monitors the individual tasks to be performed by each SLM configured within a SAM. The chemist interfaces with the operation of the TSC through the Human Computer Interface (HCI), a logical, icon-driven graphical user interface. The CAA paradigm has successfully been applied in automating EPA SW-846 Methods 3541/3620/8081 for the analysis of PCBs in a soil matrix utilizing commercially available equipment in tandem with SLMs constructed by CAA.
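
    The SAM/SLM/TSC decomposition is essentially a pipeline of interchangeable modules run under one sequencer. The sketch below shows the shape of that architecture; the class names, interfaces, and example method steps are illustrative stand-ins, not the CAA program's actual software.

    ```python
    class SLM:
        """Standard Laboratory Module: automates one subprotocol and can run
        standalone or as a unit within a SAM (illustrative interface)."""
        def __init__(self, name, fn):
            self.name, self.fn = name, fn
        def run(self, sample):
            print("SLM %-10s processing %s" % (self.name, sample["id"]))
            return self.fn(sample)

    class TaskSequenceController:
        """Schedules the SLMs configured within a SAM; a real TSC also
        monitors each module's status."""
        def __init__(self, slms):
            self.slms = slms
        def run_sam(self, sample):
            for slm in self.slms:         # run the complete chemical method
                sample = slm.run(sample)
            return sample

    # A SAM for a soil-PCB-like method: extraction -> cleanup -> analysis.
    sam = TaskSequenceController([
        SLM("extract", lambda s: {**s, "extracted": True}),
        SLM("cleanup", lambda s: {**s, "cleaned": True}),
        SLM("analyze", lambda s: {**s, "result": "PCB not detected"}),
    ])
    print(sam.run_sam({"id": "soil-001"}))
    ```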

  1. Automated Electrostatics Environmental Chamber

    NASA Technical Reports Server (NTRS)

    Calle, Carlos; Lewis, Dean C.; Buchanan, Randy K.; Buchanan, Aubri

    2005-01-01

    The Mars Electrostatics Chamber (MEC) is an environmental chamber designed primarily to create atmospheric conditions like those at the surface of Mars to support experiments on electrostatic effects in the Martian environment. The chamber is equipped with a vacuum system, a cryogenic cooling system, an atmospheric-gas replenishing and analysis system, and a computerized control system that can be programmed by the user and that provides both automation and options for manual control. The control system can be set to maintain steady Mars-like conditions or to impose temperature and pressure variations of a Mars diurnal cycle at any given season and latitude. In addition, the MEC can be used in other areas of research because it can create steady or varying atmospheric conditions anywhere within the wide temperature, pressure, and composition ranges between the extremes of Mars-like and Earth-like conditions.

  2. Health care automation companies.

    PubMed

    1995-12-01

    Health care automation companies: card transaction processing/EFT/EDI-capable banks; claims auditing/analysis; claims processors/clearinghouses; coding products/services; computer hardware; computer networking/LAN/WAN; consultants; data processing/outsourcing; digital dictation/transcription; document imaging/optical disk storage; executive information systems; health information networks; hospital/health care information systems; interface engines; laboratory information systems; managed care information systems; patient identification/credit cards; pharmacy information systems; POS terminals; radiology information systems; software--claims related/computer-based patient records/home health care/materials management/supply ordering/physician practice management/translation/utilization review/outcomes; telecommunications products/services; telemedicine/teleradiology; value-added networks.

  3. Automated synthetic scene generation

    NASA Astrophysics Data System (ADS)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  4. Automating Frame Analysis

    SciTech Connect

    Sanfilippo, Antonio P.; Franklin, Lyndsey; Tratz, Stephen C.; Danielson, Gary R.; Mileson, Nicholas D.; Riensche, Roderick M.; McGrath, Liam

    2008-04-01

    Frame Analysis has come to play an increasingly strong role in the study of social movements in Sociology and Political Science. While significant steps have been made in providing a theory of frames and framing, a systematic characterization of the frame concept is still largely lacking and there are no recognized criteria and methods that can be used to identify and marshal frame evidence reliably and in a time and cost effective manner. Consequently, current Frame Analysis work is still too reliant on manual annotation and subjective interpretation. The goal of this paper is to present an approach to the representation, acquisition and analysis of frame evidence which leverages Content Analysis, Information Extraction and Semantic Search methods to provide a systematic treatment of Frame Analysis and automate frame annotation.

  5. Automated mapping system patented

    NASA Astrophysics Data System (ADS)

    A patent on a satellite system dubbed Mapsat, which would be able to map the earth from space and would thereby reduce the time and cost of mapping on a smaller scale, has been issued to the U.S. Geological Survey.The Mapsat concept, invented by Alden F. Colvocoresses, a research cartographer at the USGS National Center, is based on Landsat technology but uses sensors that acquire higher-resolution image data in either a stereo or monoscopic mode. Stereo data can be processed relatively simply with automation to produce images for interpretation or to produce maps. Monoscopic and multispectral data can be processed in a computer to derive information on earth resources. Ground control, one of the most expensive phases of mapping, could be kept to a minimum.

  6. Automated Analysis Workstation

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Information from NASA Tech Briefs of work done at Langley Research Center and the Jet Propulsion Laboratory assisted DiaSys Corporation in manufacturing their first product, the R/S 2000. Since then, the R/S 2000 and R/S 2003 have followed. Recently, DiaSys released their fourth workstation, the FE-2, which automates the process of making and manipulating wet-mount preparation of fecal concentrates. The time needed to read the sample is decreased, permitting technologists to rapidly spot parasites, ova and cysts, sometimes carried in the lower intestinal tract of humans and animals. Employing the FE-2 is non-invasive, can be performed on an out-patient basis, and quickly provides confirmatory results.

  7. Automating the multiprocessing environment

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.

    1989-01-01

    An approach to automate the programming and operation of tree-structured networks of multiprocessor systems is discussed. A conceptual, knowledge-based operating environment is presented, and requirements for two major technology elements are identified as follows: (1) an intelligent information translator is proposed for implementing information transfer between dissimilar hardware and software, thereby enabling independent and modular development of future systems and promoting language-independence of codes and information; (2) a resident system activity manager, which recognizes the system's capabilities and monitors the status of all systems within the environment, is proposed for integrating dissimilar systems into effective parallel processing resources to optimally meet user needs. Finally, key computational capabilities which must be provided before the environment can be realized are identified.

  8. [From automation to robotics].

    PubMed

    1985-01-01

    The introduction of automation into the biology laboratory seems to be unavoidable. But at what cost, if it is necessary to purchase a new machine for every new application? Fortunately, the same image processing techniques, belonging to a theoretical framework called Mathematical Morphology, may be used in visual inspection tasks, both in the car industry and in the biology lab. Since the market for industrial robotics applications is much larger than the market for biomedical applications, the price of image processing devices drops, sometimes below the price of a complete microscope setup. The power of the image processing methods of Mathematical Morphology will be illustrated by various examples, such as automatic silver grain counting in autoradiography, determination of HLA genotype, electrophoretic gel analysis, automatic screening of cervical smears... Thus several heterogeneous applications may share the same image processing device, provided there is a separate and dedicated workstation for each of them.

  9. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task- and experience-related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications and, in particular, the individual characteristics that underlie adaptive thinking.

  10. Distributed Experiment Automation System

    NASA Astrophysics Data System (ADS)

    Lebedev, Gennadi

    2003-03-01

    A module-based distributed system for controlling and automating scientific experiments was developed. The system divides into five main layers: 1. data processing and presentation modules; 2. controllers, which support primary command evaluation, data analysis, and synchronization between device drivers; 3. a data server, providing real-time data storage and management; 4. device drivers, which support communication, preliminary signal acquisition, and control of peripheral devices; 5. utilities for batch processing, logging, handling of execution errors, persistent storage and management of experimental data, module and device monitoring, alarm states, and remote component messaging and notification processing. The system uses networking (the DCOM protocol) for communication between distributed modules. Configuration, module parameters, and data and command links are defined in a scripting file (XML format). This modular structure allows great flexibility and extensibility, as modules can be added and configured as required without any extensive programming.
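
    The configuration-driven wiring can be illustrated with a small XML document parsed in Python: modules declare their layer, and links define the data and command paths between them. The element and attribute names below are invented; the system's actual XML schema is not given in this abstract.

    ```python
    import xml.etree.ElementTree as ET

    # An illustrative configuration in the spirit of the system's XML
    # scripting file: modules plus data/command links between them.
    CONFIG = """
    <experiment>
      <module id="daq"  layer="driver"     device="scope0"/>
      <module id="ctrl" layer="controller" rate_hz="10"/>
      <link from="daq" to="ctrl" kind="data"/>
    </experiment>
    """

    root = ET.fromstring(CONFIG)
    modules = {m.get("id"): dict(m.attrib) for m in root.iter("module")}
    links = [(l.get("from"), l.get("to"), l.get("kind"))
             for l in root.iter("link")]

    for src, dst, kind in links:    # wire modules as the config dictates
        print("connect %s[%s] --%s--> %s[%s]" % (
            src, modules[src]["layer"], kind, dst, modules[dst]["layer"]))
    ```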

  11. Protein fabrication automation

    PubMed Central

    Cox, J. Colin; Lape, Janel; Sayed, Mahmood A.; Hellinga, Homme W.

    2007-01-01

    Facile “writing” of DNA fragments that encode entire gene sequences potentially has widespread applications in biological analysis and engineering. Rapid writing of open reading frames (ORFs) for expressed proteins could transform protein engineering and production for protein design, synthetic biology, and structural analysis. Here we present a process, protein fabrication automation (PFA), which facilitates the rapid de novo construction of any desired ORF from oligonucleotides with low effort, high speed, and little human interaction. PFA comprises software for sequence design, data management, and the generation of instruction sets for liquid-handling robotics, a liquid-handling robot, a robust PCR scheme for gene assembly from synthetic oligonucleotides, and a genetic selection system to enrich correctly assembled full-length synthetic ORFs. The process is robust and scalable. PMID:17242375

  12. Automated attendance accounting system

    NASA Technical Reports Server (NTRS)

    Chapman, C. P. (Inventor)

    1973-01-01

    An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressable buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as a temporary storage. Data from the terminals is applied to the data buffers on a digit by digit basis for transfer via a multiplexer to the computer.

  13. Berkeley automated supernova search

    SciTech Connect

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.; Crawford, F.S.; Burns, M.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982.
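
    The per-galaxy comparison can be sketched as robust image differencing against the stored reference: flag pixels that brightened well beyond the noise. The code below uses synthetic frames and a median-absolute-deviation noise estimate; the actual search also registers images, matches point-spread functions, and vets candidates.

    ```python
    import numpy as np

    def detect_candidates(image, reference, nsigma=5.0):
        """Flag pixels that brightened significantly relative to the
        stored reference frame (a deliberately simplified detector)."""
        diff = image.astype(float) - reference.astype(float)
        # Robust noise estimate: 1.4826 * MAD approximates sigma.
        sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
        return np.argwhere(diff > nsigma * sigma)

    rng = np.random.default_rng(1)
    ref = rng.normal(100, 5, (64, 64))        # reference galaxy image
    img = ref + rng.normal(0, 5, (64, 64))    # new exposure, noise only
    img[40, 22] += 200                        # inject a synthetic transient
    print(detect_candidates(img, ref))        # should report pixel (40, 22)
    ```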

  14. Automated Standard Hazard Tool

    NASA Technical Reports Server (NTRS)

    Stebler, Shane

    2014-01-01

    The current system used to generate standard hazard reports is considered cumbersome and iterative. This study defines a structure for this system's process in a clear, algorithmic way so that standard hazard reports and basic hazard analysis may be completed using a centralized, web-based computer application. To accomplish this task, a test server is used to host a prototype of the tool during development. The prototype is configured to integrate easily into NASA's current server systems with minimal alteration. Additionally, the tool is easily updated and provides NASA with a system that may grow to accommodate future requirements and, possibly, different applications. The project's success is outlined in positive, subjective reviews completed by payload providers and NASA Safety and Mission Assurance personnel. Ideally, this prototype will increase interest in the concept of standard hazard automation and lead to the full-scale production of a user-ready application.

  15. Automated design of ligands to polypharmacological profiles.

    PubMed

    Besnard, Jérémy; Ruda, Gian Filippo; Setola, Vincent; Abecassis, Keren; Rodriguiz, Ramona M; Huang, Xi-Ping; Norval, Suzanne; Sassano, Maria F; Shin, Antony I; Webster, Lauren A; Simeons, Frederick R C; Stojanovski, Laste; Prat, Annik; Seidah, Nabil G; Constam, Daniel B; Bickerton, G Richard; Read, Kevin D; Wetsel, William C; Gilbert, Ian H; Roth, Bryan L; Hopkins, Andrew L

    2012-12-13

    The clinical efficacy and safety of a drug is determined by its activity profile across many proteins in the proteome. However, designing drugs with a specific multi-target profile is both complex and difficult. Therefore methods to design drugs rationally a priori against profiles of several proteins would have immense value in drug discovery. Here we describe a new approach for the automated design of ligands against profiles of multiple drug targets. The method is demonstrated by the evolution of an approved acetylcholinesterase inhibitor drug into brain-penetrable ligands with either specific polypharmacology or exquisite selectivity profiles for G-protein-coupled receptors. Overall, 800 ligand-target predictions of prospectively designed ligands were tested experimentally, of which 75% were confirmed to be correct. We also demonstrate target engagement in vivo. The approach can be a useful source of drug leads when multi-target profiles are required to achieve either selectivity over other drug targets or a desired polypharmacology.

  16. Automated design of ligands to polypharmacological profiles

    PubMed Central

    Besnard, Jérémy; Ruda, Gian Filippo; Setola, Vincent; Abecassis, Keren; Rodriguiz, Ramona M.; Huang, Xi-Ping; Norval, Suzanne; Sassano, Maria F.; Shin, Antony I.; Webster, Lauren A.; Simeons, Frederick R.C.; Stojanovski, Laste; Prat, Annik; Seidah, Nabil G.; Constam, Daniel B.; Bickerton, G. Richard; Read, Kevin D.; Wetsel, William C.; Gilbert, Ian H.; Roth, Bryan L.; Hopkins, Andrew L.

    2012-01-01

    The clinical efficacy and safety of a drug is determined by its activity profile across multiple proteins in the proteome. However, designing drugs with a specific multi-target profile is both complex and difficult. Therefore methods to rationally design drugs a priori against profiles of multiple proteins would have immense value in drug discovery. We describe a new approach for the automated design of ligands against profiles of multiple drug targets. The method is demonstrated by the evolution of an approved acetylcholinesterase inhibitor drug into brain penetrable ligands with either specific polypharmacology or exquisite selectivity profiles for G-protein coupled receptors. Overall, 800 ligand-target predictions of prospectively designed ligands were tested experimentally, of which 75% were confirmed correct. We also demonstrate target engagement in vivo. The approach can be a useful source of drug leads where multi-target profiles are required to achieve either selectivity over other drug targets or a desired polypharmacology. PMID:23235874

  17. Automated assessment of postural stability system.

    PubMed

    Napoli, Alessandro; Ward, Christian R; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2016-08-01

    The Balance Error Scoring System (BESS) is one of the most commonly used clinical tests to evaluate static postural stability deficits resulting from traumatic brain events and musculoskeletal injury. This test requires a trained operator to visually assess balance and give the subject a performance score based on the number of balance "errors" they committed. Despite being regularly used in several real-world situations, the BESS test is scored by clinician observation and therefore (a) is potentially susceptible to biased and inaccurate test scores and (b) cannot be administered in the absence of a trained provider. The purpose of this research is to develop, calibrate, and field test a computerized version of the BESS test using low-cost commodity motion tracking technology. This 'Automated Assessment of Postural Stability' (AAPS) system will quantify balance control in field conditions. The research goal is to overcome the main limitations of both commercially available motion capture systems and the standard BESS test. The AAPS system has been designed to be operated by a minimally trained user, and it requires little set-up time, with no sensor calibration necessary. These features make the proposed automated system a valuable balance assessment tool for use in the field.
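
    At its core, automated BESS scoring turns a motion-tracking time series into a count of discrete balance errors. The sketch below counts debounced excursions of a trunk-sway angle beyond a threshold; the threshold, debounce window, and error definition are illustrative stand-ins for the clinical BESS criteria.

    ```python
    import numpy as np

    def count_bess_errors(sway, t, threshold_deg=15.0, min_gap_s=0.5):
        """Count balance 'errors' as excursions of trunk sway beyond a
        threshold angle; parameters are illustrative, not clinical."""
        errors, last = 0, -np.inf
        for angle, ts in zip(sway, t):
            if abs(angle) > threshold_deg and ts - last > min_gap_s:
                errors += 1          # debounce: one excursion counts once
                last = ts
        return errors

    t = np.arange(0, 20, 0.05)                     # 20 s trial at 20 Hz
    sway = 5 * np.sin(0.8 * t)                     # normal postural sway
    sway += np.where((t > 8) & (t < 8.4), 20, 0)   # one large excursion
    print("errors:", count_bess_errors(sway, t))   # -> errors: 1
    ```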

  18. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. To this end, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
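
    Of the evaluated classifiers, the non-structured GMM variant is easy to sketch with scikit-learn: fit a mixture over multiparametric voxel intensities, then identify the tumour component in a postprocess. The synthetic data and the distance-based stand-in for the paper's tissue-probability-map postprocess below are illustrative only.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for multiparametric MR voxels: rows are voxels,
    # columns are sequences (e.g. T1, T2, FLAIR intensities).
    rng = np.random.default_rng(0)
    tissue = rng.normal([1.0, 1.0, 1.0], 0.1, (5000, 3))
    tumour = rng.normal([1.8, 2.2, 2.5], 0.2, (300, 3))
    X = np.vstack([tissue, tumour])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    labels = gmm.predict(X)

    # Stand-in for the tissue-probability-map postprocess: call the
    # component least like normal tissue the tumour class.
    normal_mean = X[:5000].mean(axis=0)
    dist = np.linalg.norm(gmm.means_ - normal_mean, axis=1)
    tumour_class = int(np.argmax(dist))
    print("voxels labelled tumour:", int(np.sum(labels == tumour_class)))
    ```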

  19. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. To this end, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.

  20. Programmable Automated Welding System (PAWS)

    NASA Technical Reports Server (NTRS)

    Kline, Martin D.

    1994-01-01

    An ambitious project to develop an advanced, automated welding system is being funded as part of the Navy Joining Center with Babcock & Wilcox as the prime integrator. This program, the Programmable Automated Welding System (PAWS), involves the integration of both planning and real-time control activities. Planning functions include the development of a graphical decision support system within a standard, portable environment. Real-time control functions include the development of a modular, intelligent, real-time control system and the integration of a number of welding process sensors. This paper presents each of these components of the PAWS and discusses how they can be utilized to automate the welding operation.

  1. Automated Engineering Design (AED); An approach to automated documentation

    NASA Technical Reports Server (NTRS)

    Mcclure, C. W.

    1970-01-01

    The automated engineering design (AED) is reviewed, consisting of a high level systems programming language, a series of modular precoded subroutines, and a set of powerful software machine tools that effectively automate the production and design of new languages. AED is used primarily for development of problem and user-oriented languages. Software production phases are diagramed, and factors which inhibit effective documentation are evaluated.

  2. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly

  3. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully
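
    The 7% and 18% figures above quantify how far the automated segmentation strays from the manual reference. The exact metric is not given in this record, so the sketch below shows two common choices, Dice overlap and relative volume deviation, for binary masks stored as numpy arrays.

        import numpy as np

        def dice(auto_mask, manual_mask):
            # 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
            inter = np.logical_and(auto_mask, manual_mask).sum()
            return 2.0 * inter / (auto_mask.sum() + manual_mask.sum())

        def volume_deviation_pct(auto_mask, manual_mask):
            # Relative difference in segmented volume, as a percentage.
            return 100.0 * abs(int(auto_mask.sum()) - int(manual_mask.sum())) / manual_mask.sum()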

  4. Fuzzy Control/Space Station automation

    NASA Technical Reports Server (NTRS)

    Gersh, Mark

    1990-01-01

    Viewgraphs on fuzzy control/space station automation are presented. Topics covered include: Space Station Freedom (SSF); SSF evolution; factors pointing to automation & robotics (A&R); astronaut office inputs concerning A&R; flight system automation and ground operations applications; transition definition program; and advanced automation software tools.

  5. Automated Substitute Notification: Technology Improves Sub Dispatching.

    ERIC Educational Resources Information Center

    Bernasconi, Chuck

    2000-01-01

    Information technology has automated the process of substitute teacher dispatching. This article describes such automated systems, examining the advantages of using the Internet for automated staffing. It concludes that for districts that take advantage of this new technology, using the Internet can make automating absence reporting and substitute…

  6. Real Automation in the Field

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Mayero, Micaela; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We provide a package of strategies for automation of non-linear arithmetic in PVS. In particular, we describe a simplification procedure for the field of real numbers and a strategy for cancellation of common terms.

  7. Human factors in cockpit automation

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1984-01-01

    The rapid advance in microprocessor technology has made it possible to automate many functions that were previously performed manually. Several research areas have been identified which are basic to the question of the implementation of automation in the cockpit. One of the identified areas deserving further research is warning and alerting systems. Modern transport aircraft have had one after another warning and alerting systems added, and computer-based cockpit systems make it possible to add even more. Three major areas of concern are: input methods (including voice, keyboard, touch panel, etc.), output methods and displays (from traditional instruments to CRTs, to exotic displays including the human voice), and training for automation. Training for operating highly automatic systems requires considerably more attention than it has been given in the past. Training methods have not kept pace with the advent of flight-deck automation.

  8. Automating the Purple Crow Lidar

    NASA Astrophysics Data System (ADS)

    Hicks, Shannon; Sica, R. J.; Argall, P. S.

    2016-06-01

    The Purple Crow LiDAR (PCL) was built to measure short and long term coupling between the lower, middle, and upper atmosphere. The initial component of my MSc. project is to automate two key elements of the PCL: the rotating liquid mercury mirror and the Zaber alignment mirror. In addition to the automation of the Zaber alignment mirror, it is also necessary to describe the mirror's movement and positioning errors. Its properties will then be added into the alignment software. Once the alignment software has been completed, we will compare the new alignment method with the previous manual procedure. This is the first among several projects that will culminate in a fully-automated lidar. Eventually, we will be able to work remotely, thereby increasing the amount of data we collect. This paper will describe the motivation for automation, the methods we propose, preliminary results for the Zaber alignment error analysis, and future work.

  9. Automation of antimicrobial activity screening.

    PubMed

    Forry, Samuel P; Madonna, Megan C; López-Pérez, Daneli; Lin, Nancy J; Pasco, Madeleine D

    2016-03-01

    Manual and automated methods were compared for routine screening of compounds for antimicrobial activity. Automation generally accelerated assays and required less user intervention while producing comparable results. Automated protocols were validated for planktonic, biofilm, and agar cultures of the oral microbe Streptococcus mutans that is commonly associated with tooth decay. Toxicity assays for the known antimicrobial compound cetylpyridinium chloride (CPC) were validated against planktonic, biofilm forming, and 24 h biofilm culture conditions, and several commonly reported toxicity/antimicrobial activity measures were evaluated: the 50 % inhibitory concentration (IC50), the minimum inhibitory concentration (MIC), and the minimum bactericidal concentration (MBC). Using automated methods, three halide salts of cetylpyridinium (CPC, CPB, CPI) were rapidly screened with no detectable effect of the counter ion on antimicrobial activity.
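
    The IC50 reported in such assays is commonly estimated by fitting a four-parameter logistic (Hill) curve to dose-response data. The sketch below illustrates that general technique with scipy; the concentrations and responses are hypothetical, not data from this study.

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(conc, bottom, top, ic50, slope):
            # Four-parameter logistic: response falls from `top` to `bottom`
            # as concentration passes through the IC50.
            return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

        conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # hypothetical µg/mL
        resp = np.array([0.98, 0.95, 0.80, 0.45, 0.12, 0.03])  # normalised growth

        params, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 2.0, 1.0])
        print(f"Estimated IC50: {params[2]:.2f} µg/mL")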

  10. Office Automation Boosts University's Productivity.

    ERIC Educational Resources Information Center

    School Business Affairs, 1986

    1986-01-01

    The University of Pittsburgh has a 2-year agreement designating the Xerox Corporation as the primary supplier of word processing and related office automation equipment in order to increase productivity and more efficient use of campus resources. (MLF)

  11. Office Automation at Memphis State.

    ERIC Educational Resources Information Center

    Smith, R. Eugene; And Others

    1986-01-01

    The development of a university-wide office automation plan, beginning with a short-range pilot project and a five-year plan for the entire organization with the potential for modular implementation, is described. (MSE)

  12. Automation and Human Resource Management.

    ERIC Educational Resources Information Center

    Taft, Michael

    1988-01-01

    Discussion of the automation of personnel administration in libraries covers (1) new developments in human resource management systems; (2) system requirements; (3) software evaluation; (4) vendor evaluation; (5) selection of a system; (6) training and support; and (7) benefits. (MES)

  13. Automated Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Thomason, Cindy; Anderson, Paul M.; Martin, James A.

    1990-01-01

    Automated power-distribution system monitors and controls electrical power to modules in network. Handles both 208-V, 20-kHz single-phase alternating current and 120- to 150-V direct current. Power distributed to load modules from power-distribution control units (PDCU's) via subsystem distributors. Ring busses carry power to PDCU's from power source. Needs minimal attention. Detects faults and also protects against them. Potential applications include autonomous land vehicles and automated industrial process systems.

  14. Technology modernization assessment flexible automation

    SciTech Connect

    Bennett, D.W.; Boyd, D.R.; Hansen, N.H.; Hansen, M.A.; Yount, J.A.

    1990-12-01

    The objectives of this report are: (1) to present technology assessment guidelines to be considered in conjunction with defense regulations before an automation project is developed; (2) to give examples showing how assessment guidelines may be applied to a current project; and (3) to present several potential areas where automation might be applied successfully in the depot system. Depots perform primarily repair and remanufacturing operations, with limited small-batch manufacturing runs. While certain activities (such as Management Information Systems and warehousing) are directly applicable to either environment, the majority of applications will require combining existing and emerging technologies in different ways to meet the special needs of the depot remanufacturing environment. Industry generally enjoys the ability to make revisions to its product lines seasonally, followed by batch runs of thousands or more. Depot batch runs are in the tens, at best the hundreds, of parts, with a potential for large variation in product mix; reconfiguration may be required on a week-to-week basis. This need for a higher degree of flexibility suggests a higher level of operator interaction and, in turn, control systems that go beyond the state of the art for less flexible automation and industry in general. This report investigates the benefits of and barriers to automation and concludes that, while significant benefits do exist, depots must be prepared to carefully investigate the technical feasibility of each opportunity and the life-cycle costs associated with implementation. Implementation is suggested in two ways: (1) develop an implementation plan for automation technologies based on the results of small demonstration automation projects; (2) use phased implementation for both these and later-stage automation projects to allow major technical and administrative risk issues to be addressed. 10 refs., 2 figs., 2 tabs. (JF)

  15. Automated satellite control in Ada

    NASA Technical Reports Server (NTRS)

    Jaworski, Allan; Thompson, J. T.

    1988-01-01

    The Advanced Ground Segment, a prototype satellite/payload operations control center workstation, which represents an evolutionary effort to improve the automation of control centers while improving software practices and supporting distributed control center functions, is described. Multiple levels of automation are supported through a rule-based control strategy. The architecture provides the necessary interfaces and modularity for future inclusion of more sophisticated control strategies.

  16. Automated Author Aiding System Conference

    DTIC Science & Technology

    1985-07-01

    Technical Report 684. Automated Author Aiding System Conference: Final Report, edited by Nancy K. Atwood, University of California at Los Angeles. Final report covering June 1984 through June 1985.

  17. Automated Microbial Metabolism Laboratory

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Automated Microbial Metabolism Laboratory (AMML) 1971-1972 program involved the investigation of three separate life detection schemes. The first was continued development of the labeled release experiment. The possibility of chamber reuse without intervening sterilization, to provide comparative biochemical information, was tested. Findings show that individual substrates or concentrations of antimetabolites may be sequentially added to a single test chamber. The second detection system investigated for possible inclusion in the AMML package of assays was nitrogen fixation as detected by acetylene reduction. Thirdly, a series of preliminary steps was taken to investigate the feasibility of detecting biopolymers in soil. A strategy for the safe return to Earth of a Mars sample prior to manned landings on Mars is outlined. The program assumes that the probability of indigenous life on Mars is unity and then broadly presents the procedures for acquisition and analysis of the Mars sample in a manner to satisfy the scientific community and the public that adequate safeguards are being taken.

  18. Automated anomaly detection processor

    NASA Astrophysics Data System (ADS)

    Kraiman, James B.; Arouh, Scott L.; Webb, Michael L.

    2002-07-01

    Robust exploitation of tracking and surveillance data will provide an early warning and cueing capability for military and civilian Law Enforcement Agency operations. This will improve dynamic tasking of limited resources and hence operational efficiency. The challenge is to rapidly identify threat activity within a huge background of noncombatant traffic. We discuss development of an Automated Anomaly Detection Processor (AADP) that exploits multi-INT, multi-sensor tracking and surveillance data to rapidly identify and characterize events and/or objects of military interest, without requiring operators to specify threat behaviors or templates. The AADP has successfully detected an anomaly in traffic patterns in Los Angeles, analyzed ship track data collected during a Fleet Battle Experiment to detect simulated mine laying behavior amongst maritime noncombatants, and is currently under development for surface vessel tracking within the Coast Guard's Vessel Traffic Service to support port security, ship inspection, and harbor traffic control missions, and to monitor medical surveillance databases for early alert of a bioterrorist attack. The AADP can also be integrated into combat simulations to enhance model fidelity of multi-sensor fusion effects in military operations.
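
    The AADP's algorithms are not described in this record. For orientation only, the simplest form of template-free anomaly detection flags observations that deviate strongly from a learned background distribution, as in this toy sketch over per-window traffic counts.

        import numpy as np

        def flag_anomalies(history, current, z_thresh=3.0):
            """history: past counts per time window; current: new counts to test."""
            mu, sigma = history.mean(), history.std(ddof=1)
            z = (np.asarray(current) - mu) / sigma
            return np.abs(z) > z_thresh  # boolean mask of anomalous windows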

  19. Genetic circuit design automation.

    PubMed

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

    Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits forEscherichia coli(880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. Copyright © 2016, American Association for the Advancement of Science.

  20. Automated Gas Distribution System

    NASA Astrophysics Data System (ADS)

    Starke, Allen; Clark, Henry

    2012-10-01

    The cyclotron of Texas A&M University is one of the few and prized cyclotrons in the country. Behind the scenes of the cyclotron is a confusing and dangerous setup of ion sources that supplies the cyclotron with particles for acceleration. Using this machine involves a time-consuming, and even wasteful, step-by-step process of switching gases, purging, and other important operations that must be done manually to keep the system functioning properly while maintaining the safety of the working environment. Developing a new gas distribution system for the ion source prevents many of the problems generated by the older manual setup. The developed system can be controlled manually more easily than before but, like most of the technology and machines in the cyclotron, is mainly operated through software developed in the graphical programming environment LabVIEW. The automated gas distribution system provides multiple ports for a selection of different gases, decreasing the amount of gas wasted when switching gases, and a port for the vacuum, decreasing the amount of time spent purging the manifold. The LabVIEW software makes the operation of the cyclotron and ion sources easier and safer for anyone to use.

  1. Automated leak test systems

    SciTech Connect

    Cordaro, J.V.; Thompson, W.D.; Reeves, G.

    1997-09-15

    An automated leak test system for tritium shipping containers has been developed at Westinghouse Savannah River Co. (WSRC). The leak detection system employs a computer-controlled helium detector which prompts an operator to enter key information. The control software and the test apparatus were both designed and manufactured at the Savannah River Technology Center within WSRC. Recertification Test: Every twelve months, the pressure vessel portion of the shipping container must undergo a rigorous recertification leak test. After an empty pressure vessel (shipping container) is assembled, it is placed into one of six stainless steel belljars for helium leak testing. The belljars are arranged in a row, much like an assembly line. Post-load Test: A post-load leak test is performed on reservoirs that have been filled with tritium and placed inside the shipping containers mentioned above. These leak tests use a rate-of-rise method in which the area around the shipping container seals is evacuated, valved off from the vacuum pump, and the vacuum pressure is then monitored over a two-minute period. The post-load leak test is a quality verification test to ensure that the shipping container has been correctly assembled. 2 figs.

  2. Towards automated traceability maintenance

    PubMed Central

    Mäder, Patrick; Gotel, Orlena

    2012-01-01

    Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided. PMID:23471308
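
    To make the event-to-rule matching concrete, here is a hedged sketch of the dispatch pattern the abstract describes; the event type, rule, and data structures are hypothetical illustrations, not the authors' rule catalog.

        from dataclasses import dataclass, replace

        @dataclass
        class ChangeEvent:
            kind: str   # recognized development activity, e.g. "class_renamed"
            old: str
            new: str

        @dataclass
        class TraceLink:
            source: str
            target: str

        def on_rename(ev, links):
            # A rename keeps existing links valid but updates their endpoints.
            return [replace(l,
                            source=ev.new if l.source == ev.old else l.source,
                            target=ev.new if l.target == ev.old else l.target)
                    for l in links]

        RULES = {"class_renamed": on_rename}

        def maintain_traces(events, links):
            # Match each captured change event against the predefined rules and
            # apply the directed update to the impacted traceability links.
            for ev in events:
                rule = RULES.get(ev.kind)
                if rule:
                    links = rule(ev, links)
            return links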

  3. An automation simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.; Mutammara, Atheel

    1988-01-01

    The work being done in porting ROBOSIM (a graphical simulation system developed jointly by NASA-MSFC and Vanderbilt University) to the HP350SRX graphics workstation is described. New ROBOSIM features, such as collision detection and new kinematics simulation methods, are also discussed. Based on the experience of the work on ROBOSIM, a new graphics structural modeling environment is suggested, intended to be part of a new knowledge-based multiple-aspect modeling testbed. The knowledge-based modeling methodologies and tools already available are described. Three case studies in the area of Space Station automation are also reported. First, a geometrical structural model of the station, developed using the ROBOSIM package, is presented. Next, the possible application areas of an integrated modeling environment in the testing of different Space Station operations are discussed. One of these possible application areas is the modeling of the Environmental Control and Life Support System (ECLSS), one of the most complex subsystems of the station. Using the multiple-aspect modeling methodology, a fault propagation model of this system is being built and is described.

  4. Automated hydrotreating pilot plants

    SciTech Connect

    Yanik, S.J.; Graham, J.R.

    1986-03-01

    One of the major tasks facing catalyst suppliers involved in hydrotreating/hydrogenation catalyst development work is proper catalyst evaluation. There are dozens of hydrotreating catalysts available to refiners, and selecting the optimum catalyst for a particular application is a challenging task. For fixed-bed applications, the choice is especially difficult because, in addition to activity and selectivity, both catalyst life and pressure-drop buildup are important considerations. Unfortunately, data on these latter effects are seldom available for new catalyst formulations. While pilot-plant data have proven to be reliable indicators of the ultimate catalyst life achieved commercially, long-term catalyst aging data are expensive to gather, and proper pilot-plant design is mandatory to duplicate commercial results. Because the proper catalyst choice can earn refiners millions of dollars per year in enhanced downstream product values, installing top-quality pilot-plant facilities to demonstrate these benefits is justified. This article describes an automated, minimum-attention set of five state-of-the-art hydrotreating pilot plants being completed for the Filtrol Catalyst Division of the Harshaw/Filtrol Partnership.

  5. Particle Accelerator Focus Automation

    NASA Astrophysics Data System (ADS)

    Lopes, José; Rocha, Jorge; Redondo, Luís; Cruz, João

    2017-08-01

    The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine, which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams with currents of a few μA at energies up to 2 MeV/q. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high-voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method that finds the lens bias voltage which maximizes the beam current measured on a beam-stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start-up and shut-down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.
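
    The scanning idea reduces to a one-dimensional search for the lens voltage that maximizes target current. A sketch of that loop follows; set_lens_voltage() and read_beam_current() are hypothetical stand-ins for the LabVIEW I/O of the real system, and the voltage range is illustrative.

        import time
        import numpy as np

        def autofocus(set_lens_voltage, read_beam_current,
                      v_min=0.0, v_max=20e3, steps=100, settle_s=0.1):
            best_v, best_i = v_min, float("-inf")
            for v in np.linspace(v_min, v_max, steps):
                set_lens_voltage(v)
                time.sleep(settle_s)        # let the supply and beam settle
                i = read_beam_current()     # current on the beam-stopper target
                if i > best_i:
                    best_v, best_i = v, i
            set_lens_voltage(best_v)        # leave the lens at the optimum
            return best_v, best_i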

  6. Towards automated traceability maintenance.

    PubMed

    Mäder, Patrick; Gotel, Orlena

    2012-10-01

    Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided.

  7. Automated ISS Flight Utilities

    NASA Technical Reports Server (NTRS)

    Offermann, Jan Tuzlic

    2016-01-01

    During my internship at NASA Johnson Space Center, I worked in the Space Radiation Analysis Group (SRAG), where I was tasked with a number of projects focused on the automation of tasks and activities related to the operation of the International Space Station (ISS). As I worked on a number of projects, I have written short sections below to give a description for each, followed by more general remarks on the internship experience. My first project is titled "General Exposure Representation EVADOSE", also known as "GEnEVADOSE". This project involved the design and development of a C++/ ROOT framework focused on radiation exposure for extravehicular activity (EVA) planning for the ISS. The utility helps mission managers plan EVAs by displaying information on the cumulative radiation doses that crew will receive during an EVA as a function of the egress time and duration of the activity. SRAG uses a utility called EVADOSE, employing a model of the space radiation environment in low Earth orbit to predict these doses, as while outside the ISS the astronauts will have less shielding from charged particles such as electrons and protons. However, EVADOSE output is cumbersome to work with, and prior to GEnEVADOSE, querying data and producing graphs of ISS trajectories and cumulative doses versus egress time required manual work in Microsoft Excel. GEnEVADOSE automates all this work, reading in EVADOSE output file(s) along with a plaintext file input by the user providing input parameters. GEnEVADOSE will output a text file containing all the necessary dosimetry for each proposed EVA egress time, for each specified EVADOSE file. It also plots cumulative dose versus egress time and the ISS trajectory, and displays all of this information in an auto-generated presentation made in LaTeX. New features have also been added, such as best-case scenarios (egress times corresponding to the least dose), interpolated curves for trajectories, and the ability to query any time in the

  8. Automated call tracking systems

    SciTech Connect

    Hardesty, C.

    1993-03-01

    User Services groups are on the front line with user support. We are the first to hear about problems. The speed, accuracy, and intelligence with which we respond determines the user's perception of our effectiveness and our commitment to quality and service. To keep pace with the complex changes at our sites, we must have tools to help build a knowledge base of solutions, a history base of our users, and a record of every problem encountered. Recently, I completed a survey of twenty sites similar to the National Energy Research Supercomputer Center (NERSC). This informal survey reveals that 27% of the sites use a paper system to log calls, 60% employ homegrown automated call tracking systems, and 13% use a vendor-supplied system. Fifty-four percent of those using homegrown systems are exploring the merits of switching to a vendor-supplied system. The purpose of this paper is to provide guidelines for evaluating a call tracking system. In addition, insights are provided to assist User Services groups in selecting a system that fits their needs.

  9. Multifunction automated crawling system

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph (Inventor); Joffe, Benjamin (Inventor); Backes, Paul Gregory (Inventor)

    1999-01-01

    The present invention is an automated crawling robot system including a platform, a first leg assembly, a second leg assembly, first and second rails attached to the platform, and an onboard electronic computer controller. The first leg assembly has an intermittent coupling device and the second leg assembly has an intermittent coupling device for intermittently coupling the respective first and second leg assemblies to a particular object. The first and second leg assemblies are slidably coupled to the rail assembly and are slidably driven by motors to thereby allow linear movement. In addition, the first leg assembly is rotary driven by a rotary motor to thereby provide rotary motion relative to the platform. To effectuate motion, the intermittent coupling devices of the first and second leg assemblies alternately couple the respective first and second leg assemblies to an object. This motion is done while simultaneously moving one of the leg assemblies linearly in the desired direction and preparing the next step. This arrangement allows the crawler of the present invention to traverse an object in a range of motion covering 360 degrees.

  10. Automated Supernova Discovery (Abstract)

    NASA Astrophysics Data System (ADS)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of Supernovas as well as other transient events in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data is taken every night without clouds, we must deal with varying atmospheric and high background illumination from the moon. Software is configured to identify a PSN, reshoot for verification with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24, with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs with magnitude 17.5 or less which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.
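
    At its core, transient discovery compares a new image against a reference of the same field. The highly simplified sketch below flags bright residuals in an aligned image pair; the real SNARE pipeline (alignment, PSF matching, moon and cloud handling) is far more involved.

        import numpy as np

        def find_candidates(new_img, ref_img, n_sigma=5.0):
            """Pixel coordinates of bright residuals in an aligned image pair."""
            diff = new_img.astype(float) - ref_img.astype(float)
            thresh = diff.mean() + n_sigma * diff.std()
            ys, xs = np.nonzero(diff > thresh)
            return list(zip(ys.tolist(), xs.tolist()))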

  11. Automated document analysis system

    NASA Astrophysics Data System (ADS)

    Black, Jeffrey D.; Dietzel, Robert; Hartnett, David

    2002-08-01

    A software application has been developed to aid law enforcement and government intelligence gathering organizations in the translation and analysis of foreign language documents with potential intelligence content. The Automated Document Analysis System (ADAS) provides the capability to search (data or text mine) documents in English and the most commonly encountered foreign languages, including Arabic. Hardcopy documents are scanned by a high-speed scanner and processed by optical character recognition (OCR). Documents obtained in an electronic format bypass the OCR step and are copied directly to a working directory. For translation and analysis, the script and language of the documents are first determined. If the document is not in English, it is machine translated to English. The documents are searched for keywords and key features in either the native language or translated English. The user can quickly review the document to determine whether it has any intelligence content and whether detailed, verbatim human translation is required. The documents and document content are cataloged for potential future analysis. The system allows non-linguists to evaluate foreign language documents and allows for the quick analysis of a large quantity of documents. All document processing can be performed manually or automatically on a single document or a batch of documents.

  12. Brain Autopsy

    MedlinePlus

    ... that you contact a medical center or brain bank with experience in neurological disorders and, if at ... and Strokes also maintains a listing of brain banks on the research section of their website: www. ...

  13. Brain Power.

    ERIC Educational Resources Information Center

    Albrecht, Karl

    2002-01-01

    Reviews significant findings of recent brain research, including the concept of five minds: automatic, subconscious, practical, creative, and spiritual. Suggests approaches to training the brain that are related to this hierarchy of thinking. (JOW)

  14. Brain Diseases

    MedlinePlus

    The brain is the control center of the body. It controls thoughts, memory, speech, and movement. It regulates the function of many organs. When the brain is healthy, it works quickly and automatically. However, ...

  15. Brain Aneurysm

    MedlinePlus

    ... tests don't provide enough information. Screening for brain aneurysms The use of imaging tests to screen ... and occupational therapy to relearn skills. Treating unruptured brain aneurysms Surgical clipping or endovascular coiling can be ...

  16. An Automated Motion Detection and Reward System for Animal Training

    PubMed Central

    Miller, Brad; Lim, Audrey N; Heidbreder, Arnold F

    2015-01-01

    A variety of approaches has been used to minimize head movement during functional brain imaging studies in awake laboratory animals. Many laboratories expend substantial effort and time training animals to remain essentially motionless during such studies. We could not locate an “off-the-shelf” automated training system that suited our needs.  We developed a time- and labor-saving automated system to train animals to hold still for extended periods of time. The system uses a personal computer and modest external hardware to provide stimulus cues, monitor movement using commercial video surveillance components, and dispense rewards. A custom computer program automatically increases the motionless duration required for rewards based on performance during the training session but allows changes during sessions. This system was used to train cynomolgus monkeys (Macaca fascicularis) for awake neuroimaging studies using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). The automated system saved the trainer substantial time, presented stimuli and rewards in a highly consistent manner, and automatically documented training sessions. We have limited data to prove the training system's success, drawn from the automated records during training sessions, but we believe others may find it useful. The system can be adapted to a range of behavioral training/recording activities for research or commercial applications, and the software is freely available for non-commercial use. PMID:26798573
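
    A conceptual sketch of such a training loop follows, using OpenCV frame differencing; the camera handling, thresholds, and step size are assumptions for illustration, not the published system's parameters.

        import cv2
        import numpy as np

        def run_session(dispense_reward, hold_s=2.0, motion_thresh=10.0, step=0.25):
            cap = cv2.VideoCapture(0)
            ok, prev = cap.read()
            if not ok:
                raise RuntimeError("camera unavailable")
            prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            freq = cv2.getTickFrequency()
            still_since = cv2.getTickCount()
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                motion = float(np.mean(cv2.absdiff(gray, prev)))  # mean frame difference
                prev = gray
                now = cv2.getTickCount()
                if motion > motion_thresh:
                    still_since = now               # movement resets the clock
                elif (now - still_since) / freq >= hold_s:
                    dispense_reward()
                    hold_s += step                  # require longer stillness next time
                    still_since = now
            cap.release()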

  17. An automated method for high-definition transcranial direct current stimulation modeling.

    PubMed

    Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C

    2012-01-01

    Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy.
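
    One of the steps mentioned above is morphological processing to correct small segmentation errors. A minimal sketch of that kind of clean-up on a binary tissue mask follows, using scipy; the actual correction routine in the paper is more elaborate.

        from scipy import ndimage

        def clean_mask(mask, iterations=2):
            # Close small gaps, fill interior holes, then remove small islands.
            closed = ndimage.binary_closing(mask, iterations=iterations)
            filled = ndimage.binary_fill_holes(closed)
            return ndimage.binary_opening(filled, iterations=iterations)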

  18. The Brain.

    ERIC Educational Resources Information Center

    Hubel, David H.

    1979-01-01

    This article on the brain is part of an entire issue about neurobiology and the question of how the human brain works. The brain as an intricate tissue composed of cells is discussed based on the current knowledge and understanding of its composition and structure. (SA)

  19. Brain Aneurysm

    MedlinePlus

    A brain aneurysm is an abnormal bulge or "ballooning" in the wall of an artery in the brain. They are sometimes called berry aneurysms because they ... often the size of a small berry. Most brain aneurysms produce no symptoms until they become large, ...

  1. Left Brain. Right Brain. Whole Brain

    ERIC Educational Resources Information Center

    Farmer, Lesley S. J.

    2004-01-01

    As the United States student population is becoming more diverse, library media specialists need to find ways to address these distinctive needs. However, some of these differences transcend culture, touching on variations in the brain itself. Most people have a dominant side of the brain, which can affect their personality and learning style.…

  3. Automated ship image acquisition

    NASA Astrophysics Data System (ADS)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports, to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically-composed photographs collected in Halifax harbour in winter time were determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.
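
    The core geometric step in pointing the camera is computing the bearing from the shore site to an AIS-reported position. A hedged sketch of that calculation follows; the real ASIA details (lens selection, timing, obstruction handling) are not reproduced here.

        import math

        def bearing_deg(cam_lat, cam_lon, ship_lat, ship_lon):
            """Initial great-circle bearing from camera to ship, degrees from north."""
            phi1, phi2 = math.radians(cam_lat), math.radians(ship_lat)
            dlon = math.radians(ship_lon - cam_lon)
            x = math.sin(dlon) * math.cos(phi2)
            y = (math.cos(phi1) * math.sin(phi2)
                 - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
            return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0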

  4. Brain Basics: Know Your Brain

    MedlinePlus

    ... Basics: Know Your Brain. Request free mailed brochure. Table of Contents: Introduction; The Architecture of the Brain; ...

  5. Neurodegenerative changes in Alzheimer's disease: a comparative study of manual, semi-automated, and fully automated assessment using MRI

    NASA Astrophysics Data System (ADS)

    Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter

    2008-03-01

    Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducible and easy-to-perform assessment is essential to ensure applicability in clinical environments. The aim of this comparative study is to evaluate a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing, and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps including state-of-the-art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83, p < 10^-13) measurements. It achieved high accuracy while maximizing observer independence and minimizing assessment time, and is thus well suited to clinical routine.

  6. Automated image segmentation using support vector machines

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent A.; Andreasen, Nancy C.

    2007-03-01

    Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging. Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data like that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures including the thalamus (0.88), caudate (0.85) and the putamen (0.81). In this work, a priori probability information was generated using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and cerebellar regions using the same input vector information. SVM architecture was derived from the ANN framework. Training was completed using a radial-basis function kernel with gamma equal to 5.5, and was performed using 15,000 vectors collected from 15 training images in approximately 10 minutes. The resulting support vectors were applied to delineate 10 images not part of the training set. Relative overlap calculated for the subcortical structures was 0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM-based algorithm was similar to the inter-rater reliability between manual raters and can be achieved without rater intervention.
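
    A sketch of the described training setup follows, using scikit-learn for illustration (the gamma value is taken from the abstract; the feature extraction is schematic, and the original work did not use this library).

        import numpy as np
        from sklearn.svm import SVC

        def train_voxel_classifier(features, labels):
            """features: (n_voxels, n_dims) rows of [a priori probability,
            spherical coordinates, surrounding intensities]; labels: structure ids."""
            clf = SVC(kernel="rbf", gamma=5.5)  # RBF kernel, gamma = 5.5 as reported
            clf.fit(features, labels)
            return clf

        def relative_overlap(a, b):
            # |A ∩ B| / |A ∪ B| for two binary masks of the same structure.
            a, b = a.astype(bool), b.astype(bool)
            return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()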

  7. Brain abscess.

    PubMed

    Slazinski, Tess

    2013-09-01

    A brain abscess is defined as a localized collection of pus within the parenchyma of the brain or meninges. Brain abscesses are a complication of ear, sinus, and/or dental infections. Although they may occur in many brain locations, the most common sites are frontal and temporal lobes. Modern neuroimaging and laboratory analysis have led to prompt diagnosis and have decreased the mortality rates from brain abscess. Critical care nurses have a vital role in performing accurate neurologic assessments, timely administration of antibiotics, and management of fever. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Brain to music to brain!

    PubMed

    Azizi, S Ausim

    2009-07-31

    It has been implicitly understood that culture and music as collective products of human brain in turn influence the brain itself. Now, imaging and anatomical data add substance to this notion. The impact of playing piano on the brain of musicians and its possible effects on cultural and neurological evolution are briefly discussed.

  9. Automated system for analyzing the activity of individual neurons

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.

    1993-01-01

    This paper presents a signal processing system that: (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain; and (2) demonstrates the feasibility of generating the command signals of prostheses using the activity of relevant neurons in disabled subjects. The system operates online, in a fully automated manner, and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates online. The system is implemented with a fast IBM PC-compatible processor board that allows online operation.
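
    A simplified sketch of the detect-then-classify flow follows; the paper's optimal detection, automatic thresholding, and overlap-resolution algorithms are not reproduced, and the window length and threshold factor here are illustrative.

        import numpy as np

        def detect_spikes(signal, fs, k=4.0, win_ms=1.5):
            # Robust noise estimate from the median absolute deviation.
            sigma = np.median(np.abs(signal)) / 0.6745
            half = int(fs * win_ms / 2000.0)
            idx = np.flatnonzero(signal > k * sigma)
            spikes, last = [], -np.inf
            for i in idx:
                # Keep one crossing per spike and cut a window around it.
                if i - last > 2 * half and half <= i < len(signal) - half:
                    spikes.append(signal[i - half:i + half])
                    last = i
            return np.array(spikes)

        def classify(spikes, templates):
            # Assign each waveform to the nearest template (least-squares distance).
            d = ((spikes[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1)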

  10. Automation: Decision Aid or Decision Maker?

    NASA Technical Reports Server (NTRS)

    Skitka, Linda J.

    1998-01-01

    This study clarified that automation bias is something unique to automated decision making contexts, and is not the result of a general tendency toward complacency. By comparing performance on exactly the same events on the same tasks with and without an automated decision aid, we were able to determine that at least the omission error part of automation bias is due to the unique context created by having an automated decision aid, and is not a phenomenon that would occur even if people were not in an automated context. However, this study also revealed that having an automated decision aid did lead to modestly improved performance across all non-error events. Participants in the non-automated condition responded with 83.68% accuracy, whereas participants in the automated condition responded with 88.67% accuracy, across all events. Automated decision aids clearly led to better overall performance when they were accurate. People performed almost exactly at the level of reliability of the automation (which across events was 88% reliable). However, it is also clear that the presence of less than 100% accurate automated decision aids creates a context in which new kinds of decision-making errors can occur. Participants in the non-automated condition responded with 97% accuracy on the six "error" events, whereas participants in the automated condition had only a 65% accuracy rate when confronted with those same six events. In short, the presence of an AMA can lead to vigilance decrements that can lead to errors in decision making.

  11. In vivo robotics: the automation of neuroscience and other intact-system biological fields

    PubMed Central

    Kodandaramaiah, Suhasa B.; Boyden, Edward S.; Forest, Craig R.

    2013-01-01

    Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to impact neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience, and present a concrete example with our recent automation of in vivo whole cell patch clamp electrophysiology of neurons in the living mouse brain. PMID:23841584

  12. In vivo robotics: the automation of neuroscience and other intact-system biological fields.

    PubMed

    Kodandaramaiah, Suhasa B; Boyden, Edward S; Forest, Craig R

    2013-12-01

    Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to influence neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience and present a concrete example with our recent automation of in vivo whole-cell patch clamp electrophysiology of neurons in the living mouse brain. © 2013 New York Academy of Sciences.

  13. Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving.

    PubMed

    Hergeth, Sebastian; Lorenz, Lutz; Vilimek, Roman; Krems, Josef F

    2016-05-01

    The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving. © 2016, Human Factors and Ergonomics Society.
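
    One plausible operationalization of monitoring frequency (not necessarily the study's exact metric) is the rate of gaze transitions into the automation-related area of interest, given per-sample AOI labels from the eye tracker, as sketched below with a hypothetical AOI name.

        def monitoring_frequency(aoi_labels, fs_hz, target_aoi="instrument_cluster"):
            """aoi_labels: per-sample AOI name; fs_hz: eye-tracker sampling rate."""
            entries = sum(1 for prev, cur in zip(aoi_labels, aoi_labels[1:])
                          if cur == target_aoi and prev != target_aoi)
            minutes = len(aoi_labels) / fs_hz / 60.0
            return entries / minutes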

  14. Space power subsystem automation technology

    NASA Technical Reports Server (NTRS)

    Graves, J. R. (Compiler)

    1982-01-01

    The technology issues involved in power subsystem automation and the reasonable objectives to be sought in such a program were discussed. The complexities, uncertainties, and alternatives of power subsystem automation, along with the advantages from both an economic and a technological perspective were considered. Whereas most spacecraft power subsystems now use certain automated functions, the idea of complete autonomy for long periods of time is almost inconceivable. Thus, it seems prudent that the technology program for power subsystem automation be based upon a growth scenario which should provide a structured framework of deliberate steps to enable the evolution of space power subsystems from the current practice of limited autonomy to a greater use of automation with each step being justified on a cost/benefit basis. Each accomplishment should move toward the objectives of decreased requirement for ground control, increased system reliability through onboard management, and ultimately lower energy cost through longer life systems that require fewer resources to operate and maintain. This approach seems well-suited to the evolution of more sophisticated algorithms and eventually perhaps even the use of some sort of artificial intelligence. Multi-hundred kilowatt systems of the future will probably require an advanced level of autonomy if they are to be affordable and manageable.

  15. Neonatal brain MRI segmentation: A review.

    PubMed

    Devi, Chelli N; Chandrasekharan, Anupama; Sundararaman, V K; Alex, Zachariah C

    2015-09-01

    This review paper focuses on the neonatal brain segmentation algorithms in the literature. It provides an overview of clinical magnetic resonance imaging (MRI) of the newborn brain and the challenges in automated tissue classification of neonatal brain MRI. It presents a complete survey of the existing segmentation methods and their salient features. The different approaches are categorized into intracranial and brain tissue segmentation algorithms based on their level of tissue classification. Further, the brain tissue segmentation techniques are grouped based on their atlas usage into atlas-based, augmented atlas-based and atlas-free methods. In addition, the research gaps and lacunae in literature are also identified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Automation of medical examiner offices.

    PubMed

    Hanzlick, R

    1993-03-01

    General information and principles regarding the automation of medical examiner and coroner offices are presented. Topics discussed include the importance of using available resource groups, questions that should be answered to determine the need for automation, the importance of specifically defining office needs and goals prior to automation, the value of capturing data in multiple ways, the importance of maintaining some hardcopy files, the value of ensuring capability to modify and enlarge the computer system, the need to tailor systems to the needs of specific offices, possibilities for generating money or acquiring equipment at little or no cost, the use of personal computers and commercially available software, the value of having all operations inhouse, transition to new operating systems and environments, and the foreseeable use of emerging technologies.

  17. Intelligent software for laboratory automation.

    PubMed

    Whelan, Ken E; King, Ross D

    2004-09-01

    The automation of laboratory techniques has greatly increased the number of experiments that can be carried out in the chemical and biological sciences. Until recently, this automation has focused primarily on improving hardware. Here we argue that future advances will concentrate on intelligent software to integrate physical experimentation and results analysis with hypothesis formulation and experiment planning. To illustrate our thesis, we describe the 'Robot Scientist' - the first physically implemented example of such a closed loop system. In the Robot Scientist, experimentation is performed by a laboratory robot, hypotheses concerning the results are generated by machine learning, and experiments are allocated and selected by a combination of techniques derived from artificial intelligence research. The performance of the Robot Scientist has been evaluated by a rediscovery task based on yeast functional genomics. The Robot Scientist is proof that the integration of programmable laboratory hardware and intelligent software can be used to develop increasingly automated laboratories.

  18. Automated mapping of Hammond's landforms

    USGS Publications Warehouse

    Gallant, A.L.; Brown, D.D.; Hoffer, R.M.

    2005-01-01

    We automated a method for mapping Hammond's landforms over large landscapes using digital elevation data. We compared our results against Hammond's published landform maps, derived using manual interpretation procedures. We found general agreement in landform patterns mapped by the manual and the automated approaches, and very close agreement in characterization of local topographic relief. The two approaches produced different interpretations of intermediate landforms, which relied upon quantification of the proportion of landscape having gently sloping terrain. This type of computation is more efficiently and consistently applied by a computer than by a human interpreter. Today's ready access to digital data and computerized geospatial technology provides a good foundation for mapping terrain features, but the mapping criteria guiding manual techniques in the past may not be appropriate for automated approaches. We suggest that future efforts center on the advantages offered by digital advancements in refining an approach to better characterize complex landforms. © 2005 IEEE.
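
    The moving-window terrain computation described above is straightforward to prototype. The following is a minimal sketch under stated assumptions: the 8% gentle-slope cutoff, the window size, and the function name are illustrative choices, not the paper's published procedure.

```python
# A minimal sketch of Hammond-style moving-window terrain metrics.
# Window size and the 8% "gently sloping" threshold are illustrative.
import numpy as np
from scipy import ndimage

def hammond_like_metrics(dem, cell_size=30.0, window=33):
    """Per-cell local relief and fraction of gently sloping terrain."""
    dy, dx = np.gradient(dem, cell_size)
    slope_pct = 100.0 * np.hypot(dx, dy)          # slope as a percentage
    gentle = (slope_pct < 8.0).astype(float)      # gentle-slope mask
    relief = (ndimage.maximum_filter(dem, size=window)
              - ndimage.minimum_filter(dem, size=window))
    gentle_fraction = ndimage.uniform_filter(gentle, size=window)
    return relief, gentle_fraction
```

    Classifying each cell into a landform type then reduces to thresholding these two maps, which is exactly the step a computer applies more consistently than a human interpreter.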

  19. Visual automated macromolecular model building.

    PubMed

    Langer, Gerrit G; Hazledine, Saul; Wiegels, Tim; Carolan, Ciaran; Lamzin, Victor S

    2013-04-01

    Automated model-building software aims at the objective interpretation of crystallographic diffraction data by means of the construction or completion of macromolecular models. Automated methods have rapidly gained in popularity as they are easy to use and generate reproducible and consistent results. However, the process of model building has become increasingly hidden and the user is often left to decide on how to proceed further with little feedback on what has preceded the output of the built model. Here, ArpNavigator, a molecular viewer tightly integrated into the ARP/wARP automated model-building package, is presented that directly controls model building and displays the evolving output in real time in order to make the procedure transparent to the user.

  20. Automated power management and control

    NASA Technical Reports Server (NTRS)

    Dolce, James L.

    1991-01-01

    A comprehensive automation design is being developed for Space Station Freedom's electric power system. A joint effort between NASA's Office of Aeronautics and Exploration Technology and NASA's Office of Space Station Freedom, it strives to increase station productivity by applying expert systems and conventional algorithms to automate power system operation. The initial station operation will use ground-based dispatchers to perform the necessary command and control tasks. These tasks constitute planning and decision-making activities that strive to eliminate unplanned outages. We perceive an opportunity to help these dispatchers make fast and consistent on-line decisions by automating three key tasks: failure detection and diagnosis, resource scheduling, and security analysis. Expert systems will be used for the diagnostics and for the security analysis; conventional algorithms will be used for the resource scheduling.

  1. Automated Approaches to RFI Flagging

    NASA Astrophysics Data System (ADS)

    Garimella, Karthik; Momjian, Emmanuel

    2017-01-01

    It is known that Radio Frequency Interference (RFI) is a major issue in centimeter-wavelength radio astronomy. Radio astronomy software packages include tools to excise RFI, both manual and automated, that utilize the visibilities (the uv data). Here we present results on an automated RFI flagging approach that utilizes a uv-grid, which is the intermediate product when converting uv data points to an image. It is a well-known fact that any signal that appears widespread in a given domain (e.g., the image domain) is compact in the Fourier domain (the uv-grid domain); i.e., RFI sources that appear as large-scale structures (e.g., stripes) in images can be located and flagged using the uv-grid data set. We developed several automated uv-grid-based flagging algorithms to detect and excise RFI. These algorithms will be discussed, and results of applying them to measurement sets will be presented.
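
    The core observation (RFI that is extended in the image domain is compact in the uv-grid) lends itself to a simple outlier test on gridded visibility amplitudes. Below is a minimal sketch of that idea, not the authors' algorithm: it assumes (u, v) coordinates pre-scaled to grid-cell units, nearest-cell gridding, and a single robust amplitude threshold.

```python
# A toy uv-grid flagger: accumulate visibilities onto a grid and flag
# visibilities falling in cells with anomalously high amplitude.
import numpy as np

def flag_uv_outliers(u, v, vis, grid_half=256, k=6.0):
    # u, v are assumed pre-scaled to grid-cell units; vis is complex.
    grid = np.zeros((2 * grid_half, 2 * grid_half), dtype=complex)
    iu = np.clip(np.round(u).astype(int) + grid_half, 0, 2 * grid_half - 1)
    iv = np.clip(np.round(v).astype(int) + grid_half, 0, 2 * grid_half - 1)
    np.add.at(grid, (iv, iu), vis)                # nearest-cell gridding
    amp = np.abs(grid)
    occupied = amp > 0
    med = np.median(amp[occupied])
    mad = np.median(np.abs(amp[occupied] - med))  # robust spread estimate
    bad_cell = amp > med + k * 1.4826 * mad       # outlier uv cells
    return bad_cell[iv, iu]                       # per-visibility flag mask
```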

  2. Advanced automation for space missions

    NASA Technical Reports Server (NTRS)

    Freitas, R. A., Jr.; Healy, T. J.; Long, J. E.

    1982-01-01

    A NASA/ASEE Summer Study conducted at the University of Santa Clara in 1980 examined the feasibility of using advanced artificial intelligence and automation technologies in future NASA space missions. Four candidate applications missions were considered: (1) An intelligent earth-sensing information system, (2) an autonomous space exploration system, (3) an automated space manufacturing facility, and (4) a self-replicating, growing lunar factory. The study assessed the various artificial intelligence and machine technologies which must be developed if such sophisticated missions are to become feasible by century's end.

  3. Design Automation in Synthetic Biology.

    PubMed

    Appleton, Evan; Madsen, Curtis; Roehner, Nicholas; Densmore, Douglas

    2017-04-03

    Design automation refers to a category of software tools for designing systems that work together in a workflow for designing, building, testing, and analyzing systems with a target behavior. In synthetic biology, these tools are called bio-design automation (BDA) tools. In this review, we discuss the BDA tool areas (specify, design, build, test, and learn) and introduce the existing software tools designed to solve problems in these areas. We then detail the functionality of some of these tools and show how they can be used together to create the desired behavior of two types of modern synthetic genetic regulatory networks.

  4. Advanced automation for space missions

    SciTech Connect

    Freitas, R.A., Jr.; Healy, T.J.; Long, J.E.

    1982-01-01

    A NASA/ASEE summer study conducted at the University of Santa Clara in 1980 examined the feasibility of using advanced artificial intelligence and automation technologies in future NASA space missions. Four candidate applications missions were considered: an intelligent earth-sensing information system; an autonomous space exploration system; an automated space manufacturing facility; and a self-replicating, growing lunar factory. The study assessed the various artificial intelligence and machine technologies which must be developed if such sophisticated missions are to become feasible by the century's end. 18 references.

  5. BOA: Framework for automated builds

    SciTech Connect

    N. Ratnikova et al.

    2003-09-30

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release, and distribution process is most beneficial in large collaborations, where a large number of developers, multiple platforms, and a distributed environment are typical factors. This paper describes the Build and Output Analyzer (BOA) framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system can generate, control, and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases, and installation of the existing versions.

  6. Brain surgery - discharge

    MedlinePlus

    ... to take these medicines. If you had a brain aneurysm, you may also have other symptoms or problems. ...

  7. Automation U.S.A.: Overcoming Barriers to Automation.

    ERIC Educational Resources Information Center

    Brody, Herb

    1985-01-01

    Although labor unions and inadequate technology play minor roles, the principal barrier to factory automation is "fear of change." Related problems include long-term benefits, nontechnical executives, and uncertainty of factory cost accounting. Industry support for university programs is helping to educate engineers to design, implement, and…

  8. Truth in Automating: Case Studies in Library Automation.

    ERIC Educational Resources Information Center

    Drabenstott, Jon; And Others

    1989-01-01

    Contributors from five libraries--Bentley College, Boston University, the College of Charleston, the University of Wisconsin at Eau Claire, and the Resource Sharing Alliance of West Central Illinois--describe their automation projects, including staff impact; costs and funding; time and schedules; users; computer support; vendors; and consultants.…

  9. Automated Tools for Subject Matter Expert Evaluation of Automated Scoring

    ERIC Educational Resources Information Center

    Williamson, David M.; Bejar, Isaac I.; Sax, Anne

    2004-01-01

    As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…

  11. Automating Shallow Seismic Imaging

    SciTech Connect

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the ''Autojuggie'' showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy-access environments, this device could

  12. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  13. Modular workcells: modern methods for laboratory automation.

    PubMed

    Felder, R A

    1998-12-01

    Laboratory automation is beginning to become an indispensable survival tool for laboratories facing difficult market competition. However, estimates suggest that only 8% of laboratories will be able to afford total laboratory automation systems. Therefore, automation vendors have developed alternative hardware configurations, called 'modular automation', to fit the smaller laboratory. Modular automation consists of consolidated analyzers, integrated analyzers, modular workcells, and pre- and post-analytical automation. These terms will be defined in this paper. Using a modular automation model, the automated core laboratory will become a site where laboratory data is evaluated by trained professionals to provide diagnostic information to practising physicians. Modern software information management and process control tools will complement modular hardware. Proper standardization that will allow vendor-independent modular configurations will assure the success of this revolutionary new technology.

  14. Automation of existing natural gas compressor stations

    SciTech Connect

    Little, J.E.

    1986-05-01

    ANR Pipeline Co., in automating 20 major compressor stations in 20 months' time, standardized on hardware and software design. In this article, the author tells how off-the-shelf automation was used and how the systems work.

  15. 47 CFR 80.385 - Frequencies for automated systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... System (AMTS) and for other automated multi-station systems. (a) Automated Maritime Telecommunications System (AMTS). (1) The Automated Maritime Communications System (AMTS) is an automated maritime... stations for public correspondence communications with ship stations and units on land. AMTS...

  16. 47 CFR 80.385 - Frequencies for automated systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... System (AMTS) and for other automated multi-station systems. (a) Automated Maritime Telecommunications System (AMTS). (1) The Automated Maritime Communications System (AMTS) is an automated maritime... stations for public correspondence communications with ship stations and units on land. AMTS...

  17. 47 CFR 80.385 - Frequencies for automated systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... System (AMTS) and for other automated multi-station systems. (a) Automated Maritime Telecommunications System (AMTS). (1) The Automated Maritime Communications System (AMTS) is an automated maritime... stations for public correspondence communications with ship stations and units on land. AMTS...

  18. Deformation-based brain morphometry in rats.

    PubMed

    Gaser, Christian; Schmidt, Silvio; Metzler, Martin; Herrmann, Karl-Heinz; Krumbein, Ines; Reichenbach, Jürgen R; Witte, Otto W

    2012-10-15

    Magnetic resonance imaging (MRI)-based morphometry provides in vivo evidence for macro-structural plasticity of the brain. Experiments on small animals using automated morphometric methods usually require expensive measurements with ultra-high field dedicated animal MRI systems. Here, we developed a novel deformation-based morphometry (DBM) tool for automated analyses of rat brain images measured on a 3-Tesla clinical whole body scanner with appropriate coils. A landmark-based transformation of our customized reference brain into the coordinates of the widely used rat brain atlas from Paxinos and Watson (Paxinos Atlas) guarantees the comparability of results to other studies. For cross-sectional data, we warped images onto the reference brain using the low-dimensional nonlinear registration implemented in the MATLAB software package SPM8. For the analysis of longitudinal data sets, we chose high-dimensional registrations of all images of one data set to the first baseline image which facilitate the identification of more subtle structural changes. Because all deformations were finally used to transform the data into the space of the Paxinos Atlas, Jacobian determinants could be used to estimate absolute local volumes of predefined regions-of-interest. Pilot experiments were performed to analyze brain structural changes due to aging or photothrombotically-induced cortical stroke. The results support the utility of DBM based on commonly available clinical whole-body scanners for highly sensitive morphometric studies on rats.
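
    The final step described above, turning deformations into local volume estimates, rests on the Jacobian determinant of the transformation. As a minimal sketch (with an assumed displacement-field layout, not the authors' SPM8 pipeline), the determinant of J = I + du/dx can be computed directly:

```python
# Local volume change from a displacement field u of shape (3, X, Y, Z),
# given in voxel units: det(I + du/dx) per voxel.
import numpy as np

def jacobian_determinant(disp):
    grads = [np.gradient(disp[i]) for i in range(3)]  # d(u_i)/d(x_j)
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)   # >1 local expansion, <1 local shrinkage

# Absolute volume of an atlas-defined region (hypothetical mask):
# roi_volume = jacobian_determinant(disp)[roi_mask].sum() * voxel_volume
```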

  19. Brainstem Monitoring in the Neurocritical Care Unit: A Rationale for Real-Time, Automated Neurophysiological Monitoring.

    PubMed

    Stone, James L; Bailes, Julian E; Hassan, Ahmed N; Sindelar, Brian; Patel, Vimal; Fino, John

    2017-02-01

    Patients with severe traumatic brain injury or large intracranial space-occupying lesions (spontaneous cerebral hemorrhage, infarction, or tumor) commonly present to the neurocritical care unit with an altered mental status. Many experience progressive stupor and coma from mass effects and transtentorial brain herniation compromising the ascending arousal (reticular activating) system. Yet, little progress has been made in the practicality of bedside, noninvasive, real-time, automated, neurophysiological brainstem or cerebral hemispheric monitoring. In this critical review, we discuss the ascending arousal system, brain herniation, and the shortcomings of our current management, including the neurological exam, intracranial pressure monitoring, and neuroimaging. We present a rationale for the development of nurse-friendly, continuous, automated, and alarmed evoked potential monitoring, based upon the clinical and experimental literature, advances in the prognostication of cerebral anoxia, and intraoperative neurophysiological monitoring.

  20. Ask the experts: automation: part I.

    PubMed

    Allinson, John L; Blick, Kenneth E; Cohen, Lucinda; Higton, David; Li, Ming

    2013-08-01

    Bioanalysis invited a selection of leading researchers to express their views on automation in the bioanalytical laboratory. The topics discussed include the challenges that the modern bioanalyst faces when integrating automation into existing drug-development processes, the impact of automation and how they envision the modern bioanalytical laboratory changing in the near future. Their enlightening responses provide a valuable insight into the impact of automation and the future of the constantly evolving bioanalytical laboratory.

  1. Progress in Fully Automated Abdominal CT Interpretation

    PubMed Central

    Summers, Ronald M.

    2016-01-01

    OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessments of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors are some examples where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews the progress and provides insights into what is in store in the near future for automated analysis of abdominal CT, ultimately leading to fully automated interpretation. PMID:27101207

  2. Automation ensures safety, extends production

    SciTech Connect

    Perdue, J.M.

    1996-06-01

    This paper reviews some current improvements in offshore platform safety as a result of new regulations. In response to Norwegian Petroleum Directorate regulations limiting personnel on the rig floor, Weatherford Norway has developed a casing modem with remote-controlled power tongs. This paper reviews the various automated systems for handling drill pipes and joints on offshore platforms and how they work.

  3. Automation; The New Industrial Revolution.

    ERIC Educational Resources Information Center

    Arnstein, George E.

    Automation is a word that describes the workings of computers and the innovations of automatic transfer machines in the factory. As the hallmark of the new industrial revolution, computers displace workers and create a need for new skills and retraining programs. With improved communication between industry and the educational community to…

  4. Automating a High School Restroom.

    ERIC Educational Resources Information Center

    Ritner-Heir, Robbin

    1999-01-01

    Discusses how one high school transformed its restrooms into cleaner and more vandal-resistant environments by automating them. Solutions discussed include installing perforated stainless steel panel ceilings, using epoxy-based paint for walls, selecting china commode fixtures instead of stainless steel, installing electronic faucets and sensors,…

  5. Teacherbot: Interventions in Automated Teaching

    ERIC Educational Resources Information Center

    Bayne, Sian

    2015-01-01

    Promises of "teacher-light" tuition and of enhanced "efficiency" via the automation of teaching have been with us since the early days of digital education, sometimes embraced by academics and institutions, and sometimes resisted as a set of moves which are damaging to teacher professionalism and to the humanistic values of…

  6. Secure Automated Microgrid Energy System

    DTIC Science & Technology

    2016-12-01

    (EW-201340) Secure Automated Microgrid Energy System, December 2016. This document has been cleared for public release; Distribution Statement A.

  7. Automated species identification: why not?

    PubMed Central

    Gaston, Kevin J; O'Neill, Mark A

    2004-01-01

    Where possible, automation has been a common response of humankind to many activities that have to be repeated numerous times. The routine identification of specimens of previously described species has many of the characteristics of other activities that have been automated, and poses a major constraint on studies in many areas of both pure and applied biology. In this paper, we consider some of the reasons why automated species identification has not become widely employed, and whether it is a realistic option, addressing the notions that it is too difficult, too threatening, too different or too costly. Although recognizing that there are some very real technical obstacles yet to be overcome, we argue that progress in the development of automated species identification is extremely encouraging, and that such an approach has the potential to make a valuable contribution to reducing the burden of routine identifications. Vision and enterprise are perhaps more limiting at present than practical constraints on what might possibly be achieved. PMID:15253351

  8. Automation on the Laboratory Bench.

    ERIC Educational Resources Information Center

    Legrand, M.; Foucard, A.

    1978-01-01

    A kit is described for use in automation of routine chemical research procedures. The kit uses sensors to evaluate the state of the system, actuators which modify the adjustable parameters, and an organ of decision which uses the information from the sensors. (BB)

  9. Automated verification system user's guide

    NASA Technical Reports Server (NTRS)

    Hoffman, R. H.

    1972-01-01

    Descriptions of the operational requirements for all of the programs of the Automated Verification System (AVS) are provided. The AVS programs are: (1) FORTRAN code analysis and instrumentation program (QAMOD); (2) Test Effectiveness Evaluation Program (QAPROC); (3) Transfer Control Variable Tracking Program (QATRAK); (4) Program Anatomy Table Generator (TABGEN); and (5) Network Path Analysis Program (RAMBLE).

  11. Automated ac galvanomagnetic measurement system

    NASA Technical Reports Server (NTRS)

    Szofran, F. R.; Espy, P. N.

    1985-01-01

    An automated, ac galvanomagnetic measurement system is described. Hall or van der Pauw measurements in the temperature range 10-300 K can be made at a preselected magnetic field without operator attendance. Procedures to validate sample installation and correct operation of other system functions, such as magnetic field and thermometry, are included. Advantages of ac measurements are discussed.
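
    After data collection, a van der Pauw measurement of the kind mentioned above reduces to solving the van der Pauw relation exp(-pi*R1/Rs) + exp(-pi*R2/Rs) = 1 for the sheet resistance Rs. A minimal numerical sketch follows; the function name is ours.

```python
# Solve the van der Pauw equation for sheet resistance given two
# four-terminal resistance measurements R1 and R2 (ohms).
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(r1, r2):
    f = lambda rs: np.exp(-np.pi * r1 / rs) + np.exp(-np.pi * r2 / rs) - 1.0
    return brentq(f, 1e-9, 10.0 * np.pi * max(r1, r2))  # bracketed root

# Symmetric check: for R1 = R2 = R, Rs = pi*R/ln(2).
print(sheet_resistance(100.0, 100.0))   # ~453.2 ohms
```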

  12. Automated Solar-Array Assembly

    NASA Technical Reports Server (NTRS)

    Soffa, A.; Bycer, M.

    1982-01-01

    Large arrays are rapidly assembled from individual solar cells by an automated production line developed for NASA's Jet Propulsion Laboratory. The apparatus positions cells within the array, attaches interconnection tabs, applies solder flux, and solders the interconnections. Cells are placed in either straight or staggered configurations and may be connected either in series or in parallel. Tabs are attached at a rate of one every 5 seconds.

  13. Safety in the Automated Office.

    ERIC Educational Resources Information Center

    Graves, Pat R.; Greathouse, Lillian R.

    1990-01-01

    Office automation has introduced new hazards to the workplace: electrical hazards related to computer wiring, musculoskeletal problems resulting from use of computer terminals and design of work stations, and environmental concerns related to ventilation, noise levels, and office machine chemicals. (SK)

  14. Automated Aids for Reliable Software,

    DTIC Science & Technology

    developed for the U. S. Air Force are reported. The concepts of reliability and automation as they pertain to software are explained. Then, over twenty...of the state of the technology is made. Finally, specific recommendations which try to give direction to future efforts are offered. (Author)

  15. Automating the conflict resolution process

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey S.

    1991-01-01

    The purpose is to initiate a discussion of how the conflict resolution process at the Network Control Center can be made more efficient. This paper describes how resource conflicts are currently resolved, as well as the impacts of automating conflict resolution in the ATDRSS era. A variety of conflict resolution strategies are presented.

  16. Formative Automated Computer Testing (FACT).

    ERIC Educational Resources Information Center

    Hunt, Nicoll; Hughes, Janet; Rowe, Glenn

    2002-01-01

    Describes the development of a tool, FACT (Formative Automated Computer Testing), to formatively assess information technology skills of college students in the United Kingdom. Topics include word processing competency; tests designed by tutors and delivered via a network; and results of an evaluation that showed students preferred automated…

  18. AUTOMATING ASSET KNOWLEDGE WITH MTCONNECT.

    PubMed

    Venkatesh, Sid; Ly, Sidney; Manning, Martin; Michaloski, John; Proctor, Fred

    2016-01-01

    In order to maximize assets, manufacturers should use real-time knowledge garnered from ongoing and continuous collection and evaluation of factory-floor machine status data. In discrete parts manufacturing, factory machine monitoring has been difficult, due primarily to closed, proprietary automation equipment that makes integration difficult. Recently, there has been a push toward applying the data acquisition concepts of MTConnect to the real-time acquisition of machine status data. MTConnect is an open, free specification aimed at overcoming the "Islands of Automation" dilemma on the shop floor. With automated asset analysis, manufacturers can improve production to become lean, efficient, and effective. The focus of this paper will be on the deployment of MTConnect to collect real-time machine status to automate asset management. In addition, we will leverage the ISO 22400 standard, which defines an asset and quantifies asset performance metrics. In conjunction with these goals, the deployment of MTConnect in a large aerospace manufacturing facility will be studied, with emphasis on asset management and understanding the impact of machine Overall Equipment Effectiveness (OEE) on manufacturing.
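
    MTConnect agents expose their data over plain HTTP (the /probe, /current, and /sample endpoints defined by the specification), which is what makes this kind of monitoring easy to automate. A minimal polling sketch follows; the agent address is hypothetical, and the three event names are common MTConnect data items rather than anything specific to the facility studied.

```python
# Poll an MTConnect agent's /current endpoint and pull a few event values.
import urllib.request
import xml.etree.ElementTree as ET

AGENT_URL = "http://example-agent:5000/current"   # hypothetical agent address

def current_item_values(url=AGENT_URL):
    with urllib.request.urlopen(url, timeout=5) as resp:
        root = ET.fromstring(resp.read())
    values = {}
    for elem in root.iter():
        # MTConnect XML is namespaced; match on the local tag name only.
        tag = elem.tag.rsplit('}', 1)[-1]
        if tag in ("Availability", "Execution", "ControllerMode"):
            values[tag] = elem.text
    return values
```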

  19. Automated activation-analysis system

    SciTech Connect

    Minor, M.M.; Garcia, S.R.; Denton, M.M.

    1982-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day.

  20. Automation of Space Inventory Management

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Ngo, Phong; Wagner, Raymond; Barton, Richard; Gifford, Kevin

    2009-01-01

    This viewgraph presentation describes the utilization of automated space-based inventory management through handheld RFID readers and BioNet Middleware. The contents include: 1) Space-Based INventory Management; 2) Real-Time RFID Location and Tracking; 3) Surface Acoustic Wave (SAW) RFID; and 4) BioNet Middleware.

  2. Cognitive Approaches to Automated Instruction.

    ERIC Educational Resources Information Center

    Regian, J. Wesley, Ed.; Shute, Valerie J., Ed.

    This book contains a snapshot of state-of-the-art research on the design of automated instructional systems. Selected cognitive psychologists were asked to describe their approach to instruction and cognitive diagnosis, the theoretical basis of the approach, its utility and applicability, and the knowledge engineering or task analysis methods…

  3. Office Automation in Student Affairs.

    ERIC Educational Resources Information Center

    Johnson, Sharon L.; Hamrick, Florence A.

    1987-01-01

    Offers recommendations to assist in introducing or expanding computer assistance in student affairs. Describes need for automation and considers areas of choosing hardware and software, funding and competitive bidding, installation and training, and system management. Cites greater efficiency in handling tasks and data and increased levels of…

  4. Automated Accounting. Payroll. Instructor Module.

    ERIC Educational Resources Information Center

    Moses, Duane R.

    This teacher's guide was developed to assist business instructors using Dac Easy Accounting Payroll Version 3.0 edition software in their accounting programs. The module contains assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting--payroll. Basic accounting skills are…

  5. Automated calculation and simulation systems

    NASA Astrophysics Data System (ADS)

    Ohl, Thorsten

    2003-04-01

    I briefly summarize the parallel sessions on Automated Calculation and Simulation Systems for high-energy particle physics phenomenology at ACAT 2002 (Moscow State University, June 2002), present a short overview of the current status of the field, and try to identify the important trends.

  6. Special Relations in Automated Deduction,

    DTIC Science & Technology

    1985-05-01

    Two deduction rules are introduced to give streamlined treatment to relations of special importance in an automated theorem-proving system.

  7. Automated analysis of oxidative metabolites

    NASA Technical Reports Server (NTRS)

    Furner, R. L. (Inventor)

    1974-01-01

    An automated system for the study of drug metabolism is described. The system monitors the oxidative metabolites of aromatic amines and of compounds which produce formaldehyde on oxidative dealkylation. It includes color-developing compositions suitable for detecting hydroxylated aromatic amines and formaldehyde.
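
    Quantitation in such a colorimetric system ultimately comes down to the Beer-Lambert law, A = epsilon * l * c. A minimal sketch follows; the molar absorptivity used is an illustrative placeholder, not a value from this work.

```python
# Concentration (mol/L) from measured absorbance via Beer-Lambert: c = A/(e*l).
def concentration_from_absorbance(absorbance, epsilon, path_length_cm=1.0):
    return absorbance / (epsilon * path_length_cm)

# e.g., a formaldehyde-derived chromophore with an assumed epsilon:
c = concentration_from_absorbance(0.42, epsilon=7.7e3)
print(f"{c:.2e} mol/L")   # ~5.45e-05 mol/L
```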

  8. Library Automation: Guidelines to Costing.

    ERIC Educational Resources Information Center

    Ford, Geoffrey

    As with all new programs, the costs associated with library automation must be carefully considered before implementation. This document suggests guidelines to be followed and areas to be considered in the costing of library procedures. An existing system model has been suggested as a standard (Appendix A) and a classification of library tasks…

  10. Automated Assessment in Massive Open Online Courses

    ERIC Educational Resources Information Center

    Ivaniushin, Dmitrii A.; Shtennikov, Dmitrii G.; Efimchick, Eugene A.; Lyamin, Andrey V.

    2016-01-01

    This paper describes an approach to using automated assessments in online courses. The Open edX platform is used as the online course platform. The new assessment type uses Scilab as the learning and solution validation tool. This approach allows the use of automated individual variant generation and automated solution checks without involving the course…

  11. Library Automation in the Netherlands and Pica.

    ERIC Educational Resources Information Center

    Bossers, Anton; Van Muyen, Martin

    1984-01-01

    Describes the Pica Library Automation Network (originally the Project for Integrated Catalogue Automation), which is based on a centralized bibliographic database. Highlights include the Pica conception of library automation, online shared cataloging system, circulation control system, acquisition system, and online Dutch union catalog with…

  12. Flight-deck automation: Promises and problems

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.; Curry, R. E.

    1980-01-01

    The state of the art in human factors in flight-deck automation is presented. A number of critical problem areas are identified and broad design guidelines are offered. Automation-related aircraft accidents and incidents are discussed as examples of human factors problems in automated flight.

  13. Does Automated Feedback Improve Writing Quality?

    ERIC Educational Resources Information Center

    Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N.

    2014-01-01

    The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…

  14. You're a What? Automation Technician

    ERIC Educational Resources Information Center

    Mullins, John

    2010-01-01

    Many people think of automation as laborsaving technology, but it sure keeps Jim Duffell busy. Defined simply, automation is a technique for making a device run or a process occur with minimal direct human intervention. But the functions and technologies involved in automated manufacturing are complex. Nearly all functions, from orders coming in…

  15. Office Automation, Personnel and the New Technology.

    ERIC Educational Resources Information Center

    Magnus, Margaret

    1980-01-01

    At the first annual Office Automation Conference, the consensus was that personnel involvement in the development of office automation is vital if the new technology is to be successfully deployed. This report explores the problems inherent in office automation and provides a broad overview of the subject. (CT)

  16. Archives and Automation: Issues and Trends.

    ERIC Educational Resources Information Center

    Weiner, Rob

    This paper focuses on archives and automation, and reviews recent literature on various topics concerning archives and automation. Topics include: resistance to technology and the need to educate about automation; the change in archival theory due to the information age; problems with technology use; the history of organizing archival records…

  18. Design automation for integrated optics

    NASA Astrophysics Data System (ADS)

    Condrat, Christopher

    Recent breakthroughs in silicon photonics technology are enabling the integration of optical devices into silicon-based semiconductor processes. Photonics technology enables high-speed, high-bandwidth, and high-fidelity communications on the chip-scale---an important development in an increasingly communications-oriented semiconductor world. Significant developments in silicon photonic manufacturing and integration are also enabling investigations into applications beyond that of traditional telecom: sensing, filtering, signal processing, quantum technology---and even optical computing. In effect, we are now seeing a convergence of communications and computation, where the traditional roles of optics and microelectronics are becoming blurred. As the applications for opto-electronic integrated circuits (OEICs) are developed, and manufacturing capabilities expand, design support is necessary to fully exploit the potential of this optics technology. Such design support for moving beyond custom-design to automated synthesis and optimization is not well developed. Scalability requires abstractions, which in turn enables and requires the use of optimization algorithms and design methodology flows. Design automation represents an opportunity to take OEIC design to a larger scale, facilitating design-space exploration, and laying the foundation for current and future optical applications---thus fully realizing the potential of this technology. This dissertation proposes design automation for integrated optic system design. Using a building-block model for optical devices, we provide an EDA-inspired design flow and methodologies for optical design automation. Underlying these flows and methodologies are new supporting techniques in behavioral and physical synthesis, as well as device-resynthesis techniques for thermal-aware system integration. We also provide modeling for optical devices and determine optimization and constraint parameters that guide the automation

  19. Survey in expert clinicians on validity of automated calculation of optimal cerebral perfusion pressure.

    PubMed

    Steijn, Romy; Stewart, Roy; Czosnyka, Marek; Donnelly, Joseph; Ercole, Ari; Absalom, Antony; Elting, Jan W; Haubrich, Christina; Smielewski, Peter; Aries, Marcel

    2017-06-22

    Optimal cerebral perfusion pressure (CPPopt) targeting in traumatic brain injury (TBI) patients constitutes an active and controversial area of research. It has been suggested that autoregulation-guided CPP therapy may improve TBI outcome. A prerequisite of a CPPopt intervention study would be objective criteria for CPPopt detection. This study compared the agreement between automated and visual CPPopt detection. Twenty-five clinicians from 18 centres worldwide, familiar with brain monitoring and using dedicated software, reviewed ten 4-hour CPPopt screenshots at 48 hrs after ictus in selected TBI patients. Each screenshot displayed the trends of cerebral perfusion pressure (CPP), intracranial pressure (ICP), and cerebrovascular pressure reactivity (PRx), as well as the 'CPP-optimal' curve and its associated value (automated CPPopt). The main objective was to evaluate the agreement between expert clinicians, as well as the agreement between the clinicians and the automated CPPopt. Twenty-two clinicians responded to our call (88%). Three screenshots were judged as 'CPPopt not determinable' by > 45% of the clinicians. For the whole group, the consensus between automated CPPopt and the clinicians' visual CPPopt was high. Three clinicians were identified as outliers. All clinicians recommended modifying CPP when a patient's CPP differed by more than ±5 mmHg from CPPopt. The inter-observer consensus was highest in cases with current CPP below the optimal value. The overall agreement between automated CPPopt and visual CPPopt identified by autoregulation experts was high, except in those cases where the curve was deemed by the clinicians not reliable enough to yield a trustworthy CPPopt.
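
    For readers unfamiliar with the automated calculation being judged here: CPPopt software of this kind typically averages PRx in CPP bins and takes the minimum of a U-shaped curve fitted through the bin averages. A minimal sketch of that idea follows, with illustrative bin widths and data-sufficiency rules; it is not the dedicated software used in the study.

```python
# Toy CPPopt: average PRx in 5-mmHg CPP bins, fit a parabola, return the
# CPP at its vertex; return None when no trustworthy U-shape exists.
import numpy as np

def cppopt(cpp, prx, bin_width=5.0, lo=40.0, hi=120.0, min_samples=10):
    # cpp and prx are 1-D numpy arrays of simultaneous samples.
    edges = np.arange(lo, hi + bin_width, bin_width)
    centers, means = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (cpp >= a) & (cpp < b)
        if mask.sum() >= min_samples:         # require enough data per bin
            centers.append((a + b) / 2.0)
            means.append(prx[mask].mean())
    if len(centers) < 3:
        return None                           # curve not determinable
    a2, a1, _ = np.polyfit(centers, means, 2)
    if a2 <= 0:
        return None                           # no U-shape, no optimum
    return -a1 / (2.0 * a2)                   # vertex of the fitted parabola
```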

  20. Chemo Brain

    MedlinePlus

    ... risk of memory problems in cancer survivors include: brain cancer; chemotherapy given directly to the central nervous system; ... of chemotherapy or radiation; radiation therapy to the brain; younger age at time of cancer diagnosis and treatment; and increasing age. ...

  1. Brain Facts.

    PubMed

    Wright, Kerri

    2013-11-20

    The Brain Facts website is a treasure trove of information about neuroscience and the brain. It covers a range of diseases and disorders, as well as the science of ageing, and is relevant to practitioners and students in all branches of nursing and midwifery.

  2. The Brain.

    ERIC Educational Resources Information Center

    Callison, Daniel

    2001-01-01

    Discusses basic facts about the brain and new research findings concerning growth and development that may help reconsider how information literacy skills are taught. Explains Kovalik's Integrated Thematic Instruction Model that recommends taking into account brain research and tying into relevant activities for the entire school year. (LRW)

  3. Classification of CT-brain slices based on local histograms

    NASA Astrophysics Data System (ADS)

    Avrunin, Oleg G.; Tymkovych, Maksym Y.; Pavlov, Sergii V.; Timchik, Sergii V.; Kisała, Piotr; Orakbaev, Yerbol

    2015-12-01

    Neurosurgical intervention is a very complicated process. Modern operating procedures are based on data such as CT, MRI, etc. Automated analysis of these data is an important task for researchers. Some modern methods of brain-slice segmentation use additional data to process these images; classification can be used to obtain this information. To classify CT images of the brain, we suggest using local histograms and features extracted from them. The paper shows the process of feature extraction and classification of CT slices of the brain. The feature extraction process is specialized for axial cross-sections of the brain. The work can be applied to medical neurosurgical systems.
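
    As a minimal sketch of the local-histogram idea (grid size, bin count, and function name are our illustrative choices, not the authors'): each axial slice is divided into patches, each patch contributes a normalized intensity histogram, and the concatenated histograms form the feature vector handed to a standard classifier.

```python
# Local-histogram features for one normalized CT slice (values in [0, 1]).
import numpy as np

def local_histogram_features(slice2d, grid=(4, 4), bins=16):
    h, w = slice2d.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = slice2d[i * h // grid[0]:(i + 1) * h // grid[0],
                            j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

# A standard classifier (e.g., sklearn's RandomForestClassifier) can then be
# fit on [local_histogram_features(s) for s in slices] and level labels.
```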

  4. Knowledge systems support for mission operations automation

    NASA Astrophysics Data System (ADS)

    Atkinson, David J.

    1990-10-01

    A knowledge system which utilizes artificial intelligence technology to automate a subset of real-time mission operations functions is described. An overview of spacecraft telecommunications operations at the Jet Propulsion Laboratory (JPL) highlights requirements for automation. The knowledge system, called the Spacecraft Health Automated Reasoning Prototype (SHARP), which was developed to explore methods for automated health and status analysis, is outlined. The advantages of the system were demonstrated during the spacecraft's encounter with the planet Neptune. The design of the fault detection and diagnosis portions of SHARP is discussed. The performance of SHARP during the encounter is discussed along with issues and benefits arising from the application of knowledge systems to mission operations automation.

  5. Role of automation in new instrumentation.

    PubMed

    Johnson, C A

    1993-04-01

    In recent years there has been an unprecedented increase in the development of automated instrumentation for ophthalmic diagnostic and assessment purposes. An important part of this growth in automated clinical ophthalmic instrumentation has been directed to perimetry and visual field testing. In less than 15 years automated perimetry has advanced from a laboratory curiosity to become the standard for clinical visual field testing. This paper will provide a brief overview of the impact that automated perimetry has had on current clinical ophthalmic practice and patient management. It is presented as a general example of the influence that automated instrumentation has exerted on the clinical environment.

  6. Automated analysis of a diverse synapse population.

    PubMed

    Busse, Brad; Smith, Stephen

    2013-01-01

    Synapses of the mammalian central nervous system are highly diverse in function and molecular composition. Synapse diversity per se may be critical to brain function, since memory and homeostatic mechanisms are thought to be rooted primarily in activity-dependent plastic changes in specific subsets of individual synapses. Unfortunately, the measurement of synapse diversity has been restricted by the limitations of methods capable of measuring synapse properties at the level of individual synapses. Array tomography is a new high-resolution, high-throughput proteomic imaging method that has the potential to advance the measurement of unit-level synapse diversity across large and diverse synapse populations. Here we present an automated feature extraction and classification algorithm designed to quantify synapses from high-dimensional array tomographic data too voluminous for manual analysis. We demonstrate the use of this method to quantify laminar distributions of synapses in mouse somatosensory cortex and validate the classification process by detecting the presence of known but uncommon proteomic profiles. Such classification and quantification will be highly useful in identifying specific subpopulations of synapses exhibiting plasticity in response to perturbations from the environment or the sensory periphery.
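
    A minimal sketch of the per-punctum feature extraction such a pipeline performs (channel names, features, and the classifier are illustrative assumptions, not the published algorithm): label candidate puncta in a reference channel and measure each immunolabel channel's intensity inside every punctum.

```python
# Per-punctum mean intensities across immunolabel channels.
import numpy as np
from scipy import ndimage

def punctum_features(channels, mask):
    """Rows: puncta found in `mask`; columns: mean intensity per channel."""
    labels, n = ndimage.label(mask)
    return np.column_stack([
        ndimage.mean(ch, labels=labels, index=np.arange(1, n + 1))
        for ch in channels
    ])

# Hypothetical usage with a trained classifier (e.g., a random forest):
# clf.fit(train_feats, train_synapse_types)
# types = clf.predict(punctum_features(channels, synapsin_mask))
```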

  7. Automated Analysis of a Diverse Synapse Population

    PubMed Central

    Busse, Brad; Smith, Stephen

    2013-01-01

    Synapses of the mammalian central nervous system are highly diverse in function and molecular composition. Synapse diversity per se may be critical to brain function, since memory and homeostatic mechanisms are thought to be rooted primarily in activity-dependent plastic changes in specific subsets of individual synapses. Unfortunately, the measurement of synapse diversity has been restricted by the limitations of methods capable of measuring synapse properties at the level of individual synapses. Array tomography is a new high-resolution, high-throughput proteomic imaging method that has the potential to advance the measurement of unit-level synapse diversity across large and diverse synapse populations. Here we present an automated feature extraction and classification algorithm designed to quantify synapses from high-dimensional array tomographic data too voluminous for manual analysis. We demonstrate the use of this method to quantify laminar distributions of synapses in mouse somatosensory cortex and validate the classification process by detecting the presence of known but uncommon proteomic profiles. Such classification and quantification will be highly useful in identifying specific subpopulations of synapses exhibiting plasticity in response to perturbations from the environment or the sensory periphery. PMID:23555213

  8. Specimen coordinate automated measuring machine/fiducial automated measuring machine

    DOEpatents

    Hedglen, Robert E.; Jacket, Howard S.; Schwartz, Allan I.

    1991-01-01

    The Specimen Coordinate Automated Measuring Machine (SCAMM) and the Fiducial Automated Measuring Machine (FAMM) are computer-controlled metrology systems capable of measuring length, width, and thickness, and of locating fiducial marks. SCAMM and FAMM have many similarities in their designs, and they can be converted from one to the other without taking them out of the hot cell. Both have means for: supporting a plurality of samples and a standard; controlling the movement of the samples in the +/- X and Y directions; determining the coordinates of the sample; compensating for temperature effects; and verifying the accuracy of the measurements and repeating as necessary. SCAMM and FAMM are designed to be used in hot cells.

  9. Brain investigation and brain conceptualization

    PubMed Central

    Redolfi, Alberto; Bosco, Paolo; Manset, David; Frisoni, Giovanni B.

    Summary The brain of a patient with Alzheimer’s disease (AD) undergoes changes starting many years before the development of the first clinical symptoms. The recent availability of large prospective datasets makes it possible to create sophisticated brain models of healthy subjects and patients with AD, showing pathophysiological changes occurring over time. However, these models are still inadequate; representations are mainly single-scale and they do not account for the complexity and interdependence of brain changes. Brain changes in AD patients occur at different levels and for different reasons: at the molecular level, changes are due to amyloid deposition; at cellular level, to loss of neuron synapses, and at tissue level, to connectivity disruption. All cause extensive atrophy of the whole brain organ. Initiatives aiming to model the whole human brain have been launched in Europe and the US with the goal of reducing the burden of brain diseases. In this work, we describe a new approach to earlier diagnosis based on a multimodal and multiscale brain concept, built upon existing and well-characterized single modalities. PMID:24139654

  10. [Brain concussion].

    PubMed

    Pälvimäki, Esa-Pekka; Siironen, Jari; Pohjola, Juha; Hernesniemi, Juha

    2011-01-01

    Brain concussion is a common disturbance caused by external forces or acceleration affecting the head. It may be accompanied by transient loss of consciousness and amnesia. Typical symptoms include headache, nausea and dizziness; these may remain for a week or two. Some patients may experience a transient inability to create new memories or other brief impairment of mental functioning. Treatment is symptomatic. Some patients may suffer from prolonged symptoms, whose connection with brain concussion is difficult to demonstrate. Almost invariably the prognosis of brain concussion is good.

  11. Improving the driver-automation interaction: an approach using automation uncertainty.

    PubMed

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false belief in system infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.
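
    The style of analysis mentioned (quantile regression of time to collision on the display condition) can be sketched with synthetic data as below; the variable names and the single median quantile are assumptions, not the study's actual model.

        # Illustrative sketch of a quantile regression like the one
        # described: regressing time-to-collision (TTC) on whether
        # uncertainty information was displayed. Data are synthetic.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 200
        uncertainty_shown = rng.integers(0, 2, n)
        ttc = 2.0 + 0.6 * uncertainty_shown + rng.gamma(2.0, 0.5, n)
        df = pd.DataFrame({"ttc": ttc,
                           "uncertainty_shown": uncertainty_shown})

        median_fit = smf.quantreg("ttc ~ uncertainty_shown", df).fit(q=0.5)
        print(median_fit.summary())  # effect of the display on median TTC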

  12. Automated nutrient analyses in seawater

    SciTech Connect

    Whitledge, T.E.; Malloy, S.C.; Patton, C.J.; Wirick, C.D.

    1981-02-01

    This manual was assembled for use as a guide for analyzing the nutrient content of seawater samples collected in the marine coastal zone of the Northeast United States and the Bering Sea. Some modifications (changes in dilution or sample pump tube sizes) may be necessary to achieve optimum measurements in very pronounced oligotrophic, eutrophic or brackish areas. Information is presented under the following section headings: theory and mechanics of automated analysis; continuous flow system description; operation of autoanalyzer system; cookbook of current nutrient methods; automated analyzer and data analysis software; computer interfacing and hardware modifications; and trouble shooting. The three appendixes are entitled: references and additional reading; manifold components and chemicals; and software listings. (JGB)

  13. Automated Illustration of Patients' Instructions

    PubMed Central

    Bui, Duy; Nakamura, Carlos; Bray, Bruce E.; Zeng-Treitler, Qing

    2012-01-01

    A picture can be a powerful communication tool. However, creating pictures to illustrate patient instructions can be a costly and time-consuming task. Building on our prior research in this area, we developed a computer application that automatically converts text to pictures using natural language processing and computer graphics techniques. After iterative testing, the automated illustration system was evaluated using 49 previously unseen cardiology discharge instructions. The completeness of the system-generated illustrations was assessed by three raters using a three-level scale. The average inter-rater agreement for text correctly represented in the pictograph was about 66 percent. Since illustration in this context is intended to enhance rather than replace text, these results support the feasibility of conducting automated illustration. PMID:23304392
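
    A toy sketch of the text-to-picture idea: map instruction words to pictograph identifiers by keyword matching. The published system uses full natural language processing and computer graphics; the lexicon and matching rule here are invented placeholders.

        # Toy sketch: map instruction text to pictograph identifiers by
        # keyword matching. The lexicon below is an invented placeholder,
        # not the published system's knowledge base.
        PICTOGRAPH_LEXICON = {
            "take": "pict_take_medication",
            "pill": "pict_pill",
            "daily": "pict_calendar_daily",
            "walk": "pict_walking",
            "doctor": "pict_physician_visit",
        }

        def illustrate(instruction: str) -> list[str]:
            words = instruction.lower().replace(",", " ").split()
            return [PICTOGRAPH_LEXICON[w] for w in words
                    if w in PICTOGRAPH_LEXICON]

        print(illustrate("Take one pill daily and walk 20 minutes"))
        # -> ['pict_take_medication', 'pict_pill',
        #     'pict_calendar_daily', 'pict_walking']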

  14. Automated labeling in document images

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2000-12-01

    The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE database. This system, named the Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.
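
    A toy version of rule-based zone labeling from OCR-derived features; the actual AL module applies roughly 120 rules, and the feature names and thresholds below are invented for illustration.

        # Toy sketch of rule-based zone labeling from OCR-derived layout
        # features. Feature names and thresholds are assumptions, not the
        # AL module's actual rules.
        def label_zone(zone: dict) -> str:
            """zone: {'y_top': float (0=page top, 1=bottom),
                      'font_size': float, 'text': str}"""
            text = zone["text"].lower()
            if zone["y_top"] < 0.15 and zone["font_size"] >= 14:
                return "title"
            if "university" in text or "department" in text:
                return "affiliation"
            if text.startswith("abstract"):
                return "abstract"
            if zone["y_top"] < 0.30:
                return "author"
            return "other"

        print(label_zone({"y_top": 0.05, "font_size": 18,
                          "text": "Automated labeling in document images"}))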

  15. Approach to automation in immunohistochemistry.

    PubMed

    Moreau, A; Le Neel, T; Joubert, M; Truchaud, A; Laboisse, C

    1998-12-01

    The introduction of immunochemical techniques into the routine pathology laboratory has significantly expanded the capabilities of the pathologist in diagnostic procedures. Immunostaining represents a powerful diagnostic tool for the identification and localization of cellular antigens in paraffin sections, frozen tissues and cell preparations. The labeled-streptavidin-biotin method provides excellent sensitivity and performance. This multistep procedure includes: incubation of the slide with the primary antibody, reaction with the biotinylated secondary antibody, binding with an enzyme-conjugated streptavidin, and revelation with a chromogen substrate. Evaluation of the finished product is directly dependent on the quality of the technique. The main critical steps of this manual method are reagent application, incubation times and rinsing; these steps are accessible to automation. Automation in immunohistochemistry could guarantee consistent labelling quality by improving the standardisation, optimization and traceability of operations. The required qualifications are analytical flexibility, low cost, walkaway operation, a user-friendly interface and biosafety.

  16. Automated Demand Response and Commissioning

    SciTech Connect

    Piette, Mary Ann; Watson, David S.; Motegi, Naoya; Bourassa, Norman

    2005-04-01

    This paper describes the results from the second season of research to develop and evaluate the performance of new Automated Demand Response (Auto-DR) hardware and software technology in large facilities. Demand Response (DR) is a set of activities to reduce or shift electricity use to improve electric grid reliability and manage electricity costs. Fully-Automated Demand Response does not involve human intervention, but is initiated at a home, building, or facility through receipt of an external communications signal; we refer to this as Auto-DR. For evaluation, the control and communications systems must be properly configured and must pass through a set of test stages: Readiness, Approval, Price Client/Price Server Communication, Internet Gateway/Internet Relay Communication, Control of Equipment, and DR Shed Effectiveness. New commissioning tests are needed for such systems to improve the connection of demand-responsive building systems to electric grid demand response systems.
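
    The staged pass/fail sequence named above can be represented as a simple sequential test runner that stops at the first failing stage; the check functions are placeholders, not real Auto-DR interfaces.

        # Sketch of the staged commissioning sequence from the abstract,
        # run in order with stop-on-failure. Check functions are
        # placeholders, not real Auto-DR interfaces.
        STAGES = [
            "Readiness",
            "Approval",
            "Price Client/Price Server Communication",
            "Internet Gateway/Internet Relay Communication",
            "Control of Equipment",
            "DR Shed Effectiveness",
        ]

        def run_commissioning(checks: dict) -> bool:
            for stage in STAGES:
                ok = checks.get(stage, lambda: False)()
                print(f"{stage}: {'PASS' if ok else 'FAIL'}")
                if not ok:
                    return False
            return True

        # Example: every stage stubbed to pass.
        run_commissioning({s: (lambda: True) for s in STAGES})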

  18. Automated Scheduling Via Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.; Cooper, Lynne P.

    1991-01-01

    Artificial-intelligence software that automates scheduling was developed in the Operations Mission Planner (OMP) research project. The software is used both to generate new schedules and to modify existing schedules in view of changes in tasks and/or available resources. The approach is based on iterative refinement. Although the project focused on scheduling the operations of scientific instruments and other equipment aboard spacecraft, the approach is also applicable to such terrestrial problems as scheduling production in a factory.
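
    Iterative refinement can be sketched as: build an initial schedule, detect resource conflicts, and repair by moving a conflicting task, repeating until clean. Everything below is a generic illustration of that idea, not the OMP system.

        # Generic sketch of schedule repair by iterative refinement:
        # detect tasks that overlap on a shared resource and push one
        # later. An illustration of the idea, not the OMP implementation.
        def conflicts(schedule):
            """Return pairs of task names that overlap on one resource."""
            out = []
            items = sorted(schedule.items(), key=lambda kv: kv[1]["start"])
            for (n1, t1), (n2, t2) in zip(items, items[1:]):
                if (t1["resource"] == t2["resource"]
                        and t1["start"] + t1["dur"] > t2["start"]):
                    out.append((n1, n2))
            return out

        def refine(schedule, max_iters=100):
            for _ in range(max_iters):
                bad = conflicts(schedule)
                if not bad:
                    return schedule
                first, second = bad[0]
                # Repair: delay the later task until the earlier one ends.
                schedule[second]["start"] = (schedule[first]["start"]
                                             + schedule[first]["dur"])
            return schedule

        sched = {"obs_A": {"resource": "camera", "start": 0, "dur": 5},
                 "obs_B": {"resource": "camera", "start": 3, "dur": 4}}
        print(refine(sched))  # obs_B pushed to start at 5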

  19. Automated Test Requirement Document Generation

    DTIC Science & Technology

    1987-11-01

    "Diagnostics Based on the Principles of Artificial Intelligence", 1984 International Test Conference, 01Oct84... Glossary of acronyms: AFSATCOM, Air Force Satellite Communication; AI, Artificial Intelligence; ASIC, Application Specific... Built-In-Test Equipment (BITE) and AI (Artificial Intelligence) expert systems need to be fully applied before a completely automated process can be...

  20. Convection automated logic oven control

    SciTech Connect

    Boyer, M.A.; Eke, K.I.

    1998-03-01

    For the past few years, there has been a greater push to bring more automation to the cooking process. There have been attempts at automated cooking using a wide range of sensors and procedures, but with limited success. The authors have the answer to the automated cooking process; this patented technology is called Convection AutoLogic (CAL). The beauty of the technology is that it requires no extra hardware for the existing oven system. It uses the existing temperature probe, whether it is an RTD, thermocouple, or thermistor. This means that the manufacturer is not burdened with extra costs for automated cooking in comparison to standard ovens. The only change to the oven is the program in the central processing unit (CPU) on the board. As for its operation, when the user places the food into the oven, he or she is required to select a category (e.g., beef, poultry, or casseroles) and then simply press the start button. The CAL program then begins its cooking program. It first looks at the ambient oven temperature to see if it is a cold, warm, or hot start. CAL stores this data and then begins to look at the food's thermal footprint. After CAL has properly detected this thermal footprint, it can calculate the time and temperature at which the food needs to be cooked. CAL then sets up these factors for the cooking stage of the program and, when the food has finished cooking, the oven is turned off automatically. The total time for this entire process is the same as the standard cooking time the user would normally set. The CAL program can also compensate for varying line voltages and detect when the oven door is opened. With all of these varying factors being monitored, CAL can produce a perfectly cooked item with minimal user input.
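
    The control flow described above (classify the starting temperature, sample the thermal footprint, derive a time and temperature, then cook) can be sketched as follows; all thresholds and table entries are invented.

        # Sketch of the described control flow: classify the starting
        # oven temperature, sample the food's "thermal footprint", then
        # derive a cook time/temperature. All numbers are invented.
        def start_class(ambient_c: float) -> str:
            if ambient_c < 50:
                return "cold"
            if ambient_c < 150:
                return "warm"
            return "hot"

        # Assumed lookup: (category, footprint) -> (temp degC, minutes)
        COOK_TABLE = {
            ("poultry", "slow_rise"): (190, 75),
            ("poultry", "fast_rise"): (180, 55),
            ("beef", "slow_rise"): (175, 90),
        }

        def plan_cook(category: str, ambient_c: float,
                      probe_readings: list[float]):
            rate = (probe_readings[-1] - probe_readings[0]) / len(probe_readings)
            footprint = "fast_rise" if rate > 1.0 else "slow_rise"
            temp, minutes = COOK_TABLE[(category, footprint)]
            return start_class(ambient_c), footprint, temp, minutes

        print(plan_cook("poultry", 25.0, [30.0, 32.0, 33.5, 35.5]))
        # -> ('cold', 'fast_rise', 180, 55)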

  1. Automating GONG's Angle Determination Pipeline

    NASA Astrophysics Data System (ADS)

    Toner, C. G.

    2005-05-01

    Recently, GONG started recording regular noon drift-scans throughout the Network (3 per week). This is an effort to prevent spurious "wobbling" of GONG's merged images by providing regular "reality checks" on the true orientation of the site images. Wobbling (a.k.a. the "Washing Machine Effect") can be very detrimental to local helioseismology analyses. Here we describe recent steps to automate the processing of the drift-scans once they arrive in Tucson.
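
    One way a drift scan constrains orientation is that the solar centroid should track along a known direction, so a straight-line fit to centroid positions yields the image roll angle. The sketch below illustrates only that geometry, with assumed noise and angle values; it is not GONG's pipeline.

        # Illustrative geometry only (not GONG's pipeline): during a
        # drift scan the solar centroid moves along a known direction, so
        # a line fit to centroid (x, y) positions gives the roll angle.
        import numpy as np

        rng = np.random.default_rng(2)
        true_angle_deg = 1.7                   # pretend camera roll
        t = np.linspace(0.0, 1.0, 50)
        theta = np.radians(true_angle_deg)
        x = t * np.cos(theta) + rng.normal(0, 1e-3, t.size)
        y = t * np.sin(theta) + rng.normal(0, 1e-3, t.size)

        slope, _intercept = np.polyfit(x, y, 1)
        print(np.degrees(np.arctan(slope)))    # recovered roll, ~1.7 deg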

  2. Home automation in the workplace.

    PubMed

    McCormack, J E; Tello, S F

    1994-01-01

    Environmental control units and home automation devices contribute to the independence and potential of individuals with disabilities, both at work and at home. Devices currently exist that can assist people with physical, cognitive, and sensory disabilities to control lighting, appliances, temperature, security, and telephone communications. This article highlights several possible applications for these technologies and discusses emerging technologies that will increase the benefits these devices offer people with disabilities.

  3. Plans Toward Automated Chat Summarization

    DTIC Science & Technology

    2011-06-01

    summarize real-time chat room messages to address a problem in the United States military: information overload and the need for automated techniques... frequent use of abbreviations, acronyms, deletion of subject pronouns, use of emoticons, abbreviation of nicknames, and stripping of vowels from words to... personal requirements. For example, if someone only has a short amount of time to read a summary, then they can specify a low level of detail to quickly...

  4. Software design for distribution automation

    SciTech Connect

    Gillerman, J.; Nave, R.; Tran, T.

    1994-12-31

    This paper describes the design and implementation of standards-based communications software for distribution automation. The design of a simple application is detailed, which communicates with a program that controls sampling of an AC waveform and operates a switch. The software developed provides one possible implementation of a standard communication method between a central station and remote controllers. The software has been built using an EPRI Utility Communications Architecture (UCA) component called Manufacturing Messaging Service (MMS).
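
    The central-station-to-remote-controller exchange can be illustrated with a generic TCP request/response; this uses plain sockets and a made-up text command for illustration, not UCA/MMS, which defines its own message formats.

        # Generic request/response illustration of a central station
        # asking a remote controller to operate a switch. Plain TCP and a
        # made-up text command -- not UCA/MMS message formats.
        import socket
        import threading

        srv = socket.create_server(("127.0.0.1", 0))  # OS picks a port
        port = srv.getsockname()[1]

        def remote_controller():
            conn, _ = srv.accept()
            cmd = conn.recv(1024).decode()
            conn.sendall(("OK " + cmd).encode())  # pretend it operated
            conn.close()

        threading.Thread(target=remote_controller, daemon=True).start()

        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(b"OPEN_SWITCH feeder_12")
            print(c.recv(1024).decode())  # -> OK OPEN_SWITCH feeder_12
        srv.close()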

  5. Fully automated urban traffic system

    NASA Technical Reports Server (NTRS)

    Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.

    1977-01-01

    The replacement of the driver with an automatic system that could guide and route a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed, but such a system was technically feasible.

  6. Market Investigation for Automated Warehousing

    DTIC Science & Technology

    1990-06-28

    ...support supply units can take full advantage of available space and material handling equipment (MHE). These supplies are grouped for warehousing... provides maximum product accessibility with minimum floor space use. On-board machine controls interface with the PC end-of-aisle controllers for... enough to explore the adaptation of AGV technology to the field environment.

  7. Protocol for Automated Zooplankton Analysis

    DTIC Science & Technology

    2010-01-01

    Protocol for Automated Zooplankton Analysis. List of figures: Figure 1, Photograph of the SensoPlate Glass Bottom Cell Culture Plate; Figure A-1, File... (Artemia franciscana) and rotifers (Brachionus plicatilis and B. calyciflorus). Initial work was conducted with homogeneous monocultures with little to... resistant materials. Based on these criteria, NRL used the SensoPlate Glass Bottom Cell Culture Plates (Item # 692892; Greiner Bio-One, Monroe, NC...

  8. Small Business Innovations (Automated Information)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Bruce G. Jackson & Associates Document Director is an automated tool that combines word processing and database management technologies to offer the flexibility and convenience of text processing with the linking capability of database management. Originally developed for NASA, it provides a means to collect and manage information associated with requirements development. The software system was used by NASA in the design of the Assured Crew Return Vehicle, as well as by other government and commercial organizations including the Southwest Research Institute.

  9. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms form a complete statistical procedure for quantifying cell abnormalities from digitized images. The procedure could be the basis for automated detection and diagnosis of cancer. Its objective is to assign each cell an atypia status index (ASI), which quantifies its level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.
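
    The abstract does not define the ASI formula, so the sketch below makes a generic choice, scoring each cell by the squared Mahalanobis distance of its features from a reference population of normal cells, purely to illustrate assigning a per-cell abnormality index.

        # Illustrative only: the paper's ASI formula is not given here,
        # so this scores each cell by squared Mahalanobis distance of its
        # features from a reference population of normal cells.
        import numpy as np

        rng = np.random.default_rng(3)
        normal = rng.normal(0.0, 1.0, (500, 3))  # reference cells x features
        mu = normal.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

        def atypia_index(cell_features: np.ndarray) -> float:
            d = cell_features - mu
            return float(d @ cov_inv @ d)

        print(atypia_index(np.array([0.1, -0.2, 0.3])))  # near-normal: small
        print(atypia_index(np.array([4.0, 4.0, 4.0])))   # abnormal: large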

  10. Automated Anti-Virus Deployment

    DTIC Science & Technology

    2004-11-01

    External collaborators and visitors also need to keep in contact with their home laboratories or institutes, using the Internet to exchange e-mails or... layered defence system deployed with other components like host- or network-based intrusion detection, global and personal firewalls, logical network... and provides the standard services that are requested of a modern enterprise network: office automation, e-mail, Internet access and workgroup file...

  12. Automated Platform Management System Scheduling

    NASA Technical Reports Server (NTRS)

    Hull, Larry G.

    1990-01-01

    The Platform Management System was established to coordinate the operation of platform systems and instruments. The management functions are split between ground and space components. Since platforms are to be out of contact with the ground more than the manned base, the on-board functions are required to be more autonomous than those of the manned base. Under this concept, automated replanning and rescheduling, including on-board real-time schedule maintenance and schedule repair, are required to effectively and efficiently meet Space Station Freedom mission goals. In a FY88 study, we developed several promising alternatives for automated platform planning and scheduling. We recommended both a specific alternative and a phased approach to automated platform resource scheduling. Our recommended alternative was based upon use of exactly the same scheduling engine in both ground and space components of the platform management system. Our phased approach recommendation was based upon evolutionary development of the platform. In the past year, we developed platform scheduler requirements and implemented a rapid prototype of a baseline platform scheduler. Presently we are rehosting this platform scheduler rapid prototype and integrating the scheduler prototype into two Goddard Space Flight Center testbeds, as the ground scheduler in the Scheduling Concepts, Architectures, and Networks Testbed and as the on-board scheduler in the Platform Management System Testbed. Using these testbeds, we will investigate rescheduling issues, evaluate operational performance and enhance the platform scheduler prototype to demonstrate our evolutionary approach to automated platform scheduling. The work described in this paper was performed prior to Space Station Freedom rephasing, transfer of platform responsibility to Code E, and other recently discussed changes. We neither speculate on these changes nor attempt to predict the impact of the final decisions. As a consequence some of our

  13. Making the transition to automation

    SciTech Connect

    Christenson, D.J.

    1992-10-01

    By 1995, the Bureau of Reclamation's hydropower plant near Hungry Horse, Montana, will be remotely operated from Grand Coulee dam (about 300 miles away) in Washington State. Automation at Hungry Horse will eliminate the need for four full-time power plant operators. Between now and then, a transition plan that offers employees choices for retraining, transferring, or taking early retirement will smooth the transition in reducing from five operators to one. The transition plan also includes the use of temporary employees to offset risks of reducing staff too soon. When completed in 1953, the Hungry Horse structure was the world's fourth largest and fourth highest concrete dam. The arch-gravity structure has a crest length of 2,115 feet; it is 3,565 feet above sea level. The four turbine-generator units in the powerhouse total 284 MW, and supply approximately 1 billion kilowatt-hours of electricity annually to the federal power grid managed by the Bonneville Power Administration. In 1988, Reclamation began to automate operations at many of its hydro plants, and to establish centralized control points. The control center concept will increase efficiency. It also will coordinate water movements and power supply throughout the West. In the Pacific Northwest, the Grand Coulee and Black Canyon plants are automated control centers. Several Reclamation-owned facilities in the Columbia River Basin, including Hungry Horse, will be connected to these centers via microwave and telephone lines. When automation is complete, constant monitoring by computer will replace hourly manual readings and equipment checks. Computers also are expected to increase water use efficiency by 1 to 2 percent by ensuring operation for maximum turbine efficiency. Unit efficiency curves for various heads will be programmed into the system.
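
    The idea of programming unit efficiency curves for various heads can be sketched as interpolating an efficiency table over head and flow and picking the flow that maximizes efficiency at the current head; all numbers below are invented.

        # Sketch of using programmed unit efficiency curves: interpolate
        # efficiency over (head, flow) and pick the best flow for the
        # current head. All numbers are invented for illustration.
        import numpy as np

        heads = np.array([90.0, 100.0, 110.0])  # meters (invented)
        flows = np.linspace(50.0, 150.0, 101)   # m^3/s (invented)
        # Invented efficiency surface; the flow optimum shifts with head.
        eff = np.array([[0.80 + 0.10 * np.exp(-((f - (90 + 0.4 * h)) ** 2)
                                              / 600.0)
                         for f in flows] for h in heads])

        def best_flow(current_head: float):
            # Interpolate each flow's efficiency to the current head,
            # then choose the flow with the highest efficiency.
            eff_at_head = np.array([np.interp(current_head, heads, eff[:, j])
                                    for j in range(flows.size)])
            j = int(np.argmax(eff_at_head))
            return flows[j], eff_at_head[j]

        print(best_flow(104.0))  # (flow in m^3/s, efficiency) near optimum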

  14. Brain Stimulation Therapies

    MedlinePlus

    Overview of brain stimulation therapies, including electroconvulsive therapy (ECT), magnetic seizure therapy (which allows for a shorter recovery time than ECT), and deep brain stimulation (DBS).

  15. Right Hemisphere Brain Damage

    MedlinePlus

    Right hemisphere brain damage (RHD) is damage to the right side of the brain. Listed under Language and Swallowing / Disorders and Diseases; also available in Spanish.

  16. Brain radiation - discharge

    MedlinePlus

    Also called: Radiation - brain - discharge; Cancer - brain radiation; Lymphoma - brain radiation; Leukemia - brain radiation. You may take dexamethasone (Decadron) while you are getting radiation to the brain; it may make you hungrier and cause leg swelling...

  17. Brain tumor - primary - adults

    MedlinePlus

    Also called: Vestibular schwannoma (acoustic neuroma) - adults; Meningioma - adults; Cancer - brain tumor (adults). Primary brain tumors include any tumor that starts in the brain; they can start from brain cells...

  18. Brain Basics

    MedlinePlus

    ...affect many aspects of life. Scientists are continually learning more about how the brain grows and works...

  19. Brain Lesions

    MedlinePlus

    References: uptodate.com/contents/search (accessed Aug. 14, 2017); Sports-related concussion, Merck Manual Professional Version, http://www.merckmanuals.com/professional/injuries-poisoning/traumatic-brain-injury-tbi/sports-related-concussion (accessed Aug. 14, 2017).

  20. Fragile Brains.

    ERIC Educational Resources Information Center

    Jensen, Eric

    2001-01-01

    Describes three types of brain disorders: the sluggish, the oppositional, and the depressed. Explains how to identify these disorders and offers educators strategies for dealing with each. (Contains 11 references.) (PKP)