Sample records for semi-automatic electronic evaluation

  1. A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation

    PubMed Central

    Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2014-01-01

    The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses, and supervised classification techniques are used to evaluate their potential, based on which we resolve the merge tree with consistency constraints to acquire the final intra-section segmentation. Then, we use a supervised learning based linking procedure for inter-section neuron reconstruction. We also develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-section segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-section segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-section segmentation accuracy. PMID:24491638
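    The merge-tree resolution step described in this record can be viewed as a bottom-up dynamic program: for each node, either keep the merged region or keep the best selection from its children, whichever scores higher, so the chosen regions never overlap. A minimal illustrative sketch, not the authors' code; the tree encoding, node names and scores below are hypothetical:

```python
def resolve_merge_tree(tree, scores, root):
    """Pick a consistent (non-overlapping) set of regions from a merge tree
    maximizing total score. `tree` maps node -> list of children; `scores`
    maps node -> classifier score; leaves may be absent from `tree`."""
    def best(node):
        children = tree.get(node, [])
        if not children:
            return scores[node], [node]
        child_total, child_picked = 0.0, []
        for child in children:
            s, picked = best(child)
            child_total += s
            child_picked += picked
        # Keep the merged region only if it beats its best sub-partition.
        if scores[node] >= child_total:
            return scores[node], [node]
        return child_total, child_picked
    return best(root)
```

Selecting either a node or (recursively) its children guarantees the consistency constraint that no region is chosen together with one of its ancestors.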

  2. Response Evaluation of Malignant Liver Lesions After TACE/SIRT: Comparison of Manual and Semi-Automatic Measurement of Different Response Criteria in Multislice CT.

    PubMed

    Höink, Anna Janina; Schülke, Christoph; Koch, Raphael; Löhnert, Annika; Kammerer, Sara; Fortkamp, Rasmus; Heindel, Walter; Buerke, Boris

    2017-11-01

    Purpose  To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Materials and Methods  Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. Results  The median RID for vitality-based parameters was lower for semi-automatic than for manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically significant (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease is not found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion  Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions. Key points  · Liver lesion measurements according to EASL and mRECIST are more precise when performed semi-automatically. · The higher reproducibility may facilitate a more reliable classification of therapy response. · Measurements according to RECIST and WHO offer equivalent precision semi-automatically and manually.
Citation Format · Höink AJ, Schülke C, Koch R et al. Response Evaluation of Malignant Liver Lesions After TACE/SIRT: Comparison of Manual and Semi-Automatic Measurement of Different Response Criteria in Multislice CT. Fortschr Röntgenstr 2017; 189: 1067 - 1075. © Georg Thieme Verlag KG Stuttgart · New York.
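    The two agreement statistics this record relies on, the relative interobserver difference and the intraclass correlation coefficient, can be reproduced in a few lines. An illustrative sketch, not the authors' code; it assumes two readers' measurements as parallel lists and uses the two-way, single-measure consistency form ICC(3,1), since the record does not state which ICC variant was computed:

```python
from statistics import median

def median_rid(reader1, reader2):
    """Median relative interobserver difference (%): |a - b| over the pair mean."""
    rids = [abs(a - b) / ((a + b) / 2) * 100 for a, b in zip(reader1, reader2)]
    return median(rids)

def icc_3_1(reader1, reader2):
    """Two-way mixed, single-measure, consistency ICC(3,1) for two raters."""
    n, k = len(reader1), 2
    grand = sum(reader1 + reader2) / (n * k)
    row_means = [(a + b) / 2 for a, b in zip(reader1, reader2)]
    col_means = [sum(reader1) / n, sum(reader2) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between-subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between-raters
    ss_total = sum((x - grand) ** 2 for x in reader1 + reader2)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```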

  3. An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework.

    PubMed

    Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana

    2016-05-01

    The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. 
Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
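    The linearly weighted Cohen's kappa used above to score per-patient CVD risk categorization penalizes disagreements in proportion to how far apart the two assigned categories are. A minimal sketch in pure Python, assuming ordinal categories coded 0..n_cat-1 (an illustrative reconstruction, not the framework's code):

```python
def linear_weighted_kappa(rater_a, rater_b, n_cat):
    """Linearly weighted Cohen's kappa for ordinal labels in 0..n_cat-1."""
    n = len(rater_a)
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1
    row = [sum(obs[i]) for i in range(n_cat)]
    col = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = abs(i - j) / (n_cat - 1)    # linear disagreement weight
            num += w * obs[i][j]            # observed weighted disagreement
            den += w * row[i] * col[j] / n  # expected under independence
    return 1 - num / den
```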

  4. Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images

    PubMed Central

    Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga

    2015-01-01

    Background In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation, such as [3] and [4], and are inherently different from our method. Conclusion Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. PMID:25769273

  5. Preliminary clinical evaluation of semi-automated nailfold capillaroscopy in the assessment of patients with Raynaud's phenomenon.

    PubMed

    Murray, Andrea K; Feng, Kaiyan; Moore, Tonia L; Allen, Phillip D; Taylor, Christopher J; Herrick, Ariane L

    2011-08-01

    Nailfold capillaroscopy is well established in screening patients with Raynaud's phenomenon for underlying SSc-spectrum disorders, by identifying abnormal capillaries. Our aim was to compare semi-automatic feature measurement from newly developed software with manual measurements, and to determine the degree to which semi-automated data allow disease group classification. Images from 46 healthy controls, 21 patients with PRP and 49 with SSc were preprocessed, and semi-automated measurements of intercapillary distance and capillary width, tortuosity, and derangement were performed. These were compared with manual measurements. Features were used to classify images into the three subject groups. Comparison of automatic and manual measures for distance, width, tortuosity, and derangement had correlations of r=0.583, 0.624, 0.495 (p<0.001), and 0.195 (p=0.040). For automatic measures, correlations were found between width and intercapillary distance, r=0.374, and width and tortuosity, r=0.573 (p<0.001). Significant differences between subject groups were found for all features (p<0.002). Overall, 75% of images correctly matched clinical classification using semi-automated features, compared with 71% for manual measurements. Semi-automatic and manual measurements of distance, width, and tortuosity showed moderate (but statistically significant) correlations. Correlation for derangement was weaker. Semi-automatic measurements are faster than manual measurements. Semi-automatic parameters identify differences between groups, and are as good as manual measurements for between-group classification. © 2011 John Wiley & Sons Ltd.

  6. The Use of Opto-Electronics in Viscometry.

    ERIC Educational Resources Information Center

    Mazza, R. J.; Washbourn, D. H.

    1982-01-01

    Describes a semi-automatic viscometer which incorporates a microprocessor system and uses optoelectronics to detect flow of liquid through the capillary, flow time being displayed on a timer with accuracy of 0.01 second. The system could be made fully automatic with an additional microprocessor circuit and inclusion of a pump. (Author/JN)

  7. Towards the Real-Time Evaluation of Collaborative Activities: Integration of an Automatic Rater of Collaboration Quality in the Classroom from the Teacher's Perspective

    ERIC Educational Resources Information Center

    Chounta, Irene-Angelica; Avouris, Nikolaos

    2016-01-01

    This paper presents the integration of a real-time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real-time scenario of use. We argue that automatic and semi-automatic methods which…

  8. The virtual craniofacial patient: 3D jaw modeling and animation.

    PubMed

    Enciso, Reyes; Memon, Ahmed; Fidaleo, Douglas A; Neumann, Ulrich; Mah, James

    2003-01-01

    In this paper, we present new developments in the area of 3D human jaw modeling and animation. CT (Computed Tomography) scans have traditionally been used to evaluate patients with dental implants, assess tumors, cysts, fractures and surgical procedures. More recently this data has been utilized to generate models. Researchers have reported semi-automatic techniques to segment and model the human jaw from CT images and manually segment the jaw from MRI images. Recently opto-electronic and ultrasonic-based systems (JMA from Zebris) have been developed to record mandibular position and movement. In this research project we introduce: (1) automatic patient-specific three-dimensional jaw modeling from CT data and (2) three-dimensional jaw motion simulation using jaw tracking data from the JMA system (Zebris).

  9. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    PubMed

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  10. Semi-automatic spray pyrolysis deposition of thin, transparent, titania films as blocking layers for dye-sensitized and perovskite solar cells.

    PubMed

    Krýsová, Hana; Krýsa, Josef; Kavan, Ladislav

    2018-01-01

    For proper function of the negative electrode of dye-sensitized and perovskite solar cells, the deposition of a nonporous blocking film is required on the surface of F-doped SnO2 (FTO) glass substrates. Such a blocking film can minimise undesirable parasitic processes, for example, the back reaction of photoinjected electrons with the oxidized form of the redox mediator or with the hole-transporting medium can be avoided. In the present work, thin, transparent, blocking TiO2 films are prepared by semi-automatic spray pyrolysis of precursors consisting of titanium diisopropoxide bis(acetylacetonate) as the main component. The variation in the layer thickness of the sprayed films is achieved by varying the number of spray cycles. The parameters investigated in this work were deposition temperature (150, 300 and 450 °C), number of spray cycles (20-200), precursor composition (with/without deliberately added acetylacetone), concentration (0.05 and 0.2 M) and subsequent post-calcination at 500 °C. The photo-electrochemical properties were evaluated in aqueous electrolyte solution under UV irradiation. The blocking properties were tested by cyclic voltammetry with a model redox probe with a simple one-electron-transfer reaction. Semi-automatic spraying resulted in the formation of transparent, homogeneous, TiO2 films, and the technique allows for easy upscaling to large electrode areas. The deposition temperature of 450 °C was necessary for the fabrication of highly photoactive TiO2 films. The blocking properties of the as-deposited TiO2 films (at 450 °C) were impaired by post-calcination at 500 °C, but this problem could be addressed by increasing the number of spray cycles. The modification of the precursor by adding acetylacetone resulted in the fabrication of TiO2 films exhibiting perfect blocking properties that were not influenced by post-calcination.
These results will surely find use in the fabrication of large-scale dye-sensitized and perovskite solar cells.

  11. Semi-automatic volume measurement for orbital fat and total extraocular muscles based on Cube FSE-flex sequence in patients with thyroid-associated ophthalmopathy.

    PubMed

    Tang, X; Liu, H; Chen, L; Wang, Q; Luo, B; Xiang, N; He, Y; Zhu, W; Zhang, J

    2018-05-24

    To investigate the accuracy of two semi-automatic segmentation measurements based on magnetic resonance imaging (MRI) three-dimensional (3D) Cube fast spin echo (FSE)-flex sequence in phantoms, and to evaluate the feasibility of determining the volumetric alterations of orbital fat (OF) and total extraocular muscles (TEM) in patients with thyroid-associated ophthalmopathy (TAO) by semi-automatic segmentation. Forty-four fatty (n=22) and lean (n=22) phantoms were scanned by using Cube FSE-flex sequence with a 3 T MRI system. Their volumes were measured by manual segmentation (MS) and two semi-automatic segmentation algorithms (region growing [RG], multi-dimensional threshold [MDT]). Pearson correlation and Bland-Altman analysis were used to evaluate the measuring accuracy of MS, RG, and MDT in phantoms as compared with the true volume. Then, OF and TEM volumes of 15 TAO patients and 15 normal controls were measured using MDT. Paired-sample t-tests were used to compare the volumes and volume ratios of different orbital tissues between TAO patients and controls. Each segmentation (MS, RG, MDT) has a significant correlation (p<0.01) with true volume. There was a minimal bias for MS, and a stronger agreement between MDT and the true volume than between RG and the true volume, both in fatty and lean phantoms. The reproducibility of Cube FSE-flex determined MDT was adequate. The volumetric ratios of OF/globe (p<0.01), TEM/globe (p<0.01), whole orbit/globe (p<0.01) and bone orbit/globe (p<0.01) were significantly greater in TAO patients than those in healthy controls. MRI Cube FSE-flex determined MDT is a relatively accurate semi-automatic segmentation method that can be used to evaluate OF and TEM volumes in the clinic. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
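    Of the two semi-automatic algorithms this record compares, region growing is the simpler to illustrate: starting from a seed, the region expands across neighbouring pixels whose intensity stays within a tolerance of the seed intensity. A 2D toy sketch of the general technique; the study itself worked on 3D Cube FSE-flex volumes, and the fixed-tolerance rule here is an assumption, not the paper's exact criterion:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    lies within `tol` of the seed intensity. Returns a boolean mask."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    mask = [[False] * w for _ in range(h)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr][nc]
                    and abs(image[nr][nc] - seed_val) <= tol):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask
```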

  12. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    PubMed

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05) with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%), 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.
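    The Bland-Altman analysis this record reports summarizes agreement between two measurement conditions as a mean percent difference (bias) with 95% limits of agreement, bias ± 1.96 SD. A sketch of that computation; the paired values in the test are hypothetical placeholders, not study data:

```python
from statistics import mean, stdev

def bland_altman_pct(a, b):
    """Bland-Altman on percent differences relative to the pairwise mean.
    Returns (bias, lower limit, upper limit) with 95% limits of agreement."""
    diffs = [(x - y) / ((x + y) / 2) * 100 for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD of the percent differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```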

  13. Comparison of a semi-automatic annotation tool and a natural language processing application for the generation of clinical statement entries.

    PubMed

    Lin, Ching-Heng; Wu, Nai-Yuan; Lai, Wei-Shao; Liou, Der-Ming

    2015-01-01

    Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode the narrative concept and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Using HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the above two pipelines. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80, p<0.0001), but a similar F-measure to that of the SAAP (0.89 vs 0.87). For the Procedure terms, the F-measure was not significantly different among the three pipelines. The combination of a semi-automatic annotation approach and the NLP application seems to be a solution for generating entry-level interoperable clinical documents. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
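    The F-measures compared in this record score each pipeline's extracted terms against gold annotations as the harmonic mean of precision and recall. A minimal sketch over term sets (the term names in the test are hypothetical, and real evaluations typically match token spans rather than bare strings):

```python
def f1(gold, predicted):
    """Balanced F-measure of a predicted term set against a gold set."""
    if not gold or not predicted:
        return 0.0
    tp = len(gold & predicted)          # true positives: terms in both sets
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```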

  14. Supporting the annotation of chronic obstructive pulmonary disease (COPD) phenotypes with text mining workflows.

    PubMed

    Fu, Xiao; Batista-Navarro, Riza; Rak, Rafal; Ananiadou, Sophia

    2015-01-01

    Chronic obstructive pulmonary disease (COPD) is a life-threatening lung disorder whose recent prevalence has led to an increasing burden on public healthcare. Phenotypic information in electronic clinical records is essential in providing suitable personalised treatment to patients with COPD. However, as phenotypes are often "hidden" within free text in clinical records, clinicians could benefit from text mining systems that facilitate their prompt recognition. This paper reports on a semi-automatic methodology for producing a corpus that can ultimately support the development of text mining tools that, in turn, will expedite the process of identifying groups of COPD patients. A corpus of 30 full-text papers was formed based on selection criteria informed by the expertise of COPD specialists. We developed an annotation scheme that is aimed at producing fine-grained, expressive and computable COPD annotations without burdening our curators with a highly complicated task. This was implemented in the Argo platform by means of a semi-automatic annotation workflow that integrates several text mining tools, including a graphical user interface for marking up documents. When evaluated using gold standard (i.e., manually validated) annotations, the semi-automatic workflow was shown to obtain a micro-averaged F-score of 45.70% (with relaxed matching). Utilising the gold standard data to train new concept recognisers, we demonstrated that our corpus, although still a work in progress, can foster the development of significantly better performing COPD phenotype extractors. We describe in this work the means by which we aim to eventually support the process of COPD phenotype curation, i.e., by the application of various text mining tools integrated into an annotation workflow. 
Although the corpus being described is still under development, our results thus far are encouraging and show great potential in stimulating the development of further automatic COPD phenotype extractors.

  15. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    PubMed Central

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 of that for semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. 
Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418

  16. Digital X-ray portable scanner based on monolithic semi-insulating GaAs detectors: General description and first “quantum” images

    NASA Astrophysics Data System (ADS)

    Dubecký, F.; Perďochová, A.; Ščepko, P.; Zaťko, B.; Sekerka, V.; Nečas, V.; Sekáčová, M.; Hudec, M.; Boháček, P.; Huran, J.

    2005-07-01

    The present work describes a portable digital X-ray scanner based on bulk undoped semi-insulating (SI) GaAs monolithic strip line detectors. The scanner operates in "quantum" imaging mode ("single photon counting"), with potential improvement of the dynamic range in contrast of the observed X-ray images. The "heart" of the scanner (detection unit) is based on SI GaAs strip line detectors. The measured detection efficiency of the SI GaAs detector reached a value of over 60% (compared to the theoretical value of ~75%) for the detection of 60 keV photons at a reverse bias of 200 V. The read-out electronics consists of 20 modules fabricated using a progressive SMD technology with automatic assembly of electronic devices. Signals from counters included in the digital parts of the modules are collected in a PC via a USB port and evaluated by custom developed software allowing X-ray image reconstruction. The collected data were used for the creation of the first X-ray "quantum" images of various test objects using the imaging software developed.

  17. Semi-automatic spray pyrolysis deposition of thin, transparent, titania films as blocking layers for dye-sensitized and perovskite solar cells

    PubMed Central

    Krýsová, Hana; Kavan, Ladislav

    2018-01-01

    For proper function of the negative electrode of dye-sensitized and perovskite solar cells, the deposition of a nonporous blocking film is required on the surface of F-doped SnO2 (FTO) glass substrates. Such a blocking film can minimise undesirable parasitic processes, for example, the back reaction of photoinjected electrons with the oxidized form of the redox mediator or with the hole-transporting medium can be avoided. In the present work, thin, transparent, blocking TiO2 films are prepared by semi-automatic spray pyrolysis of precursors consisting of titanium diisopropoxide bis(acetylacetonate) as the main component. The variation in the layer thickness of the sprayed films is achieved by varying the number of spray cycles. The parameters investigated in this work were deposition temperature (150, 300 and 450 °C), number of spray cycles (20–200), precursor composition (with/without deliberately added acetylacetone), concentration (0.05 and 0.2 M) and subsequent post-calcination at 500 °C. The photo-electrochemical properties were evaluated in aqueous electrolyte solution under UV irradiation. The blocking properties were tested by cyclic voltammetry with a model redox probe with a simple one-electron-transfer reaction. Semi-automatic spraying resulted in the formation of transparent, homogeneous, TiO2 films, and the technique allows for easy upscaling to large electrode areas. The deposition temperature of 450 °C was necessary for the fabrication of highly photoactive TiO2 films. The blocking properties of the as-deposited TiO2 films (at 450 °C) were impaired by post-calcination at 500 °C, but this problem could be addressed by increasing the number of spray cycles. The modification of the precursor by adding acetylacetone resulted in the fabrication of TiO2 films exhibiting perfect blocking properties that were not influenced by post-calcination. These results will surely find use in the fabrication of large-scale dye-sensitized and perovskite solar cells. 
PMID:29719764

  18. Very Portable Remote Automatic Weather Stations

    Treesearch

    John R. Warren

    1987-01-01

    Remote Automatic Weather Stations (RAWS) were introduced to Forest Service and Bureau of Land Management field units in 1978 following development, test, and evaluation activities conducted jointly by the two agencies. The original configuration was designed for semi-permanent installation. Subsequently, a need for a more portable RAWS was expressed, and one was...

  19. Integration of tools for binding archetypes to SNOMED CT.

    PubMed

    Sundvall, Erik; Qamar, Rahil; Nyström, Mikael; Forss, Mattias; Petersson, Håkan; Karlsson, Daniel; Ahlfeldt, Hans; Rector, Alan

    2008-10-27

    The Archetype formalism and the associated Archetype Definition Language have been proposed as an ISO standard for specifying models of components of electronic healthcare records as a means of achieving interoperability between clinical systems. This paper presents an archetype editor with support for manual or semi-automatic creation of bindings between archetypes and terminology systems. Lexical and semantic methods are applied in order to obtain automatic mapping suggestions. Information visualisation methods are also used to assist the user in exploration and selection of mappings. An integrated tool for archetype authoring, semi-automatic SNOMED CT terminology binding assistance and terminology visualization was created and released as open source. Finding the right terms to bind is a difficult task but the effort to achieve terminology bindings may be reduced with the help of the described approach. The methods and tools presented are general, but here only bindings between SNOMED CT and archetypes based on the openEHR reference model are presented in detail.

  20. Integration of tools for binding archetypes to SNOMED CT

    PubMed Central

    Sundvall, Erik; Qamar, Rahil; Nyström, Mikael; Forss, Mattias; Petersson, Håkan; Karlsson, Daniel; Åhlfeldt, Hans; Rector, Alan

    2008-01-01

    Background The Archetype formalism and the associated Archetype Definition Language have been proposed as an ISO standard for specifying models of components of electronic healthcare records as a means of achieving interoperability between clinical systems. This paper presents an archetype editor with support for manual or semi-automatic creation of bindings between archetypes and terminology systems. Methods Lexical and semantic methods are applied in order to obtain automatic mapping suggestions. Information visualisation methods are also used to assist the user in exploration and selection of mappings. Results An integrated tool for archetype authoring, semi-automatic SNOMED CT terminology binding assistance and terminology visualization was created and released as open source. Conclusion Finding the right terms to bind is a difficult task but the effort to achieve terminology bindings may be reduced with the help of the described approach. The methods and tools presented are general, but here only bindings between SNOMED CT and archetypes based on the openEHR reference model are presented in detail. PMID:19007444

  1. Semi-Automatic Electronic Stent Register: a novel approach to preventing ureteric stents lost to follow up.

    PubMed

    Macneil, James W H; Michail, Peter; Patel, Manish I; Ashbourne, Julie; Bariol, Simon V; Ende, David A; Hossack, Tania A; Lau, Howard; Wang, Audrey C; Brooks, Andrew J

    2017-10-01

    Ureteric stents are indispensable tools in modern urology; however, the risk of them not being followed-up once inserted poses medical and medico-legal risks. Stent registers are a common solution to mitigate this risk; however, manual registers are logistically challenging, especially for busy units. Western Sydney Local Health District developed a novel Semi-Automatic Electronic Stent Register (SAESR) utilizing billing information to track stent insertions. To determine the utility of this system, an audit was conducted comparing the 6 months before the introduction of the register to the first 6 months of the register. In the first 6 months of the register, 457 stents were inserted. At the time of writing, two of these are severely delayed for removal, representing a rate of 0.4%. In the 6 months immediately preceding the introduction of the register, 497 stents were inserted, and six were either missed completely or severely delayed in their removal, representing a rate of 1.2%. A non-inferiority analysis found this to be no worse than the results achieved before the introduction of the register. The SAESR allowed us to improve upon our better than expected rate of stents lost to follow up or severely delayed. We demonstrated non-inferiority in the rate of lost or severely delayed stents, and a number of other advantages including savings in personnel costs. The semi-automatic register represents an effective way of reducing the risk associated with a common urological procedure. We believe that this methodology could be implemented elsewhere. © 2017 Royal Australasian College of Surgeons.
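The billing-driven register logic described above can be sketched in a few lines: insertions are captured automatically from billing events and reconciled against removals so overdue stents can be flagged. Everything below is an illustrative assumption; the billing item codes, the 180-day threshold and the record layout are invented for the example and are not taken from the SAESR.

```python
from datetime import date, timedelta

STENT_INSERTION_CODES = {"36821", "36824"}   # hypothetical billing item codes
OVERDUE_AFTER = timedelta(days=180)          # hypothetical review threshold

def build_register(billing_events):
    """Collect stent insertions from a stream of billing events."""
    return [
        {"patient": e["patient"], "inserted": e["date"], "removed": None}
        for e in billing_events
        if e["code"] in STENT_INSERTION_CODES
    ]

def flag_overdue(register, today):
    """Return entries whose stent is still in situ past the threshold."""
    return [
        entry for entry in register
        if entry["removed"] is None and today - entry["inserted"] > OVERDUE_AFTER
    ]

events = [
    {"patient": "A", "code": "36821", "date": date(2017, 1, 10)},
    {"patient": "B", "code": "36821", "date": date(2017, 6, 1)},
    {"patient": "C", "code": "11900", "date": date(2017, 6, 2)},  # not a stent code
]
register = build_register(events)
overdue = flag_overdue(register, today=date(2017, 8, 1))
print([e["patient"] for e in overdue])  # patient A is >180 days post-insertion
```

The semi-automatic character of the real system lies in steps like these running without manual data entry, with staff reviewing only the flagged cases.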

  2. Optimizing the 3D-reconstruction technique for serial block-face scanning electron microscopy.

    PubMed

    Wernitznig, Stefan; Sele, Mariella; Urschler, Martin; Zankel, Armin; Pölt, Peter; Rind, F Claire; Leitinger, Gerd

    2016-05-01

    Elucidating the anatomy of neuronal circuits and localizing the synaptic connections between neurons can give us important insights into how neuronal circuits work. We are using serial block-face scanning electron microscopy (SBEM) to investigate the anatomy of a collision detection circuit including the Lobula Giant Movement Detector (LGMD) neuron in the locust, Locusta migratoria. For this, thousands of serial electron micrographs are produced that allow us to trace the neuronal branching pattern. The reconstruction of neurons was previously done manually by drawing the outlines of each cell in each image separately, an approach that was very time consuming and laborious. To make the process more efficient, new interactive software was developed. It uses the contrast between the neuron under investigation and its surroundings for semi-automatic segmentation. For segmentation, the user sets starting regions manually and the algorithm automatically selects a volume within the neuron until the edges corresponding to the neuronal outline are reached. Internally, the algorithm optimizes a 3D active contour segmentation model formulated as a cost function that takes the SEM image edges into account. This reduced the reconstruction time while staying close to the manual reference segmentation result. Our algorithm is easy to use and enables a fast segmentation process; unlike previous methods, it requires neither image training nor extended computing capacity. Our semi-automatic segmentation algorithm led to a dramatic reduction in processing time for the 3D-reconstruction of identified neurons. Copyright © 2016 Elsevier B.V. All rights reserved.
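The interactive step described above (user-placed starting regions, automatic growth until the neuronal outline is reached) can be approximated by a seeded region-growing sketch. This is not the authors' 3D active contour optimization; the intensity-tolerance stopping rule, the 6-connectivity and the toy volume are assumptions made purely for illustration.

```python
from collections import deque

def region_grow_3d(volume, seed, tol):
    """Grow a region from `seed` through 6-connected voxels whose intensity
    stays within `tol` of the seed intensity -- a flood-fill stand-in for
    growth that stops at the neuronal outline (illustrative only)."""
    zdim, ydim, xdim = len(volume), len(volume[0]), len(volume[0][0])
    seed_val = volume[seed[0]][seed[1]][seed[2]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        z, y, x = frontier.popleft()
        for dz, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < zdim and 0 <= ny < ydim and 0 <= nx < xdim
                    and (nz, ny, nx) not in region
                    and abs(volume[nz][ny][nx] - seed_val) <= tol):
                region.add((nz, ny, nx))
                frontier.append((nz, ny, nx))
    return region

# A toy 3x3x3 volume: a bright "neuron" core (value 100) inside dark background.
vol = [[[100 if (z, y, x) in {(1, 1, 1), (1, 1, 0), (0, 1, 1)} else 10
         for x in range(3)] for y in range(3)] for z in range(3)]
neuron = region_grow_3d(vol, seed=(1, 1, 1), tol=20)
print(sorted(neuron))  # exactly the three bright voxels
```

In the real pipeline the edge-stopping criterion comes from the optimized cost function rather than a fixed intensity tolerance, but the user-seeded growth pattern is the same.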

  3. Quantitative analysis of the patellofemoral motion pattern using semi-automatic processing of 4D CT data.

    PubMed

    Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan

    2016-09-01

    To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern of a continuous flexion and extension were collected for five patients prone to patellar luxation both pre- and post-surgically. For the proposed method, an observer would place landmarks in a single 3D volume, which then are automatically propagated to the other volumes in a time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients but with less evident differences for two of the patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods but with the distinct advantage of supporting continuous motions during the image acquisition.
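Given the propagated landmarks, a measure such as the angle between femur and tibia reduces to vector geometry between the two bone axes. A minimal sketch, with hypothetical landmark coordinates invented for the example:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def axis(p_proximal, p_distal):
    """Direction vector of a bone axis from two landmark points (x, y, z in mm)."""
    return tuple(d - p for p, d in zip(p_proximal, p_distal))

# Hypothetical landmarks (mm) placed in one 3D volume of the 4D CT sequence;
# the same computation runs on every volume after landmark propagation.
femur_axis = axis((0, 0, 100), (0, 0, 0))        # femur axis pointing distally
tibia_axis = axis((0, 0, 0), (0, 34.2, -94.0))   # tibia flexed away from it
flexion = angle_between(femur_axis, tibia_axis)
print(round(flexion, 1))  # about 20 degrees of flexion
```

An observer variability of 1.5° for this measure, as reported above, is then simply the spread of such angle values across repeated landmark placements.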

  4. Semi-automatic knee cartilage segmentation

    NASA Astrophysics Data System (ADS)

    Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus

    2006-03-01

    Osteo-Arthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of the cartilage breakdown is central in monitoring disease progression and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases will be averaged out in the overall results, this reduces the mean accuracy and precision and thereby necessitates larger/longer studies. Since the severe OA cases are often most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis on individuals, this is even more crucial since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.

  5. Semi-Automatic Extraction Algorithm for Images of the Ciliary Muscle

    PubMed Central

    Kao, Chiu-Yen; Richdale, Kathryn; Sinnott, Loraine T.; Ernst, Lauren E.; Bailey, Melissa D.

    2011-01-01

    Purpose To develop and evaluate a semi-automatic algorithm for segmentation and morphological assessment of the dimensions of the ciliary muscle in Visante™ Anterior Segment Optical Coherence Tomography images. Methods Geometric distortions in Visante images analyzed as binary files were assessed by imaging an optical flat and human donor tissue. The appropriate pixel/mm conversion factor to use for air (n = 1) was estimated by imaging calibration spheres. A semi-automatic algorithm was developed to extract the dimensions of the ciliary muscle from Visante images. Measurements were also made manually using Visante software calipers. Interclass correlation coefficients (ICC) and Bland-Altman analyses were used to compare the methods. A multilevel model was fitted to estimate the variance of algorithm measurements that was due to differences within- and between-examiners in scleral spur selection versus biological variability. Results The optical flat and the human donor tissue were imaged and appeared without geometric distortions in binary file format. Bland-Altman analyses revealed that caliper measurements tended to underestimate ciliary muscle thickness at 3 mm posterior to the scleral spur in subjects with the thickest ciliary muscles (t = 3.6, p < 0.001). The percent variance due to within- or between-examiner differences in scleral spur selection was found to be small (6%) when compared to the variance due to biological difference across subjects (80%). Using the mean of measurements from three images achieved an estimated ICC of 0.85. Conclusions The semi-automatic algorithm successfully segmented the ciliary muscle for further measurement. Using the algorithm to follow the scleral curvature to locate more posterior measurements is critical to avoid underestimating thickness measurements. This semi-automatic algorithm will allow for repeatable, efficient, and masked ciliary muscle measurements in large datasets. PMID:21169877
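The Bland-Altman comparison used above has a compact form: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD of those differences. A minimal sketch; the thickness readings below are invented for illustration, not data from the study:

```python
import math

def bland_altman(a, b):
    """Bland-Altman summary for paired measurements: mean bias and
    95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ciliary muscle thickness readings (mm):
caliper   = [0.72, 0.80, 0.65, 0.90, 0.78]
algorithm = [0.75, 0.82, 0.66, 0.97, 0.80]
bias, (lo, hi) = bland_altman(caliper, algorithm)
print(round(bias, 3))  # negative bias: calipers read thinner on average
```

A systematic negative bias like this is what the abstract reports for the thickest muscles: the manual calipers underestimate relative to the algorithm.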

  6. User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.

    PubMed

    Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu

    2016-04-01

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Manual segmentation being a tedious procedure and prone to inter- and intra-observer variability, there is a growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interactions, named the "strokes" and the "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 of the contour method and 22 of the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used in improving user interaction design.

  7. Automated generation of individually customized visualizations of diagnosis-specific medical information using novel techniques of information extraction

    NASA Astrophysics Data System (ADS)

    Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang

    2005-04-01

    Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.

  8. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly drawing the world into a vast electronic space in which anyone can publish in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, automatically extracts records from HTML files. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  9. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow or whether fully automatic segmentation of the paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of the paranasal sinuses and the nasal cavity. Manual segmentation is performed by custom software, whereas semi-automatic segmentation is realized by a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D-reconstruction. The segmentation time was reduced from 960 minutes with manual to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, the volume differences between manual and semi-automatic segmentation are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used for, e.g., robot-assisted systems. Nevertheless, both procedures are unsuitable for everyday surgical workflow, because they take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of the paranasal sinuses and nasal cavity.

  10. Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.

    PubMed

    Zhao, Liang; Wu, Wei; Corso, Jason J

    2013-01-01

    Quantifying volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumor has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog to optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge Dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.
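The iterative multi-label MRF optimization with hard constraints can be illustrated, in a much simplified 2D form, by iterated conditional modes (ICM) with seed pixels pinned to their labels. This is a generic stand-in, not the authors' formulation: the squared-intensity unary term, the Potts pairwise term, the beta weight and the toy image are all assumptions made for the sketch.

```python
def icm_segment(image, means, seeds, beta=200.0, iters=5):
    """Iterated conditional modes on a 2D grid.
    Unary cost: squared distance of pixel intensity to each label's mean.
    Pairwise cost: Potts smoothness (beta per disagreeing 4-neighbor).
    Pixels listed in `seeds` are hard-constrained and never relabeled."""
    h, w = len(image), len(image[0])
    # initialize each free pixel with its cheapest unary label
    labels = [[min(range(len(means)), key=lambda l: (image[y][x] - means[l]) ** 2)
               for x in range(w)] for y in range(h)]
    for (y, x), lab in seeds.items():
        labels[y][x] = lab                      # hard constraints
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                if (y, x) in seeds:
                    continue
                def cost(l):
                    unary = (image[y][x] - means[l]) ** 2
                    pair = sum(beta for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                               if 0 <= y + dy < h and 0 <= x + dx < w
                               and labels[y + dy][x + dx] != l)
                    return unary + pair
                labels[y][x] = min(range(len(means)), key=cost)
    return labels

img = [[10, 10, 55, 90],
       [10, 52, 90, 90],
       [10, 10, 90, 90]]
# label 0 = background (mean 10), label 1 = tumor (mean 90); one seed per class
result = icm_segment(img, means=[10, 90], seeds={(0, 0): 0, (2, 3): 1})
print(result)  # the noisy 52-valued pixel is smoothed into the background
```

The real framework replaces the hand-set neighborhood with correspondences from structural trajectories and over-segmentation, but the pattern of unary terms, smoothness terms and pinned seed labels is the same.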

  11. Semi-automatic computerized approach to radiological quantification in rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Steiner, Wolfgang; Schoeffmann, Sylvia; Prommegger, Andrea; Boegl, Karl; Klinger, Thomas; Peloschek, Philipp; Kainberger, Franz

    2004-04-01

    Rheumatoid Arthritis (RA) is a common systemic disease predominantly involving the joints. Precise diagnosis and follow-up therapy require objective quantification. For this purpose, radiological analyses using standardized scoring systems are considered to be the most appropriate method. The aim of our study is to develop semi-automatic image analysis software especially applicable to the scoring of joints in rheumatic disorders. The X-Ray RheumaCoach software delivers various scoring systems (Larsen score and Ratingen-Rau score) which can be applied by the scorer. In addition to the qualitative assessment of joints performed by the radiologist, a semi-automatic image analysis for joint detection and measurements of bone diameters and swollen tissue supports the image assessment process. More than 3000 radiographs from the hands and feet of more than 200 RA patients were collected, analyzed, and statistically evaluated. Radiographs were quantified using the conventional paper-based Larsen score and the X-Ray RheumaCoach software. The use of the software shortened the scoring time by about 25 percent and reduced the rate of erroneous scorings in all our studies. Compared to paper-based scoring methods, the X-Ray RheumaCoach software offers several advantages: (i) structured data analysis and input that minimizes variance by standardization, (ii) faster and more precise calculation of sum scores and indices, (iii) permanent data storage and fast access to the software's database, (iv) the possibility of cross-calculation to other scores, (v) semi-automatic assessment of images, and (vi) reliable documentation of results in the form of graphical printouts.

  12. Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography.

    PubMed

    Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T

    2013-12-01

    Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify the coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with expert's manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards are described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. ODM Data Analysis-A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data.

    PubMed

    Brix, Tobias Johannes; Bruland, Philipp; Sarfraz, Saad; Ernsting, Jan; Neuhaus, Philipp; Storck, Michael; Doods, Justin; Ständer, Sonja; Dugas, Martin

    2018-01-01

    A required step for presenting results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow to accomplish this task is to export the clinical data from the electronic data capture system used and import it into statistical software like SAS software or IBM SPSS. Such software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data. The system requires clinical data in the CDISC Operational Data Model format. After a file is uploaded, its syntax and the data-type conformity of the collected data are validated. The completeness of the study data is determined and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality. The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and also provided as a Docker image which enables an easy distribution and installation on local systems. Study data is only stored in the application as long as the calculations are performed, which is compliant with data protection endeavors. Analysis times are below half an hour, even for larger studies with over 6000 subjects. Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analysis of statisticians, but it can be used as a starting point for their examination and reporting.
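The validate-then-summarize workflow above (check data-type conformity of each collected value, then emit descriptive statistics per item) can be sketched as follows. The XML here is a deliberately simplified stand-in, not real CDISC ODM markup, and the element and attribute names are invented for the example.

```python
import statistics
from xml.etree import ElementTree

# Simplified stand-in for an ODM export: repeated item values with a
# declared data type (not the actual CDISC ODM schema).
doc = """<study>
  <item name="age" type="integer" value="34"/>
  <item name="age" type="integer" value="41"/>
  <item name="age" type="integer" value="not-a-number"/>
</study>"""

def validate_and_summarize(xml_text):
    """Validate data-type conformity, count violations, and compute
    basic descriptive statistics over the conforming values."""
    valid, invalid = [], 0
    for item in ElementTree.fromstring(xml_text).iter("item"):
        try:
            if item.get("type") == "integer":
                valid.append(int(item.get("value")))
        except ValueError:
            invalid += 1           # data-type conformity violation
    return {"n": len(valid), "invalid": invalid,
            "mean": statistics.mean(valid), "median": statistics.median(valid)}

print(validate_and_summarize(doc))  # 2 valid values, 1 violation flagged
```

The real application does this per item across a whole ODM file and renders charts on top of the summaries; the sketch only shows the core validate-and-summarize loop.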

  14. Validation of semi-automatic segmentation of the left atrium

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Holmes, D. R., III; Camp, J. J.; Packer, D. L.; Robb, R. A.

    2008-03-01

    Catheter ablation therapy has become increasingly popular for the treatment of left atrial fibrillation. The effect of this treatment on left atrial morphology, however, has not yet been completely quantified. Initial studies have indicated a decrease in left atrial size with a concomitant decrease in pulmonary vein diameter. In order to effectively study if catheter based therapies affect left atrial geometry, robust segmentations with minimal user interaction are required. In this work, we validate a method to semi-automatically segment the left atrium from computed-tomography scans. The first step of the technique utilizes seeded region growing to extract the entire blood pool including the four chambers of the heart, the pulmonary veins, aorta, superior vena cava, inferior vena cava, and other surrounding structures. Next, the left atrium and pulmonary veins are separated from the rest of the blood pool using an algorithm that searches for thin connections between user defined points in the volumetric data or on a surface rendering. Finally, pulmonary veins are separated from the left atrium using a three dimensional tracing tool. A single user segmented three datasets three times using both the semi-automatic technique as well as manual tracing. The user interaction time for the semi-automatic technique was approximately forty-five minutes per dataset and the manual tracing required between four and eight hours per dataset depending on the number of slices. A truth model was generated using a simple voting scheme on the repeated manual segmentations. A second user segmented each of the nine datasets using the semi-automatic technique only. Several metrics were computed to assess the agreement between the semi-automatic technique and the truth model including percent differences in left atrial volume, DICE overlap, and mean distance between the boundaries of the segmented left atria. 
Overall, the semi-automatic approach was demonstrated to be repeatable within and between raters, and accurate when compared to the truth model. Finally, we generated a visualization to assess the spatial variability in the segmentation errors between the semi-automatic approach and the truth model. The visualization demonstrates that the highest errors occur at the boundaries between the left atrium and pulmonary veins as well as the left atrium and left atrial appendage. In conclusion, we describe a semi-automatic approach for left atrial segmentation that demonstrates repeatability and accuracy, with the advantage of a significant reduction in user interaction time.
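The agreement metrics named above (percent difference in volume and DICE overlap) are straightforward to compute once both segmentations are represented as voxel sets. A minimal sketch with toy segmentations invented for the example:

```python
def dice(a, b):
    """DICE overlap between two voxel sets: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def percent_volume_difference(a, b):
    """Percent difference in segmented volume relative to the truth model."""
    return 100.0 * abs(len(a) - len(b)) / len(b)

# Toy voxel sets: semi-automatic result vs. truth model (voxel coordinates).
truth = {(x, y, 0) for x in range(10) for y in range(10)}   # 100 voxels
semi  = {(x, y, 0) for x in range(10) for y in range(9)}    # 90 voxels, all inside truth
print(round(dice(semi, truth), 3), percent_volume_difference(semi, truth))
```

The mean boundary distance also reported in the study needs the segmentation surfaces rather than raw voxel sets, so it is omitted from this sketch.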

  15. Semi-Automatic Determination of Citation Relevancy: User Evaluation.

    ERIC Educational Resources Information Center

    Huffman, G. David

    1990-01-01

    Discussion of online bibliographic database searches focuses on a software system, SORT-AID/SABRE, that ranks retrieved citations in terms of relevance. Results of a comprehensive user evaluation of the relevance ranking procedure to determine its effectiveness are presented, and implications for future work are suggested. (10 references) (LRW)

  16. A novel semi-automatic snake robot for natural orifice transluminal endoscopic surgery: preclinical tests in animal and human cadaver models (with video).

    PubMed

    Son, Jaebum; Cho, Chang Nho; Kim, Kwang Gi; Chang, Tae Young; Jung, Hyunchul; Kim, Sung Chun; Kim, Min-Tae; Yang, Nari; Kim, Tae-Yun; Sohn, Dae Kyung

    2015-06-01

    Natural orifice transluminal endoscopic surgery (NOTES) is an emerging surgical technique. We aimed to design, create, and evaluate a new semi-automatic snake robot for NOTES. The snake robot employs the characteristics of both a manual endoscope and a multi-segment snake robot. This robot is inserted and retracted manually, like a classical endoscope, while its shape is controlled using embedded robot technology. The feasibility of a prototype robot for NOTES was evaluated in animals and human cadavers. The transverse stiffness and maneuverability of the snake robot appeared satisfactory. It could be advanced through the anus as far as the peritoneal cavity without any injury to adjacent organs. Preclinical tests showed that the device could navigate the peritoneal cavity. The snake robot has advantages of high transverse force and intuitive control. This new robot may be clinically superior to conventional tools for transanal NOTES.

  17. Supporting Mediated Peer-Evaluation to Grade Answers to Open-Ended Questions

    ERIC Educational Resources Information Center

    De Marsico, Maria; Sciarrone, Filippo; Sterbini, Andrea; Temperini, Marco

    2017-01-01

    We show an approach to semi-automatic grading of answers given by students to open ended questions (open answers). We use both peer-evaluation and teacher evaluation. A learner is modeled by her Knowledge and her assessments quality (Judgment). The data generated by the peer- and teacher-evaluations, and by the learner models is represented by a…

  18. A semi-automatic annotation tool for cooking video

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  19. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

The present study proposes a fully integrated, semi-automatic and near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The main focus of this work is the quality control of aeronautic composite multi-layered materials and structures using Non-Destructive Testing. Image processing is applied to the 3D images to extract useful information: the data are processed by extracting areas of interest, and the detected areas are subjected to image analysis for closer investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  20. FIJI Macro 3D ART VeSElecT: 3D Automated Reconstruction Tool for Vesicle Structures of Electron Tomograms

    PubMed Central

    Kaltdorf, Kristin Verena; Schulze, Katja; Helmprobst, Frederik; Kollmannsberger, Philip; Stigloher, Christian

    2017-01-01

Automatic image reconstruction is critical to cope with steadily increasing data from advanced microscopy. We describe here the Fiji macro 3D ART VeSElecT, which we developed to study synaptic vesicles in electron tomograms. We apply this tool to quantify vesicle properties (i) in embryonic Danio rerio 4 and 8 days post fertilization (dpf) and (ii) to compare Caenorhabditis elegans N2 neuromuscular junctions (NMJ) of wild-type and a septin mutant (unc-59(e261)). We demonstrate development-specific and mutant-specific changes in synaptic vesicle pools in both models. We confirm the functionality of our macro by applying 3D ART VeSElecT to zebrafish NMJ, showing smaller vesicles in 8 dpf embryos than in 4 dpf embryos, which was validated by manual reconstruction of the vesicle pool. Furthermore, we analyze the impact of the C. elegans septin mutant unc-59(e261) on vesicle pool formation and vesicle size. Automated vesicle registration and characterization was implemented in Fiji as two macros (registration and measurement). This flexible arrangement allows, in particular, reducing false positives through an optional manual revision step. Preprocessing and contrast enhancement work on image stacks of 1 nm/pixel in the x and y directions. Semi-automated cell selection was integrated. 3D ART VeSElecT removes interfering components, detects vesicles by 3D segmentation and calculates vesicle volume and diameter (spherical approximation, inner/outer diameter). Results are collected in color using the RoiManager plugin, including the possibility of manually removing non-matching confounder vesicles. Detailed evaluation considered performance (detected vesicles) and specificity (true vesicles) as well as precision and recall. We furthermore show a gain in segmentation and morphological filtering compared to learning-based methods, and a large time gain compared to manual segmentation: 3D ART VeSElecT shows small error rates and can be up to 68 times faster than manual annotation. Both automatic and semi-automatic modes are explained, including a tutorial. PMID:28056033
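The spherical approximation mentioned above reduces each segmented vesicle to an equivalent sphere. As a hedged illustration of that conversion only (the macro itself is an ImageJ/Fiji script, not this code), the volume-to-diameter step can be sketched in Python:

```python
import math

def equivalent_diameter(volume_nm3: float) -> float:
    """Diameter of a sphere with the given volume (spherical approximation):
    V = pi * d**3 / 6, so d = (6V / pi)**(1/3)."""
    return (6.0 * volume_nm3 / math.pi) ** (1.0 / 3.0)

# A segmented volume of ~14137 nm^3 corresponds to a ~30 nm sphere,
# the size range typical of synaptic vesicles.
d = equivalent_diameter(14137.0)
```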

  1. Advanced and standardized evaluation of neurovascular compression syndromes

    NASA Astrophysics Data System (ADS)

    Hastreiter, Peter; Vega Higuera, Fernando; Tomandl, Bernd; Fahlbusch, Rudolf; Naraghi, Ramin

    2004-05-01

Neurovascular compression syndromes are caused by contact between vascular structures and the root entry or exit zone of cranial nerves; they are associated with different neurological diseases (trigeminal neuralgia, hemifacial spasm, vertigo, glossopharyngeal neuralgia) and show a relation with essential arterial hypertension. As presented previously, the semi-automatic segmentation and 3D visualization of strongly T2-weighted MR volumes has proven to be an effective strategy for a better spatial understanding prior to operative microvascular decompression. After explicit segmentation of coarse structures, the tiny target nerves and vessels contained in the area of cerebrospinal fluid are segmented implicitly using direct volume rendering. However, with this strategy the delineation of vessels in the vicinity of the brainstem and of those at the border of the segmented CSF subvolume is critical. Therefore, we suggest registration with MR angiography and introduce consecutive fusion after semi-automatic labeling of the vascular information. Additionally, we present an approach to automatic 3D visualization and video generation based on predefined flight paths, which supports a standardized evaluation of the fused image data and optimally prepares the visualization results for intraoperative application. Overall, our new strategy contributes to a significantly improved 3D representation and evaluation of vascular compression syndromes. Its value for diagnosis and surgery is demonstrated with various clinical examples.

  2. Compensation of missing wedge effects with sequential statistical reconstruction in electron tomography.

    PubMed

    Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla

    2014-01-01

Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, with biological samples in electron microscopy, projections cannot be acquired over the full tilt-angle range. Because of this missing information, ET image reconstruction can be considered an ill-posed problem, and the result is artifacts, seen as a loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated using a number of gold particles. An ellipsoid-fitting-based method was developed to compute the quantitative measures elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with the widely used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information, producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable the analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.
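The abstract does not specify how the ellipsoid fitting is implemented; one common route, shown here purely as an assumption-laden sketch, is to take the principal-axis variances from the eigenvalues of the point cloud's covariance matrix and report the square-root ratio of the largest to the smallest as elongation (an isotropic reconstruction of a spherical gold particle would score close to 1, a missing-wedge-smeared one noticeably higher):

```python
import numpy as np

def elongation(points):
    """Elongation of a point cloud (e.g. the voxels of one segmented gold
    particle): sqrt of the ratio of the largest to the smallest eigenvalue
    of the covariance matrix, i.e. the longest over the shortest principal
    half-axis of the best-fit ellipsoid."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    eig = np.sort(np.linalg.eigvalsh(cov))
    return float(np.sqrt(eig[-1] / eig[0]))

# Toy check: an isotropic Gaussian cloud vs. the same cloud stretched 3x in z.
rng = np.random.default_rng(1)
sphere = rng.normal(size=(5000, 3))
stretched = sphere * np.array([1.0, 1.0, 3.0])
e_iso = elongation(sphere)       # close to 1
e_str = elongation(stretched)    # close to 3
```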

  3. Implementation and evaluation of a new workflow for registration and segmentation of pulmonary MRI data for regional lung perfusion assessment.

    PubMed

    Böttger, T; Grunewald, K; Schöbinger, M; Fink, C; Risse, F; Kauczor, H U; Meinzer, H P; Wolf, Ivo

    2007-03-07

Recently it has been shown that regional lung perfusion can be assessed using time-resolved contrast-enhanced magnetic resonance (MR) imaging. Quantification of the perfusion images has been attempted, based on the definition of small regions of interest (ROIs). The use of complete lung segmentations instead of ROIs could possibly increase quantification accuracy. Due to the low signal-to-noise ratio, automatic segmentation algorithms cannot be applied; on the other hand, manual segmentation of the lung tissue is very time consuming and can become inaccurate, as the borders of the lung to adjacent tissues are not always clearly visible. We propose a new workflow for semi-automatic segmentation of the lung from additionally acquired morphological HASTE MR images. First the lung is delineated semi-automatically in the HASTE image. Next the HASTE image is automatically registered with the perfusion images. Finally, the transformation resulting from the registration is used to align the lung segmentation from the morphological dataset with the perfusion images. We evaluated rigid, affine and locally elastic transformations, suitable optimizers and different implementations of mutual information (MI) metrics to determine the best possible registration algorithm, and we identified the shortcomings of the registration procedure and the conditions under which automatic registration will succeed or fail. Segmentation results were evaluated using overlap and distance measures. Integration of the new workflow reduces the time needed for post-processing of the data, simplifies the perfusion quantification and reduces interobserver variability in the segmentation process. In addition, the matched morphological dataset can be used to identify morphological changes as the source of the perfusion abnormalities.
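Overlap measures such as the Dice coefficient, commonly used for the kind of segmentation evaluation described above, are straightforward to compute from binary masks. A minimal, generic sketch (not the authors' evaluation code):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy 1-D "segmentations": |A| = 3, |B| = 3, |A∩B| = 2
m1 = np.array([0, 1, 1, 1, 0])
m2 = np.array([0, 0, 1, 1, 1])
d = dice(m1, m2)  # 2*2/6 = 0.666...
```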

  4. RCrane: semi-automated RNA model building.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  5. WE-A-17A-06: Evaluation of An Automatic Interstitial Catheter Digitization Algorithm That Reduces Treatment Planning Time and Provide Means for Adaptive Re-Planning in HDR Brachytherapy of Gynecologic Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dise, J; Liang, X; Lin, L

Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatically and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 to 43 minutes, with a mean of 37 minutes; the bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans: D90% to the CTV was 91.5±4.4% for manual digitization versus 91.4±4.4% for automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic digitization tool was shown to be accurate compared to manual digitization.
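The abstract names a seeded region-growing algorithm. A minimal, generic 6-connected version is sketched below; the authors' tool additionally constrains growth with a spline model of each catheter and pre-processes the CT for contrast, which this sketch omits, so treat it as an illustration of the basic technique only:

```python
from collections import deque

import numpy as np

def region_grow(volume, seed, lo, hi):
    """Grow a region from `seed`, accepting 6-connected voxels whose
    intensity falls inside [lo, hi]. Returns a boolean mask."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, volume.shape)) \
                    and not grown[n] and lo <= volume[n] <= hi:
                grown[n] = True
                queue.append(n)
    return grown

# Toy volume: one bright "catheter" running along z at (y=1, x=1)
vol = np.zeros((5, 3, 3))
vol[:, 1, 1] = 1000.0
mask = region_grow(vol, seed=(0, 1, 1), lo=500.0, hi=1500.0)
# mask selects exactly the 5 voxels of that line
```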

  6. Multiple sclerosis lesion segmentation using an automatic multimodal graph cuts.

    PubMed

    García-Lorenzo, Daniel; Lecoeur, Jeremy; Arnold, Douglas L; Collins, D Louis; Barillot, Christian

    2009-01-01

Graph Cuts have been shown to be a powerful interactive segmentation technique in several medical domains. We propose to automate the Graph Cuts in order to automatically segment Multiple Sclerosis (MS) lesions in MRI, replacing the manual interaction with a robust EM-based approach that discriminates between MS lesions and the Normal Appearing Brain Tissues (NABT). Evaluation on synthetic and real images shows good agreement between the automatic segmentation and the target segmentation. We compare our algorithm with state-of-the-art techniques and with several manual segmentations. An advantage of our algorithm over previously published ones is the possibility to semi-automatically improve the segmentation through the Graph Cuts interactive feature.

  7. Semi Automatic Ontology Instantiation in the domain of Risk Management

    NASA Astrophysics Data System (ADS)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a semi-automatic ontology instantiation method from natural language text, in the domain of Risk Management. The method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, 3) ontology instantiation. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain dependent, which is a good feature for portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a generic domain ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.

  8. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Y; Huang, H; Su, T

Purpose: Texture-based quantification of image heterogeneity has been a popular topic in imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure was developed to perform texture analysis for measuring image heterogeneity, and clinical data were used to evaluate the preliminary performance of the method. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia, defined as more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software, the Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity, as well as the area under the curve (AUC). These indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination, with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that of the semi-automatic QPS software, which gives a sensitivity of 71% and a specificity of 77%. Conclusion: Based on fully automatic data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
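The ROC analysis described above amounts to ranking patients by a heterogeneity score and comparing the ranks of the ischemic and non-ischemic groups. A minimal sketch of the AUC via the rank-sum (Mann-Whitney U) identity, with made-up toy scores (this is the standard statistic, not the CGITA implementation):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks over ties
        ranks[scores == s] = ranks[scores == s].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Toy heterogeneity scores: 3 ischemic (label 1) and 3 normal (label 0) scans
auc = roc_auc([0.9, 0.8, 0.4, 0.3, 0.2, 0.7], [1, 1, 1, 0, 0, 0])
# 8 of the 9 positive/negative pairs are correctly ordered -> AUC = 8/9
```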

  9. Semi-automatic, octave-spanning optical frequency counter.

    PubMed

    Liu, Tze-An; Shu, Ren-Huei; Peng, Jin-Long

    2008-07-07

This work presents and demonstrates a semi-automatic optical frequency counter with octave-spanning counting capability, using two fiber laser combs operated at different repetition rates. Monochromators are utilized to provide an approximate frequency of the laser under measurement, which determines the mode number difference between the two laser combs. The exact mode number of the beating comb line is obtained from this mode number difference and the measured beat frequencies. The entire measurement process, except the frequency stabilization of the laser combs and the optimization of the beat signal-to-noise ratio, is controlled by a computer, which is what makes the optical frequency counter semi-automatic.
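Once the mode number n of the beating comb line is known, the optical frequency follows from f = n·f_rep + f_ceo + f_beat. The sketch below shows that final arithmetic for a single comb, assuming a coarse estimate already accurate to better than half the repetition rate; the paper's two-comb scheme exists precisely because a monochromator alone is too coarse to fix n with one comb, so the numbers and the sign convention here are illustrative assumptions only:

```python
def comb_line_frequency(f_approx_hz, f_rep_hz, f_ceo_hz, f_beat_hz):
    """Resolve the comb mode number n from a coarse frequency estimate,
    then return (n, exact frequency) with f = n*f_rep + f_ceo + f_beat.
    Requires |error of f_approx| < f_rep / 2 to pick n unambiguously."""
    n = round((f_approx_hz - f_ceo_hz - f_beat_hz) / f_rep_hz)
    return n, n * f_rep_hz + f_ceo_hz + f_beat_hz

# Illustrative values: a 1550 nm laser (~193.4 THz), a 250 MHz comb,
# a 20 MHz offset frequency and a 35 MHz measured beat.
n, f = comb_line_frequency(193.400e12, 250e6, 20e6, 35e6)
# n = 773600; f = 193.400055 THz
```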

  10. Semi-supervised Learning for Phenotyping Tasks.

    PubMed

    Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K

    2015-01-01

    Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
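The weighting factor described above enters the M-step, where each unlabeled example contributes with weight λ instead of 1. A self-contained sketch for a two-class, one-dimensional Gaussian model, a toy stand-in for the phenotyping features in the paper (the variable names and the simple model are assumptions for illustration):

```python
import numpy as np

def semi_supervised_em(x_lab, y_lab, x_unl, lam=0.1, iters=50):
    """Augmented EM for a two-class 1-D Gaussian mixture: labeled points
    enter the M-step with weight 1, unlabeled points with weight `lam`."""
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    sd = np.full(2, x_lab.std() + 1e-6)
    pi = np.array([np.mean(y_lab == k) for k in (0, 1)])
    for _ in range(iters):
        # E-step: class responsibilities for the unlabeled data
        dens = np.stack([pi[k] / sd[k]
                         * np.exp(-0.5 * ((x_unl - mu[k]) / sd[k]) ** 2)
                         for k in (0, 1)])
        resp = dens / dens.sum(axis=0)
        # M-step: downweight the unlabeled contribution by lam
        xs = np.concatenate([x_lab, x_unl])
        for k in (0, 1):
            w = np.concatenate([(y_lab == k).astype(float), lam * resp[k]])
            mu[k] = np.average(xs, weights=w)
            sd[k] = np.sqrt(np.average((xs - mu[k]) ** 2, weights=w)) + 1e-6
            pi[k] = w.sum()
        pi /= pi.sum()
    return mu, sd, pi

# A few labeled points plus plentiful unlabeled data from two clusters:
rng = np.random.default_rng(0)
x_lab = np.array([-2.1, -1.9, 2.0, 2.2])
y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
mu, sd, pi = semi_supervised_em(x_lab, y_lab, x_unl)
# mu recovers roughly (-2, 2)
```

Setting `lam=0` reduces this to purely supervised estimation, while `lam=1` is basic EM; cross-validating `lam` between those extremes is the tuning question the abstract discusses.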

  11. A Development of Automatic Audit System for Written Informed Consent using Machine Learning.

    PubMed

    Yamada, Hitomi; Takemura, Tadamasa; Asai, Takahiro; Okamoto, Kazuya; Kuroda, Tomohiro; Kuwata, Shigeki

    2015-01-01

In Japan, most university and advanced hospitals have implemented both electronic order entry systems and electronic charting. In addition, all medical records are subject to inspector audit for quality assurance. The record of informed consent (IC) is very important, as it provides evidence of consent between the patient or the patient's family and the health care provider. Therefore, we developed an automatic audit system for a hospital information system (HIS) that is able to evaluate IC records automatically using machine learning.

  12. Biologically inspired EM image alignment and neural reconstruction.

    PubMed

    Knowles-Barley, Seymour; Butcher, Nancy J; Meinertzhagen, Ian A; Armstrong, J Douglas

    2011-08-15

Three-dimensional reconstruction of consecutive serial-section transmission electron microscopy (ssTEM) images of neural tissue currently requires many hours of manual tracing and annotation. Several computational techniques have already been applied to ssTEM images to facilitate 3D reconstruction and ease this burden. Here, we present an alternative computational approach for ssTEM image analysis. We have used biologically inspired receptive fields as a basis for a ridge detection algorithm to identify cell membranes, synaptic contacts and mitochondria. Detected line segments are used to improve alignment between consecutive images, and we have joined small segments of membrane into cell surfaces using a dynamic programming algorithm similar to the Needleman-Wunsch and Smith-Waterman DNA sequence alignment procedures. A shortest-path-based approach has been used to close edges and achieve image segmentation. Partial reconstructions were automatically generated and used as a basis for semi-automatic reconstruction of neural tissue. The accuracy of partial reconstructions was evaluated: 96% of membrane could be identified at the cost of 13% false positive detections. An open-source reference implementation is available in the Supplementary information. Contact: seymour.kb@ed.ac.uk; douglas.armstrong@ed.ac.uk. Supplementary data are available at Bioinformatics online.
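For reference, the Needleman-Wunsch recurrence the authors borrow from DNA alignment fills a score matrix in which each cell takes the best of a diagonal match/mismatch step and two gap steps. A minimal scoring-only sketch (in the paper's application, membrane segments rather than characters would be the items being aligned):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of sequences a and b via the
    Needleman-Wunsch dynamic programming recurrence."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):            # aligning a prefix against nothing
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[-1][-1]

# "GATT" vs "GAT": three matches plus one gap -> 3*1 + (-1) = 2
s = needleman_wunsch("GATT", "GAT")
```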

  13. Semi-automatic tracking, smoothing and segmentation of hyoid bone motion from videofluoroscopic swallowing study.

    PubMed

    Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong

    2017-01-01

Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor intensive and time consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and the smoothing and segmentation necessary for functional motion analysis prior to registration were not provided by previous software. We developed software to track the hyoid bone motion semi-automatically. It works even when the hyoid bone is masked by the mandible, and it has been validated in dysphagia patients with stroke. In addition, we added functions for semi-automatic smoothing and segmentation. Data from a total of 30 patients were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value < 0.0001). Relative errors between automatic and manual tracking in terms of the x-axis, y-axis and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared to the conventional manual tracking method. In addition, the automatic indication to switch from automatic to manual mode in extreme cases, and calibration without attaching a radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis, which is beneficial for further statistical analysis such as functional classification and prognostication for dysphagia. Therefore, this software can provide researchers in the field of dysphagia with a convenient, useful, all-in-one platform for analyzing hyoid bone motion. Further development of our method to track other swallowing-related structures or objects, such as the epiglottis and the bolus, and to carry out 2D curve registration may be needed for a more comprehensive functional data analysis of dysphagia with big data.

  14. Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.

    PubMed

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy

    2012-11-01

Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries (90%) was found by outgoing and incoming nurses to be understandable, and a majority was found to be accurate (70%) and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible to automatically generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that were intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. MO-F-CAMPUS-J-04: Tissue Segmentation-Based MR Electron Density Mapping Method for MR-Only Radiation Treatment Planning of Brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, H; Lee, Y; Ruschin, M

    2015-06-15

Purpose: Automatically derive the electron density of tissues using MR images and generate a pseudo-CT for MR-only treatment planning of brain tumours. Methods: 20 stereotactic radiosurgery (SRS) patients' T1-weighted MR images and CT images were retrospectively acquired. First, a semi-automated tissue segmentation algorithm was developed to differentiate tissues with similar MR intensities but large differences in electron density. The method started with approximately 12 slices of manually contoured spatial regions containing sinuses and airways; then air, bone, brain, cerebrospinal fluid (CSF) and eyes were automatically segmented using edge detection and anatomical information including location, shape, tissue uniformity and relative intensity distribution. Next, the soft tissues, muscle and fat, were segmented based on their relative intensity histograms. Finally, the intensities of voxels in each segmented tissue were mapped to its electron density range to generate a pseudo-CT by linearly fitting the relative intensity histograms. Co-registered CT was used as the ground truth. The bone segmentations of the pseudo-CT were compared with those of the co-registered CT obtained using a 300 HU threshold. The average distances between voxels on the external edges of the skull in the pseudo-CT and the CT were calculated in the three axial, coronal and sagittal slices with the largest skull width. The mean absolute electron density (in Hounsfield units) difference of the voxels in each segmented tissue was calculated. Results: The average distance between voxels on the external skull from pseudo-CT and CT was 0.6±1.1 mm (mean±1SD). The mean absolute electron density differences for bone, brain, CSF, muscle and fat were 78±114 HU, 21±8 HU, 14±29 HU, 57±37 HU, and 31±63 HU, respectively. Conclusion: A semi-automated MR electron density mapping technique was developed using T1-weighted MR images. The generated pseudo-CT is comparable to CT in terms of the anatomical position of tissues and the similarity of electron density assignment. This method can enable MR-only treatment planning.

  16. Diagnostic accuracy of semi-automatic quantitative metrics as an alternative to expert reading of CT myocardial perfusion in the CORE320 study.

    PubMed

    Ostovaneh, Mohammad R; Vavere, Andrea L; Mehra, Vishal C; Kofoed, Klaus F; Matheson, Matthew B; Arbab-Zadeh, Armin; Fujisawa, Yasuko; Schuijf, Joanne D; Rochitte, Carlos E; Scholte, Arthur J; Kitagawa, Kakuya; Dewey, Marc; Cox, Christopher; DiCarli, Marcelo F; George, Richard T; Lima, Joao A C

To determine the diagnostic accuracy of semi-automatic quantitative metrics compared to expert reading for the interpretation of computed tomography perfusion (CTP) imaging. The CORE320 multicenter diagnostic accuracy clinical study enrolled patients between 45 and 85 years of age who were clinically referred for invasive coronary angiography (ICA). Computed tomography angiography (CTA), CTP, single photon emission computed tomography (SPECT), and ICA images were interpreted manually in blinded core laboratories by two experienced readers. Additionally, eight quantitative CTP metrics as continuous values were computed semi-automatically from myocardial and blood attenuation and were combined using logistic regression to derive a final quantitative CTP metric score. For the reference standard, hemodynamically significant coronary artery disease (CAD) was defined as a quantitative ICA stenosis of 50% or greater and a corresponding perfusion defect by SPECT. Diagnostic accuracy was determined by the area under the receiver operating characteristic curve (AUC). Of the 377 included patients, 66% were male, the median age was 62 (IQR: 56, 68) years, and 27% had prior myocardial infarction. In patient-based analysis, the AUC (95% CI) for combined CTA-CTP expert reading and combined CTA-CTP semi-automatic quantitative metrics was 0.87 (0.84-0.91) and 0.86 (0.83-0.90), respectively. In vessel-based analyses the AUCs were 0.85 (0.82-0.88) and 0.84 (0.81-0.87), respectively. No significant difference in AUC was found between combined CTA-CTP expert reading and CTA-CTP semi-automatic quantitative metrics in patient-based or vessel-based analyses (p > 0.05 for all). Combined CTA-CTP semi-automatic quantitative metrics are as accurate as CTA-CTP expert reading for detecting hemodynamically significant CAD. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  17. Does semi-automatic bone-fragment segmentation improve the reproducibility of the Letournel acetabular fracture classification?

    PubMed

    Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J

    2017-09-01

    The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. Semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups on Chi 2 test. Assessment was repeated 2 weeks later, to determine intra-observer reproducibility. Correct classification rates were significantly higher in the "segmentation" group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27±3min [range, 21-35min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88), and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. 
    Level of evidence: III, prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
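
    The between-group comparison above can be reproduced with a plain Pearson chi-square test (no continuity correction) on the 2×2 table of correct versus incorrect classifications; the incorrect counts follow from the abstract's figures (138 - 114 = 24 and 138 - 71 = 67). A minimal sketch in Python:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (rows: groups; columns: correct / incorrect)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Segmentation group: 114 correct / 24 incorrect; conventional: 71 / 67.
chi2 = chi_square_2x2(114, 24, 71, 67)
print(round(chi2, 2))  # well above 10.83, the p = 0.001 critical value (1 df)
```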

  18. Volumetric glioma quantification: comparison of manual and semi-automatic tumor segmentation for the quantification of tumor growth.

    PubMed

    Odland, Audun; Server, Andres; Saxhaug, Cathrine; Breivik, Birger; Groote, Rasmus; Vardal, Jonas; Larsson, Christopher; Bjørnerud, Atle

    2015-11-01

    Volumetric magnetic resonance imaging (MRI) is now widely available and routinely used in the evaluation of high-grade gliomas (HGGs). Ideally, volumetric measurements should be included in this evaluation; however, manual tumor segmentation is time-consuming and suffers from inter-observer variability, so tools for semi-automatic tumor segmentation are needed. Our aims were to present a semi-automatic method (SAM) for segmentation of HGGs, to compare this method with manual segmentation performed by experts, and to examine the inter-observer variability among experts manually segmenting HGGs on volumetric MRI. Twenty patients with HGGs were included, all of whom underwent surgical resection prior to inclusion. Each patient underwent several MRI examinations during and after adjuvant chemoradiation therapy. Three experts performed manual segmentation. The results of tumor segmentation by the experts and by the SAM were compared using Dice coefficients and kappa statistics. Relatively close agreement was seen between two of the experts and the SAM, while the third expert disagreed considerably with the other experts and the SAM. An important reason for this disagreement was a different interpretation of contrast enhancement as either surgically induced or glioma induced. Manual tumor segmentation required an average of 16 min per scan, whereas editing the tumor masks produced by the SAM required an average of less than 2 min per scan. Manual segmentation of HGGs is very time-consuming, and using the SAM could increase the efficiency of this process; however, the accuracy of the SAM ultimately depends on the expert doing the editing. Our study confirmed a considerable inter-observer variability among experts defining tumor volume on volumetric MRI. © The Foundation Acta Radiologica 2014.
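
    The Dice coefficient used above to compare segmentations is straightforward to compute once each segmentation is represented as a set of voxel coordinates. A minimal sketch; the voxel sets below are invented for illustration, not study data:

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Toy single-slice masks: the "SAM" mask misses two columns of the expert mask.
expert = {(x, y, 0) for x in range(10) for y in range(10)}     # 100 voxels
sam    = {(x, y, 0) for x in range(2, 10) for y in range(10)}  # 80 voxels
print(round(dice(expert, sam), 3))  # 2*80 / (100 + 80) ≈ 0.889
```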

  19. Semi-Automatic Grading of Students' Answers Written in Free Text

    ERIC Educational Resources Information Center

    Escudeiro, Nuno; Escudeiro, Paula; Cruz, Augusto

    2011-01-01

    The correct grading of free text answers to exam questions during an assessment process is time consuming and subject to fluctuations in the application of evaluation criteria, particularly when the number of answers is high (in the hundreds). In consequence of these fluctuations, inherent to human nature, and largely determined by emotional…

  20. Feasibility of automatic evaluation of clinical rules in general practice.

    PubMed

    Opondo, Dedan; Visscher, Stefan; Eslami, Saied; Medlock, Stephanie; Verheij, Robert; Korevaar, Joke C; Abu-Hanna, Ameen

    2017-04-01

    To assess the extent to which clinical rules (CRs) can be implemented for automatic evaluation of quality of care in general practice. We assessed 81 clinical rules (CRs), adapted from a subset of the Assessing Care of Vulnerable Elders (ACOVE) clinical rules, against the Dutch College of General Practitioners (NHG) data model. Each CR was analyzed using the Logical Elements Rule Method (LERM), a stepwise method of assessing and formalizing clinical rules for decision support. Clinical rules that satisfied the criteria outlined in the LERM method were judged to be implementable for automatic evaluation in general practice. Thirty-three out of 81 (40.7%) Dutch-translated ACOVE clinical rules can be automatically evaluated in electronic medical record systems: 7 out of 7 CRs (100%) in the domain of diabetes, 9/17 (52.9%) in medication use, 5/10 (50%) in depression care, 3/6 (50%) in nutrition care, 6/13 (46.1%) in dementia care, 1/6 (16.6%) in end-of-life care, 2/13 (15.3%) in continuity of care, and 0/9 (0%) in fall-related care. Lack of documentation of care activities between primary and secondary health facilities and ambiguous formulation of clinical rules were the main reasons for the inability to automate the clinical rules. Approximately two-fifths of the primary care Dutch ACOVE-based clinical rules can be automatically evaluated. Clear definition of clinical rules, improved GP database design and electronic linkage of primary and secondary healthcare facilities can improve the prospects of automatic assessment of quality of care. These findings are especially relevant because the Netherlands has a very high level of primary care automation. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Impact of translation on named-entity recognition in radiology texts

    PubMed Central

    Pedro, Vasco

    2017-01-01

    Abstract Radiology reports describe the results of radiography procedures and have the potential to be a useful source of information that can bring benefits to health care systems around the world. One way to automatically extract information from such reports is to use text mining tools. The problem is that these tools are mostly developed for English, while reports are usually written in the radiologist's native language, which is not necessarily English. This creates an obstacle to the sharing of radiology information between different communities. This work explores the solution of translating reports into English before applying text mining tools, probing the question of which translation approach should be used. We created MRRAD (Multilingual Radiology Research Articles Dataset), a parallel corpus of Portuguese research articles related to radiology together with a number of alternative translations (human, automatic and semi-automatic) to English. This is a novel corpus which can be used to advance research on this topic. Using MRRAD, we studied which kind of automatic or semi-automatic translation approach is more effective for the named-entity recognition task of finding RadLex terms in the English version of the articles. Considering the terms extracted from human translations as our gold standard, we calculated how similar the terms extracted using other translations were to this standard. We found that a completely automatic translation approach using Google leads to F-scores (between 0.861 and 0.868, depending on the extraction approach) similar to those obtained through a more expensive semi-automatic translation approach using Unbabel (between 0.862 and 0.870). To better understand the results we also performed a qualitative analysis of the types of errors found in the automatic and semi-automatic translations. Database URL: https://github.com/lasigeBioTM/MRRAD PMID:29220455
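
    The F-scores reported above compare the set of RadLex terms extracted from a machine translation against the set extracted from the human translation (the gold standard). A small sketch of that set-based evaluation; the term sets below are invented for illustration:

```python
def precision_recall_f1(predicted, gold):
    """Set-based precision, recall and F1 of predicted terms
    against a gold-standard term set."""
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    tp = len(predicted & gold)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

gold = {"pneumothorax", "effusion", "consolidation", "nodule"}
auto = {"pneumothorax", "effusion", "nodule", "atelectasis"}
p, r, f = precision_recall_f1(auto, gold)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.75 0.75 0.75
```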

  2. Image-based red cell counting for wild animals blood.

    PubMed

    Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia

    2010-01-01

    An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a complete automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.

  3. New method for characterizing paper coating structures using argon ion beam milling and field emission scanning electron microscopy.

    PubMed

    Dahlström, C; Allem, R; Uesaka, T

    2011-02-01

    We have developed a new method for characterizing the microstructures of paper coatings using an argon ion beam milling technique and field emission scanning electron microscopy. The combination of these two techniques produces extremely high-quality images with very few artefacts, which are particularly suited for quantitative analyses of coating structures. A new evaluation method has been developed using marker-controlled watershed segmentation of the secondary electron images; the high-quality secondary electron images with well-defined pores make it possible to use this semi-automatic segmentation method. One advantage of using secondary electron images instead of backscattered electron images is the ability to avoid possible overestimation of the porosity due to the signal depth. A comparison was made between the new method and the conventional method of greyscale histogram thresholding of backscattered electron images. The results showed that the conventional method overestimated the pore area by 20% and detected around 5% more pores than the new method. As examples of the application of the new method, we have investigated the distributions of coating binders and the relationship between local coating porosity and base sheet structure. The technique revealed, for the first time with direct evidence, the long-suspected coating non-uniformity, i.e. binder migration, and the correlation between coating porosity and base sheet mass density, in a straightforward way. © 2010 The Authors Journal compilation © 2010 The Royal Microscopical Society.

  4. Patient-specific semi-supervised learning for postoperative brain tumor segmentation.

    PubMed

    Meier, Raphael; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2014-01-01

    In contrast to preoperative brain tumor segmentation, the problem of postoperative brain tumor segmentation has rarely been approached so far. We present a fully automatic segmentation method using multimodal magnetic resonance image data and patient-specific semi-supervised learning. The idea behind our semi-supervised approach is to effectively fuse information from both pre- and postoperative image data of the same patient to improve segmentation of the postoperative image. We pose image segmentation as a classification problem and solve it by adopting a semi-supervised decision forest. The method is evaluated on a cohort of 10 high-grade glioma patients, with segmentation performance and computation time comparable or superior to a state-of-the-art brain tumor segmentation method. Moreover, our results confirm that the inclusion of preoperative MR images leads to better performance on postoperative brain tumor segmentation.

  5. Design and development of a prototypical software for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small- and medium-sized enterprises (SME)

    NASA Astrophysics Data System (ADS)

    Möller, Thomas; Bellin, Knut; Creutzburg, Reiner

    2015-03-01

    The aim of this paper is to show the recent progress in the design and prototypical development of the software suite Copra Breeder for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small and medium-sized enterprises.

  6. Automatic scoring of dicentric chromosomes as a tool in large scale radiation accidents.

    PubMed

    Romm, H; Ainsbury, E; Barnard, S; Barrios, L; Barquinero, J F; Beinke, C; Deperas, M; Gregoire, E; Koivistoinen, A; Lindholm, C; Moquet, J; Oestreicher, U; Puig, R; Rothkamm, K; Sommer, S; Thierens, H; Vandersickel, V; Vral, A; Wojcik, A

    2013-08-30

    Mass casualty scenarios of radiation exposure require high-throughput biological dosimetry techniques for population triage in order to rapidly identify individuals who require clinical treatment. The manual dicentric assay is a highly suitable technique, but it is also very time-consuming and requires well-trained scorers. In the framework of the MULTIBIODOSE EU FP7 project, semi-automated dicentric scoring has been established in six European biodosimetry laboratories. Whole blood was irradiated with a Co-60 gamma source at 8 different doses between 0 and 4.5 Gy and then shipped to the six participating laboratories. To investigate two different scoring strategies, cell cultures were set up with short-term (2-3 h) or long-term (24 h) colcemid treatment. Three classifiers for automatic dicentric detection were applied, two of which were developed specifically for these two different culture techniques. The automation procedure included metaphase finding, capture of cells at high resolution and detection of dicentric candidates. The automatically detected dicentric candidates were then evaluated by a trained human scorer, which led to the term 'semi-automated' being applied to the analysis. The six participating laboratories each established at least one semi-automated calibration curve, using the appropriate classifier for their colcemid treatment time. There was no significant difference between the calibration curves established, regardless of the classifier used. The ratio of false positive to true positive dicentric candidates was dose dependent. The total staff effort required for analysing 150 metaphases using the semi-automated approach was 2 min as opposed to 60 min for manual scoring of 50 metaphases. Semi-automated dicentric scoring is a useful tool in a large scale radiation accident as it enables high throughput screening of samples for fast triage of potentially exposed individuals. 
Furthermore, the results from the participating laboratories were comparable which supports networking between laboratories for this assay. Copyright © 2013 Elsevier B.V. All rights reserved.
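
    Dicentric calibration curves for acute gamma exposure are conventionally fitted with a linear-quadratic dose response, Y = c + αD + βD². As a sketch under stated assumptions (the background c is fixed and the yields are synthetic, not MULTIBIODOSE data), the two remaining coefficients can be recovered by solving the 2×2 normal equations of an ordinary least-squares fit:

```python
def fit_linear_quadratic(doses, yields, background=0.001):
    """Least-squares fit of the dicentric dose response
    Y = c + alpha*D + beta*D**2 with background c fixed,
    solving the 2x2 normal equations directly."""
    s11 = sum(d ** 2 for d in doses)
    s12 = sum(d ** 3 for d in doses)
    s22 = sum(d ** 4 for d in doses)
    b1 = sum(d * (y - background) for d, y in zip(doses, yields))
    b2 = sum(d ** 2 * (y - background) for d, y in zip(doses, yields))
    det = s11 * s22 - s12 ** 2
    alpha = (b1 * s22 - b2 * s12) / det
    beta = (s11 * b2 - s12 * b1) / det
    return alpha, beta

# Synthetic noise-free yields generated with alpha=0.02, beta=0.06,
# at eight doses spanning 0-4.5 Gy as in the study design.
doses = [0.1, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 4.5]
yields = [0.001 + 0.02 * d + 0.06 * d ** 2 for d in doses]
alpha, beta = fit_linear_quadratic(doses, yields)
print(round(alpha, 3), round(beta, 3))  # recovers 0.02 and 0.06
```

In practice dicentric counts are Poisson-distributed, so maximum-likelihood fitting with count weights is preferred; the unweighted fit above only illustrates the curve shape.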

  7. Evaluation of an automatic MR-based gold fiducial marker localisation method for MR-only prostate radiotherapy

    NASA Astrophysics Data System (ADS)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.

    2017-10-01

    An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is performed manually in the clinic. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on detecting the signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. The method is based on template matching using a library of simulated complex-valued templates, exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (SD) were 0.9 mm and 0.5 mm, respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm³ and comparable to manual MR-based localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. 
This will result in a semi-automatic workflow facilitating the introduction of an MR-only workflow.
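
    Template matching on complex-valued MR data, as described above, amounts to sliding a template over the image and scoring each offset by normalized cross-correlation of the complex samples. A simplified 1D sketch under that assumption (the signal and template below are synthetic; the actual method uses a library of simulated 3D templates):

```python
import math

def complex_ncc(signal, template):
    """Normalized cross-correlation of a complex template against a
    complex 1D signal; returns the best-matching offset and its score."""
    t_norm = math.sqrt(sum(abs(t) ** 2 for t in template))
    best_offset, best_score = -1, -1.0
    for off in range(len(signal) - len(template) + 1):
        window = signal[off:off + len(template)]
        w_norm = math.sqrt(sum(abs(w) ** 2 for w in window))
        if w_norm == 0:
            continue
        corr = sum(w * t.conjugate() for w, t in zip(window, template))
        score = abs(corr) / (w_norm * t_norm)  # in [0, 1] by Cauchy-Schwarz
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset, best_score

template = [1 + 1j, -2 + 0.5j, 0.5 - 1j]
signal = [0.1 + 0j] * 5 + template + [0.1 + 0j] * 4  # template hidden at offset 5
offset, score = complex_ncc(signal, template)
print(offset, round(score, 3))  # template found at offset 5, score 1.0
```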

  8. An ERTS-1 investigation for Lake Ontario and its basin

    NASA Technical Reports Server (NTRS)

    Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.

    1975-01-01

    The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.

  9. A semi-automatic traffic sign detection, classification, and positioning system

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; de With, P. H. N.

    2012-01-01

    The availability of large-scale databases containing street-level panoramic images offers the possibility to perform semi-automatic surveying of real-world objects such as traffic signs. These inventories can be performed significantly more efficiently than using conventional methods. Governmental agencies are interested in these inventories for maintenance and safety reasons. This paper introduces a complete semi-automatic traffic sign inventory system. The system consists of several components. First, a detection algorithm locates the 2D position of the traffic signs in the panoramic images. Second, a classification algorithm is used to identify the traffic sign. Third, the 3D position of the traffic sign is calculated using the GPS position of the photographs. Finally, the results are listed in a table for quick inspection and are also visualized in a web browser.
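
    The 3D positioning step above can be illustrated in simplified 2D form: given the capture positions of two panoramic photographs and the bearing direction to the detected sign in each, the sign lies at the intersection of the two bearing lines. A minimal sketch with invented coordinates (the real system works from GPS positions and full 3D geometry):

```python
def intersect_lines(p1, d1, p2, d2):
    """Intersect two 2D lines p1 + t*d1 and p2 + s*d2.
    Solves [d1 -d2] [t s]^T = p2 - p1; returns None if parallel."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two capture positions, each with a bearing toward a sign at (10, 5).
cam_a, dir_a = (0.0, 0.0), (2.0, 1.0)
cam_b, dir_b = (10.0, 0.0), (0.0, 1.0)
print(intersect_lines(cam_a, dir_a, cam_b, dir_b))  # (10.0, 5.0)
```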

  10. Semi-automatic detection of Gd-DTPA-saline filled capsules for colonic transit time assessment in MRI

    NASA Astrophysics Data System (ADS)

    Harrer, Christian; Kirchhoff, Sonja; Keil, Andreas; Kirchhoff, Chlodwig; Mussack, Thomas; Lienemann, Andreas; Reiser, Maximilian; Navab, Nassir

    2008-03-01

    Functional gastrointestinal disorders result in a significant number of consultations in primary care facilities. Chronic constipation and diarrhea are regarded as two of the most common diseases, affecting between 2% and 27% of the population in western countries [1-3]. Defecatory disorders are most commonly due to dysfunction of the pelvic floor or the anal sphincter. Although an exact differentiation of these pathologies is essential for adequate therapy, diagnosis is still based only on clinical evaluation [1]. Regarding quantification of constipation, only the ingestion of radio-opaque markers or radioactive isotopes with consecutive assessment of colonic transit time using X-ray or scintigraphy, respectively, has been feasible in clinical settings [4-8]. However, these approaches have several drawbacks, such as involving rather inconvenient, time-consuming examinations and exposing the patient to ionizing radiation. Therefore, conventional assessment of colonic transit time has not been widely used. Most recently, a new technique for the assessment of colonic transit time using MRI and MR-contrast-media-filled capsules has been introduced [9]. However, due to the numerous examination dates per patient and the corresponding datasets with many images, evaluation of the image data is relatively time-consuming. The aim of our study was to develop a computer tool to facilitate the detection of the capsules in MRI datasets and thus shorten the evaluation time. We present a semi-automatic tool which provides an intensity-, size- [10], and shape-based [11,12] detection of ingested Gd-DTPA-saline filled capsules. After automatic pre-classification, radiologists may easily correct the results using the application-specific user interface, decreasing the evaluation time significantly.

  11. Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist

    NASA Astrophysics Data System (ADS)

    Tummala, Sudhakar; Dam, Erik B.

    2010-03-01

    Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.

  12. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    PubMed

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children, and pediatric medical centers often develop protocols to standardize its care. Widespread adoption of electronic health records by hospitals affords clinicians the opportunity to evaluate protocol adherence rapidly and electronically. We reviewed the clinical data of a small sample of 7 children with status epilepticus in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be created automatically from the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. © The Author(s) 2015.
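
    The timeline-style visualization described above can be sketched as filtering a patient's extracted events to the first 24 hours of care and printing them in time order. A minimal, hypothetical example; the event names and times are invented, not study data:

```python
from datetime import datetime, timedelta

def first_24h_timeline(events, arrival):
    """Return events within 24 h of arrival, sorted by time,
    formatted as '+HH:MM event' strings relative to arrival."""
    cutoff = arrival + timedelta(hours=24)
    rows = []
    for time, label in sorted(events):
        if arrival <= time < cutoff:
            minutes = int((time - arrival).total_seconds() // 60)
            rows.append(f"+{minutes // 60:02d}:{minutes % 60:02d} {label}")
    return rows

arrival = datetime(2015, 3, 1, 14, 0)
events = [
    (datetime(2015, 3, 1, 14, 5), "lorazepam given"),
    (datetime(2015, 3, 1, 15, 30), "EEG started"),
    (datetime(2015, 3, 3, 9, 0), "discharged"),  # outside the 24 h window
]
for row in first_24h_timeline(events, arrival):
    print(row)  # +00:05 lorazepam given / +01:30 EEG started
```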

  13. Application of semi-supervised deep learning to lung sound analysis.

    PubMed

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

    The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients (typically N < 20) and usually limited to a single type of lung sound. Larger research studies have also been impeded by the challenge of labeling large volumes of data, which is extremely labor-intensive. In this paper, we present the development of a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N = 284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded at 11 different auscultation locations on these 284 patients with pulmonary disease. Of these, 890 sound files were labeled to evaluate the model, a labeled set significantly larger than in previously published studies. Data were collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
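
    The AUC figures quoted above can be computed without constructing the ROC curve, via the rank (Mann-Whitney) formulation: AUC equals the probability that a randomly chosen positive recording scores higher than a randomly chosen negative one, with ties counting half. A small sketch with made-up classifier scores:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly; ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

wheeze_scores = [0.91, 0.75, 0.62, 0.40]  # model scores on wheeze clips
normal_scores = [0.55, 0.30, 0.22, 0.10]  # scores on non-wheeze clips
print(auc(wheeze_scores, normal_scores))  # 15 of 16 pairs correct -> 0.9375
```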

  14. The Influence of Endmember Selection Method in Extracting Impervious Surface from Airborne Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wang, J.; Feng, B.

    2016-12-01

    Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection, and the high degree of spatial and spectral heterogeneity of urban environments makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual and the first semi-automatic methods have been widely used in EM selection; the second semi-automatic method is newer and had previously been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected EM candidates from eight landcover types in the original image. The first semi-automatic method chose EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangular shape of the HI scatter plot in the n-dimension visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle vertices. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image with the spectral angle mapper. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. 
    The V-I-S EM selection method performed best in this study, demonstrating its value not only for moderate-spatial-resolution satellite imagery but also for increasingly accessible high-spatial-resolution airborne imagery. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images to provide ISA maps for hydrologic analysis.
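
    The spectral angle mapper used for classification above assigns each pixel to the endmember whose spectrum makes the smallest angle with the pixel spectrum, which makes the rule insensitive to overall brightness. A minimal sketch with invented four-band spectra:

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(pixel, endmembers):
    """Label of the endmember with the smallest spectral angle."""
    return min(endmembers, key=lambda name: spectral_angle(pixel, endmembers[name]))

endmembers = {
    "vegetation": [0.05, 0.08, 0.06, 0.50],  # strong near-IR response
    "impervious": [0.30, 0.32, 0.33, 0.35],  # flat, bright
    "soil":       [0.20, 0.25, 0.30, 0.40],
}
pixel = [0.15, 0.16, 0.17, 0.18]  # flat spectrum, but half as bright
print(classify(pixel, endmembers))  # "impervious": the angle ignores brightness
```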

  15. Towards natural language question generation for the validation of ontologies and mappings.

    PubMed

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications, such as decision-support systems, highlight the importance of their validation. Human expertise is crucial for validating ontologies from a domain point of view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. This method exploits the context of the changes to propose correction alternatives presented as multiple-choice questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mappings over time and highlights the importance of semi-automatic validation.

  16. A Graph-Based Recovery and Decomposition of Swanson’s Hypothesis using Semantic Predications

    PubMed Central

    Cameron, Delroy; Bodenreider, Olivier; Yalamanchili, Hima; Danh, Tu; Vallabhaneni, Sreeram; Thirunarayan, Krishnaprasad; Sheth, Amit P.; Rindflesch, Thomas C.

    2014-01-01

    Objectives This paper presents a methodology for recovering and decomposing Swanson's Raynaud Syndrome-Fish Oil Hypothesis semi-automatically. The methodology leverages the semantics of assertions extracted from biomedical literature (called semantic predications), along with structured background knowledge and graph-based algorithms, to semi-automatically capture the informative associations originally discovered manually by Swanson. Demonstrating that Swanson's manually intensive techniques can be undertaken semi-automatically paves the way for fully automatic semantics-based hypothesis generation from scientific literature. Methods Semantic predications obtained from biomedical literature allow the construction of labeled directed graphs which contain various associations among concepts from the literature. By aggregating such associations into informative subgraphs, some of the relevant details originally articulated by Swanson have been uncovered. However, by leveraging background knowledge to bridge important knowledge gaps in the literature, a methodology for semi-automatically capturing the detailed associations originally explicated in natural language by Swanson has been developed. Results Our methodology not only recovered the 3 associations commonly recognized as Swanson's Hypothesis, but also decomposed them into an additional 16 detailed associations, formulated as chains of semantic predications. Altogether, 14 out of the 19 associations that can be attributed to Swanson were retrieved using our approach. To the best of our knowledge, such an in-depth recovery and decomposition of Swanson's Hypothesis has never been attempted. Conclusion In this work, therefore, we presented a methodology for semi-automatically recovering and decomposing Swanson's RS-DFO Hypothesis using semantic representations and graph algorithms. Our methodology provides new insights into potential prerequisites for semantics-driven Literature-Based Discovery (LBD). 
These suggest that three critical aspects of LBD include: 1) the need for more expressive representations beyond Swanson’s ABC model; 2) an ability to accurately extract semantic information from text; and 3) the semantic integration of scientific literature with structured background knowledge. PMID:23026233
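
    The chains of semantic predications described above generalize Swanson's ABC model: a start concept connects to a target concept through intermediates found by walking a directed graph of subject-predicate-object assertions. A toy sketch; the predications below are illustrative stand-ins, not actual SemRep output:

```python
from collections import deque

def find_chains(edges, start, target, max_len=3):
    """Breadth-first search for predication chains from start to target
    in a directed graph of (subject, predicate, object) assertions."""
    graph = {}
    for subj, pred, obj in edges:
        graph.setdefault(subj, []).append((pred, obj))
    chains, queue = [], deque([[("", start)]])
    while queue:
        path = queue.popleft()
        node = path[-1][1]
        if node == target:
            chains.append(path[1:])  # drop the dummy start step
            continue
        if len(path) > max_len:
            continue
        for pred, nxt in graph.get(node, []):
            if all(nxt != step[1] for step in path):  # avoid cycles
                queue.append(path + [(pred, nxt)])
    return chains

predications = [
    ("fish oil", "REDUCES", "blood viscosity"),
    ("blood viscosity", "ASSOCIATED_WITH", "raynaud syndrome"),
    ("fish oil", "REDUCES", "platelet aggregation"),
    ("platelet aggregation", "AFFECTS", "blood viscosity"),
]
for chain in find_chains(predications, "fish oil", "raynaud syndrome"):
    print(" -> ".join(f"{pred} {obj}" for pred, obj in chain))
```

BFS returns shorter chains first, so the direct A-B-C pattern precedes the longer chain through platelet aggregation.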

  17. Building a time-saving and adaptable tool to report adverse drug events.

    PubMed

    Parès, Yves; Declerck, Gunnar; Hussain, Sajjad; Ng, Romain; Jaulent, Marie-Christine

    2013-01-01

    The difficult task of detecting adverse drug events (ADEs) and the tedious process of building manual reports of ADE occurrences out of patient profiles result in a majority of adverse reactions not being reported to health regulatory authorities. The SALUS individual case safety report (ICSR) reporting tool, a component currently developed within the SALUS project, aims to support semi-automatic reporting of ADEs to regulatory authorities. In this paper, we present an initial design and current state of our ICSR reporting tool that features: (i) automatic pre-population of reporting forms through extraction of the patient data contained in an Electronic Health Record (EHR); (ii) generation and electronic submission of the completed ICSRs by the physician to regulatory authorities; and (iii) integration of the reporting process into the physician's workflow to limit the disturbance. The objective is to increase the rates of ADE reporting and the quality of the reported data. The SALUS interoperability platform supports patient data extraction independently of the EHR data model in use and allows generation of reports using the format expected by regulatory authorities.

  18. Automated high-performance cIMT measurement techniques using patented AtheroEdge™: a screening and home monitoring system.

    PubMed

    Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit

    2011-01-01

    The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy, which segments the lumen-intima and media-adventitia borders, classified under a class of Patented AtheroEdge™ systems (Global Biomedical Technologies, Inc, CA, USA). Guidelines to produce accurate and repeatable measurements of the intima-media thickness are provided, and the problem of choosing among the different distance metrics one can adopt is addressed. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and showed final segmentation results for both techniques. The overall rationale is to provide user-independent high-performance techniques suitable for screening and remote monitoring.

  19. A modular, prospective, semi-automated drug safety monitoring system for use in a distributed data environment.

    PubMed

    Gagne, Joshua J; Wang, Shirley V; Rassen, Jeremy A; Schneeweiss, Sebastian

    2014-06-01

    The aim of this study was to develop and test a semi-automated process for conducting routine active safety monitoring for new drugs in a network of electronic healthcare databases. We built a modular program that semi-automatically performs cohort identification, confounding adjustment, diagnostic checks, aggregation and effect estimation across multiple databases, and application of a sequential alerting algorithm. During beta-testing, we applied the system to five databases to evaluate nine examples emulating prospective monitoring with retrospective data (five pairs for which we expected signals, two negative controls, and two examples for which it was uncertain whether a signal would be expected): cerivastatin versus atorvastatin and rhabdomyolysis; paroxetine versus tricyclic antidepressants and gastrointestinal bleed; lisinopril versus angiotensin receptor blockers and angioedema; ciprofloxacin versus macrolide antibiotics and Achilles tendon rupture; rofecoxib versus non-selective non-steroidal anti-inflammatory drugs (ns-NSAIDs) and myocardial infarction; telithromycin versus azithromycin and hepatotoxicity; rosuvastatin versus atorvastatin and diabetes and rhabdomyolysis; and celecoxib versus ns-NSAIDs and myocardial infarction. We describe the program, the necessary inputs, and the assumed data environment. In beta-testing, the system generated four alerts, all among positive control examples (i.e., lisinopril and angioedema; rofecoxib and myocardial infarction; ciprofloxacin and tendon rupture; and cerivastatin and rhabdomyolysis). Sequential effect estimates for each example were consistent in direction and magnitude with existing literature. Beta-testing across nine drug-outcome examples demonstrated the feasibility of the proposed semi-automated prospective monitoring approach. 
In retrospective assessments, the system identified an increased risk of myocardial infarction with rofecoxib and an increased risk of rhabdomyolysis with cerivastatin years before these drugs were withdrawn from the market. Copyright © 2014 John Wiley & Sons, Ltd.
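The sequential alerting step can be illustrated with a deliberately simplified rule: alert when the lower confidence bound of a cumulative incidence-rate ratio exceeds 1. This is a generic sketch, not the actual alerting algorithm used in the study; the counts and threshold are hypothetical.

```python
import math

def sequential_alert(events_exposed, py_exposed, events_ref, py_ref, z=1.96):
    """Crude monitoring look: alert when the lower confidence bound of the
    incidence-rate ratio (IRR) exceeds 1. Illustrative only."""
    if min(events_exposed, events_ref) == 0:
        return False, None
    irr = (events_exposed / py_exposed) / (events_ref / py_ref)
    se = math.sqrt(1 / events_exposed + 1 / events_ref)  # SE of log(IRR)
    lower = math.exp(math.log(irr) - z * se)
    return lower > 1.0, irr

# Hypothetical cumulative look: 30 events in 1000 person-years exposed
# versus 12 events in 1000 person-years in the comparator cohort.
alert, irr = sequential_alert(30, 1000.0, 12, 1000.0)
print(alert, round(irr, 2))
```

A production system, as the abstract notes, would add confounding adjustment and diagnostic checks before estimates enter the sequential test, and would control for repeated looks at the data.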

  20. Semi-Automatic Terminology Generation for Information Extraction from German Chest X-Ray Reports.

    PubMed

    Krebs, Jonathan; Corovic, Hamo; Dietrich, Georg; Ertl, Max; Fette, Georg; Kaspar, Mathias; Krug, Markus; Stoerk, Stefan; Puppe, Frank

    2017-01-01

    Extraction of structured data from textual reports is an important subtask for building medical data warehouses for research and care. Many medical and most radiology reports are written in a telegraphic style with a concatenation of noun phrases describing the presence or absence of findings. Therefore, a lexico-syntactical approach is promising, where key terms and their relations are recognized and mapped onto a predefined standard terminology (ontology). We propose a two-phase algorithm for terminology matching: In the first pass, a local terminology for recognition is derived as close as possible to the terms used in the radiology reports. In the second pass, the local terminology is mapped to a standard terminology. In this paper, we report on an algorithm for the first step of semi-automatic generation of the local terminology and evaluate the algorithm with radiology reports of chest X-ray examinations from Würzburg University Hospital. With an effort of about 20 hours of work by a radiologist as domain expert and 10 hours for meetings, a local terminology with about 250 attributes and various value patterns was built. In an evaluation with 100 randomly chosen reports it achieved an F1-score of about 95% for information extraction.
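The two-pass idea (recognize local surface terms, then map them to standard concepts) can be sketched as a dictionary lookup over telegraphic report text. The local terms, the German report snippet, and the standard-code identifiers below are all invented examples, not the terminology built in the study.

```python
# Pass 1: local terminology mapping surface forms to canonical local terms.
# Pass 2: mapping from local terms to a standard code system.
# All entries here are hypothetical illustrations.
local_terms = {
    "infiltrat": "infiltrate",
    "erguss": "pleural effusion",
    "herz normal": "heart size normal",
}
standard_codes = {
    "infiltrate": "CODE-0001",
    "pleural effusion": "CODE-0002",
    "heart size normal": "CODE-0003",
}

def extract(report):
    """Recognize local terms in a telegraphic report and map to codes."""
    report = report.lower()
    found = [canon for surface, canon in local_terms.items() if surface in report]
    return [(term, standard_codes.get(term)) for term in found]

matches = extract("Kein Infiltrat. Kein Erguss. Herz normal.")
print(matches)
```

A real system would additionally handle negation ("Kein" = "no"), value patterns, and fuzzy matching; the sketch only shows the two-layer terminology structure.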

  1. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    NASA Astrophysics Data System (ADS)

    Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.

    2011-08-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.
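The hydraulic-significance assessment via Euclidean distances between lineaments and wells reduces to point-to-segment distances in map coordinates. A minimal sketch with invented coordinates:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b (2D map coords)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = 0.0 if length2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

# Hypothetical lineament (as one segment) and two wells, arbitrary units.
lineament = ((0.0, 0.0), (10.0, 0.0))
wells = [(5.0, 3.0), (12.0, 4.0)]
dists = [point_segment_distance(w, *lineament) for w in wells]
print([round(d, 2) for d in dists])
```

In practice each extracted lineament is a polyline, so the minimum over its segments would be taken, and the resulting well-lineament distances statistically analysed as described in the abstract.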

  2. Application of a Novel Semi-Automatic Technique for Determining the Bilateral Symmetry Plane of the Facial Skeleton of Normal Adult Males.

    PubMed

    Roumeliotis, Grayson; Willing, Ryan; Neuert, Mark; Ahluwalia, Romy; Jenkyn, Thomas; Yazdani, Arjang

    2015-09-01

    The accurate assessment of symmetry in the craniofacial skeleton is important for cosmetic and reconstructive craniofacial surgery. Although there have been several published attempts to develop an accurate system for determining the correct plane of symmetry, all are inaccurate and time consuming. Here, the authors applied a novel semi-automatic method for the calculation of craniofacial symmetry, based on principal component analysis and iterative corrective point computation, to a large sample of normal adult male facial computerized tomography scans obtained clinically (n = 32). The authors hypothesized that this method would generate planes of symmetry that would result in less error when one side of the face was compared to the other than a plane defined by cephalometric landmarks. When a three-dimensional model of one side of the face was reflected across the semi-automatic plane of symmetry there was less error than when reflected across the cephalometric plane. The semi-automatic plane was also more accurate when the locations of bilateral cephalometric landmarks (eg, frontozygomatic sutures) were compared across the face. The authors conclude that this method allows for accurate and fast measurements of craniofacial symmetry. This has important implications for studying the development of the facial skeleton, and clinical applications in reconstruction.
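The evaluation step described above (reflect one side across a candidate plane, then measure the distance to the contralateral side) can be sketched with a plane given as a point and unit normal. The landmark coordinates below are hypothetical, and the plane here is simply the x = 0 plane rather than one fitted by PCA.

```python
import math

def reflect(p, p0, n):
    """Reflect 3D point p across the plane through p0 with unit normal n."""
    d = sum((p[i] - p0[i]) * n[i] for i in range(3))
    return tuple(p[i] - 2 * d * n[i] for i in range(3))

def mean_error(pairs, p0, n):
    """Mean distance between reflected left landmarks and their right mates."""
    errs = [math.dist(reflect(left, p0, n), right) for left, right in pairs]
    return sum(errs) / len(errs)

# Hypothetical bilateral landmark pairs (e.g. frontozygomatic sutures), mm.
pairs = [((-40.0, 10.0, 5.0), (40.0, 10.0, 5.0)),
         ((-25.0, -30.0, 12.0), (25.0, -30.0, 12.0))]
plane_point, plane_normal = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
err = mean_error(pairs, plane_point, plane_normal)
print(err)
```

Comparing this mean error for the semi-automatic plane against the cephalometric plane is the kind of test the authors report; a perfectly symmetric toy case yields zero error.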

  3. An information extraction framework for cohort identification using electronic health records.

    PubMed

    Liu, Hongfang; Bielinski, Suzette J; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B; Jonnalagadda, Siddhartha R; Ravikumar, K E; Wu, Stephen T; Kullo, Iftikhar J; Chute, Christopher G

    2013-01-01

    Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report an IE framework for cohort identification using EHRs that is a knowledge-driven framework developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework.

  4. System for definition of the central-chest vasculature

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2009-02-01

    Accurate definition of the central-chest vasculature from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. For instance, the aorta and pulmonary artery help in automatic definition of the Mountain lymph-node stations for lung-cancer staging. This work presents a system for defining major vascular structures in the central chest. The system provides automatic methods for extracting the aorta and pulmonary artery and semi-automatic methods for extracting the other major central chest arteries/veins, such as the superior vena cava and azygos vein. Automatic aorta and pulmonary artery extraction are performed by model fitting and selection. The system also extracts certain vascular structure information to validate outputs. A semi-automatic method extracts vasculature by finding the medial axes between provided important sites. Results of the system are applied to lymph-node station definition and guidance of bronchoscopic biopsy.

  5. Automatic multimodal detection for long-term seizure documentation in epilepsy.

    PubMed

    Fürbass, F; Kampusch, S; Kaniusas, E; Koren, J; Pirker, S; Hopfengärtner, R; Stefan, H; Kluge, T; Baumgartner, C

    2017-08-01

    This study investigated sensitivity and false detection rate of a multimodal automatic seizure detection algorithm and the applicability to reduced electrode montages for long-term seizure documentation in epilepsy patients. An automatic seizure detection algorithm based on EEG, EMG, and ECG signals was developed. EEG/ECG recordings of 92 patients from two epilepsy monitoring units including 494 seizures were used to assess detection performance. EMG data were extracted by bandpass filtering of EEG signals. Sensitivity and false detection rate were evaluated for each signal modality and for reduced electrode montages. All focal seizures evolving to bilateral tonic-clonic (BTCS, n=50) and 89% of focal seizures (FS, n=139) were detected. Average sensitivity in temporal lobe epilepsy (TLE) patients was 94% and 74% in extratemporal lobe epilepsy (XTLE) patients. Overall detection sensitivity was 86%. Average false detection rate was 12.8 false detections in 24h (FD/24h) for TLE and 22 FD/24h in XTLE patients. Utilization of 8 frontal and temporal electrodes reduced average sensitivity from 86% to 81%. Our automatic multimodal seizure detection algorithm shows high sensitivity with full and reduced electrode montages. Evaluation of different signal modalities and electrode montages paves the way for semi-automatic seizure documentation systems. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
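The two headline metrics (sensitivity, and false detections per 24 h) can be computed from annotated seizure intervals and detector timestamps. The toy intervals below are invented; a real evaluation would also merge overlapping detections and apply tolerance windows.

```python
def detection_stats(seizures, detections, record_hours):
    """Sensitivity and false detections per 24 h. A seizure counts as
    detected if any detection timestamp falls inside its interval; a
    detection is false if it falls inside no seizure interval."""
    hits = sum(any(s <= d <= e for d in detections) for s, e in seizures)
    false_det = sum(not any(s <= d <= e for s, e in seizures) for d in detections)
    sensitivity = hits / len(seizures)
    fd_per_24h = false_det * 24.0 / record_hours
    return sensitivity, fd_per_24h

# Hypothetical annotations: seizure (start, end) intervals and detector
# output timestamps, all in hours from recording start.
seizures = [(1.0, 1.05), (6.0, 6.1), (20.0, 20.2)]
detections = [1.02, 3.5, 6.05, 15.0]
sens, fd = detection_stats(seizures, detections, record_hours=24.0)
print(round(sens, 2), fd)
```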

  6. Preclinical Biokinetic Modelling of Tc-99m Radiopharmaceuticals Obtained from Semi-Automatic Image Processing.

    PubMed

    Cornejo-Aragón, Luz G; Santos-Cuevas, Clara L; Ocampo-García, Blanca E; Chairez-Oria, Isaac; Diaz-Nieto, Lorenza; García-Quiroz, Janice

    2017-01-01

    The aim of this study was to develop a semi-automatic image processing algorithm (AIPA) based on the simultaneous information provided by X-ray and radioisotopic images to determine the biokinetic models of Tc-99m radiopharmaceuticals from quantification of image radiation activity in murine models. These radioisotopic images were obtained by a CCD (charge-coupled device) camera coupled to an ultrathin phosphor screen in a preclinical multimodal imaging system (Xtreme, Bruker). The AIPA consisted of different image processing methods for background, scattering and attenuation correction in the activity quantification. A set of parametric identification algorithms was used to obtain the biokinetic models that characterize the interaction between different tissues and the radiopharmaceuticals considered in the study. The set of biokinetic models corresponded to the Tc-99m biodistribution observed in different ex vivo studies. This fact confirmed the contribution of the semi-automatic image processing technique developed in this study.

  7. Fuzzy logic and image processing techniques for the interpretation of seismic data

    NASA Astrophysics Data System (ADS)

    Orozco-del-Castillo, M. G.; Ortiz-Alemán, C.; Urrutia-Fucugauchi, J.; Rodríguez-Castellanos, A.

    2011-06-01

    Since interpretation of seismic data is usually a tedious and repetitive task, the ability to do so automatically or semi-automatically has become an important objective of recent research. We believe that the vagueness and uncertainty in the interpretation process makes fuzzy logic an appropriate tool to deal with seismic data. In this work we developed a semi-automated fuzzy inference system to detect the internal architecture of a mass transport complex (MTC) in seismic images. We propose that the observed characteristics of a MTC can be expressed as fuzzy if-then rules consisting of linguistic values associated with fuzzy membership functions. The construction of the fuzzy inference system and the various image processing techniques employed are presented. We conclude that this is a well-suited problem for fuzzy logic since the application of the proposed methodology yields a semi-automatically interpreted MTC which closely resembles the MTC from expert manual interpretation.

  8. A semi-analytic theory for the motion of a close-earth artificial satellite with drag

    NASA Technical Reports Server (NTRS)

    Liu, J. J. F.; Alford, R. L.

    1979-01-01

    A semi-analytic method is used to estimate the decay history/lifetime and to generate orbital ephemeris for close-earth satellites perturbed by atmospheric drag and earth oblateness due to the spherical harmonics J2, J3, and J4. The theory maintains efficiency through the application of the method of averaging and employs sufficient numerical emphasis to include a rather sophisticated atmospheric density model. The averaged drag effects with respect to mean anomaly are evaluated by a Gauss-Legendre quadrature while the averaged variational equations of motion are integrated numerically with automatic step size and error control.
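Averaging a drag perturbation over mean anomaly by Gauss-Legendre quadrature, as described above, amounts to evaluating (1/2π)∫₀^{2π} f(M) dM with fixed nodes and weights. A minimal sketch with a toy integrand (the real integrand would involve the density model and orbital elements):

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1].
NODES = [-0.9061798459, -0.5384693101, 0.0, 0.5384693101, 0.9061798459]
WEIGHTS = [0.2369268851, 0.4786286705, 0.5688888889, 0.4786286705, 0.2369268851]

def averaged(f):
    """Average of f over mean anomaly M in [0, 2*pi] via Gauss-Legendre
    quadrature: map nodes from [-1, 1] to [0, 2*pi], then divide by 2*pi."""
    half = math.pi  # half-width of the interval [0, 2*pi]
    total = sum(w * f(half * (x + 1.0)) for x, w in zip(NODES, WEIGHTS))
    return total * half / (2.0 * math.pi)

# Toy drag-like perturbation: a constant secular part plus a periodic part
# whose average over one revolution is zero.
f = lambda M: 3.0 + math.cos(M)
print(averaged(f))
```

The quadrature recovers the secular part (3.0 here) to high accuracy with only five integrand evaluations per revolution, which is the source of the method's efficiency.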

  9. Semi-Automatic Segmentation Software for Quantitative Clinical Brain Glioblastoma Evaluation

    PubMed Central

    Zhu, Y; Young, G; Xue, Z; Huang, R; You, H; Setayesh, K; Hatabu, H; Cao, F; Wong, S.T.

    2012-01-01

    Rationale and Objectives Quantitative measurement provides essential information about disease progression and treatment response in patients with Glioblastoma multiforme (GBM). The goal of this paper is to present and validate a software pipeline for semi-automatic GBM segmentation, called AFINITI (Assisted Follow-up in NeuroImaging of Therapeutic Intervention), using clinical data from GBM patients. Materials and Methods Our software adopts the current state-of-the-art tumor segmentation algorithms and combines them into one clinically usable pipeline. Both the advantages of the traditional voxel-based and the deformable shape-based segmentation are embedded into the software pipeline. The former provides an automatic tumor segmentation scheme based on T1- and T2-weighted MR brain data, and the latter refines the segmentation results with minimal manual input. Results Twenty-six clinical MR brain images of GBM patients were processed and compared with manual results. The results can be visualized using the embedded graphic user interface (GUI). Conclusion Validation results using clinical GBM data showed high correlation between the AFINITI results and manual annotation. Compared to the voxel-wise segmentation, AFINITI yielded more accurate results in segmenting the enhanced GBM from multimodality MRI data. The proposed pipeline could be used as additional information to interpret MR brain images in neuroradiology. PMID:22591720

  10. Monte Carlo calculations of energy deposition distributions of electrons below 20 keV in protein.

    PubMed

    Tan, Zhenyu; Liu, Wei

    2014-05-01

    The distributions of energy depositions of electrons in semi-infinite bulk protein and the radial dose distributions of point-isotropic mono-energetic electron sources [i.e., the so-called dose point kernel (DPK)] in protein have been systematically calculated in the energy range below 20 keV, based on Monte Carlo methods. The ranges of electrons have been evaluated by extrapolating the two calculated distributions, respectively, and the evaluated ranges of electrons are compared with the electron mean path length in protein which has been calculated by using electron inelastic cross sections described in this work in the continuous-slowing-down approximation. It has been found that for a given energy, the electron mean path length is smaller than the electron range evaluated from DPK, but larger than the electron range obtained from the energy deposition distributions of electrons in semi-infinite bulk protein. The energy dependences of the extrapolated electron ranges based on the two investigated distributions are given, respectively, in a power-law form. In addition, the DPK in protein has also been compared with that in liquid water. An evident difference between the two DPKs is observed. The calculations presented in this work may be useful in studies of radiation effects on proteins.
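Expressing a range-energy relation "in a power-law form" R = a·E^b is typically done by least squares in log-log space. The synthetic data below follow an exactly power-law curve with made-up coefficients, purely to show the fitting step; these are not the coefficients reported in the paper.

```python
import math

def power_law_fit(energies, ranges):
    """Least-squares fit of range = a * E**b via linear regression on logs."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(r) for r in ranges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data generated from R = 0.01 * E**1.7 (illustrative only).
E = [1.0, 2.0, 5.0, 10.0, 20.0]
R = [0.01 * e ** 1.7 for e in E]
a, b = power_law_fit(E, R)
print(round(a, 4), round(b, 4))
```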

  11. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our aim was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors, that measure feature recovery, and that allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
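The systematic errors named above, oversegmentation (one true object split across segments) and undersegmentation (several true objects merged into one segment), can be counted from region overlaps. This is a generic sketch with regions as voxel-index sets, not the statistical/information-theoretic measures the authors developed.

```python
def seg_error_counts(ground_truth, segmentation):
    """Count split (oversegmented) ground-truth objects and merged
    segments by how many counterparts each region overlaps."""
    def overlap_counts(a_regions, b_regions):
        return [sum(1 for b in b_regions if a & b) for a in a_regions]
    oversplit = sum(1 for v in overlap_counts(ground_truth, segmentation) if v > 1)
    merged = sum(1 for v in overlap_counts(segmentation, ground_truth) if v > 1)
    return oversplit, merged

# Toy regions as sets of voxel indices: the first GT object is split in
# two, and the last segment merges two GT objects.
gt = [{1, 2, 3, 4}, {10, 11}, {20, 21}]
seg = [{1, 2}, {3, 4}, {10, 11, 20, 21}]
counts = seg_error_counts(gt, seg)
print(counts)
```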

  12. An automatic granular structure generation and finite element analysis of heterogeneous semi-solid materials

    NASA Astrophysics Data System (ADS)

    Sharifi, Hamid; Larouche, Daniel

    2015-09-01

    The quality of cast metal products depends on the capacity of the semi-solid metal to sustain the stresses generated during the casting. Predicting the evolution of these stresses with accuracy in the solidification interval should be highly helpful to avoid the formation of defects like hot tearing. This task is however very difficult because of the heterogeneous nature of the material. In this paper, we propose to evaluate the mechanical behaviour of a metal during solidification using a mesh generation technique of the heterogeneous semi-solid material for a finite element analysis at the microscopic level. This task is done on a two-dimensional (2D) domain in which the granular structure of the solid phase is generated surrounded by an intergranular and interdendritic liquid phase. Some basic solid grains are first constructed and projected in the 2D domain with random orientations and scale factors. Depending on their orientation, the basic grains are combined to produce larger grains or separated by a liquid film. Different basic grain shapes can produce different granular structures of the mushy zone. As a result, using this automatic grain generation procedure, we can investigate the effect of grain shapes and sizes on the thermo-mechanical behaviour of the semi-solid material. The granular models are automatically converted to finite element meshes. The solid grains and the liquid phase are meshed properly using quadrilateral elements. This method has been used to simulate the microstructure of a binary aluminium-copper alloy (Al-5.8 wt% Cu) when the fraction solid is 0.92. Using the finite element method and the Mie-Grüneisen equation of state for the liquid phase, the transient mechanical behaviour of the mushy zone under tensile loading has been investigated. The stress distribution and the bridges, which are formed during the tensile loading, have been detected.

  13. Information retrieval and terminology extraction in online resources for patients with diabetes.

    PubMed

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means for information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e., patients with diabetes, need access to various online resources (in a foreign and/or native language) searching for information on self-education of basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications, physical exercises and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or in document indexing. Specific terminology lists represent an intermediate step between free text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a professional medical person, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on evaluation of automatically extracted terminology candidates in English and Croatian texts and on comparison of statistical and hybrid extraction methods in English text. 
Evaluation of automatic and semi-automatic terminology extraction methods is performed by recall, precision and f-measure.
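Evaluating extracted terminology candidates against a reference list by recall, precision and F-measure reduces to set intersection. The candidate and reference terms below are invented examples:

```python
def prf(extracted, reference):
    """Precision, recall and F1 of extracted terms against a reference list."""
    extracted, reference = set(extracted), set(reference)
    tp = len(extracted & reference)                     # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f1

# Hypothetical candidates vs a reference list compiled by a domain expert.
candidates = ["insulin pump", "blood glucose", "diet", "exercise bike"]
reference = ["insulin pump", "blood glucose", "diet", "hypoglycemia"]
p, r, f = prf(candidates, reference)
print(p, r, round(f, 2))
```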

  14. An Information Extraction Framework for Cohort Identification Using Electronic Health Records

    PubMed Central

    Liu, Hongfang; Bielinski, Suzette J.; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B.; Jonnalagadda, Siddhartha R.; Ravikumar, K.E.; Wu, Stephen T.; Kullo, Iftikhar J.; Chute, Christopher G

    Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report an IE framework for cohort identification using EHRs that is a knowledge-driven framework developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework. PMID:24303255

  15. A Benchmark of Vehicle Maintenance Training Between the U.S. Air Force and a Civilian Industry Leader

    DTIC Science & Technology

    1992-09-01

    ...was chosen to identify tasks performed by recognized competent automotive service personnel (entry-level personnel were not included in the survey)... Diagnose the cause of poor, intermittent, or no electric door and hatch/trunk lock operation. 10. Repair or replace switches, relays, actuators ... Semi-Automatic Temperature Controls: Check operation of automatic and semi-automatic heating, ventilation and air-conditioning (HVAC) controls

  16. Implementation of a microcontroller-based semi-automatic coagulator.

    PubMed

    Chan, K; Kirumira, A; Elkateeb, A

    2001-01-01

    The coagulator is an instrument used in hospitals to detect clot formation as a function of time. Generally, these coagulators are very expensive and therefore not affordable for doctors' offices and small clinics. The objective of this project is to design and implement a low cost semi-automatic coagulator (SAC) prototype. The SAC is capable of assaying up to 12 samples and can perform the following tests: prothrombin time (PT), activated partial thromboplastin time (APTT), and PT/APTT combination. The prototype has been tested successfully.

  17. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    PubMed

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Diffusion kurtosis imaging can efficiently assess the glioma grade and cellular proliferation.

    PubMed

    Jiang, Rifeng; Jiang, Jingjing; Zhao, Lingyun; Zhang, Jiaxuan; Zhang, Shun; Yao, Yihao; Yang, Shiqi; Shi, Jingjing; Shen, Nanxi; Su, Changliang; Zhang, Ju; Zhu, Wenzhen

    2015-12-08

    Conventional diffusion imaging techniques are not sufficiently accurate for evaluating glioma grade and cellular proliferation, which are critical for guiding glioma treatment. Diffusion kurtosis imaging (DKI), an advanced non-Gaussian diffusion imaging technique, has shown potential in grading glioma; however, its applications in this tumor have not been fully elucidated. In this study, DKI and diffusion weighted imaging (DWI) were performed on 74 consecutive patients with histopathologically confirmed glioma. The kurtosis and conventional diffusion metric values of the tumor were semi-automatically obtained. The relationships of these metrics with the glioma grade and Ki-67 expression were evaluated. The diagnostic efficiency of these metrics in grading was further compared. It was demonstrated that compared with the conventional diffusion metrics, the kurtosis metrics were more promising imaging markers in distinguishing high-grade from low-grade gliomas and distinguishing among grade II, III and IV gliomas; the kurtosis metrics also showed great potential in the prediction of Ki-67 expression. To the best of our knowledge, we are the first to reveal the ability of DKI to assess the cellular proliferation of gliomas, and to employ a semi-automatic method for the accurate measurement of gliomas. These results could have a significant impact on the diagnosis and subsequent therapy of glioma.

  19. An Evolving Ecosystem for Natural Language Processing in Department of Veterans Affairs.

    PubMed

    Garvin, Jennifer H; Kalsy, Megha; Brandt, Cynthia; Luther, Stephen L; Divita, Guy; Coronado, Gregory; Redd, Doug; Christensen, Carrie; Hill, Brent; Kelly, Natalie; Treitler, Qing Zeng

    2017-02-01

    In an ideal clinical Natural Language Processing (NLP) ecosystem, researchers and developers would be able to collaborate with others, undertake validation of NLP systems, components, and related resources, and disseminate them. We captured requirements and formative evaluation data from the Veterans Affairs (VA) Clinical NLP Ecosystem stakeholders using semi-structured interviews and meeting discussions. We developed a coding rubric to code interviews. We assessed inter-coder reliability using percent agreement and the kappa statistic. We undertook 15 interviews and held two workshop discussions. The main areas of requirements related to design and functionality, resources, and information. Stakeholders also confirmed the vision of the second generation of the Ecosystem, and recommendations included adding mechanisms to better understand terms, measuring collaboration to demonstrate value, and datasets/tools to navigate spelling errors with consumer language, among others. Stakeholders also recommended the capability to communicate with developers working on the next version of the VA electronic health record (VistA Evolution), to provide a mechanism to automatically monitor downloads of tools, and to automatically provide a summary of the downloads to Ecosystem contributors and funders. After three rounds of coding and discussion, we determined the percent agreement of two coders to be 97.2% and the kappa to be 0.7851. The vision of the VA Clinical NLP Ecosystem met stakeholder needs. Interviews and discussion provided key requirements that inform the design of the VA Clinical NLP Ecosystem.
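Inter-coder reliability via percent agreement and Cohen's kappa, as used above, can be computed directly from the two coders' label sequences. The toy labels below are invented; kappa corrects observed agreement for the agreement expected by chance from each coder's label distribution.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels on the same items."""
    n = len(coder_a)
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # observed agreement
    ca, cb = Counter(coder_a), Counter(coder_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical coding of six interview segments into requirement areas.
a = ["design", "resource", "design", "info", "design", "resource"]
b = ["design", "resource", "design", "design", "design", "resource"]
k = cohens_kappa(a, b)
print(round(k, 3))
```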

  20. Exploring the crowded central region of ten Galactic globular clusters using EMCCDs. Variable star searches and new discoveries

    NASA Astrophysics Data System (ADS)

    Figuera Jaimes, R.; Bramich, D. M.; Skottfelt, J.; Kains, N.; Jørgensen, U. G.; Horne, K.; Dominik, M.; Alsubai, K. A.; Bozza, V.; Calchi Novati, S.; Ciceri, S.; D'Ago, G.; Galianni, P.; Gu, S.-H.; Harpsøe, K. B. W.; Haugbølle, T.; Hinse, T. C.; Hundertmark, M.; Juncher, D.; Korhonen, H.; Mancini, L.; Popovas, A.; Rabus, M.; Rahvar, S.; Scarpetta, G.; Schmidt, R. W.; Snodgrass, C.; Southworth, J.; Starkey, D.; Street, R. A.; Surdej, J.; Wang, X.-B.; Wertz, O.

    2016-04-01

    Aims: We aim to obtain time-series photometry of the very crowded central regions of Galactic globular clusters; to obtain better angular resolution than has previously been achieved with conventional CCDs on ground-based telescopes; and to complete, or improve, the census of the variable star population in those stellar systems. Methods: Images were taken using the Danish 1.54-m Telescope at the ESO observatory at La Silla in Chile. The telescope was equipped with an electron-multiplying CCD, and the short-exposure-time images obtained (ten images per second) were stacked using the shift-and-add technique to produce the normal-exposure-time images (minutes). Photometry was performed via difference image analysis. Automatic detection of variable stars in the field was attempted. Results: The light curves of 12 541 stars in the cores of ten globular clusters were statistically analysed to automatically extract the variable stars. We obtained light curves for 31 previously known variable stars (3 long-period irregular, 2 semi-regular, 20 RR Lyrae, 1 SX Phoenicis, 3 cataclysmic variables, 1 W Ursae Majoris-type and 1 unclassified) and we discovered 30 new variables (16 long-period irregular, 7 semi-regular, 4 RR Lyrae, 1 SX Phoenicis and 2 unclassified). Fluxes and photometric measurements for these stars are available in electronic form through the Strasbourg astronomical Data Center. Based on data collected by the MiNDSTEp team with the Danish 1.54m telescope at ESO's La Silla observatory in Chile. Full Table 1 is only available at CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A128
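    The shift-and-add stacking step described above can be illustrated with a toy sketch: each short exposure is shifted to undo the image motion measured for that frame (e.g. from the centroid of a bright reference star), and the registered frames are summed. This sketch uses integer-pixel shifts only; a real EMCCD pipeline works at sub-pixel precision:

```python
def shift_and_add(frames, offsets):
    """Stack short-exposure frames after undoing per-frame image motion.

    frames  : list of 2D pixel arrays (lists of rows), one per exposure
    offsets : list of (dy, dx) integer shifts estimated per frame,
              e.g. from the centroid of a bright reference star
    """
    h, w = len(frames[0]), len(frames[0][0])
    stack = [[0.0] * w for _ in range(h)]
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx  # source pixel in the shifted frame
                if 0 <= sy < h and 0 <= sx < w:
                    stack[y][x] += frame[sy][sx]
    return stack

# Two 3x3 frames whose single bright pixel wanders by one pixel:
f1 = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
f2 = [[0, 0, 0], [0, 0, 9], [0, 0, 0]]
stacked = shift_and_add([f1, f2], [(0, 0), (0, 1)])
print(stacked[1][1])  # 18.0 -- the flux re-registers on the same pixel
```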

  1. Preparing Electronic Clinical Data for Quality Improvement and Comparative Effectiveness Research: The SCOAP CERTAIN Automation and Validation Project

    PubMed Central

    Devine, Emily Beth; Capurro, Daniel; van Eaton, Erik; Alfonso-Cristancho, Rafael; Devlin, Allison; Yanez, N. David; Yetisgen-Yildiz, Meliha; Flum, David R.; Tarczy-Hornoch, Peter

    2013-01-01

    Background: The field of clinical research informatics includes creation of clinical data repositories (CDRs) used to conduct quality improvement (QI) activities and comparative effectiveness research (CER). Ideally, CDR data are accurately and directly abstracted from disparate electronic health records (EHRs), across diverse health-systems. Objective: Investigators from Washington State’s Surgical Care Outcomes and Assessment Program (SCOAP) Comparative Effectiveness Research Translation Network (CERTAIN) are creating such a CDR. This manuscript describes the automation and validation methods used to create this digital infrastructure. Methods: SCOAP is a QI benchmarking initiative. Data are manually abstracted from EHRs and entered into a data management system. CERTAIN investigators are now deploying Caradigm’s Amalga™ tool to facilitate automated abstraction of data from multiple, disparate EHRs. Concordance is calculated to compare automatically abstracted data with manually abstracted data. Performance measures are calculated between Amalga and each parent EHR. Validation takes place in repeated loops, with improvements made over time. When automated abstraction reaches the current benchmark for abstraction accuracy - 95% - it will ‘go-live’ at each site. Progress to Date: A technical analysis was completed at 14 sites. Five sites are contributing; the remaining sites prioritized meeting Meaningful Use criteria. Participating sites are contributing 15–18 unique data feeds, totaling 13 surgical registry use cases. Common feeds are registration, laboratory, transcription/dictation, radiology, and medications. Approximately 50% of 1,320 designated data elements are being automatically abstracted—25% from structured data; 25% from text mining. Conclusion: In semi-automating data abstraction and conducting a rigorous validation, CERTAIN investigators will semi-automate data collection to conduct QI and CER, while advancing the Learning Healthcare System. PMID:25848565
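    The concordance check at the heart of this validation loop can be sketched as follows. Field names and values are invented; the 95% threshold is the go-live benchmark stated in the abstract:

```python
GO_LIVE_THRESHOLD = 0.95  # abstraction-accuracy benchmark from the study

def concordance(automated, manual):
    """Fraction of data elements for which the automatically abstracted
    value matches the manually abstracted reference value."""
    matches = sum(automated.get(k) == v for k, v in manual.items())
    return matches / len(manual)

# Hypothetical record: one of three elements disagrees with manual abstraction.
manual_vals    = {"age": 54, "asa_class": 2, "procedure": "appendectomy"}
automated_vals = {"age": 54, "asa_class": 3, "procedure": "appendectomy"}
c = concordance(automated_vals, manual_vals)
print(round(c, 2), "go-live" if c >= GO_LIVE_THRESHOLD else "keep validating")
# -> 0.67 keep validating
```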

  2. A semi-automatic computer-aided method for surgical template design

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-01

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  3. A semi-automatic computer-aided method for surgical template design

    PubMed Central

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-01-01

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method. PMID:26843434

  4. A semi-automatic computer-aided method for surgical template design.

    PubMed

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-04

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  5. Automatic Summarization of MEDLINE Citations for Evidence–Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
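    Mean average precision, one of the two performance metrics above, rewards ranking relevant interventions near the top of each summarized list. A self-contained sketch (drug names and relevance sets are invented):

```python
def average_precision(ranked, relevant):
    """AP of one ranked result list against a set of relevant items."""
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank  # precision at each relevant rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over (ranked list, relevant set) pairs, one per disease/topic."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [
    (["metformin", "aspirin", "insulin"], {"metformin", "insulin"}),
    (["statin", "niacin"], {"niacin"}),
]
print(round(mean_average_precision(runs), 3))  # 0.667
```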

  6. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

    The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset, and compared with a ground-truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to at least one of the observers. Furthermore, the semi-automatic approach, which includes the manual modification of the artery/vein classification if needed, allows the error to be reduced significantly, to a level below the human error.
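    At its core, the AVR is a ratio of summary arteriolar to venular calibers measured inside the region of interest. The sketch below uses plain means purely for clarity; published pipelines typically summarize the widest vessels of each type with revised Parr-Hubbard (Knudtson) formulas into CRAE and CRVE, so this is an illustrative simplification, not the paper's exact formula:

```python
def avr(arteriolar_calibers, venular_calibers):
    """Arteriolar-to-Venular Ratio from vessel calibers (pixels or microns)
    measured in the region of interest. Simplified: plain means stand in
    for the CRAE/CRVE summary formulas used in practice."""
    crae = sum(arteriolar_calibers) / len(arteriolar_calibers)
    crve = sum(venular_calibers) / len(venular_calibers)
    return crae / crve

# Hypothetical calibers; healthy eyes usually show AVR below 1 because
# venules are wider than arterioles.
print(round(avr([12.0, 11.0, 10.0], [16.0, 15.0, 14.0]), 3))  # 0.733
```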

  7. 21 CFR 211.68 - Automatic, mechanical, and electronic equipment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 4 2011-04-01 2011-04-01 false Automatic, mechanical, and electronic equipment. 211.68 Section 211.68 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... Equipment § 211.68 Automatic, mechanical, and electronic equipment. (a) Automatic, mechanical, or electronic...

  8. 21 CFR 211.68 - Automatic, mechanical, and electronic equipment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 4 2012-04-01 2012-04-01 false Automatic, mechanical, and electronic equipment. 211.68 Section 211.68 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... Equipment § 211.68 Automatic, mechanical, and electronic equipment. (a) Automatic, mechanical, or electronic...

  9. Assessment of local pulse wave velocity distribution in mice using k-t BLAST PC-CMR with semi-automatic area segmentation.

    PubMed

    Herold, Volker; Herz, Stefan; Winter, Patrick; Gutjahr, Fabian Tobias; Andelovic, Kristina; Bauer, Wolfgang Rudolf; Jakob, Peter Michael

    2017-10-16

    Local aortic pulse wave velocity (PWV) is a measure of vascular stiffness and has a predictive value for cardiovascular events. Ultra-high-field CMR scanners allow the quantification of local PWV in mice; however, these systems are as yet unable to monitor the distribution of local elasticities. In the present study we provide a new accelerated method to quantify local aortic PWV in mice with phase-contrast cardiovascular magnetic resonance imaging (PC-CMR) at 17.6 T. Based on a k-t BLAST (Broad-use Linear Acquisition Speed-up Technique) undersampling scheme, total measurement time could be reduced by a factor of 6. The fast data acquisition enables quantification of the local PWV at several locations along the aortic blood vessel based on the evaluation of local temporal changes in blood flow and vessel cross-sectional area. To speed up post-processing and to eliminate operator bias, we introduce a new semi-automatic segmentation algorithm to quantify cross-sectional areas of the aortic vessel. The new methods were applied in 10 eight-month-old mice (4 C57BL/6J mice and 6 ApoE(-/-) mice) at 12 adjacent locations along the abdominal aorta. Accelerated data acquisition and semi-automatic post-processing delivered reliable measures for the local PWV, similar to those obtained with full data sampling and manual segmentation. No statistically significant differences of the mean values could be detected for the different measurement approaches. Mean PWV values were elevated for the ApoE(-/-) group compared to the C57BL/6J group (3.5 ± 0.7 m/s vs. 2.2 ± 0.4 m/s, p < 0.01). A more heterogeneous PWV distribution was observed in the ApoE(-/-) animals compared to the C57BL/6J mice, representing the local character of lesion development in atherosclerosis. In the present work, we showed that k-t BLAST PC-MRI enables the measurement of the local PWV distribution in the mouse aorta.
The semi-automatic segmentation method based on PC-CMR data allowed rapid determination of local PWV. The findings of this study demonstrate the ability of the proposed methods to non-invasively quantify the spatial variations in local PWV along the aorta of ApoE (-/-) -mice as a relevant model of atherosclerosis.
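    One standard way to turn the measured flow and cross-sectional-area curves into a local PWV is the QA method, which fits the slope dQ/dA over the reflection-free early-systolic frames; whether this exact variant matches the paper's evaluation is an assumption. A minimal least-squares sketch:

```python
def pwv_qa(flow, area, early_systole):
    """Local PWV via the QA method: during the reflection-free
    early-systolic window, flow Q and lumen area A vary linearly
    and PWV = dQ/dA. Returns the least-squares slope of Q vs. A
    over the given frame indices (units follow the inputs)."""
    pts = [(area[i], flow[i]) for i in early_systole]
    n = len(pts)
    mean_a = sum(a for a, _ in pts) / n
    mean_q = sum(q for _, q in pts) / n
    num = sum((a - mean_a) * (q - mean_q) for a, q in pts)
    den = sum((a - mean_a) ** 2 for a, _ in pts)
    return num / den

# Synthetic early-systolic samples lying on a line of slope 3.5:
area = [10.0, 10.2, 10.4, 10.6]  # mm^2
flow = [5.0, 5.7, 6.4, 7.1]      # mm^3/ms, so the slope is in mm/ms = m/s
print(round(pwv_qa(flow, area, range(4)), 3))  # 3.5
```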

  10. An automatic bolus injector for use in radiotracer studies of blood flow: design and evaluation.

    PubMed

    Snyder, R E; Overton, T R; Boisvert, D P; Petruk, K C

    1976-12-01

    An electromechanical device is described which automatically injects the radiotracer bolus used in the measurement of cerebral blood flow. It consists of two electronically controlled, solenoid operated syringes, one containing the radiotracer solution and the other heparinized saline. Results are presented which show that use of the automatic bolus injector in place of hand injection leads to an improvement in the precision of measured flow values. Additional advantages of the device are discussed.

  11. A method for semi-automatic segmentation and evaluation of intracranial aneurysms in bone-subtraction computed tomography angiography (BSCTA) images

    NASA Astrophysics Data System (ADS)

    Krämer, Susanne; Ditt, Hendrik; Biermann, Christina; Lell, Michael; Keller, Jörg

    2009-02-01

    The rupture of an intracranial aneurysm has dramatic consequences for the patient. Hence early detection of unruptured aneurysms is of paramount importance. Bone-subtraction computed tomography angiography (BSCTA) has proven to be a powerful tool for the detection of aneurysms, in particular those located close to the skull base. Most aneurysms, though, are chance findings in BSCTA scans performed for other reasons. Therefore it is highly desirable to have techniques operating on standard BSCTA scans available which assist radiologists and surgeons in the evaluation of intracranial aneurysms. In this paper we present a semi-automatic method for segmentation and assessment of intracranial aneurysms. The only user-interaction required is placement of a marker into the vascular malformation. Termination ensues automatically as soon as the segmentation reaches the vessels which feed the aneurysm. The algorithm is derived from an adaptive region-growing which employs a growth gradient as the criterion for termination. Based on this segmentation, values of high clinical and prognostic significance, such as volume, minimum and maximum diameter as well as surface of the aneurysm, are calculated automatically. The segmentation itself as well as the calculated diameters are visualised. Further segmentation of the adjoining vessels provides the means for visualisation of the topographical situation of vascular structures associated with the aneurysm. A stereolithographic mesh (STL) can be derived from the surface of the segmented volume. STL together with parameters like the resiliency of vascular wall tissue provide for an accurate wall model of the aneurysm and its associated vascular structures. Consequently the haemodynamic situation in the aneurysm itself and close to it can be assessed by flow modelling. Significant values of haemodynamics such as pressure onto the vascular wall, wall shear stress or pathlines of the blood flow can be computed.
Additionally a dynamic flow model can be generated. Thus the presented method supports a better understanding of the clinical situation and assists the evaluation of therapeutic options. Furthermore it contributes to future research addressing intervention planning and prognostic assessment of intracranial aneurysms.
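    The growth-gradient termination idea, expanding from the user's marker until the segmentation suddenly "leaks" into the feeding vessels, can be sketched as a threshold-stepping region grower. The leak test below (ratio of successive region sizes) is a simplified stand-in for the paper's criterion, and the image and values are a toy example:

```python
from collections import deque

def region_size(img, seed, thresh):
    """Count pixels 4-connected to the seed with intensity >= thresh."""
    h, w = len(img), len(img[0])
    seen, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and img[ny][nx] >= thresh):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return len(seen)

def grow_until_leak(img, seed, thresholds, max_gradient=2.0):
    """Relax the threshold stepwise; stop when the region size jumps,
    i.e. when the growth leaks into a feeding vessel."""
    prev = region_size(img, seed, thresholds[0])
    best = thresholds[0]
    for t in thresholds[1:]:
        size = region_size(img, seed, t)
        if size / prev > max_gradient:  # growth gradient exceeded
            break
        best, prev = t, size
    return best

# Bright 'aneurysm' (9s) joined to a 'vessel' (7s) by a dim neck (5):
img = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 7, 7],
    [0, 9, 9, 5, 7, 7, 7],
    [0, 9, 9, 0, 0, 7, 7],
    [0, 0, 0, 0, 0, 0, 0],
]
print(grow_until_leak(img, (2, 1), [9, 8, 7, 6, 5]))  # 6: stops before the leak
```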

  12. Asteroid (21) Lutetia: Semi-Automatic Impact Craters Detection and Classification

    NASA Astrophysics Data System (ADS)

    Jenerowicz, M.; Banaszkiewicz, M.

    2018-05-01

    The need to develop an automated method, independent of lighting and surface conditions, for the identification and measurement of impact craters, as well as the creation of a reliable and efficient tool, has become a justification of our studies. This paper presents a methodology for the detection of impact craters based on their spectral and spatial features. The analysis aims at evaluating the algorithm's capability to determine the spatial parameters of impact craters presented in a time series. In this way, time-consuming visual interpretation of images would be reduced to the special cases. The developed algorithm is tested on a set of OSIRIS high resolution images of the asteroid Lutetia surface, which is characterized by varied landforms and an abundance of craters created by collisions with smaller bodies of the solar system. The proposed methodology consists of three main steps: characterisation of objects of interest on a limited set of data, semi-automatic extraction of impact craters performed for the total set of data by applying Mathematical Morphology image processing (Serra, 1988, Soille, 2003), and finally, creating libraries of spatial and spectral parameters for extracted impact craters, i.e. the coordinates of the crater center, semi-major and semi-minor axis, shadow length and cross-section. The overall accuracy of the proposed method is 98 %, the Kappa coefficient is 0.84, the correlation coefficient is ∼ 0.80, the omission error 24.11 %, the commission error 3.45 %. The obtained results show that methods based on Mathematical Morphology operators are effective also with a limited number of data and low-contrast images.
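    The omission and commission errors quoted above follow directly from comparing the set of detected craters with a reference catalogue; the crater IDs below are invented for illustration:

```python
def detection_errors(detected, reference):
    """Omission error (missed reference craters) and commission error
    (false detections) from ID sets of detected and reference craters."""
    tp = len(detected & reference)  # true positives: craters in both sets
    omission = 1 - tp / len(reference)
    commission = 1 - tp / len(detected)
    return omission, commission

detected  = {1, 2, 3, 4, 5, 6, 7, 8, 11}        # 8 real craters + 1 false alarm
reference = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}     # ground-truth catalogue
om, co = detection_errors(detected, reference)
print(round(om, 3), round(co, 3))  # 0.2 0.111
```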

  13. Intrathoracic airway measurement: ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reinhardt, Joseph M.; Raab, Stephen A.; D'Souza, Neil D.; Hoffman, Eric A.

    1997-05-01

    High-resolution x-ray CT (HRCT) provides detailed images of the lungs and bronchial tree. HRCT-based imaging and quantitation of peripheral bronchial airway geometry provides a valuable tool for assessing regional airway physiology. Such measurements have been used to address physiological questions related to the mechanics of airway collapse in sleep apnea, the measurement of airway response to broncho-constriction agents, and to evaluate and track the progression of disease affecting the airways, such as asthma and cystic fibrosis. Significant attention has been paid to the measurement of extra- and intra-thoracic airways in 2D sections from volumetric x-ray CT. A variety of manual and semi-automatic techniques have been proposed for airway geometry measurement, including the use of standardized display window and level settings for caliper measurements, methods based on manual or semi-automatic border tracing, and more objective, quantitative approaches such as the use of the 'half-max' criteria. A recently proposed measurement technique uses a model-based deconvolution to estimate the location of the inner and outer airway walls. Validation using a plexiglass phantom indicates that the model-based method is more accurate than the half-max approach for thin-walled structures. In vivo validation of these airway measurement techniques is difficult because of the problems in identifying a reliable measurement 'gold standard.' In this paper we report on ex vivo validation of the half-max and model-based methods using an excised pig lung. The lung is sliced into thin sections of tissue and scanned using an electron beam CT scanner. Airways of interest are measured from the CT images, and also measured using a microscope and micrometer to obtain a measurement gold standard. The results show no significant difference between the model-based measurements and the gold standard, while the half-max estimates exhibited a measurement bias and were significantly different from the gold standard.

  14. Closed-Loop Process Control for Electron Beam Freeform Fabrication and Deposition Processes

    NASA Technical Reports Server (NTRS)

    Taminger, Karen M. (Inventor); Hofmeister, William H. (Inventor); Martin, Richard E. (Inventor); Hafley, Robert A. (Inventor)

    2013-01-01

    A closed-loop control method for an electron beam freeform fabrication (EBF³) process includes detecting a feature of interest during the process using a sensor(s), continuously evaluating the feature of interest to determine, in real time, a change occurring therein, and automatically modifying control parameters to control the EBF³ process. An apparatus provides closed-loop control method of the process, and includes an electron gun for generating an electron beam, a wire feeder for feeding a wire toward a substrate, wherein the wire is melted and progressively deposited in layers onto the substrate, a sensor(s), and a host machine. The sensor(s) measure the feature of interest during the process, and the host machine continuously evaluates the feature of interest to determine, in real time, a change occurring therein. The host machine automatically modifies control parameters to the EBF³ apparatus to control the EBF³ process in a closed-loop manner.

  15. Design for Manufacturing and Assembly in Apparel. Part 1. Handbook

    DTIC Science & Technology

    1994-02-01

    reduced and the inverted pleat was eliminated to take advantage of the automatic seam stitcher. The shape and size of the side back section seam...coin pocket. The size and shape of the pocket would be designed to best utilize the equipment. An automatic dart stitcher may be utilized to stitch the...with stacker Semi-automatic serging units with stacker Automatic seaming units/profile stitchers Programmable seaming units for various operations

  16. Multifractal-based nuclei segmentation in FISH images.

    PubMed

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Hölder exponents, in one-by-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Hölder exponents by applying a predefined hard threshold; then the user evaluates the result and is able to refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei, and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. Testing results show that the new method has advantages compared to previously reported methods.
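    The semi-automatic loop described above, an automatic hard threshold on the Hölder-exponent matrix followed by user refinement, and the final HER2 scoring ratio can be sketched as follows. The threshold value, its direction (nuclei at or below the threshold), and the dot counts are all illustrative assumptions:

```python
def segment_nuclei(holder, thresh):
    """Binary nuclei mask from the matrix of Hölder exponents; pixels at
    or below the threshold are taken as nuclei (direction is an assumption)."""
    return [[1 if h <= thresh else 0 for h in row] for row in holder]

def her2_score(red_dots, green_dots):
    """HER2/CEP17 ratio from red and green dot counts inside nuclei."""
    return sum(red_dots) / sum(green_dots)

holder = [[0.3, 0.9], [0.4, 1.1]]        # toy 2x2 Hölder-exponent matrix
mask = segment_nuclei(holder, 0.5)       # initial automatic hard threshold
print(mask)                              # [[1, 0], [1, 0]]
# If the pathologist judges the mask too tight or too loose, re-run with a
# new threshold; then count dots per segmented nucleus and form the ratio:
print(her2_score([4, 5, 3], [2, 2, 2]))  # 2.0
```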

  17. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An imbedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  18. Application of semi-active RFID power meter in automatic verification pipeline and intelligent storage system

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng

    2018-03-01

    In this paper, semi-active RFID watt-hour meters are applied to automatic verification pipelines and intelligent warehouse management. Through the transmission, test, auxiliary and monitoring systems, the approach realizes scheduling, binding, control and data exchange for watt-hour meters, providing more accurate positioning, more efficient management and faster data updates, with all information available at a glance. It effectively improves the quality, efficiency and automation of verification, and enables more efficient data and warehouse management.

  19. Indirect blood pressure and heart rate measured quickly without observer bias using a semi-automatic machine (auto-manometer)--response to isometric exercise in normal healthy males and its modification by beta-adrenoceptor blockade.

    PubMed Central

    Nyberg, G

    1977-01-01

    1 In a double-blind crossover study, six volunteers performed sustained handgrip at 50% of maximal voluntary contraction before and 90 min following oral administration of 25 and 100 mg metoprolol tartrate, a beta1-selective adrenoceptor blocking agent. Blood pressure and heart rate were measured with the Auto-Manometer, an electronic semi-automatic device based on the principles of the London School of Hygiene and Tropical Medicine sphygmomanometer. It eliminates observer and digit bias completely, and also records heart rate at the same time as blood pressure is recorded. 2 Resting heart rate fell 15% after 25 mg, 21% after 100 mg and was unchanged after placebo. Systolic blood pressure fell 6% on both doses and was unchanged on placebo. Diastolic pressure did not change with any of the doses. 3 At 1 min of handgrip, heart rate was significantly lower after 25 and 100 mg than before drug or after placebo. There was no difference between the blood pressure levels attained before or after any of the dose levels. The rise of heart rate tended to be somewhat dampened after 100 mg only. The rise in blood pressure was unchanged after any dose compared with before. PMID:901695

  20. Public knowledge of how to use an automatic external defibrillator in out-of-hospital cardiac arrest in Hong Kong.

    PubMed

    Fan, K L; Leung, L P; Poon, H T; Chiu, H Y; Liu, H L; Tang, W Y

    2016-12-01

    The survival rate of out-of-hospital cardiac arrest in Hong Kong is low. A long delay between collapse and defibrillation is a contributing factor. Public access to defibrillation may shorten this delay. It is unknown, however, whether Hong Kong's public is willing or able to use an automatic external defibrillator. This study aimed to evaluate public knowledge of how to use an automatic external defibrillator in out-of-hospital cardiac arrest. A face-to-face semi-structured questionnaire survey of the public was conducted in six locations with a high pedestrian flow in Hong Kong. In this study, 401 members of the public were interviewed. Most had no training in first aid (65.8%) or in use of an automatic external defibrillator (85.3%). Nearly all (96.5%) would call for help for a victim of out-of-hospital cardiac arrest but only 18.0% would use an automatic external defibrillator. Public knowledge of automatic external defibrillator use was low: 77.6% did not know the location of an automatic external defibrillator in the vicinity of their home or workplace. People who had ever been trained in both first aid and use of an automatic external defibrillator were more likely to respond to and help a victim of cardiac arrest, and to use an automatic external defibrillator. Public knowledge of automatic external defibrillator use is low in Hong Kong. A combination of training in first aid and in the use of an automatic external defibrillator is better than either one alone.

  1. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user-friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  2. Isfahan MISP Dataset.

    PubMed

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database managing. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacy (citation and fee). Commenting was also available for all datasets, and an automatic sitemap and semi-automatic SEO indexing have been set up for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  3. 10 CFR Appendix J to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...'s true energy consumption characteristics as to provide materially inaccurate comparative data... clothes washers should be totally representative of the design, construction, and control system that will...

  4. 10 CFR Appendix J to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...'s true energy consumption characteristics as to provide materially inaccurate comparative data... clothes washers should be totally representative of the design, construction, and control system that will...

  5. 10 CFR Appendix J to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...'s true energy consumption characteristics as to provide materially inaccurate comparative data... clothes washers should be totally representative of the design, construction, and control system that will...

  6. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    PubMed Central

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346
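    One information-theoretic technique commonly used for this kind of segmentation comparison is the variation of information; the record does not name the exact measures used, so the sketch below (plain NumPy, hypothetical function names) is only illustrative:

```python
import numpy as np

def _entropy(p):
    # Shannon entropy (bits) of a probability vector, ignoring zero cells.
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def variation_of_information(seg_a, seg_b):
    """Variation of information (bits) between two label images.

    VI = H(A|B) + H(B|A); 0.0 means the two partitions agree exactly
    (up to relabeling), larger values mean more disagreement.
    """
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    # Joint histogram of (label in A, label in B) -> joint distribution.
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    joint = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(joint, (ia, ib), 1.0)
    p_ab = joint / joint.sum()
    h_a = _entropy(p_ab.sum(axis=1))   # marginal entropy of A
    h_b = _entropy(p_ab.sum(axis=0))   # marginal entropy of B
    h_ab = _entropy(p_ab.ravel())      # joint entropy
    return 2.0 * h_ab - h_a - h_b
```

Unlike a plain overlap score, this metric decomposes disagreement into over- and undersegmentation terms, which matches the record's goal of measuring systematic errors.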

  7. To do it or to let an automatic tool do it? The priority of control over effort.

    PubMed

    Osiurak, François; Wagner, Clara; Djerbi, Sara; Navarro, Jordan

    2013-01-01

    The aim of the present study is to provide experimental data relevant to the issue of what leads humans to use automatic tools. Two answers can be offered. The first is that humans strive to minimize physical and/or cognitive effort (principle of least effort). The second is that humans tend to keep their perceived control over the environment (principle of more control). These two factors certainly play a role, but the question raised here is what people give priority to in situations wherein both manual and automatic actions take the same time: minimizing effort or keeping perceived control? To answer that question, we designed four experiments in which participants were confronted with a recurring choice between performing a task manually (physical effort) or in a semi-automatic way (cognitive effort) versus using an automatic tool that completes the task for them (no effort). In this latter condition, participants were required to follow the progression of the automatic tool step by step. Our results showed that participants favored the manual or semi-automatic condition over the automatic condition. However, when they were offered the opportunity to perform recreational tasks in parallel, this preference for the manual condition disappeared. The findings support the idea that people give priority to keeping control over minimizing effort.

  8. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods

    PubMed Central

    Burlina, Philippe; Billings, Seth; Joshi, Neil

    2017-01-01

    Objective To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Methods Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, in which 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and “engineered” features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. Results The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). Conclusions This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification. PMID:28854220

  9. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    PubMed

    Burlina, Philippe; Billings, Seth; Joshi, Neil; Albayda, Jemima

    2017-01-01

    To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, in which 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.

  10. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-15

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTV_P) and involved lymph nodes (GTV_LN) to simulate the localization process in image-guided radiation therapy. Techniques included "standard" (direct registration of weekly images to a planning CT), "seeded" (manual prealignment of targets to guide standard registration), "transitive-based" (alignment of pretreatment and planning CTs through one or more intermediate images), and "rereferenced" (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTV_P and GTV_LN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTV_P centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 × 10^-3), respectively, but the smallest GTV_P LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 × 10^-6). Standard registration significantly reduced GTV_LN centroid LE to 3.2 ± 2.5 mm (p < 1 × 10^-3) compared to bony alignment, with little additional gain offered by the other registration techniques. For simultaneous target alignment, centroid LE as low as 3.9 ± 2.7 mm and 3.8 ± 2.3 mm were achieved for the GTV_P and GTV_LN, respectively, using rereferenced registration. Conclusions: Target shape, volume, and configuration changes during radiation therapy limited the accuracy of standard rigid registration for image-guided localization in locally-advanced lung cancer. Significant error reductions were possible using other rigid registration techniques, with LE approaching the lower limit imposed by interfraction target variability throughout treatment.
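    The centroid localization error reported throughout these results is simply the distance between target centroids after registration; a minimal sketch (assumed voxel-spacing convention and function name, not the authors' code):

```python
import numpy as np

def centroid_le(mask_plan, mask_week, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Centroid localization error (mm) between two binary target masks.

    Each mask is a 3D boolean/0-1 array on the same grid; voxel_size_mm
    converts the index-space centroid offset into millimetres.
    """
    c_plan = np.array(np.nonzero(mask_plan)).mean(axis=1)  # planning centroid
    c_week = np.array(np.nonzero(mask_week)).mean(axis=1)  # weekly centroid
    return float(np.linalg.norm((c_plan - c_week) * np.array(voxel_size_mm)))
```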

  11. Validation of a semi-automatic protocol for the assessment of the tear meniscus central area based on open-source software

    NASA Astrophysics Data System (ADS)

    Pena-Verdeal, Hugo; Garcia-Resua, Carlos; Yebra-Pimentel, Eva; Giraldez, Maria J.

    2017-08-01

    Purpose: Different lower tear meniscus parameters can be clinically assessed in dry eye diagnosis. The aim of this study was to propose, and to analyse the variability of, a semi-automatic method for measuring the lower tear meniscus central area (TMCA) using open-source software. Material and methods: In a group of 105 subjects, one video of the lower tear meniscus after fluorescein instillation was generated by a digital camera attached to a slit lamp. A short light beam (3x5 mm) with moderate illumination in the central portion of the meniscus (6 o'clock) was used. Images were extracted from each video by a masked observer. Using open-source software based on Java (NIH ImageJ), a further observer measured, in a masked and randomized order, the TMCA in the area illuminated by the short light beam using two methods: (1) a manual method, in which the TMCA was measured by hand; (2) a semi-automatic method, in which each image was converted to an 8-bit binary image, holes inside the resulting shape were filled, and the area of the isolated shape was measured. Finally, the manual and semi-automatic measurements were compared. Results: A paired t-test showed no statistically significant difference between the results of the two techniques (p = 0.102). Pearson correlation between the techniques showed a significant, near-perfect positive correlation (r = 0.99; p < 0.001). Conclusions: This study presented a useful tool for objectively measuring the frontal central area of the meniscus in photographs using free open-source software.
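    The semi-automatic pipeline described here (binarize, fill interior holes, measure the isolated shape) can be sketched in a few lines; this is an illustrative reimplementation rather than the authors' ImageJ workflow, and the threshold and pixel calibration are assumed operator inputs:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def meniscus_area(gray, threshold, pixel_area_mm2=1.0):
    """Threshold a grayscale image, fill interior holes, return region area.

    `threshold` and `pixel_area_mm2` (camera calibration) are assumed to be
    supplied by the operator; units of the result follow pixel_area_mm2.
    """
    mask = np.asarray(gray) >= threshold   # 8-bit image -> binary mask
    filled = binary_fill_holes(mask)       # fill holes inside the shape
    return float(filled.sum()) * pixel_area_mm2
```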

  12. Semi-automatic mapping of cultural heritage from airborne laser scanning using deep learning

    NASA Astrophysics Data System (ADS)

    Due Trier, Øivind; Salberg, Arnt-Børre; Holger Pilø, Lars; Tonning, Christer; Marius Johansen, Hans; Aarsten, Dagrun

    2016-04-01

    This paper proposes to use deep learning to improve semi-automatic mapping of cultural heritage from airborne laser scanning (ALS) data. Automatic detection methods, based on traditional pattern recognition, have been applied in a number of cultural heritage mapping projects in Norway for the past five years. Automatic detection of pits and heaps has been combined with visual interpretation of the ALS data for the mapping of deer hunting systems, iron production sites, grave mounds and charcoal kilns. However, the performance of the automatic detection methods varies substantially between ALS datasets. For the mapping of deer hunting systems on flat gravel and sand sediment deposits, the automatic detection results were almost perfect. Some false detections appeared, however, in the terrain outside of the sediment deposits. These could be explained by other pit-like landscape features, like parts of river courses, spaces between boulders, and modern terrain modifications. They were easy to spot during visual interpretation, and the number of missed individual pitfall traps was still low. For the mapping of grave mounds, the automatic method produced a large number of false detections, reducing the usefulness of the semi-automatic approach. The mound structure is a very common natural terrain feature, and the grave mounds are less distinct in shape than the pitfall traps. Still, applying automatic mound detection to an entire municipality did lead to a new discovery of an Iron Age grave field with more than 15 individual mounds. Automatic mound detection also proved to be useful for a detailed re-mapping of Norway's largest Iron Age graveyard, which contains almost 1000 individual graves. Combined pit and mound detection has been applied to the mapping of more than 1000 charcoal kilns that were used by an ironworks 350-200 years ago. The majority of charcoal kilns were indirectly detected as either pits on the circumference, a central mound, or both.
However, kilns with a flat interior and a shallow ditch along the circumference were often missed by the automatic detection method. The success of automatic detection seems to depend on two factors: (1) the density of ALS ground hits on the cultural heritage structures being sought, and (2) the extent to which these structures stand out from natural terrain structures. The first factor may, to some extent, be improved by using a higher number of ALS pulses per square meter. The second factor is difficult to change, and also highlights another challenge: how to make a general automatic method that is applicable in all types of terrain within a country. The mixed experience with traditional pattern recognition for semi-automatic mapping of cultural heritage led us to consider deep learning as an alternative approach. The main principle is that a general feature detector is trained on a large image database and then tailored to a specific task using a modest number of images of true and false examples of the features being sought. Results of using deep learning are compared with previous results using traditional pattern recognition.

  13. Semi-automated CCTV surveillance: the effects of system confidence, system accuracy and task complexity on operator vigilance, reliance and workload.

    PubMed

    Dadashi, N; Stedmon, A W; Pridmore, T P

    2013-09-01

    Recent advances in computer vision technology have led to the development of various automatic surveillance systems; however, their effectiveness is adversely affected by many factors and they are not completely reliable. This study investigated the potential of a semi-automated surveillance system to reduce CCTV operator workload in both detection and tracking activities. A further focus of interest was the degree of user reliance on the automated system. A simulated prototype was developed which mimicked an automated system that provided different levels of system confidence information. Dependent variable measures were taken for secondary task performance, reliance and subjective workload. When the automatic component of a semi-automatic CCTV surveillance system provided reliable system confidence information to operators, workload significantly decreased and spare mental capacity significantly increased. Providing feedback about system confidence and accuracy appears to be one important way of making the status of the automated component of the surveillance system more 'visible' to users and hence more effective to use. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. New developments in FeynCalc 9.0

    NASA Astrophysics Data System (ADS)

    Shtabovenko, Vladyslav; Mertig, Rolf; Orellana, Frederik

    2016-10-01

    In this note we report on the new version of FEYNCALC, a MATHEMATICA package for symbolic semi-automatic evaluation of Feynman diagrams and algebraic expressions in quantum field theory. The main features of version 9.0 are: improved tensor reduction and partial fractioning of loop integrals, new functions for using FEYNCALC together with tools for reduction of scalar loop integrals using integration-by-parts (IBP) identities, better interface to FEYNARTS and support for SU(N) generators with explicit fundamental indices.

  15. Denoising and 4D visualization of OCT images

    PubMed Central

    Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.

    2009-01-01

    We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data-set-specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings with respect to both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and of the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert-segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data-set-specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509

  16. Pilot-scale cooling tower to evaluate corrosion, scaling, and biofouling control strategies for cooling system makeup water.

    PubMed

    Chien, S H; Hsieh, M K; Li, H; Monnell, J; Dzombak, D; Vidic, R

    2012-02-01

    Pilot-scale cooling towers can be used to evaluate corrosion, scaling, and biofouling control strategies when using particular cooling system makeup water and particular operating conditions. To study the potential for using a number of different impaired waters as makeup water, a pilot-scale system capable of generating a 27,000 kJ/h heat load and maintaining recirculating water flow with a Reynolds number of 1.92 × 10^4 was designed to study these critical processes under conditions that are similar to full-scale systems. The pilot-scale cooling tower was equipped with an automatic makeup water control system, automatic blowdown control system, semi-automatic biocide feeding system, and corrosion, scaling, and biofouling monitoring systems. Observed operational data revealed that the major operating parameters, including temperature change (6.6 °C), cycles of concentration (N = 4.6), water flow velocity (0.66 m/s), and air mass velocity (3660 kg/(h·m^2)), were controlled quite well for an extended period of time (up to 2 months). Overall, the performance of the pilot-scale cooling towers using treated municipal wastewater was shown to be suitable to study critical processes (corrosion, scaling, biofouling) and evaluate cooling water management strategies for makeup waters of complex quality.

  17. Hydrological Response of Semi-arid Degraded Catchments in Tigray, Northern Ethiopia

    NASA Astrophysics Data System (ADS)

    Teka, Daniel; Van Wesemael, Bas; Vanacker, Veerle; Hallet, Vincent

    2013-04-01

    To address water scarcity in the arid and semi-arid parts of developing countries, accurate estimation of surface runoff is an essential task. In semi-arid catchments runoff data are scarce, and runoff estimation using hydrological models therefore becomes an alternative. This research was initiated to characterize the runoff response of semi-arid catchments in Tigray, northern Ethiopia, and to evaluate the SCS-CN method for various catchments. Ten sub-catchments were selected in different river basins, and rainfall and runoff were measured with automatic hydro-monitoring equipment for 2-3 years. The Curve Number was estimated for each Hydrological Response Unit (HRU) in the sub-catchments and runoff was modeled using the SCS-CN method at λ = 0.05 and λ = 0.20. The results showed a significant difference between the two abstraction ratios (P = 0.05, df = 1, n = 132), and reasonably good results were obtained for predicted runoff at λ = 0.05 (NSE = -0.69; PBIAS = 18.1%). When CN values from the literature were used, runoff was overestimated compared to the measured values (e = -11.53). This research showed the importance of using measured runoff data to characterize semi-arid catchments and accurately estimate the scarce water resource. Key words: Hydrological response, rainfall-runoff, degraded environments, semi-arid, Ethiopia, Tigray
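    The SCS-CN method evaluated here has a standard closed form: S = 25400/CN - 254 (mm), Ia = λS, and Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else Q = 0. A minimal sketch, assuming depths in millimetres and a dimensionless Curve Number:

```python
def scs_cn_runoff(p_mm, cn, lam=0.05):
    """Event runoff depth Q (mm) from rainfall depth P (mm) via SCS-CN.

    s: potential maximum retention (mm); ia = lam * s: initial abstraction.
    lam is the abstraction ratio (the study compares 0.05 and 0.20).
    """
    s = 25400.0 / cn - 254.0        # CN = 100 implies zero retention
    ia = lam * s
    if p_mm <= ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

A lower abstraction ratio yields more runoff for the same storm, which is why the choice of λ matters for the model comparison reported above.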

  18. Automatic, semi-automatic and manual validation of urban drainage data.

    PubMed

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferable way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark erroneous data, remove them and replace them with interpolated data. In general, the first step of detecting wrong, anomalous data is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation score generation and score interpretation. This paper presents the overall framework for a data quality improvement system suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, the interpretation of the scores, needs to be investigated further within the developed system.
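    The validation-score-generation step can be illustrated with two of the simplest per-sample checks, a physical range test and a rate-of-change test; the methods actually applied to the Belgrade data are not specified in this record, so this is only a generic sketch with assumed parameter names:

```python
import numpy as np

def validation_scores(series, lo, hi, max_step):
    """Per-sample validation scores for a sensor time series.

    Two elementary tests: a physical range check and a rate-of-change
    check. Each yields 1.0 (pass) or 0.0 (fail) per sample; scores from
    several such tests can later be combined and interpreted by an
    expert or a rule set.
    """
    x = np.asarray(series, dtype=float)
    range_ok = ((x >= lo) & (x <= hi)).astype(float)
    # Absolute change from the previous sample (first sample compares to itself).
    step = np.abs(np.diff(x, prepend=x[0]))
    step_ok = (step <= max_step).astype(float)
    return np.vstack([range_ok, step_ok])   # shape (2, n): one row per test
```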

  19. Developments in the CCP4 molecular-graphics project.

    PubMed

    Potterton, Liz; McNicholas, Stuart; Krissinel, Eugene; Gruber, Jan; Cowtan, Kevin; Emsley, Paul; Murshudov, Garib N; Cohen, Serge; Perrakis, Anastassis; Noble, Martin

    2004-12-01

    Progress towards structure determination that is both high-throughput and high-value is dependent on the development of integrated and automatic tools for electron-density map interpretation and for the analysis of the resulting atomic models. Advances in map-interpretation algorithms are extending the resolution regime in which fully automatic tools can work reliably, but at present human intervention is required to interpret poor regions of macromolecular electron density, particularly where crystallographic data is only available to modest resolution [for example, I/sigma(I) < 2.0 for minimum resolution 2.5 A]. In such cases, a set of manual and semi-manual model-building molecular-graphics tools is needed. At the same time, converting the knowledge encapsulated in a molecular structure into understanding is dependent upon visualization tools, which must be able to communicate that understanding to others by means of both static and dynamic representations. CCP4mg is a program designed to meet these needs in a way that is closely integrated with the ongoing development of CCP4 as a program suite suitable for both low- and high-intervention computational structural biology. As well as providing a carefully designed user interface to advanced algorithms of model building and analysis, CCP4mg is intended to present a graphical toolkit to developers of novel algorithms in these fields.

  20. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M; Woo, B; Kim, J

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used by two independent observers to segment contrast enhancement, necrosis and edema regions. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and with the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficients of variation for especially important features previously reported as predictive of patient survival were: 3.4% with the deformable model and 7.4% with the grow cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow cut method for edge sharpness of tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
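    The coefficient of variation used here as the reproducibility measure can be computed per feature from the two observers' paired measurements; a minimal sketch with an assumed data layout (one value per case per observer), not the study's code:

```python
import numpy as np

def coefficient_of_variation(obs_a, obs_b):
    """Mean inter-observer CV (%) of one imaging feature over all cases.

    For each case, CV = sample std of the two observers' measurements
    divided by their mean; the per-case CVs are then averaged and
    expressed as a percentage.
    """
    a = np.asarray(obs_a, dtype=float)
    b = np.asarray(obs_b, dtype=float)
    pairs = np.stack([a, b])                        # shape (2, n_cases)
    cv = pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)
    return float(cv.mean() * 100.0)
```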

  1. Development of Semi-Automatic Lathe by using Intelligent Soft Computing Technique

    NASA Astrophysics Data System (ADS)

    Sakthi, S.; Niresh, J.; Vignesh, K.; Anand Raj, G.

    2018-03-01

    This paper discusses the enhancement of a conventional lathe machine into a semi-automated lathe machine by implementing a soft computing method. In the present scenario, the lathe machine plays a vital role in the engineering division of the manufacturing industry. While manual lathe machines are economical, their accuracy and efficiency are not up to the mark. On the other hand, CNC machines provide the desired accuracy and efficiency, but require a huge capital investment. To overcome this situation, a semi-automated approach to the conventional lathe machine is developed by fitting stepper motors to the horizontal and vertical drives, controlled by an Arduino UNO microcontroller. Based on the input parameters of the lathe operation, the Arduino code is generated and transferred to the UNO board. Upgrading from manual to semi-automatic lathe machines can thus significantly increase accuracy and efficiency while keeping investment costs in check, and consequently provide a much-needed boost to the manufacturing industry.

  2. Analysis of manual segmentation in paranasal CT images.

    PubMed

    Tingelhoff, Kathrin; Eichhorn, Klaus W G; Wagner, Ingo; Kunkel, Maria E; Moral, Analia I; Rilk, Markus E; Wahl, Friedrich M; Bootz, Friedrich

    2008-09-01

    Manual segmentation is often used for the evaluation of automatic or semi-automatic segmentation. The purpose of this paper is to describe the inter- and intraindividual variability and the dubiety of manual segmentation as a gold standard, and to find reasons for the discrepancy. We performed two experiments. In the first, ten ENT surgeons, ten medical students and one engineer outlined the right maxillary sinus and ethmoid sinuses manually on a standard CT dataset of a human head. In the second experiment, two participants outlined the maxillary sinus and ethmoid sinuses five times consecutively. Manual segmentation was accomplished with custom software using a line segmentation tool. The first experiment shows the interindividual variability of manual segmentation, which is higher for the ethmoidal sinuses than for the maxillary sinuses. The variability can be caused by the level of experience, different interpretations of the CT data or different levels of accuracy. The second experiment shows intraindividual variability, which is lower than the interindividual variability. Most variance in both experiments appeared during segmentation of the ethmoidal sinuses and outlining of the hiatus semilunaris. Given the inter- and intraindividual variance, the segmentation result of a single manual segmenter cannot directly be used as a gold standard for the evaluation of automatic segmentation algorithms.

  3. Heuristic evaluation of eNote: an electronic notes system.

    PubMed

    Bright, Tiffani J; Bakken, Suzanne; Johnson, Stephen B

    2006-01-01

    eNote is an electronic health record (EHR) system based on semi-structured narrative documents. A heuristic evaluation was conducted with a sample of five usability experts. eNote performed highly in: 1) consistency with standards and 2) recognition rather than recall. eNote needs improvement in: 1) help and documentation, 2) aesthetic and minimalist design, 3) error prevention, 4) helping users recognize, diagnose, and recover from errors, and 5) flexibility and efficiency of use. The heuristic evaluation was an efficient method of evaluating our interface.

  4. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and non-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional cases in which more precise vascular extraction is desired or the automatic method fails, an alternative semi-automatic fail-safe method is provided; it extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  5. Semi-automatic motion compensation of contrast-enhanced ultrasound images from abdominal organs for perfusion analysis.

    PubMed

    Schäfer, Sebastian; Nylund, Kim; Sævik, Fredrik; Engjom, Trond; Mézl, Martin; Jiřík, Radovan; Dimcevski, Georg; Gilja, Odd Helge; Tönnies, Klaus

    2015-08-01

    This paper presents a system for correcting motion influences in time-dependent 2D contrast-enhanced ultrasound (CEUS) images to assess tissue perfusion characteristics. The system consists of a semi-automatic frame selection method to find images with out-of-plane motion as well as a method for automatic motion compensation. Translational and non-rigid motion compensation is applied by introducing a temporal continuity assumption. A study of 40 clinical datasets was conducted to compare the measured perfusion with perfusion simulated using pharmacokinetic modeling. Overall, the proposed approach decreased the mean average difference between the measured perfusion and the pharmacokinetic model estimation. It was non-inferior to a manual approach for three out of four patient cohorts and reduced the analysis time by 41% compared to manual processing. Copyright © 2014 Elsevier Ltd. All rights reserved.
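
    Perfusion analysis of this kind fits a pharmacokinetic model to a measured time-intensity curve. The abstract does not name the specific model, so the sketch below uses a gamma-variate bolus curve, one common choice for CEUS, fitted with SciPy on synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, alpha, beta):
    """Gamma-variate bolus model, a common choice for CEUS time-intensity curves."""
    t = np.maximum(t, 1e-9)  # guard against t = 0 in the power term
    return A * t**alpha * np.exp(-t / beta)

# Synthetic noisy time-intensity curve (illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0.1, 30, 60)
true = gamma_variate(t, 5.0, 1.5, 4.0)
signal = true + rng.normal(0, 0.2, t.size)

# Least-squares fit recovers the perfusion-related parameters
popt, _ = curve_fit(gamma_variate, t, signal, p0=(1.0, 1.0, 5.0), maxfev=10000)
```

Derived quantities such as time-to-peak or area under the curve would then be read off the fitted parameters rather than the noisy samples.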

  6. Isfahan MISP Dataset

    PubMed Central

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database management. The website was entitled “biosigdata.com.” It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacy (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set up for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf). PMID:28487832

  7. Development and Implementation of a Web-based Evaluation System for an Internal Medicine Residency Program.

    ERIC Educational Resources Information Center

    Rosenberg, Mark E.; Watson, Kathleen; Paul, Jeevan; Miller, Wesley; Harris, Ilene; Valdivia, Tomas D.

    2001-01-01

    Describes the development and implementation of a World Wide Web-based electronic evaluation system for the internal medicine residency program at the University of Minnesota. Features include automatic entry of evaluations by faculty or students into a database, compliance tracking, reminders, extensive reporting capabilities, automatic…

  8. Comparison and assessment of semi-automatic image segmentation in computed tomography scans for image-guided kidney surgery.

    PubMed

    Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L

    2011-11-01

    Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.
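
    The closest-point distances used above to assess registration quality can be computed efficiently with a k-d tree. A minimal sketch on synthetic surfaces (not the study's data):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_closest_point_distance(src: np.ndarray, dst: np.ndarray) -> float:
    """Mean distance from each source point to its nearest destination point."""
    d, _ = cKDTree(dst).query(src)
    return float(d.mean())

# Hypothetical segmented surface (coarse 5 mm grid) and an intraoperative
# scan of the same surface displaced by 1 mm along z
g = np.arange(0.0, 50.0, 5.0)
xx, yy, zz = np.meshgrid(g, g, g, indexing="ij")
surface = np.stack([xx.ravel(), yy.ravel(), zz.ravel()], axis=1)
scan = surface + np.array([0.0, 0.0, 1.0])
print(mean_closest_point_distance(scan, surface))  # 1.0
```

In an iterative closest point registration, this query step is repeated after each rigid-transform update; here it simply reports the residual surface distance.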

  9. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1996-01-01

    A method for automatically building qualitative and semi-quantitative models of dynamic systems, and using them for monitoring and fault diagnosis, is developed and demonstrated. The qualitative approach and semi-quantitative method are applied to monitoring observation streams, and to design of non-linear control systems.

  10. Research in Automatic Russian-English Scientific and Technical Lexicography. Final Report.

    ERIC Educational Resources Information Center

    Wayne State Univ., Detroit, MI.

    Techniques of reversing English-Russian scientific and technical dictionaries into Russian-English versions through semi-automated compilation are described. Sections on manual and automatic processing discuss pre- and post-editing, the task program, updater (correction of errors and revision by specialist in a given field), the system employed…

  11. Development of automatic through-insulation welding for microelectric interconnections

    NASA Technical Reports Server (NTRS)

    Arnett, J. C.

    1972-01-01

    The capability to automatically route, remove insulation from, and weld small-diameter solid conductor wire is presented. This would facilitate the economical small-quantity production of complex miniature electronic assemblies. An engineering model of equipment having this capability was developed and evaluated. Whereas early work in the use of welded magnet wire interconnections was concentrated on opposed electrode systems, and generally used heat to melt the wire insulation, the present method is based on a concentric electrode system and a wire feed system which splits the insulation by application of pressure prior to welding. The work deals with the design, fabrication, and evaluation testing of an improved version of this concentric electrode system. Two different approaches to feeding the wire to the concentric electrodes were investigated. It was concluded that the process is feasible for the interconnection of complex miniature electronic assemblies.

  12. A Semi-Automatic Alignment Method for Math Educational Standards Using the MP (Materialization Pattern) Model

    ERIC Educational Resources Information Center

    Choi, Namyoun

    2010-01-01

    Educational standards alignment, which matches similar or equivalent concepts of educational standards, is a necessary task for educational resource discovery and retrieval. Automated or semi-automated alignment systems for educational standards have been recently available. However, existing systems frequently result in inconsistency in…

  13. Semi-automated identification of leopard frogs

    USGS Publications Warehouse

    Petrovska-Delacrétaz, Dijana; Edwards, Aaron; Chiasson, John; Chollet, Gérard; Pilliod, David S.

    2014-01-01

    Principal component analysis is used to implement a semi-automatic recognition system to identify recaptured northern leopard frogs (Lithobates pipiens). Results of both open set and closed set experiments are given. The presented algorithm is shown to provide accurate identification of 209 individual leopard frogs from a total set of 1386 images.
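
    A PCA-based recognition system of the kind described can be sketched as follows; the gallery size, feature dimension, and noise level here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical gallery: 5 known individuals, each a 64-dim pattern vector
gallery = rng.normal(0, 1, size=(5, 64))

# PCA via SVD on the mean-centred gallery
mean = gallery.mean(axis=0)
X = gallery - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:4]                      # keep the top principal components
gallery_proj = X @ components.T          # gallery in PCA space

def identify(probe: np.ndarray) -> int:
    """Return the index of the gallery individual closest to the probe in PCA space."""
    p = (probe - mean) @ components.T
    return int(np.argmin(np.linalg.norm(gallery_proj - p, axis=1)))

# A noisy "recapture" of individual 3 should match individual 3
probe = gallery[3] + rng.normal(0, 0.1, 64)
print(identify(probe))  # 3
```

An open-set variant, as in the paper's experiments, would additionally reject probes whose nearest-gallery distance exceeds a threshold instead of always returning a match.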

  14. SABRE--A Novel Software Tool for Bibliographic Post-Processing.

    ERIC Educational Resources Information Center

    Burge, Cecil D.

    1989-01-01

    Describes the software architecture and application of SABRE (Semi-Automated Bibliographic Environment), which is one of the first products to provide a semi-automatic environment for relevancy ranking of citations obtained from searches of bibliographic databases. Features designed to meet the review, categorization, culling, and reporting needs…

  15. Semi-automatic 3D lung nodule segmentation in CT using dynamic programming

    NASA Astrophysics Data System (ADS)

    Sargent, Dustin; Park, Sun Young

    2017-02-01

    We present a method for semi-automatic segmentation of lung nodules in chest CT that can be extended to general lesion segmentation in multiple modalities. Most semi-automatic algorithms for lesion segmentation or similar tasks use region-growing or edge-based contour finding methods such as level-set. However, lung nodules and other lesions are often connected to surrounding tissues, which makes these algorithms prone to growing the nodule boundary into the surrounding tissue. To solve this problem, we apply a 3D extension of the 2D edge linking method with dynamic programming to find a closed surface in a spherical representation of the nodule ROI. The algorithm requires a user to draw a maximal diameter across the nodule in the slice in which the nodule cross section is the largest. We report the lesion volume estimation accuracy of our algorithm on the FDA lung phantom dataset, and the RECIST diameter estimation accuracy on the lung nodule dataset from the SPIE 2016 lung nodule classification challenge. The phantom results in particular demonstrate that our algorithm has the potential to mitigate the disparity in measurements performed by different radiologists on the same lesions, which could improve the accuracy of disease progression tracking.
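
    Edge linking with dynamic programming, as used above, finds a minimum-cost contour subject to a continuity constraint. A minimal 2D sketch (the paper's 3D spherical formulation is more involved): each row of a cost image contributes one column, and the column may shift by at most one between consecutive rows.

```python
import numpy as np

def dp_min_path(cost: np.ndarray) -> list:
    """Minimum-cost path through `cost`, one column per row, moving at most
    one column left/right between consecutive rows (2D edge-linking DP)."""
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()           # accumulated cost
    back = np.zeros((n_rows, n_cols), dtype=int)
    for r in range(1, n_rows):
        for c in range(n_cols):
            lo, hi = max(0, c - 1), min(n_cols, c + 2)
            prev = acc[r - 1, lo:hi]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest end column
    path = [int(np.argmin(acc[-1]))]
    for r in range(n_rows - 1, 0, -1):
        path.append(back[r, path[-1]])
    return path[::-1]

# A cost image whose cheapest contour runs straight down column 2
cost = np.ones((5, 5))
cost[:, 2] = 0.0
print(dp_min_path(cost))  # [2, 2, 2, 2, 2]
```

In the nodule setting, the cost would be derived from image gradients in a polar/spherical resampling of the ROI, so the recovered path corresponds to a closed boundary.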

  16. Argo: enabling the development of bespoke workflows and services for disease annotation.

    PubMed

    Batista-Navarro, Riza; Carter, Jacob; Ananiadou, Sophia

    2016-01-01

    Argo (http://argo.nactem.ac.uk) is a generic text mining workbench that can cater to a variety of use cases, including the semi-automatic annotation of literature. It enables its technical users to build their own customised text mining solutions by providing a wide array of interoperable and configurable elementary components that can be seamlessly integrated into processing workflows. With Argo's graphical annotation interface, domain experts can then make use of the workflows' automatically generated output to curate information of interest. With the continuously rising need to understand the aetiology of diseases as well as the demand for their informed diagnosis and personalised treatment, the curation of disease-relevant information from medical and clinical documents has become an indispensable scientific activity. In the Fifth BioCreative Challenge Evaluation Workshop (BioCreative V), there was substantial interest in the mining of literature for disease-relevant information. Apart from a panel discussion focussed on disease annotations, the chemical-disease relations (CDR) track was also organised to foster the sharing and advancement of disease annotation tools and resources. This article presents the application of Argo's capabilities to the literature-based annotation of diseases. As part of our participation in BioCreative V's User Interactive Track (IAT), we demonstrated and evaluated Argo's suitability to the semi-automatic curation of chronic obstructive pulmonary disease (COPD) phenotypes. Furthermore, the workbench facilitated the development of some of the CDR track's top-performing web services for normalising disease mentions against the Medical Subject Headings (MeSH) database.
In this work, we highlight Argo's support for developing various types of bespoke workflows, ranging from ones which enabled us to easily incorporate information from various databases, to those which train and apply machine learning-based concept recognition models, through to user-interactive ones which allow human curators to manually provide their corrections to automatically generated annotations. Our participation in the BioCreative V challenges shows Argo's potential as an enabling technology for curating disease and phenotypic information from literature. Database URL: http://argo.nactem.ac.uk. © The Author(s) 2016. Published by Oxford University Press.

  18. LexValueSets: An Approach for Context-Driven Value Sets Extraction

    PubMed Central

    Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.

    2008-01-01

    The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955

  19. Evaluation of the training capacity of the Spanish Resident Book of Otolaryngology (FORMIR) as an electronic portfolio.

    PubMed

    Maza Solano, Juan Manuel; Benavente Bermudo, Gustavo; Estrada Molina, Francisco José; Ambrosiani Fernández, Jesús; Sánchez Gómez, Serafín

    2017-08-10

    Objectives: We evaluated the training capacity of the Spanish resident training book as an electronic portfolio for achieving the learning objectives of otorhinolaryngology (ENT) residents. A multi-method qualitative investigation with transversal, temporal and retrospective characteristics was performed on Spanish ENT residents using a structured questionnaire, a semi-structured interview, and a computer application on the FORMIR website. Of the ENT residents specialising in one of the 63 accredited Spanish hospitals between 2009 and 2012, 56.5% participated in the study. The results show that the ENT residents who used the e-portfolio were better able to implement self-guided study, were more aware of their learning objectives, fulfilled the training programme more efficiently, identified the causes of learning gaps more clearly, and considered FORMIR in e-portfolio format to be an ideal training tool to replace the paper resident training book. The ENT residents greatly appreciated the training benefits of FORMIR as an e-portfolio, especially its simple and intuitive interface, the ease and comfort with which they could record their activities, the automatic and numeric feedback on the acquisition of their competencies (which facilitates self-guided learning), its storage capacity for evidence, and its ability to be used as a UEMS logbook as well as a standard curriculum vitae. All these features make FORMIR a training and evaluation tool that outperforms similar instruments available to ENT residents, who do not hesitate to identify it as the ideal resident training book for facilitating their specialised training. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  20. 10 CFR Appendix J2 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... hardness or less) using 27.0 grams + 4.0 grams per pound of cloth load of AHAM Standard detergent Formula 3... repellent finishes, such as fluoropolymer stain resistant finishes shall not be applied to the test cloth...

  1. Integration of wireless sensor networks into automatic irrigation scheduling of a center pivot

    USDA-ARS?s Scientific Manuscript database

    A six-span center pivot system was used as a platform for testing two wireless sensor networks (WSN) of infrared thermometers. The cropped field was a semi-circle, divided into six pie shaped sections of which three were irrigated manually and three were irrigated automatically based on the time tem...

  2. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  3. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, and the task is quite challenging in US images due to imaging artifacts that complicate detection and measurement of suspect lesions. Lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing is extremely time consuming and shows a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods; however, considerable user interaction may be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach that improves the boundary-searching ability of the live-wire method, reducing the necessary user interaction while maintaining segmentation performance. Based on segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy (above 80%) when auto-enhancement is applied for live-wire segmentation.

  4. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key frames from key frames, is desirable owing to its balance of labor cost and 3D effects. The location of key frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to make temporal continuity more reliable and to reduce the depth propagation errors caused by occlusion. Potential key frames are localized in terms of clustered color variation and motion intensity. The distance between key frames is also taken into account to keep the accumulated propagation errors under control and to guarantee minimal user interaction. Once the key-frame depth maps are obtained with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme outperforms an existing 2D-to-3D scheme with a fixed key-frame interval.
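
    The core idea of bilateral filtering for depth propagation is that depth values should only be averaged across pixels with similar color, so depth edges follow image edges. A 1D joint-bilateral sketch (parameters and data invented; this is not the paper's exact shifted filter):

```python
import numpy as np

def joint_bilateral_1d(depth, guide, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Smooth `depth` with weights from spatial distance AND similarity of the
    colour `guide` signal: a 1D sketch of bilateral depth propagation."""
    out = np.zeros_like(depth, dtype=float)
    n = len(depth)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        d = np.arange(lo, hi) - i
        w = np.exp(-d**2 / (2 * sigma_s**2)) \
          * np.exp(-(guide[lo:hi] - guide[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * depth[lo:hi]) / np.sum(w)
    return out

# The guide (colour) has a sharp edge at index 4; propagated depth respects it
guide = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
depth = np.array([10, 10, 10, 10, 50, 50, 50, 50], dtype=float)
smoothed = joint_bilateral_1d(depth, guide)
```

Because the range weight collapses across the guide edge, the depth discontinuity at index 4 survives the filtering, which is exactly why bilateral (rather than Gaussian) propagation is used.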

  5. Comparison of computer systems and ranking criteria for automatic melanoma detection in dermoscopic images.

    PubMed

    Møllersen, Kajsa; Zortea, Maciel; Schopf, Thomas R; Kirchesch, Herbert; Godtliebsen, Fred

    2017-01-01

    Melanoma is the deadliest form of skin cancer, and early detection is crucial for patient survival. Computer systems can assist in melanoma detection, but are not widespread in clinical practice. In 2016, an open challenge in classification of dermoscopic images of skin lesions was announced. A training set of 900 images with corresponding class labels and semi-automatic/manual segmentation masks was released for the challenge. An independent test set of 379 images, of which 75 were of melanomas, was used to rank the participants. This article demonstrates the impact of ranking criteria, segmentation method and classifier, and highlights the clinical perspective. We compare five different measures of diagnostic accuracy by analysing the resulting ranking of the computer systems in the challenge. Choice of performance measure had a great impact on the ranking: systems that were ranked among the top three for one measure dropped to the bottom half when the performance measure was changed. Nevus Doctor, a computer system previously developed by the authors, was used to participate in the challenge and to investigate the impact of segmentation and classifier. The diagnostic accuracy when using an automatic versus the semi-automatic/manual segmentation is investigated. The unexpectedly small impact of segmentation method suggests that improving the automatic segmentation method's resemblance to semi-automatic/manual segmentation will not improve diagnostic accuracy substantially. A small set of similar classification algorithms is used to investigate the impact of the classifier on diagnostic accuracy. The variability in diagnostic accuracy for different classifier algorithms was larger than the variability for segmentation methods, which suggests a focus for future investigations. From a clinical perspective, the misclassification of a melanoma as benign has far greater cost than the misclassification of a benign lesion.
For computer systems to have clinical impact, their performance should be ranked by a high-sensitivity measure.
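
    The clinical point about ranking criteria can be illustrated by contrasting accuracy with sensitivity on an imbalanced toy test set (numbers invented, loosely echoing the challenge's melanoma fraction):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (melanoma recall) and specificity from binary labels (1 = melanoma)."""
    y_true = np.asarray(y_true); y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy test set: 5 melanomas among 50 lesions; the classifier misses 3 of 5
y_true = np.array([1] * 5 + [0] * 45)
y_pred = np.array([1] * 2 + [0] * 3 + [0] * 45)

sens, spec = sensitivity_specificity(y_true, y_pred)
accuracy = np.mean(y_true == y_pred)
# Accuracy looks high (0.94) even though sensitivity is only 0.4
```

A ranking by accuracy would reward this classifier despite its clinically costly missed melanomas, whereas a high-sensitivity measure would not.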

  6. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    NASA Astrophysics Data System (ADS)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006, the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut, and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12 000 to 15 000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of the atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available, and wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, the effect of wind and the dampening value. The semi-automatic algorithm is based on supervised classification. An Artificial Neural Network Multilayer Perceptron (ANN MLP) is used as the classifier, since it is more flexible and efficient than a conventional maximum likelihood classifier for multisource and multi-temporal data. The learning algorithm for the ANN MLP is the Levenberg-Marquardt (LM) algorithm. Training and test data for supervised classification are composed from the textural information created from the SAR images. This approach is semi-automatic because tuning the parameters of the classifier and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as underlining their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.
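
    Combining per-pixel probabilities from several acquisitions into a joint probability can be done with an independence (naive Bayes) fusion; the abstract does not give the exact formula, so this is only one plausible sketch:

```python
import numpy as np

def fuse_probabilities(prob_stack: np.ndarray) -> np.ndarray:
    """Naive-Bayes fusion of per-image slick probabilities (axis 0 = image).
    Assumes conditional independence; the paper's exact combination may differ."""
    p = np.clip(prob_stack, 1e-6, 1 - 1e-6)  # avoid division by zero
    odds = np.prod(p / (1 - p), axis=0)      # product of per-image odds
    return odds / (1 + odds)

# Three acquisitions over the same 2x2 area: consistent evidence strengthens belief
probs = np.array([[[0.7, 0.4], [0.9, 0.2]],
                  [[0.8, 0.5], [0.8, 0.3]],
                  [[0.6, 0.4], [0.9, 0.1]]])
joint = fuse_probabilities(probs)
```

Pixels flagged consistently across images end up with a joint probability above any single observation, while a one-off atmospheric artifact is suppressed.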

  7. A methodology for the semi-automatic digital image analysis of fragmental impactites

    NASA Astrophysics Data System (ADS)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the free ImageJ software developed by the National Institutes of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated, and modal abundances can be deduced where the physical optical properties (e.g., contrast, color) of the samples allow it. Other parameters that can be measured include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine the variations in the physical characteristics of some examples of fragmental impactites.
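
    Once an image is segmented into labelled components, estimating areal fractions reduces to pixel counting. A minimal sketch with an invented three-class segmentation:

```python
import numpy as np

def areal_fractions(labels: np.ndarray) -> dict:
    """Areal fraction of each labelled class in a segmented image."""
    total = labels.size
    ids, counts = np.unique(labels, return_counts=True)
    return {int(i): c / total for i, c in zip(ids, counts)}

# Hypothetical 10x10 segmentation: 0 = matrix, 1 = melt, 2 = clasts
seg = np.zeros((10, 10), dtype=int)
seg[0:5, 0:4] = 1      # 20 px of melt
seg[6:9, 6:9] = 2      # 9 px of clasts
fractions = areal_fractions(seg)
print(fractions)  # {0: 0.71, 1: 0.2, 2: 0.09}
```

Modal abundances follow directly from these fractions when each label maps to a mineral or textural component.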

  8. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Wang, J; Zhang, H

    Purpose: To review the literature in using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss ormore » gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed for only tumors that show marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found advantage over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning technique have been constructed for correlating selected image features to response. These models showed improved performance compared to current methods using cutoff value of a single measurement for tumor response. 
Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy of tumor response evaluation. This work was supported in part by the National Cancer Institute Grant R01CA172638.
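The traditional size-based criteria mentioned above reduce response assessment to a cutoff on a single measurement. A minimal sketch of that baseline, assuming RECIST 1.1-style cutoffs applied to sums of target-lesion longest diameters in millimetres (the function name and argument layout are illustrative, not from the review):

```python
# Hedged sketch of RECIST 1.1-style categorization from sums of
# target-lesion longest diameters (mm); edge cases are simplified.
def recist_response(baseline_sum, followup_sum, nadir_sum):
    """Classify response using approximate RECIST 1.1 cutoffs."""
    if followup_sum == 0:
        return "CR"  # complete response: all target lesions gone
    if (followup_sum - nadir_sum) / nadir_sum >= 0.20 and followup_sum - nadir_sum >= 5:
        return "PD"  # progressive disease: >=20% (and >=5 mm) increase from nadir
    if (followup_sum - baseline_sum) / baseline_sum <= -0.30:
        return "PR"  # partial response: >=30% decrease from baseline
    return "SD"      # stable disease

print(recist_response(100, 65, 65))   # 35% shrinkage from baseline
```

The machine-learning models surveyed in the review replace exactly this single-threshold decision with multi-feature predictors.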

  9. Simultaneous extraction of centerlines, stenosis, and thrombus detection in renal CT angiography

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Durgan, Jacob; Hodgkiss, Thomas D.; Chandra, Shalabh

    2004-05-01

Renal artery stenosis (RAS) is the major cause of renovascular hypertension, and CT angiography has shown tremendous promise as a noninvasive method for reliably detecting it. The purpose of this study was to validate semi-automated methods that assist in extracting renal branches and characterizing the associated renal artery stenosis. Automatically computed diagnostic images such as straight MIP, curved MPR, cross-sections, and diameters from multi-slice CT are presented and evaluated for their acceptance. We used vessel-tracking image processing methods to extract the aortic-renal vessel tree from axial CT slice images. Next, from the topology and anatomy of the aortic vessel tree, the stenosis, thrombus section, and branching of the renal arteries are extracted. The results are presented in curved MPR and continuously variable MIP images. In this study, 15 patients were scanned with contrast on an Mx8000 CT scanner (Philips Medical Systems) at 1.0 mm thickness, 0.5 mm slice spacing, and 120 kVp, and a stack of 512x512x150 volume sets was reconstructed. The automated image processing took less than 50 seconds to compute the centerline and borders of the aortic/renal vessel tree. The overall assessment of manually and automatically generated stenosis yielded a weighted kappa statistic of 0.97 for the right renal arteries and 0.94 for the left renal branches; manually and semi-automatically contoured thrombus regions agreed at 0.93. The manual time to process each case is approximately 25 to 30 minutes.
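The weighted kappa statistics reported above measure chance-corrected agreement between manual and automatic grading. A minimal sketch of a quadratic-weighted kappa for two raters' ordinal grades (the 4-level grading scale and sample data are illustrative, not the study's):

```python
# Quadratic-weighted Cohen's kappa for ordinal ratings 0..n_cat-1.
# A pure-Python sketch; real analyses would use a stats package.
def weighted_kappa(rater_a, rater_b, n_cat):
    n = len(rater_a)
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n                           # observed joint frequencies
    row = [sum(obs[i]) for i in range(n_cat)]          # rater A marginals
    col = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = ((i - j) ** 2) / (n_cat - 1) ** 2      # quadratic disagreement weight
            num += w * obs[i][j]
            den += w * row[i] * col[j]                 # chance-expected disagreement
    return 1.0 - num / den

grades_a = [0, 1, 2, 2, 3, 1, 0, 2]
grades_b = [0, 1, 2, 3, 3, 1, 0, 2]
print(round(weighted_kappa(grades_a, grades_b, 4), 3))
```

Perfect agreement gives kappa = 1; a single off-by-one grade is penalized only lightly by the quadratic weights.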

  10. A semi-automatic framework of measuring pulmonary arterial metrics at anatomic airway locations using CT imaging

    NASA Astrophysics Data System (ADS)

    Jin, Dakai; Guo, Junfeng; Dougherty, Timothy M.; Iyer, Krishna S.; Hoffman, Eric A.; Saha, Punam K.

    2016-03-01

Pulmonary vascular dysfunction has been implicated in smoking-related susceptibility to emphysema. With the growing interest in characterizing arterial morphology for early evaluation of the vascular role in pulmonary diseases, there is an increasing need for a standardized framework for arterial morphological assessment at airway segmental levels. In this paper, we present an effective and robust semi-automatic framework to segment pulmonary arteries at different anatomic airway branches and measure their cross-sectional area (CSA). The method starts with user-specified endpoints of a target arterial segment entered through a custom-built graphical user interface. It then automatically detects the centerline joining the endpoints, determines the local structure orientation and computes the CSA along the centerline after filtering out adjacent pulmonary structures, such as veins or airway walls. Several new techniques are presented, including a collision-impact based cost function for centerline detection, radial sample-line based CSA computation, and outlier analysis of radial distances to subtract adjacent neighboring structures in the CSA measurement. The method was applied to repeat-scan pulmonary multirow detector CT (MDCT) images from ten healthy subjects (age: 21-48 years, mean: 28.5 years; 7 female) at functional residual capacity (FRC). The reproducibility of computed arterial CSA from four airway segmental regions in the middle and lower lobes was analyzed. The overall repeat-scan intra-class correlation (ICC) of the computed CSA from all four airway regions in the ten subjects was 96%, with the maximum ICC found at the LB10 and RB4 regions.
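The radial sample-line CSA computation with outlier rejection can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: a simple median-based rule stands in for the paper's outlier analysis of radial distances, and the function name and threshold are assumptions.

```python
# Approximate cross-sectional area from equally spaced boundary radii
# measured along radial sample lines from the centerline point.
import math

def cross_sectional_area(radii):
    """Sector-sum CSA estimate after dropping rays that leak into neighbors."""
    med = sorted(radii)[len(radii) // 2]
    kept = [r for r in radii if r <= 2.0 * med]  # illustrative outlier rule
    dtheta = 2 * math.pi / len(radii)            # original angular spacing
    return 0.5 * dtheta * sum(r * r for r in kept)

# A circular vessel of radius 2 mm with one ray escaping into an
# adjacent structure (e.g., a touching vein):
radii = [2.0] * 35 + [9.0]
print(round(cross_sectional_area(radii), 2))
```

Without the outlier rule, the single escaped ray would inflate the area estimate substantially; with it, the result stays close to the true 4π mm².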

  11. Some aspects of SR beamline alignment

    NASA Astrophysics Data System (ADS)

    Gaponov, Yu. A.; Cerenius, Y.; Nygaard, J.; Ursby, T.; Larsson, K.

    2011-09-01

Based on element-by-element alignment of the Synchrotron Radiation (SR) beamline optics and analysis of the alignment results, an optimized beamline alignment algorithm has been designed and developed. The alignment procedures have been designed and developed for the MAX-lab I911-4 fixed-energy beamline. It has been shown that the intermediate information received during the monochromator alignment stage can be used to correct both the monochromator and the mirror without the subsequent stages of aligning the mirror, slits, sample holder, etc. Such an optimization of the beamline alignment procedures decreases the time necessary for the alignment and is particularly helpful in the case of any instability of the beamline optical elements, storage ring electron orbit or the wiggler insertion device, which could result in instability of the angular and positional parameters of the SR beam. A general-purpose software package for manual, semi-automatic and automatic SR beamline alignment has been designed and developed using this algorithm. The TANGO control system is used as the middleware between the stand-alone beamline control applications BLTools, BPMonitor and the beamline equipment.
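A single element-by-element alignment step typically amounts to scanning a motor, locating the intensity maximum, and moving to it. A minimal sketch of the peak-location part, assuming a uniform scan grid; the motor/detector interface itself is hypothetical (in the system above a TANGO client would supply it):

```python
# Parabolic interpolation of the intensity peak from the best scan
# sample and its two neighbors; a common sub-sample peak estimator.
def fit_peak(positions, intensities):
    k = max(range(len(intensities)), key=intensities.__getitem__)
    k = min(max(k, 1), len(positions) - 2)   # keep a 3-point stencil in range
    x1 = positions[k]
    y0, y1, y2 = intensities[k - 1], intensities[k], intensities[k + 1]
    step = positions[k] - positions[k - 1]   # assumes uniform spacing
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return x1                            # flat top: fall back to best sample
    return x1 + 0.5 * step * (y0 - y2) / denom

positions = [i * 0.1 for i in range(11)]
intensities = [-(p - 0.42) ** 2 for p in positions]
print(round(fit_peak(positions, intensities), 3))
```

For an exactly parabolic profile the estimator recovers the optimum between grid points, which is why coarse alignment scans can still converge quickly.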

  12. Registering 2D and 3D imaging data of bone during healing.

    PubMed

    Hoerth, Rebecca M; Baum, Daniel; Knötel, David; Prohaska, Steffen; Willie, Bettina M; Duda, Georg N; Hege, Hans-Christian; Fratzl, Peter; Wagermaier, Wolfgang

    2015-04-01

PURPOSE/AIMS OF THE STUDY: Bone's hierarchical structure can be visualized using a variety of methods. Many techniques, such as light and electron microscopy, generate two-dimensional (2D) images, while micro-computed tomography (µCT) allows a direct representation of the three-dimensional (3D) structure. In addition, different methods provide complementary structural information, such as the arrangement of organic or inorganic compounds. The overall aim of the present study is to answer bone research questions by linking information from different 2D and 3D imaging techniques. A great challenge in combining different methods arises from the fact that they usually reflect different characteristics of the real structure. We investigated bone during healing by means of µCT and several 2D methods. Backscattered electron images were used to qualitatively evaluate the tissue's calcium content and served as a position map for other experimental data. Nanoindentation and X-ray scattering experiments were performed to visualize mechanical and structural properties. We present an approach for the registration of 2D data in a 3D µCT reference frame, where scanning electron microscopy serves as a methodological link. Backscattered electron images are perfectly suited for registration into µCT reference frames, since both show structures based on the same physical principles. We introduce specific registration tools that have been developed to perform the registration process in a semi-automatic way. By applying this routine, we were able to exactly locate structural information (e.g. mineral particle properties) in the 3D bone volume. In bone healing studies this will help to better understand basic formation, remodeling and mineralization processes.
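At the core of placing a 2D image into a 3D reference frame is estimating a rigid transform from corresponding landmarks. A minimal 2D sketch of that step (a Kabsch-style least-squares fit; the landmark coordinates are illustrative, and the paper's registration tools are of course more elaborate):

```python
# Least-squares rigid transform (rotation + translation) mapping
# source landmarks onto destination landmarks in 2D.
import math

def rigid_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # cross-covariance terms of the centered point sets
    a = sum((s[0]-csx)*(d[0]-cdx) + (s[1]-csy)*(d[1]-cdy) for s, d in zip(src, dst))
    b = sum((s[0]-csx)*(d[1]-cdy) - (s[1]-csy)*(d[0]-cdx) for s, d in zip(src, dst))
    theta = math.atan2(b, a)                 # optimal rotation angle
    tx = cdx - (csx*math.cos(theta) - csy*math.sin(theta))
    ty = cdy - (csx*math.sin(theta) + csy*math.cos(theta))
    return theta, tx, ty

# Recover a known 90-degree rotation plus (2, 3) translation:
theta, tx, ty = rigid_2d([(0, 0), (1, 0), (0, 1)], [(2, 3), (2, 4), (1, 3)])
print(round(theta, 4), round(tx, 2), round(ty, 2))
```

The same closed-form idea generalizes to 3D with a 3x3 cross-covariance matrix and an SVD in place of the `atan2`.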

  13. Semi-automatic ground truth generation using unsupervised clustering and limited manual labeling: Application to handwritten character recognition

    PubMed Central

    Vajda, Szilárd; Rangoni, Yves; Cecotti, Hubert

    2015-01-01

For training supervised classifiers to recognize different patterns, large data collections with accurate labels are necessary. In this paper, we propose a generic, semi-automatic labeling technique for large handwritten character collections. In order to speed up the creation of a large-scale ground truth, the method combines unsupervised clustering and minimal expert knowledge. To exploit the potential discriminant complementarities across features, each character is projected into five different feature spaces. After clustering the images in each feature space, the human expert labels the cluster centers. Each data point inherits the label of its cluster's center. A majority (or unanimity) vote decides the label of each character image. The amount of human involvement (labeling) is strictly controlled by the number of clusters produced by the chosen clustering approach. To test the efficiency of the proposed approach, we compared and evaluated three state-of-the-art clustering methods (k-means, self-organizing maps, and growing neural gas) on the MNIST digit data set and a Lampung Indonesian character data set, respectively. Considering a k-nn classifier, we show that manually labeling only 1.3% (MNIST) and 3.2% (Lampung) of the training data provides the same range of performance as a completely labeled data set would. PMID:25870463
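The label-propagation-plus-voting scheme described above can be sketched in a few lines. This is a toy: 1-D nearest-center assignment stands in for clustering in each feature space, and the sample values and labels are made up; real use would run k-means (or SOM/GNG) per feature space.

```python
# Each sample inherits the label of its nearest cluster center, per
# feature space; a majority vote then fuses the per-space labels.
from collections import Counter

def propagate(samples, centers, center_labels):
    """Expert labels only the centers; every sample inherits its center's label."""
    out = []
    for x in samples:
        k = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
        out.append(center_labels[k])
    return out

def majority_vote(per_space_labels):
    """One label list per feature space in, one fused label per sample out."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*per_space_labels)]

samples = [0.1, 0.9, 2.1, 1.1]
space1 = propagate(samples, centers=[0.0, 1.0, 2.0], center_labels=["a", "b", "c"])
space2 = propagate(samples, centers=[0.2, 1.2, 2.2], center_labels=["a", "b", "c"])
print(majority_vote([space1, space2]))
```

The expert effort here is three labels per feature space, regardless of how many samples are propagated, which is exactly why the paper's manual-labeling fraction stays so small.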

  14. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
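The quantity SURE brackets is a death-state probability. The sketch below is not SURE's bounding algorithm: it just computes that probability exactly for a toy discrete-time Markov model (states and rates are invented), to make the target quantity concrete.

```python
# Probability of having absorbed into the `death` state after `steps`
# transitions of a discrete-time Markov chain (row-stochastic matrix).
def death_state_probability(transition, start, death, steps):
    n = len(transition)
    p = [0.0] * n
    p[start] = 1.0
    for _ in range(steps):
        p = [sum(p[i] * transition[i][j] for i in range(n)) for j in range(n)]
    return p[death]

# states: 0 = all good, 1 = one unit failed (degraded), 2 = system failure
T = [[0.990, 0.009, 0.001],
     [0.000, 0.950, 0.050],
     [0.000, 0.000, 1.000]]
print(death_state_probability(T, start=0, death=2, steps=100))
```

For semi-Markov models the holding-time distributions make this exact computation intractable in general, which is why SURE computes upper and lower bounds instead.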

  15. The NLM Indexing Initiative's Medical Text Indexer.

    PubMed

    Aronson, Alan R; Mork, James G; Gay, Clifford W; Humphrey, Susanne M; Rogers, Willie J

    2004-01-01

The Medical Text Indexer (MTI) is a program for producing MeSH indexing recommendations. It is the major product of NLM's Indexing Initiative and has been used in both semi-automated and fully automated indexing environments at the Library since mid-2002. We report here on an experiment conducted with MEDLINE indexers to evaluate MTI's performance and to generate ideas for its improvement as a tool for user-assisted indexing. We also discuss some filtering techniques developed to improve MTI's accuracy, primarily for use in automatically producing the indexing for several abstract collections.

  16. Comparison of two intraosseous infusion systems for adult emergency medical use.

    PubMed

    Brenner, Thorsten; Bernhard, Michael; Helm, Matthias; Doll, Sara; Völkl, Alfred; Ganion, Nicole; Friedmann, Claudia; Sikinger, Marcus; Knapp, Jürgen; Martin, Eike; Gries, André

    2008-09-01

The current guidelines of the European Resuscitation Council (ERC) stipulate that an intraosseous access should be placed if establishing a peripheral venous access for cardiopulmonary resuscitation (CPR) would involve delays. The aim of this study was therefore to compare a manual intraosseous infusion technique (MAN-IO) and a semi-automatic intraosseous infusion system (EZ-IO) using adult human cadavers as a model. After receiving verbal instruction and giving their written informed consent, the participants of the study were randomized into two groups (group I: MAN-IO; group II: EZ-IO). In addition to the demographic data, the following were evaluated: (1) number of attempts required to successfully place the infusion, (2) insertion time, (3) occurrence of technical complications and (4) user friendliness. Evaluation protocols from 84 study participants could be evaluated (MAN-IO: n=39 vs. EZ-IO: n=45). No significant differences were seen in the study participants' characteristics. Insertion times (mean +/- SD) of the respective successful attempts were comparable (MAN-IO: 33+/-28 s vs. EZ-IO: 32+/-11 s). When using the EZ-IO, the access was successfully established significantly more often on the first attempt (MAN-IO: 79.5% vs. EZ-IO: 97.8%; p<0.01). The EZ-IO also had advantages in terms of technical complications (MAN-IO: 15.4% vs. EZ-IO: 0.0%; p<0.01) and user friendliness (school grading system: MAN-IO: 1.9+/-0.7 vs. EZ-IO: 1.2+/-0.4; p<0.01). In an adult human cadaver model, the semi-automatic system proved more effective: the EZ-IO yielded more first-attempt successes, was associated with fewer technical complications, and was more user-friendly.
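The first-attempt success rates above (79.5% vs. 97.8%) are the kind of comparison a two-proportion test handles. A hedged sketch, with a pooled z-test standing in for whatever exact test the authors used (the success counts below are back-computed from the reported percentages and group sizes):

```python
# Pooled two-proportion z-test for a difference in success rates.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (success_a / n_a - success_b / n_b) / se

# MAN-IO: 31/39 first-attempt successes vs. EZ-IO: 44/45
z = two_proportion_z(31, 39, 44, 45)
print(round(z, 2))
```

|z| > 2.58 corresponds to a two-sided p < 0.01, consistent with the significance level reported in the abstract.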

  17. LitPathExplorer: a confidence-based visual text analytics tool for exploring literature-enriched pathway models.

    PubMed

    Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia

    2018-04-15

    Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.

  18. Cross-terminology mapping challenges: a demonstration using medication terminological systems.

    PubMed

    Saitwal, Himali; Qing, David; Jones, Stephen; Bernstam, Elmer V; Chute, Christopher G; Johnson, Todd R

    2012-08-01

    Standardized terminological systems for biomedical information have provided considerable benefits to biomedical applications and research. However, practical use of this information often requires mapping across terminological systems-a complex and time-consuming process. This paper demonstrates the complexity and challenges of mapping across terminological systems in the context of medication information. It provides a review of medication terminological systems and their linkages, then describes a case study in which we mapped proprietary medication codes from an electronic health record to SNOMED CT and the UMLS Metathesaurus. The goal was to create a polyhierarchical classification system for querying an i2b2 clinical data warehouse. We found that three methods were required to accurately map the majority of actively prescribed medications. Only 62.5% of source medication codes could be mapped automatically. The remaining codes were mapped using a combination of semi-automated string comparison with expert selection, and a completely manual approach. Compound drugs were especially difficult to map: only 7.5% could be mapped using the automatic method. General challenges to mapping across terminological systems include (1) the availability of up-to-date information to assess the suitability of a given terminological system for a particular use case, and to assess the quality and completeness of cross-terminology links; (2) the difficulty of correctly using complex, rapidly evolving, modern terminologies; (3) the time and effort required to complete and evaluate the mapping; (4) the need to address differences in granularity between the source and target terminologies; and (5) the need to continuously update the mapping as terminological systems evolve. Copyright © 2012 Elsevier Inc. All rights reserved.
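The semi-automated string-comparison step described above can be sketched as ranking candidate target terms by fuzzy similarity and surfacing the top few for expert selection. A minimal sketch using the standard-library `difflib`; the drug strings are illustrative, not real SNOMED CT entries.

```python
# Rank candidate target-terminology strings by similarity to a source
# medication name, for expert review of the top matches.
import difflib

def candidate_matches(source_term, target_terms, top_n=3):
    scored = [(difflib.SequenceMatcher(None, source_term.lower(), t.lower()).ratio(), t)
              for t in target_terms]
    return [t for _, t in sorted(scored, reverse=True)[:top_n]]

targets = ["Acetaminophen 500 MG Oral Tablet",
           "Aspirin 325 MG Oral Tablet",
           "Amoxicillin 500 MG Capsule"]
print(candidate_matches("acetaminophen 500mg tab", targets))
```

This automates the easy ranking while keeping the expert in the loop for the final choice, which matches why the paper reports the method as semi-automated rather than automatic.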

  20. 20 CFR 220.133 - Skill requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... needs little or no judgment to do simple duties that can be learned on the job in a short period of time... claimant can usually learn to do the job in 30 days, and little job training and judgment are needed. The... machines which are automatic or operated by others); or (4) Machine tending. (c) Semi-skilled work. Semi...

  1. 20 CFR 220.133 - Skill requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... needs little or no judgment to do simple duties that can be learned on the job in a short period of time... claimant can usually learn to do the job in 30 days, and little job training and judgment are needed. The... machines which are automatic or operated by others); or (4) Machine tending. (c) Semi-skilled work. Semi...

  2. 20 CFR 220.133 - Skill requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... needs little or no judgment to do simple duties that can be learned on the job in a short period of time... claimant can usually learn to do the job in 30 days, and little job training and judgment are needed. The... machines which are automatic or operated by others); or (4) Machine tending. (c) Semi-skilled work. Semi...

  3. 20 CFR 220.133 - Skill requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... needs little or no judgment to do simple duties that can be learned on the job in a short period of time... claimant can usually learn to do the job in 30 days, and little job training and judgment are needed. The... machines which are automatic or operated by others); or (4) Machine tending. (c) Semi-skilled work. Semi...

  4. 20 CFR 220.133 - Skill requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... needs little or no judgment to do simple duties that can be learned on the job in a short period of time... claimant can usually learn to do the job in 30 days, and little job training and judgment are needed. The... machines which are automatic or operated by others); or (4) Machine tending. (c) Semi-skilled work. Semi...

  5. Analysis of contact zones from whole field isochromatics using reflection photoelasticity

    NASA Astrophysics Data System (ADS)

    Hariprasad, M. P.; Ramesh, K.

    2018-06-01

This paper discusses a method for evaluating unknown contact parameters by post-processing the whole-field fringe order data obtained from reflection photoelasticity in a nonlinear least squares sense. Recent developments in Twelve Fringe Photoelasticity (TFP) for fringe order evaluation from a single isochromatic image are utilized for the whole-field fringe order evaluation. One of the issues in using TFP for reflection photoelasticity is the smudging of isochromatic data at the contact zone. This leads to errors in identifying the origin of contact, which is successfully addressed by implementing a semi-automatic contact point refinement algorithm. The methodologies are first verified on benchmark problems and then demonstrated for two application problems: turbine blade and sheet pile contacting interfaces.
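The least-squares idea behind the parameter evaluation can be illustrated with the simplest possible case. Assuming a hypothetical linear model N_i = P * g(r_i) relating fringe order to a single unknown contact load P, the best-fit P has a closed form; real contact models are nonlinear in their parameters and would be solved iteratively instead.

```python
# Closed-form least-squares estimate of P for the linear model N = P * g,
# fitted over whole-field fringe order samples.
def fit_load(fringe_orders, geometry_terms):
    num = sum(n * g for n, g in zip(fringe_orders, geometry_terms))
    den = sum(g * g for g in geometry_terms)
    return num / den

g = [1.0, 0.5, 0.25, 0.125]        # model values at sampled field points
noisy = [2.02, 1.01, 0.49, 0.26]   # measured whole-field fringe orders
print(round(fit_load(noisy, g), 3))
```

Fitting over the whole field rather than a single fringe is what makes the estimate robust to local smudging near the contact zone.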

  6. A medical software system for volumetric analysis of cerebral pathologies in magnetic resonance imaging (MRI) data.

    PubMed

    Egger, Jan; Kappus, Christoph; Freisleben, Bernd; Nimsky, Christopher

    2012-08-01

In this contribution, a medical software system for volumetric analysis of different cerebral pathologies in magnetic resonance imaging (MRI) data is presented. The software system is based on a semi-automatic segmentation algorithm and helps to overcome the time-consuming process of volume determination during monitoring of a patient. After imaging, the parameter settings, including a seed point, are set up in the system and an automatic segmentation is performed by a novel graph-based approach. Manually reviewing the result leads to reseeding, adding seed points or an automatic surface mesh generation. The mesh is saved for monitoring the patient and for comparisons with follow-up scans. Based on the mesh, the system performs a voxelization and volume calculation, which supports diagnosis and therefore further treatment decisions. The overall system has been tested with different cerebral pathologies (glioblastoma multiforme, pituitary adenomas and cerebral aneurysms) and evaluated against manual expert segmentations using the Dice Similarity Coefficient (DSC). Additionally, intra-physician segmentations have been performed to provide a quality measure for the presented system.
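The final voxelization-and-volume step is conceptually simple: once the mesh is rasterized to a binary mask, the lesion volume is the foreground voxel count times the voxel size. A minimal sketch (the nested-list mask representation is for illustration; real pipelines use image arrays):

```python
# Volume in millilitres from a binary voxel mask and voxel edge lengths (mm).
def volume_ml(mask, voxel_mm):
    dx, dy, dz = voxel_mm
    count = sum(v for plane in mask for row in plane for v in row)
    return count * dx * dy * dz / 1000.0   # mm^3 -> mL

mask = [[[1, 1], [1, 0]],
        [[1, 0], [0, 0]]]                  # 4 foreground voxels
print(volume_ml(mask, (1.0, 1.0, 1.0)))
```

With 1 mm isotropic voxels, 4 foreground voxels correspond to 4 mm³, i.e. 0.004 mL; anisotropic voxel spacing is handled by the per-axis edge lengths.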

  7. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security, anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aiming at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. The 1D and 2D approaches cannot address the wide ranges of both types and complexities of bedforms. In contrast, this work attempts to follow a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples that are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  8. Assessing the feasibility, acceptability, and potential effectiveness of a behavioral-automaticity focused lifestyle intervention for African Americans with metabolic syndrome: The Pick two to Stick to protocol.

    PubMed

    Fritz, Heather; Brody, Aaron; Levy, Philip

    2017-09-01

Metabolic syndrome (MetS) significantly increases the risk of developing diabetes and cardiovascular disease. Being physically active and eating a healthy diet can reduce MetS risk factors. Too frequently, however, studies report that the effects of interventions targeting those factors are not maintained once interventions are withdrawn. A potential solution to the problem is targeting behavioral automaticity (habit development) to aid in the initiation and maintenance of health-behavior changes. The Pick two to Stick To (P2S2) program is an 8-week, theory-based hybrid (face-to-face/telecoaching) habit-focused lifestyle intervention designed to increase healthful physical activity and dietary behavioral automaticity. The purpose of this article is to describe the rationale and protocol for evaluating the P2S2 program's feasibility, acceptability and potential effectiveness. Using a prospective, non-comparative design, the P2S2 program will be implemented by trained occupational therapy 'coaches' to 40 African Americans aged 40 and above with MetS recruited from the emergency department. Semi-structured interviews with participants, bi-weekly research meetings with study staff, and observations of intervention delivery will provide data for a process evaluation. Estimates of effectiveness include weight, blood pressure, waist circumference, BMI, and behavioral automaticity measures that will be collected at baseline and week 20. The P2S2 program could facilitate the development of healthful dietary and physical activity habits in an underserved population. Whether interventions aimed at changing habits can feasibly influence this automaticity, particularly for high-risk, low-resource communities where other barriers exist, is not known. This pilot study, therefore, will fill an important gap, providing insight to inform subsequent trials.

  9. Tool Efficiency Analysis model research in SEMI industry

    NASA Astrophysics Data System (ADS)

    Lei, Ma; Nana, Zhang; Zhongqiu, Zhang

    2018-06-01

One of the key goals in the SEMI industry is to improve equipment throughput and maximize equipment production efficiency. This paper, based on SEMI standards for semiconductor equipment control, defines the transition rules between different tool states and presents a TEA system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified successfully; the parameter values used to measure equipment performance were obtained, along with recommendations for improvement.
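The finite-state-machine core of such a model can be sketched as follows. The state names loosely follow SEMI E10 state categories, but the transition rules, event log, and utilization formula here are assumptions for illustration, not the paper's TEA model.

```python
# Toy tool-state FSM: replay timestamped events, accumulate the time
# spent in each state, then derive a utilization metric.
ALLOWED = {
    ("idle", "run"): "productive",
    ("productive", "finish"): "idle",
    ("idle", "fault"): "unscheduled_down",
    ("productive", "fault"): "unscheduled_down",
    ("unscheduled_down", "repair"): "idle",
}

def replay(events, start="idle"):
    """Replay (timestamp, event) pairs; unknown transitions leave the state unchanged."""
    state, t_prev = start, events[0][0]
    totals = {}
    for t, ev in events:
        totals[state] = totals.get(state, 0) + (t - t_prev)
        state, t_prev = ALLOWED.get((state, ev), state), t
    return state, totals

state, totals = replay([(0, "run"), (50, "fault"), (60, "repair"), (80, "run")])
utilization = totals.get("productive", 0) / sum(totals.values())
print(state, round(utilization, 2))
```

Accumulated per-state times like these are exactly the raw material for throughput and efficiency metrics on fab tools.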

  10. Electron capture cross sections by O+ from atomic He

    NASA Astrophysics Data System (ADS)

    Joseph, Dwayne C.; Saha, Bidhan C.

    2009-11-01

The adiabatic representation is used in both the quantal and semi-classical molecular orbital close coupling (MOCC) methods to evaluate charge exchange cross sections. Our results show good agreement with experimental cross sections.

  11. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... characteristics of the clothes load placed in the clothes container, without allowing or requiring consumer... weight of the clothes load placed in the clothes container, without allowing or requiring consumer....4Clothes container means the compartment within the clothes washer that holds the clothes during the...

  12. Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.

    PubMed

    Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C

    2004-11-01

Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationship among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; then the contours of objects of interest were highlighted in a semi-automatic manner. These 2D images were then automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. These composite images are used for the object-rotation movie display. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contour-highlighting method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, data sharing with others is easy. 3D_Viewer is written in MS Visual Basic 6 and is obtainable from our laboratory on request.

  13. An Evaluation Method of Words Tendency Depending on Time-Series Variation and Its Improvements.

    ERIC Educational Resources Information Center

    Atlam, El-Sayed; Okada, Makoto; Shishibori, Masami; Aoe, Jun-ichi

    2002-01-01

    Discussion of word frequency and keywords in text focuses on a method to estimate automatically the stability classes that indicate a word's popularity with time-series variations based on the frequency change in past electronic text data. Compares the evaluation of decision tree stability class results with manual classification results.…

  14. Semi-Automatic Determination of Rockfall Trajectories

    PubMed Central

    Volkwein, Axel; Klette, Johannes

    2014-01-01

Determining rockfall trajectories in the field is essential for calibrating and validating rockfall simulation software. This contribution presents an in situ device and a complementary Local Positioning System (LPS) that allow the determination of parts of the trajectory. An assembly of sensors (herein called rockfall sensor) is installed in the falling block, recording the 3D accelerations and rotational velocities. The LPS automatically calculates the position of the block along the slope over time based on Wi-Fi signals emitted from the rockfall sensor. The velocity of the block over time is determined through post-processing. The setup of the rockfall sensor is presented, followed by proposed calibration and validation procedures. The performance of the LPS is evaluated by means of different experiments. The results allow for a quality analysis of both the obtained field data and the usability of the rockfall sensor for future applications in the field. PMID:25268916
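The velocity post-processing step can be sketched as differentiating the LPS position fixes. A minimal version using central finite differences; the sample times and 3D positions below are made up for illustration.

```python
# Central-difference speed estimates at the interior sample times,
# from timestamped 3D position fixes along the slope.
def speeds(times, positions):
    out = []
    for i in range(1, len(times) - 1):
        dx = positions[i + 1][0] - positions[i - 1][0]
        dy = positions[i + 1][1] - positions[i - 1][1]
        dz = positions[i + 1][2] - positions[i - 1][2]
        dt = times[i + 1] - times[i - 1]
        out.append((dx * dx + dy * dy + dz * dz) ** 0.5 / dt)
    return out

t = [0.0, 0.5, 1.0, 1.5]
p = [(0, 0, 10), (2, 0, 9), (4, 0, 7), (6, 0, 4)]  # block descending a slope
print([round(v, 2) for v in speeds(t, p)])
```

Central differences halve the noise sensitivity of one-sided differences at the cost of dropping the two endpoint samples; noisy real LPS fixes would additionally be smoothed before differentiation.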

  15. larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.

    PubMed

    Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit

    2018-01-01

    The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce methods for (1) generating a standard template of the larval central nervous system (CNS), and (2) spatially mapping expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.

  16. An FPGA-Based WASN for Remote Real-Time Monitoring of Endangered Species: A Case Study on the Birdsong Recognition of Botaurus stellaris.

    PubMed

    Hervás, Marcos; Alsina-Pagès, Rosa Ma; Alías, Francesc; Salvador, Martí

    2017-06-08

    Fast environmental variations due to climate change can cause mass decline or even extinction of species, with a dramatic impact on the future of biodiversity. During the last decade, different approaches have been proposed to track and monitor endangered species, generally based on costly semi-automatic systems that require human supervision, which limits coverage and observation time. However, the recent emergence of Wireless Acoustic Sensor Networks (WASN) has allowed non-intrusive remote monitoring of endangered species in real time through the automatic identification of the sounds they emit. In this work, an FPGA-based WASN centralized architecture is proposed and validated on a simulated operation environment. The feasibility of the architecture is evaluated in a case study designed to detect the threatened Botaurus stellaris among 19 other cohabiting bird species in the Parc Natural dels Aiguamolls de l'Empordà.

  17. Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets

    PubMed Central

    Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.

    2011-01-01

    Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227

  18. Automatic energy calibration algorithm for an RBS setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala

    2013-05-06

    This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters like the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample consists of a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks, and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, two years of data were analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
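The derivative-based edge location followed by a first-order polynomial fit can be sketched as below. Both the synthetic spectrum and the edge energies are illustrative placeholders, not actual Ti-Al-Ta data or the authors' code:

```python
import numpy as np

def find_trailing_edges(spectrum, n_edges=3, min_sep=20):
    """Locate the n_edges steepest falling edges (most negative first
    derivative) in a spectrum, keeping candidates at least min_sep
    channels apart."""
    deriv = np.diff(spectrum.astype(float))
    order = np.argsort(deriv)  # most negative derivative first
    edges = []
    for ch in order:
        if all(abs(ch - e) >= min_sep for e in edges):
            edges.append(int(ch))
        if len(edges) == n_edges:
            break
    return sorted(edges)

# Synthetic three-step spectrum with trailing edges near channels 120, 300, 480.
chans = np.arange(600)
spectrum = 1000.0 * ((chans < 120).astype(float) + (chans < 300) + (chans < 480))
edges = find_trailing_edges(spectrum)

# Illustrative backscatter edge energies (keV) for the three elements -- these
# are placeholders, not tabulated Al/Ti/Ta values.
edge_energies = np.array([1100.0, 1500.0, 1900.0])
gain, offset = np.polyfit(edges, edge_energies, 1)  # E = gain * channel + offset
```

The fitted `gain` and `offset` play the role of the electronic gain and offset in the abstract; the FWHM fit against a convolved theoretical spectrum is a separate step not shown here.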

  19. Addressing case specific biogas plant tasks: industry oriented methane yields derived from 5L Automatic Methane Potential Test Systems in batch or semi-continuous tests using realistic inocula, substrate particle sizes and organic loading.

    PubMed

    Kolbl, Sabina; Paloczi, Attila; Panjan, Jože; Stres, Blaž

    2014-02-01

    The primary aim of the study was to develop and validate an in-house upscale of the Automatic Methane Potential Test System II for studying real-time inocula and real-scale substrates in batch, codigestion and enzyme-enhanced hydrolysis experiments, in addition to semi-continuous operation of the developed equipment and experiments testing inoculum functional quality. The successful upscale to 5 L enabled comparison of different process configurations with shorter preparation times, acceptable accuracy and the high throughput needed for industrial decision making. The adoption of the same scales, equipment and methodologies in batch and semi-continuous tests mirroring those at full-scale biogas plants resulted in matching methane yields between the two laboratory tests and full scale, thus confirming the increased decision-making value of the approach for industrial operations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values, directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  2. Semi automatic indexing of PostScript files using Medical Text Indexer in medical education.

    PubMed

    Mollah, Shamim Ara; Cimino, Christopher

    2007-10-11

    At Albert Einstein College of Medicine a large part of the online lecture material consists of PostScript files. As the collection grows it becomes essential to create a digital library with easy access to relevant sections of the full-text-indexed lecture material; to create this index it is necessary to extract all the text from the document files that constitute the originals of the lectures. In this study we present a semi-automatic indexing method that uses a robust technique for extracting text from PostScript files and the National Library of Medicine's Medical Text Indexer (MTI) program for indexing the text. This model can be applied at other medical schools for indexing purposes.

  3. A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.

    PubMed

    Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo

    2010-01-01

    In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.

  4. Hand Gesture Based Wireless Robotic Arm Control for Agricultural Applications

    NASA Astrophysics Data System (ADS)

    Kannan Megalingam, Rajesh; Bandhyopadhyay, Shiva; Vamsy Vivek, Gedela; Juned Rahi, Muhammad

    2017-08-01

    One of the major challenges in agriculture is harvesting. It is very hard, and sometimes even unsafe, for workers to go to each plant and pluck fruits. Robotic systems are increasingly combined with new technologies to automate or semi-automate labour-intensive work, such as grape harvesting. In this work we propose a semi-automatic method to aid in harvesting fruits and hence increase productivity per man-hour. A robotic arm fixed to a rover roams in the orchard, and the user can control it remotely using a hand glove fitted with various sensors. These sensors can position the robotic arm remotely to harvest the fruits. In this paper we discuss the design of the hand glove fitted with various sensors, the design of the 4-DoF robotic arm, and the wireless control interface. In addition, the setup of the system and its testing and evaluation under lab conditions are also presented.

  5. Semi-Supervised Learning to Identify UMLS Semantic Relations.

    PubMed

    Luo, Yuan; Uzuner, Ozlem

    2014-01-01

    The UMLS Semantic Network is constructed by experts and requires periodic expert review to update. We propose and implement a semi-supervised approach for automatically identifying UMLS semantic relations from narrative text in PubMed. Our method analyzes biomedical narrative text to collect semantic entity pairs, and extracts multiple semantic, syntactic and orthographic features for the collected pairs. We experiment with seeded k-means clustering with various distance metrics. We create and annotate a ground-truth corpus according to the top two levels of the UMLS semantic relation hierarchy. We evaluate our system on this corpus and characterize the learning curves of different clustering configurations. Clustering with KL divergence consistently performs best on the held-out test data. With full seeding, we obtain macro-averaged F-measures above 70% for clustering the top-level UMLS relations (2-way), and above 50% for clustering the second-level relations (7-way).
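Seeded k-means, the clustering scheme at the core of this approach, initializes each centroid from the labeled seed points and then iterates the usual assign/update steps. The sketch below uses plain Euclidean distance for brevity (the study reports KL divergence performing best) and toy data, not the paper's feature vectors:

```python
import numpy as np

def seeded_kmeans(X, seed_labels, k, n_iter=20):
    """Seeded k-means: centroids are initialized from labeled seed points
    (seed_labels[i] >= 0; unlabeled points carry -1) and then refined with
    standard assign/update iterations."""
    X = np.asarray(X, dtype=float)
    centroids = np.vstack([X[seed_labels == c].mean(axis=0) for c in range(k)])
    for _ in range(n_iter):
        # distance of every point to every centroid, then nearest assignment
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(k):
            members = X[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return assign, centroids

# Two separated point groups, one labeled seed in each; -1 marks unlabeled.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.2],
              [5.0, 5.0], [5.2, 4.9], [4.9, 5.1]])
seeds = np.array([0, -1, -1, 1, -1, -1])
assign, cents = seeded_kmeans(X, seeds, k=2)
```

With full seeding, every cluster starts from a labeled example, which is what makes the cluster-to-relation mapping and the F-measure evaluation above possible.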

  6. A Hybrid Human-Computer Approach to the Extraction of Scientific Facts from the Literature.

    PubMed

    Tchoua, Roselyne B; Chard, Kyle; Audus, Debra; Qin, Jian; de Pablo, Juan; Foster, Ian

    2016-01-01

    A wealth of valuable data is locked within the millions of research articles published each year. Reading and extracting pertinent information from those articles has become an unmanageable task for scientists. This problem hinders scientific progress by making it hard to build on results buried in literature. Moreover, these data are loosely structured, encoded in manuscripts of various formats, embedded in different content types, and are, in general, not machine accessible. We present a hybrid human-computer solution for semi-automatically extracting scientific facts from literature. This solution combines an automated discovery, download, and extraction phase with a semi-expert crowd assembled from students to extract specific scientific facts. To evaluate our approach we apply it to a challenging molecular engineering scenario, extraction of a polymer property: the Flory-Huggins interaction parameter. We demonstrate useful contributions to a comprehensive database of polymer properties.

  7. Semi-Supervised Approach to Monitoring Clinical Depressive Symptoms in Social Media

    PubMed Central

    Yazdavar, Amir Hossein; Al-Olimat, Hussein S.; Ebrahimi, Monireh; Bajaj, Goonmeet; Banerjee, Tanvi; Thirunarayan, Krishnaprasad; Pathak, Jyotishman; Sheth, Amit

    2017-01-01

    With the rise of social media, millions of people are routinely expressing their moods, feelings, and daily struggles with mental health issues on social media platforms like Twitter. Unlike traditional observational cohort studies conducted through questionnaires and self-reported surveys, we explore the reliable detection of clinical depression from tweets obtained unobtrusively. Based on the analysis of tweets crawled from users with self-reported depressive symptoms in their Twitter profiles, we demonstrate the potential for detecting clinical depression symptoms which emulate the PHQ-9 questionnaire clinicians use today. Our study uses a semi-supervised statistical model to evaluate how the duration of these symptoms and their expression on Twitter (in terms of word usage patterns and topical preferences) align with the medical findings reported via the PHQ-9. Our proactive and automatic screening tool is able to identify clinical depressive symptoms with an accuracy of 68% and precision of 72%. PMID:29707701

  8. Assessment of Automatically Exported Clinical Data from a Hospital Information System for Clinical Research in Multiple Myeloma.

    PubMed

    Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin

    2016-01-01

    An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system, in particular in comparison with manual transcription, have not been reported extensively. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had an overall error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10E-3). For the automatic process, the overall error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research, provided that data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
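The two quality criteria, completeness and correctness, can be sketched as a field-by-field comparison of the captured EDC record against the HIS source of truth. Both the field names and values below are hypothetical examples, not data from the study:

```python
def data_quality(source, captured):
    """Completeness: fraction of source-of-truth fields present in the EDC
    capture. Correctness: fraction of captured fields whose value matches
    the source. Field names and values are hypothetical."""
    present = {k: v for k, v in captured.items() if v is not None}
    completeness = sum(1 for k in source if k in present) / len(source)
    matched = sum(1 for k, v in present.items() if source.get(k) == v)
    correctness = matched / len(present) if present else 1.0
    return completeness, correctness

# One field missing entirely, one transcribed with a wrong value.
source = {"hemoglobin": 11.2, "creatinine": 1.4, "calcium": 9.1, "ldh": 310}
captured = {"hemoglobin": 11.2, "creatinine": 1.5, "calcium": 9.1}
completeness, correctness = data_quality(source, captured)
```

Running such a check over every exported record yields the error rates that the study compares between the manual and ETL pathways.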

  9. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  10. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... clothes washer design can achieve spin speeds in the 500g range. When this matrix is repeated 3 times, a...) or an equivalent extractor with same basket design (i.e. diameter, length, volume, and hole... materially inaccurate comparative data, field testing may be appropriate for establishing an acceptable test...

  11. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... The 500g requirement will only be used if a clothes washer design can achieve spin speeds in the 500g... Products, P.O. Box 5127, Toledo, OH 43611) or an equivalent extractor with same basket design (i.e... provide materially inaccurate comparative data, field testing may be appropriate for establishing an...

  12. Automated carotid artery intima layer regional segmentation.

    PubMed

    Meiburger, Kristen M; Molinari, Filippo; Acharya, U Rajendra; Saba, Luca; Rodrigues, Paulo; Liboni, William; Nicolaides, Andrew; Suri, Jasjit S

    2011-07-07

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid ultrasound artery based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with the manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed an IMT bias of -0.035 ± 0.186 mm while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground-truth IMT, but showed uniform behavior over the entire database. CAILRS outperformed all four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%), but higher than that of the other automated techniques.

  14. Book4All: A Tool to Make an e-Book More Accessible to Students with Vision/Visual-Impairments

    NASA Astrophysics Data System (ADS)

    Calabrò, Antonello; Contini, Elia; Leporini, Barbara

    Empowering people who are blind or otherwise visually impaired includes ensuring that products and electronic materials incorporate a broad range of accessibility features and work well with screen readers and other assistive technology devices. This is particularly important for students with vision impairments. Unfortunately, authors and publishers often do not follow specific accessibility criteria when preparing their contents. Consequently, e-books can be inadequate for blind and low-vision users, especially students. In this paper we describe a semi-automatic tool developed to support operators who adapt e-documents for visually impaired students. The proposed tool can be used to convert a PDF e-book into a more accessible and usable format readable on desktop computers or mobile devices.

  15. Visual and semi-automatic non-invasive detection of interictal fast ripples: A potential biomarker of epilepsy in children with tuberous sclerosis complex.

    PubMed

    Bernardo, Danilo; Nariai, Hiroki; Hussain, Shaun A; Sankar, Raman; Salamon, Noriko; Krueger, Darcy A; Sahin, Mustafa; Northrup, Hope; Bebin, E Martina; Wu, Joyce Y

    2018-04-03

    We aim to establish that interictal fast ripples (FR; 250-500 Hz) are detectable on scalp EEG, and to investigate their association with epilepsy. Scalp EEG recordings of a subset of children with tuberous sclerosis complex (TSC)-associated epilepsy from two large multicenter observational TSC studies were analyzed and compared to control children without epilepsy or any other brain-based diagnoses. FR were identified by human visual review and compared with semi-automated review utilizing a deep-learning-based FR detector. Seven out of 7 children with TSC-associated epilepsy had scalp FR, compared to 0 out of 4 children in the control group (p = 0.003). The automatic detector has a sensitivity of 98% and a false-positive rate averaging 11.2 false positives per minute. Non-invasive detection of interictal scalp FR was feasible by both visual and semi-automatic detection. Interictal scalp FR occurred exclusively in children with TSC-associated epilepsy and were absent in controls without epilepsy. The proposed detector achieves high sensitivity of FR detection; however, expert review of the results to reduce false positives is advised. Interictal FR are detectable on scalp EEG and may potentially serve as a biomarker of epilepsy in children with TSC. Copyright © 2018 International Federation of Clinical Neurophysiology. All rights reserved.

  16. Semi-automatic object geometry estimation for image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-01-01

    Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques in image processing and computer vision such as the Hough transform, the bilateral filter, and connected component analysis are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper [1]. Experimental results including personalized images for both scenarios are shown, which demonstrate the effectiveness of our algorithms.

  17. Automatic Thesaurus Generation for an Electronic Community System.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; And Others

    1995-01-01

    This research reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used include term filtering, automatic indexing, and cluster analysis. The Worm Community System, used by molecular biologists studying the nematode worm C. elegans, was used as the testbed for this research.…
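A toy version of such automatic thesaurus generation (indexing terms, then ranking related terms by the similarity of their document profiles) might look like the following. This is an illustrative cosine-similarity sketch, not the cited algorithm, which additionally applies term filtering and cluster analysis:

```python
import numpy as np

def cooccurrence_thesaurus(docs, top_n=2):
    """Toy automatic thesaurus: index every term, build a term-document
    count matrix, and rank related terms by cosine similarity of their
    document profiles."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            M[idx[w], j] += 1.0
    unit = M / np.linalg.norm(M, axis=1, keepdims=True)  # unit-length rows
    sims = unit @ unit.T                                 # cosine similarities
    return {w: [vocab[i] for i in np.argsort(-sims[idx[w]])
                if vocab[i] != w][:top_n]
            for w in vocab}

docs = ["worm gene expression", "worm gene mutation",
        "microscope lens optics", "microscope lens stage"]
th = cooccurrence_thesaurus(docs)
```

Terms that always co-occur (e.g. "worm" and "gene" in the toy corpus) end up as each other's nearest thesaurus entries.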

  18. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. 
The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.

  19. Semi-automatic geographic atrophy segmentation for SD-OCT images.

    PubMed

    Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L

    2013-01-01

    Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes of eight patients with GA and 56 SD-OCT scans from 56 eyes of 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus autofluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.
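The reported overlap ratios can be computed from two binary segmentation masks. The abstract does not state whether the Jaccard or Dice form was used, so this sketch assumes the Jaccard ratio |A ∩ B| / |A ∪ B| on invented masks:

```python
import numpy as np

def overlap_ratio(mask_a, mask_b):
    """Jaccard overlap ratio between two binary segmentation masks:
    intersection area divided by union area."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two 4x4 squares offset by one pixel: intersection 9 px, union 23 px.
auto = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True
manual = np.zeros((8, 8), dtype=bool)
manual[3:7, 3:7] = True
ratio = overlap_ratio(auto, manual)
```

Averaging this ratio over all scan pairs gives the kind of mean overlap percentage quoted in the abstract.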

  20. Semi-automatic measuring of arteriovenous relation as a possible silent brain infarction risk index in hypertensive patients.

    PubMed

    Vázquez Dorrego, X M; Manresa Domínguez, J M; Heras Tebar, A; Forés, R; Girona Marcé, A; Alzamora Sas, M T; Delgado Martínez, P; Riba-Llena, I; Ugarte Anduaga, J; Beristain Iraola, A; Barandiaran Martirena, I; Ruiz Bilbao, S M; Torán Monserrat, P

    2016-11-01

To evaluate the usefulness of a semi-automatic system for measuring the arteriovenous ratio (RAV) from retinal images of hypertensive patients in assessing their cardiovascular risk and detecting silent brain infarction (ICS). Semi-automatic measurements of arterial and venous widths were performed using Imedos software, together with conventional fundus examination, on the retinal images of the 976 hypertensive patients of the Investigating Silent Strokes in Hypertensives: a magnetic resonance imaging study (ISSYS) cohort. All patients underwent cranial magnetic resonance imaging (RMN) to assess the presence or absence of silent brain infarcts. Retinal images of 768 patients were studied. Among the clinical findings observed, an association with ICS was detected only in patients with microaneurysms (OR 2.50; 95% CI: 1.05-5.98) or an altered RAV (<0.666) (OR: 4.22; 95% CI: 2.56-6.96). In multivariate logistic regression analysis adjusted by age and sex, only an altered RAV remained a risk factor (OR: 3.70; 95% CI: 2.21-6.18). The results show that semi-automatic analysis of the retinal vasculature from retinal images has the potential to be considered an important vascular risk factor in the hypertensive population. Copyright © 2016 Sociedad Española de Oftalmología. Publicado por Elsevier España, S.L.U. All rights reserved.
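The odds ratios above come from the study's regression models; as a back-of-the-envelope check on such figures, the textbook Woolf method computes an odds ratio and its 95% confidence interval from a 2x2 table. This is a generic approximation, not the paper's method, and the cell counts below are invented for illustration.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table.

    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) under the Woolf approximation.
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```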

  1. Density estimation in aerial images of large crowds for automatic people counting

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Metzler, Juergen

    2013-05-01

Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at a time, such as pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system able to count crowds of hundreds or thousands of people based on aerial images of demonstrations or similar events. That system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new, automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. To automate the people counting system, we use crowd density estimation. The determination of crowd density is based on several features, such as edge intensity and spatial frequency, which indicate the density and discriminate between a crowd and other image regions such as buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds containing up to 12,000 people, we measure the performance gain of our new system. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.
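Edge intensity as a crowd-density cue can be sketched as follows. The feature (mean absolute gradient over an image tile) and the decision threshold are hypothetical stand-ins; the paper combines several features, not a single edge measure.

```python
def edge_intensity(img):
    """Mean absolute gradient magnitude of a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]  # horizontal difference
            gy = img[y + 1][x] - img[y][x]  # vertical difference
            total += abs(gx) + abs(gy)
            n += 1
    return total / n

def looks_like_crowd(tile, threshold=20.0):
    # Hypothetical threshold: dense crowds produce many strong local edges,
    # while smooth regions (roads, fields) do not.
    return edge_intensity(tile) > threshold
```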

  2. Automatic aortic root segmentation in CTA whole-body dataset

    NASA Astrophysics Data System (ADS)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
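The atlas-selection step picks the most similar of 8 atlases by image similarity, but the record does not name the measure. Normalized cross-correlation is one common choice, sketched here on flattened intensity vectors; the measure and function names are assumptions, not the authors' method.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length intensity vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def pick_atlas(target, atlases):
    """Return the index of the atlas most similar to the target image."""
    return max(range(len(atlases)), key=lambda i: ncc(target, atlases[i]))
```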

  3. Reducing Fuel Consumption through Semi-Automated Platooning with Class 8 Tractor Trailer Combinations (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lammert, M.; Gonder, J.

This poster describes the National Renewable Energy Laboratory's evaluation of the fuel savings potential of semi-automated truck platooning. Platooning reduces aerodynamic drag by grouping vehicles together and decreasing the distance between them through electronic coupling, which allows multiple vehicles to accelerate or brake simultaneously. The NREL study addressed the need for data on American-style line-haul sleeper cabs with modern aerodynamics, over a range of trucking speeds common in the United States.

  4. Primary care physicians' use of an electronic medical record system: a cognitive task analysis.

    PubMed

    Shachak, Aviv; Hadas-Dayagi, Michal; Ziv, Amitai; Reis, Shmuel

    2009-03-01

    To describe physicians' patterns of using an Electronic Medical Record (EMR) system; to reveal the underlying cognitive elements involved in EMR use, possible resulting errors, and influences on patient-doctor communication; to gain insight into the role of expertise in incorporating EMRs into clinical practice in general and communicative behavior in particular. Cognitive task analysis using semi-structured interviews and field observations. Twenty-five primary care physicians from the northern district of the largest health maintenance organization (HMO) in Israel. The comprehensiveness, organization, and readability of data in the EMR system reduced physicians' need to recall information from memory and the difficulty of reading handwriting. Physicians perceived EMR use as reducing the cognitive load associated with clinical tasks. Automaticity of EMR use contributed to efficiency, but sometimes resulted in errors, such as the selection of incorrect medication or the input of data into the wrong patient's chart. EMR use interfered with patient-doctor communication. The main strategy for overcoming this problem involved separating EMR use from time spent communicating with patients. Computer mastery and enhanced physicians' communication skills also helped. There is a fine balance between the benefits and risks of EMR use. Automaticity, especially in combination with interruptions, emerged as the main cognitive factor contributing to errors. EMR use had a negative influence on communication, a problem that can be partially addressed by improving the spatial organization of physicians' offices and by enhancing physicians' computer and communication skills.

  5. Primary Care Physicians’ Use of an Electronic Medical Record System: A Cognitive Task Analysis

    PubMed Central

    Hadas-Dayagi, Michal; Ziv, Amitai; Reis, Shmuel

    2009-01-01

    OBJECTIVE To describe physicians’ patterns of using an Electronic Medical Record (EMR) system; to reveal the underlying cognitive elements involved in EMR use, possible resulting errors, and influences on patient–doctor communication; to gain insight into the role of expertise in incorporating EMRs into clinical practice in general and communicative behavior in particular. DESIGN Cognitive task analysis using semi-structured interviews and field observations. PARTICIPANTS Twenty-five primary care physicians from the northern district of the largest health maintenance organization (HMO) in Israel. RESULTS The comprehensiveness, organization, and readability of data in the EMR system reduced physicians’ need to recall information from memory and the difficulty of reading handwriting. Physicians perceived EMR use as reducing the cognitive load associated with clinical tasks. Automaticity of EMR use contributed to efficiency, but sometimes resulted in errors, such as the selection of incorrect medication or the input of data into the wrong patient’s chart. EMR use interfered with patient–doctor communication. The main strategy for overcoming this problem involved separating EMR use from time spent communicating with patients. Computer mastery and enhanced physicians’ communication skills also helped. CONCLUSIONS There is a fine balance between the benefits and risks of EMR use. Automaticity, especially in combination with interruptions, emerged as the main cognitive factor contributing to errors. EMR use had a negative influence on communication, a problem that can be partially addressed by improving the spatial organization of physicians’ offices and by enhancing physicians’ computer and communication skills. PMID:19130148

  6. Electronic circuit provides automatic level control for liquid nitrogen traps

    NASA Technical Reports Server (NTRS)

    Turvy, R. R.

    1968-01-01

An electronic circuit, based on the principle that thermistor resistance increases as temperature decreases, provides automatic level control for liquid nitrogen cold traps. The electronically controlled apparatus is practically service-free, requiring only occasional reliability checks.

  7. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... The 500g requirement will only be used if a clothes washer design can achieve spin speeds in the 500g... Products, P.O. Box 5127, Toledo, OH 43611) or an equivalent extractor with same basket design (i.e... characteristics as to provide materially inaccurate comparative data, field testing may be appropriate for...

  8. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... The 500g requirement will only be used if a clothes washer design can achieve spin speeds in the 500g... Products, P.O. Box 5127, Toledo, OH 43611) or an equivalent extractor with same basket design (i.e... characteristics as to provide materially inaccurate comparative data, field testing may be appropriate for...

  9. Prototype Technology for Monitoring Volatile Organics. Volume 1.

    DTIC Science & Technology

    1988-03-01

117, pp. 285-294. Grote, J.O. and Westendorf, R.G., "An Automatic Purge and Trap Concentrator," American Laboratory, December 1979. Khromchenko, Y.L...Environmental Monitoring and Support Laboratory, Office of Research and Development, Cincinnati, OH. Westendorf, R.G., "Closed-loop Stripping Analysis...Technique and Applications," American Laboratory, December 1982. Westendorf, R.G., "Development Application of A Semi-Automatic Purge and Trap Concentrator

  10. Variably Transmittive, Electronically-Controlled Eyewear

    NASA Technical Reports Server (NTRS)

    Chapman, John J. (Inventor); Glaab, Louis J. (Inventor); Schott, Timothy D. (Inventor); Howell, Charles T. (Inventor); Fleck, Vincent J. (Inventor)

    2013-01-01

    A system and method for flight training and evaluation of pilots comprises electronically activated vision restriction glasses that detect the pilot's head position and automatically darken and restrict the pilot's ability to see through the front and side windscreens when the pilot-in-training attempts to see out the windscreen. Thus, the pilot-in-training sees only within the aircraft cockpit, forcing him or her to fly by instruments in the most restricted operational mode.

  11. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
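As background for proximal solvers such as SDMM and PPXA, the proximal operator of the l1 regularizer (the usual sparsity prior mentioned above) has the closed-form soft-thresholding solution prox(v)_i = sign(v_i) * max(|v_i| - lam, 0). A minimal sketch, not taken from the GASPACHO implementation:

```python
def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1: elementwise soft-thresholding."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]
```

Entries with magnitude below `lam` are driven exactly to zero, which is why this operator enforces sparsity inside iterative proximal algorithms.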

  12. QED contributions to electron g-2

    NASA Astrophysics Data System (ADS)

    Laporta, Stefano

    2018-05-01

In this paper I briefly describe the results of the numerical evaluation of the mass-independent 4-loop contribution to the electron g-2 in QED with 1100 digits of precision. In particular I also show the semi-analytical fit to the numerical value, which contains harmonic polylogarithms of e^(iπ/3), e^(2iπ/3) and e^(iπ/2), one-dimensional integrals of products of complete elliptic integrals, and six finite parts of master integrals, evaluated up to 4800 digits. I also give some information about the methods and the program used.

  13. Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.

    PubMed

    Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin

    2017-05-18

    Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents since keyword searches applied on complete unstructured documents result in many false positive retrieval results. We are concentrating on the processing of pathology reports as an example for unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables an improved access and retrieval of relevant data. The data is expected to be stored in a standardized, structured way to make it accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. For enabling a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR allows saving time in the modelling process since many archetypes can be reused. The resulting standardized, structured PEHRs allow accessing relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for a better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focussing the retrieval to particular sections has the potential of saving retrieval time and improving the accuracy of the retrieval.
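A keyword-based section-boundary detector of the kind described can be sketched as follows. The header vocabulary is hypothetical, since the paper's keyword list is not given; real pathology reports use site-specific headers.

```python
# Hypothetical header keywords; a real deployment would use the site's own vocabulary.
HEADERS = ("CLINICAL HISTORY", "GROSS DESCRIPTION", "MICROSCOPIC", "DIAGNOSIS")

def split_sections(report):
    """Split a free-text report into {header: text} using keyword-matched lines."""
    sections, current = {}, None
    for line in report.splitlines():
        key = line.strip().rstrip(":").upper()
        if key in HEADERS:
            current = key          # a new section starts here
            sections[current] = []
        elif current:
            sections[current].append(line.strip())
    return {k: " ".join(v).strip() for k, v in sections.items()}
```

Each resulting section could then be mapped onto the corresponding openEHR archetype field, enabling the section-sensitive XQuery retrieval the paper describes.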

  14. Tight-binding modeling and low-energy behavior of the semi-Dirac point.

    PubMed

    Banerjee, S; Singh, R R P; Pardo, V; Pickett, W E

    2009-07-03

    We develop a tight-binding model description of semi-Dirac electronic spectra, with highly anisotropic dispersion around point Fermi surfaces, recently discovered in electronic structure calculations of VO2-TiO2 nanoheterostructures. We contrast their spectral properties with the well-known Dirac points on the honeycomb lattice relevant to graphene layers and the spectra of bands touching each other in zero-gap semiconductors. We also consider the lowest order dispersion around one of the semi-Dirac points and calculate the resulting electronic energy levels in an external magnetic field. In spite of apparently similar electronic structures, Dirac and semi-Dirac systems support diverse low-energy physics.

  15. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.

  16. The ship-borne infrared searching and tracking system based on the inertial platform

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhang, Haibo

    2011-08-01

In modern electronic warfare, when the radar system is jammed or operating in a semi-silent state, guidance precision degrades badly, so equipment that depends on electronic guidance cannot strike incoming targets accurately. Electro-optical devices are needed to compensate for this shortcoming. However, when interference occurs during radar guidance, and especially when the electro-optical equipment is disturbed by roll, pitch and yaw rotation, the target can remain outside the field of view of the electro-optical devices for a long time, so the infrared electro-optical equipment cannot exploit its advantages and the weapon-control system cannot "reverse-guide" missiles against the incoming targets. A conventional ship-borne infrared system is therefore unable to track incoming targets quickly, and its electro-optical countermeasure capability declines heavily. Here we provide a new control algorithm for semi-automatic searching and infrared tracking based on an inertial navigation platform, which is performing well in our XX infrared electro-optical searching and tracking system. The algorithm has two main steps. First, manual mode switches to automatic search when the guidance deviation exceeds the current field of view during radar guidance. Second, when the contrast of the target in the search scene satisfies the image-capture threshold, the target speed computed with a constant-acceleration (CA) model least-squares method is fed back to the speed loop, and this is combined with the infrared information to close the tracking loop of the infrared electro-optical system. The algorithm was verified experimentally: with it, the target capture distance is 22.3 kilometers under large guidance deviation, whereas without it the capture distance declines by 12 kilometers. By combining semi-automatic searching with reliable capture and tracking, the algorithm improves infrared electro-optical countermeasure capability and shortens the target capture time when the radar guidance deviation is large.

  17. Human Factors and Safety Evaluation of the Automatic Test and Repair System (AN/MSM-105(V)1)

    DTIC Science & Technology

    1984-07-01

box and the main breaker box in both the ETF and ERF did not conform to military standards in that they consisted of black letters on a gold ...transportable test and repair system for electronic equipment that consists of an electronic test facility (ETF) and an electronic repair facility (ERF...personal gear in both the ETF and the ERF, and in the ETF there was not nearly enough room for the storage of the interconnect devices, tapes and manuals

  18. Pancreas and cyst segmentation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
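Region growing, one half of the paper's random-walker/region-growing combination, can be sketched as a breadth-first flood fill with an intensity tolerance around the seed value. This is a generic sketch, not the authors' exact formulation; the seed and tolerance are user-supplied, matching the semi-automatic steering described.

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from seed, adding 4-connected pixels whose intensity
    is within tol of the seed intensity. img is a 2D list; seed is (row, col)."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    seen = {seed}
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(img[ny][nx] - base) <= tol):
                seen.add((ny, nx))
                q.append((ny, nx))
    return seen
```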

  19. Controlled cooling of an electronic system for reduced energy consumption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.

Energy efficient control of a cooling system cooling an electronic system is provided. The control includes automatically determining at least one adjusted control setting for at least one adjustable cooling component of a cooling system cooling the electronic system. The automatically determining is based, at least in part, on power being consumed by the cooling system and temperature of a heat sink to which heat extracted by the cooling system is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on one or more experimentally obtained models relating the targeted temperature and power consumption of the one or more adjustable cooling components of the cooling system.

  20. Clinical Assistant Diagnosis for Electronic Medical Record Based on Convolutional Neural Network.

    PubMed

    Yang, Zhongliang; Huang, Yongfeng; Jiang, Yiran; Sun, Yuxi; Zhang, Yu-Jin; Luo, Pengcheng

    2018-04-20

Automatically extracting useful information from electronic medical records and conducting disease diagnosis is a promising task for both clinical decision support (CDS) and natural language processing (NLP). Most existing systems are based on artificially constructed knowledge bases and perform auxiliary diagnosis by rule matching. In this study, we present a clinical intelligent decision approach based on Convolutional Neural Networks (CNN), which can automatically extract high-level semantic information from electronic medical records and then perform automatic diagnosis without the manual construction of rules or knowledge bases. We used a collection of 18,590 real-world clinical electronic medical records to train and test the proposed model. Experimental results show that the proposed model achieves 98.67% accuracy and 96.02% recall, which strongly supports that using a convolutional neural network to automatically learn high-level semantic features of electronic medical records and then conduct assisted diagnosis is feasible and effective.

  1. Controlled cooling of an electronic system based on projected conditions

    DOEpatents

    David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.

    2016-05-17

    Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.

  2. Controlled cooling of an electronic system based on projected conditions

    DOEpatents

    David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.

    2015-08-18

    Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.

  3. Electron-lattice coupling after high-energy deposition in aluminum

    NASA Astrophysics Data System (ADS)

    Gorbunov, S. A.; Medvedev, N. A.; Terekhin, P. N.; Volkov, A. E.

    2015-07-01

This paper presents an analysis of the parameters of the highly-excited electron subsystem of aluminum, appearing e.g. after a swift heavy ion impact or laser pulse irradiation. For elevated electron temperatures, the electron heat capacity and the screening parameter are evaluated. The electron-phonon approximation of electron-lattice coupling is compared with its precise formulation based on the dynamic structure factor (DSF) formalism. The DSF formalism takes into account the collective response of the lattice to excitation, including all possible limit cases of this response. In particular, it automatically recovers electron-phonon coupling in the low-temperature limit, while switching to the plasma limit for high electron temperatures. Aluminum is chosen as a good model system for illustrating the presented methodology.

  4. A semi-automatic parachute separation system for balloon payloads

    NASA Astrophysics Data System (ADS)

    Farman, M.

At the National Scientific Balloon Facility (NSBF), when operating stratospheric balloons with scientific payloads, the current practice for separating the payload from the parachute after descent requires sending commands, over a UHF uplink, from the chase airplane or the ground control site. While this generally works well, there have been occasions when, due to shadowing of the receive antenna or an unfavorable aircraft attitude, the command has not been received and the parachute has failed to separate. In these circumstances the payload may be dragged for long distances before being recovered, with consequent danger of damage to expensive and sometimes irreplaceable scientific instrumentation. The NSBF has therefore proposed a system which would automatically separate the parachute without the need for commanding after touchdown. Such a system is now under development. Mechanical automatic release systems have been tried in the past with only limited success. The current design uses an electronic system based on a tilt sensor which measures the angle that the suspension train subtends relative to the gravity vector. With the suspension vertical, there is minimum output from the sensor. When the payload touches down, the parachute tilts, and in any tilt direction the sensor output increases until a predetermined threshold is reached. At this point, a threshold detector is activated which fires the pyrotechnic cutter to release the parachute. The threshold level is adjustable prior to the flight to enable the optimum tilt angle to be determined from flight experience. The system will not operate until armed by command. This command is sent during the descent, when communication with the on-board systems is still normally reliable. A safety interlock is included to inhibit arming if the threshold is already exceeded at the time the command is sent. While this is intended to be the primary system, the manual option would be retained as a back-up.
A market survey was carried out to choose a suitable tilt sensor, and three prototype systems were built for evaluation. These were installed in standard NSBF terminate units and flown on routine operational flights throughout 2001 with the automatic pyrotechnic cutter active but off-line. A data logger was also installed to record system parameters during the descent phase. The results of these flights validated the system concept, and it was found that the telemetry threshold monitor was also an asset to the operator in deciding when it was safe to send a manual parachute release command. However, the accumulated test experience indicated that the originally-chosen tilt sensor, which uses a liquid electrolyte and requires an in-flight microprocessor, was not sufficiently rugged or reliable. A solid-state accelerometer, with encapsulated analog signal processing, was therefore selected as a replacement and the threshold electronics redesigned to match this sensor. This system is currently being evaluated on NSBF operational flights during 2002. On completion of this phase, NASA will review the results and a decision will be made whether to use this design as the primary operational system on future flights. This paper discusses the requirements for such a system and describes the current design in detail. It reports on the evaluation flights of 2001 and 2002 and their results to date.
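The arm-then-fire logic described above, including the safety interlock that refuses to arm when the tilt already exceeds the threshold, can be sketched as a small state machine. The 25-degree default is a made-up placeholder; the real threshold is tuned from flight experience.

```python
class ParachuteRelease:
    """Sketch of the arming/threshold release logic (values hypothetical)."""

    def __init__(self, threshold_deg=25.0):
        self.threshold = threshold_deg
        self.armed = False
        self.fired = False

    def arm(self, current_tilt_deg):
        # Safety interlock: refuse to arm if the tilt already exceeds the threshold.
        if current_tilt_deg < self.threshold:
            self.armed = True
        return self.armed

    def update(self, tilt_deg):
        # Fire the pyrotechnic cutter once, only when armed and past threshold.
        if self.armed and not self.fired and tilt_deg >= self.threshold:
            self.fired = True
        return self.fired
```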

  5. [Development and clinical evaluation of an anesthesia information management system].

    PubMed

    Feng, Jing-yi; Chen, Hua; Zhu, Sheng-mei

    2010-09-21

    To study the design, implementation and clinical evaluation of an anesthesia information management system. To record, process and store peri-operative patient data automatically, all kinds of bedside monitoring equipment are connected to the system using information integration technology; after statistical analysis of the patient data with data mining techniques, patient status can be evaluated automatically against a risk prediction standard and a decision support system, so that the anesthetist can carry out reasonable and safe clinical procedures; with clinical processes recorded electronically, standard record tables can be generated and the clinical workflow is optimized as well. With the system, various kinds of patient data can be collected, stored, analyzed and archived, various anesthesia documents can be generated, and patient status can be evaluated to support clinical decisions. The anesthesia information management system is useful for improving anesthesia quality, decreasing risk for patients and clinicians, and helping to provide clinical evidence.

  6. A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.

    PubMed

    Grando, M Adela; Glasspool, David; Fox, John

    2012-01-01

    To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results to a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
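To make the Petri-net vocabulary concrete, here is a toy PN simulator encoding one of the standard workflow patterns the paper evaluates against (the "parallel split", where one token forks into two concurrent branches). This is purely illustrative and is not the proof machinery of the paper; all names are invented.

```python
# A minimal Petri-net simulator (illustrative only): places hold tokens, and a
# transition is enabled when every input place holds a token. Firing consumes
# one token per input place and produces one per output place. The net below
# encodes the "parallel split" workflow pattern.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Parallel split: one token in 'start' forks into two concurrent branches.
net = PetriNet({"start": 1})
net.add_transition("split", ["start"], ["branch_a", "branch_b"])
net.fire("split")
```

Showing that a CIG language can express this pattern amounts to exhibiting a guideline construct whose behavior is bisimilar to this net; showing that it cannot requires the case analysis the authors describe.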

  7. Intrathoracic airway wall detection using graph search and scanner PSF information

    NASA Astrophysics Data System (ADS)

    Reinhardt, Joseph M.; Park, Wonkyu; Hoffman, Eric A.; Sonka, Milan

    1997-05-01

    Measurements of the in vivo bronchial tree can be used to assess regional airway physiology. High-resolution CT (HRCT) provides detailed images of the lungs and has been used to evaluate bronchial airway geometry. Such measurements have been used to assess diseases affecting the airways, such as asthma and cystic fibrosis, to measure airway response to external stimuli, and to evaluate the mechanics of airway collapse in sleep apnea. To routinely use CT imaging in a clinical setting to evaluate the in vivo airway tree, there is a need for an objective, automatic technique for identifying the airway tree in the CT images and measuring airway geometry parameters. Manual or semi-automatic segmentation and measurement of the airway tree from a 3D data set may require several man-hours of work, and the manual approaches suffer from inter-observer and intra-observer variabilities. This paper describes a method for automatic airway tree analysis that combines accurate airway wall location estimation with a technique for optimal airway border smoothing. A fuzzy logic, rule-based system is used to identify the branches of the 3D airway tree in thin-slice HRCT images. Raycasting is combined with a model-based parameter estimation technique to identify the approximate inner and outer airway wall borders in 2D cross-sections through the image data set. Finally, a 2D graph search is used to optimize the estimated airway wall locations and obtain accurate airway borders. We demonstrate this technique using CT images of a plexiglass tube phantom.
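The final graph-search step can be illustrated with a standard dynamic-programming border search: given a 2D cost image whose columns index positions along the border and whose rows index candidate wall locations, find the minimum-cost left-to-right path with a smoothness constraint. This is an illustrative stand-in for the paper's 2D graph search, with invented parameters.

```python
import numpy as np

# Illustrative dynamic-programming border search: each column of `cost` holds
# candidate wall positions; the minimum-cost path from the first to the last
# column, with row jumps limited to `max_jump`, gives a smooth border.

def optimal_border(cost, max_jump=1):
    rows, cols = cost.shape
    acc = cost.copy().astype(float)          # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)  # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]
```

In the airway setting the rows would correspond to radial positions along cast rays and the cost to an edge-strength measure, so the recovered path is the optimized wall border.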

  8. An efficient implementation of semi-numerical computation of the Hartree-Fock exchange on the Intel Phi processor

    NASA Astrophysics Data System (ADS)

    Liu, Fenglai; Kong, Jing

    2018-07-01

    Unique technical challenges and their solutions for implementing semi-numerical Hartree-Fock exchange on the Phi processor are discussed, especially concerning the single-instruction-multiple-data type of processing and small cache size. Benchmark calculations on a series of buckyball molecules with various Gaussian basis sets on a Phi processor and a six-core CPU show that the Phi processor provides up to a 12-fold speedup with large basis sets compared with the conventional four-center electron repulsion integration approach performed on the CPU. The accuracy of the semi-numerical scheme is also evaluated and found to be comparable to that of the resolution-of-identity approach.

  9. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines

    PubMed Central

    2010-01-01

    Background Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. Methods A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). Results The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. 
Conclusions The framework is an effective solution for computerizing clinical guidelines as it allows for quick development, evaluation and human-readable visualization of the Rules and has a good performance. By monitoring the parameters of the patient to automatically detect exceptional situations and problems and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline. PMID:20082700
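The paper's executable target is a Drools Rule Flow; the flavor of a computerized guideline fragment can nonetheless be sketched in a few rules. Everything below (parameter names, thresholds, actions) is invented for illustration and is not the sedation protocol of the study.

```python
# Illustrative rule-based fragment in the style of a computerized sedation
# guideline. All thresholds and parameter names are invented; the actual
# framework described above compiles guidelines to Drools Rule Flows.

def evaluate_sedation(patient):
    """Walk a tiny decision flow and return an action plus any alerts."""
    alerts = []
    # exceptional-situation monitoring, as described in the conclusions
    if patient["spo2"] < 90:
        alerts.append("low SpO2: notify staff")
    # decision nodes of the flow (Ramsay sedation score, hypothetical limits)
    if patient["ramsay_score"] < 2:          # under-sedated
        action = "increase sedative infusion"
    elif patient["ramsay_score"] > 4:        # over-sedated
        action = "decrease sedative infusion"
    else:
        action = "maintain current infusion"
    return action, alerts
```

In the described architecture each `if` branch would be a decision node of the Rule Flow, and the alerting path corresponds to the automatic staff notifications mentioned in the conclusions.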

  10. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines.

    PubMed

    Ongenae, Femke; De Backere, Femke; Steurbaut, Kristof; Colpaert, Kirsten; Kerckhove, Wannes; Decruyenaere, Johan; De Turck, Filip

    2010-01-18

    Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. 
The framework is an effective solution for computerizing clinical guidelines as it allows for quick development, evaluation and human-readable visualization of the Rules and has a good performance. By monitoring the parameters of the patient to automatically detect exceptional situations and problems and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline.

  11. Total reduction of distorted echelle spectrograms - An automatic procedure. [for computer controlled microdensitometer

    NASA Technical Reports Server (NTRS)

    Peterson, R. C.; Title, A. M.

    1975-01-01

    A total reduction procedure, notable for its use of a computer-controlled microdensitometer for semi-automatically tracing curved spectra, is applied to distorted high-dispersion echelle spectra recorded by an image tube. Microdensitometer specifications are presented and the FORTRAN programs TRACEN and SPOTS are outlined. The intensity spectrum of the photographic or electrographic plate is plotted on a graphic display. The time requirements are discussed in detail.

  12. 10 CFR Appendix J2 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... minutes with a minimum fill of 20 gallons of soft water (17 ppm hardness or less) using 27.0 grams ± 4.0 grams per pound of cloth load of AHAM Standard detergent Formula 3. The wash temperature is to be... stain resistant finishes shall not be applied to the test cloth. The absence of such finishes shall be...

  13. Development and Evaluation of a Semi-automated Segmentation Tool and a Modified Ellipsoid Formula for Volumetric Analysis of the Kidney in Non-contrast T2-Weighted MR Images.

    PubMed

    Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias

    2017-04-01

    Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. Therefore the purposes of the study were to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR)-images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR-images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference = 188 s (220 vs. 408 s); p < 0.05. Volumes did not differ significantly comparing the results of different readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the mean error was three times higher (difference of mean volumes -0.1 ml; CI -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to get an estimation of the renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
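The modified ellipsoid estimate used above combines the standard ellipsoid volume, pi/6 × length × width × depth, with the study's empirical correction factor of 0.85. A minimal sketch (dimension values in the test are illustrative, not from the study):

```python
import math

# Modified ellipsoid estimate of total kidney volume, as described above:
# standard ellipsoid volume scaled by the empirical factor 0.85.
# Dimensions in cm give a volume in cm^3, i.e. ml.

def kidney_volume_ml(length_cm, width_cm, depth_cm, factor=0.85):
    ellipsoid = math.pi / 6.0 * length_cm * width_cm * depth_cm
    return ellipsoid * factor
```

Measuring three diameters and applying this formula is what made the estimate so fast (41 s) compared with slice-by-slice segmentation, at the cost of a roughly threefold wider error interval.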

  14. Determinants of wood dust exposure in the Danish furniture industry.

    PubMed

    Mikkelsen, Anders B; Schlunssen, Vivi; Sigsgaard, Torben; Schaumburg, Inger

    2002-11-01

    This paper investigates the relation between wood dust exposure in the furniture industry and occupational hygiene variables. During the winter 1997-98 54 factories were visited and 2362 personal, passive inhalable dust samples were obtained; the geometric mean was 0.95 mg/m(3) and the geometric standard deviation was 2.08. In a first measuring round 1685 dust concentrations were obtained. For some of the workers repeated measurements were carried out 1 (351) and 2 weeks (326) after the first measurement. Hygiene variables like job, exhaust ventilation, cleaning procedures, etc., were documented. A multivariate analysis based on mixed effects models was used with hygiene variables being fixed effects and worker, machine, department and factory being random effects. A modified stepwise strategy of model making was adopted taking into account the hierarchically structured variables and making possible the exclusion of non-influential random as well as fixed effects. For woodworking, the following determinants of exposure increase the dust concentration: manual and automatic sanding and use of compressed air with fully automatic and semi-automatic machines and for cleaning of work pieces. Decreased dust exposure resulted from the use of compressed air with manual machines, working at fully automatic or semi-automatic machines, functioning exhaust ventilation, work on the night shift, daily cleaning of rooms, cleaning of work pieces with a brush, vacuum cleaning of machines, supplementary fresh air intake and safety representative elected within the last 2 yr. For handling and assembling, increased exposure results from work at automatic machines and presence of wood dust on the workpieces. Work on the evening shift, supplementary fresh air intake, work in a chair factory and special cleaning staff produced decreased exposure to wood dust. The implications of the results for the prevention of wood dust exposure are discussed.

  15. Semi-classical approach to compute RABBITT traces in multi-dimensional complex field distributions.

    PubMed

    Lucchini, M; Ludwig, A; Kasmi, L; Gallmann, L; Keller, U

    2015-04-06

    We present a semi-classical model to calculate RABBITT (Reconstruction of Attosecond Beating By Interference of Two-photon Transitions) traces in the presence of a reference infrared field with a complex two-dimensional (2D) spatial distribution. The evolution of the electron spectra as a function of the pump-probe delay is evaluated starting from the solution of the classical equation of motion and incorporating the quantum phase acquired by the electron during the interaction with the infrared field. The total response to an attosecond pulse train is then evaluated by a coherent sum of the contributions generated by each individual attosecond pulse in the train. The flexibility of this model makes it possible to calculate spectrograms from non-trivial 2D field distributions. After confirming the validity of the model in a simple 1D case, we extend the discussion to describe the probe-induced phase in photo-emission experiments on an ideal metallic surface.
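For orientation, the delay-dependent observable such a model reproduces is, in the conventional RABBITT picture (a standard textbook form, not a formula quoted from this abstract), a sideband beating at twice the infrared frequency:

```latex
% Conventional RABBITT sideband oscillation as a function of pump-probe delay tau:
S_{2q}(\tau) \;\propto\; A + B\cos\!\left(2\omega_{\mathrm{IR}}\tau - \Delta\phi_{2q} - \Delta\phi_{2q}^{\mathrm{at}}\right)
```

where $\Delta\phi_{2q}$ is the phase difference between the neighbouring harmonics $2q-1$ and $2q+1$ and $\Delta\phi_{2q}^{\mathrm{at}}$ is the atomic (or, for a surface, probe-induced) two-photon phase; the model above generalizes the probe-field contribution to complex 2D spatial distributions.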

  16. 79. VIEW OF SPILLWAY THAT AUTOMATICALLY REGULATES HEIGHT OF WATER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    79. VIEW OF SPILLWAY THAT AUTOMATICALLY REGULATES HEIGHT OF WATER IN RESERVOIR, 'BACKWATER OVERFLOW,' Print No. 233, April 1904 - Electron Hydroelectric Project, Along Puyallup River, Electron, Pierce County, WA

  17. Development and validation of automatic tools for interactive recurrence analysis in radiation therapy: optimization of treatment algorithms for locally advanced pancreatic cancer.

    PubMed

    Kessel, Kerstin A; Habermehl, Daniel; Jäger, Andreas; Floca, Ralf O; Zhang, Lanlan; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E

    2013-06-07

    In radiation oncology recurrence analysis is an important part in the evaluation process and clinical quality assurance of treatment concepts. With the example of 9 patients with locally advanced pancreatic cancer we developed and validated interactive analysis tools to support the evaluation workflow. After an automatic registration of the radiation planning CTs with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes the DVH (dose volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence and the distance between the boost and recurrence volume. We calculated the percentage of the recurrence volume within the 80%-isodose volume and compared it to the location of the recurrence within the boost volume, boost + 1 cm, boost + 1.5 cm and boost + 2 cm volumes. Recurrence analysis of 9 patients demonstrated that all recurrences except one occurred within the defined GTV/boost volume; one recurrence developed beyond the field border (out-of-field). With the defined distance volumes in relation to the recurrences, we could show that 7 recurrent lesions were within the 2 cm radius of the primary tumor. Two large recurrences extended beyond the 2 cm radius; however, this might be due to very rapid growth and/or late detection of the tumor progression. The main goal of using automatic analysis tools is to reduce time and effort conducting clinical analyses. We showed a first approach and use of a semi-automated workflow for recurrence analysis, which will be continuously optimized. In conclusion, despite the limitations of the automatic calculations we contributed to in-house optimization of subsequent study concepts based on an improved and validated target volume definition.
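The overlap statistic described above reduces to counting voxels of the recurrence mask that fall inside the isodose (or boost-plus-margin) mask. A minimal sketch on boolean voxel arrays (the arrays in the test are synthetic, not patient data):

```python
import numpy as np

# Percentage of the segmented recurrence volume lying inside a reference
# volume (e.g. the 80%-isodose or a boost + margin volume), computed from
# boolean voxel masks of identical shape.

def percent_inside(recurrence_mask, reference_mask):
    rec = np.count_nonzero(recurrence_mask)
    if rec == 0:
        return 0.0
    inside = np.count_nonzero(recurrence_mask & reference_mask)
    return 100.0 * inside / rec
```

Evaluating this against the boost, boost + 1 cm, boost + 1.5 cm and boost + 2 cm masks in turn yields the distance-volume classification used in the study.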

  18. Eye-tracking for clinical decision support: A method to capture automatically what physicians are viewing in the EMR.

    PubMed

    King, Andrew J; Hochheiser, Harry; Visweswaran, Shyam; Clermont, Gilles; Cooper, Gregory F

    2017-01-01

    Eye-tracking is a valuable research tool that is used in laboratory and limited field environments. We take steps toward developing methods that enable widespread adoption of eye-tracking and its real-time application in clinical decision support. Eye-tracking will enhance awareness and enable intelligent views, more precise alerts, and other forms of decision support in the Electronic Medical Record (EMR). We evaluated a low-cost eye-tracking device and found the device's accuracy to be non-inferior to a more expensive device. We also developed and evaluated an automatic method for mapping eye-tracking data to interface elements in the EMR (e.g., a displayed laboratory test value). Mapping was 88% accurate across the six participants in our experiment. Finally, we piloted the use of the low-cost device and the automatic mapping method to label training data for a Learning EMR (LEMR) which is a system that highlights the EMR elements a physician is predicted to use.
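The mapping step described above amounts to assigning each gaze sample to the EMR interface element whose screen bounds contain it. A minimal sketch; the element names and pixel coordinates below are invented, not the study's interface:

```python
# Gaze-to-element mapping sketch: each EMR interface element is a named screen
# rectangle, and a gaze sample is assigned to the element containing it.
# Element names and coordinates are invented for illustration.

def map_gaze(x, y, elements):
    """elements: dict of name -> (left, top, right, bottom) in screen pixels."""
    for name, (left, top, right, bottom) in elements.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None  # gaze sample falls outside all tracked elements

layout = {
    "lab_potassium": (100, 200, 300, 230),
    "med_list": (100, 240, 300, 400),
}
```

Sequences of mapped samples become the "elements viewed" labels used to train the LEMR highlighting model.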

  19. Eye-tracking for clinical decision support: A method to capture automatically what physicians are viewing in the EMR

    PubMed Central

    King, Andrew J.; Hochheiser, Harry; Visweswaran, Shyam; Clermont, Gilles; Cooper, Gregory F.

    2017-01-01

    Eye-tracking is a valuable research tool that is used in laboratory and limited field environments. We take steps toward developing methods that enable widespread adoption of eye-tracking and its real-time application in clinical decision support. Eye-tracking will enhance awareness and enable intelligent views, more precise alerts, and other forms of decision support in the Electronic Medical Record (EMR). We evaluated a low-cost eye-tracking device and found the device’s accuracy to be non-inferior to a more expensive device. We also developed and evaluated an automatic method for mapping eye-tracking data to interface elements in the EMR (e.g., a displayed laboratory test value). Mapping was 88% accurate across the six participants in our experiment. Finally, we piloted the use of the low-cost device and the automatic mapping method to label training data for a Learning EMR (LEMR) which is a system that highlights the EMR elements a physician is predicted to use. PMID:28815151

  20. ELSA: An integrated, semi-automated nebular abundance package

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew D.; Levitt, Jesse S.; Henry, Richard B. C.; Kwitter, Karen B.

    We present ELSA, a new modular software package, written in C, to analyze and manage spectroscopic data from emission-line objects. In addition to calculating plasma diagnostics and abundances from nebular emission lines, the software provides a number of convenient features including the ability to ingest logs produced by IRAF's splot task, to semi-automatically merge spectra in different wavelength ranges, and to automatically generate various data tables in machine-readable or LaTeX format. ELSA features a highly sophisticated interstellar reddening correction scheme that takes into account temperature and density effects as well as He II contamination of the hydrogen Balmer lines. Abundance calculations are performed using a 5-level atom approximation with recent atomic data, based on R. Henry's ABUN program. Downloading and detailed documentation for all aspects of ELSA are available at the following URL:
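The core of any Balmer-decrement reddening correction of the kind ELSA refines can be sketched in a few lines. This is a deliberately simplified illustration (single extinction coefficient, no temperature/density effects or He II decontamination, approximate Galactic reddening-function value), not ELSA's actual scheme:

```python
import math

# Simplified Balmer-decrement reddening correction. The observed line ratio
# relative to Hbeta scales as R_obs = R_int * 10**(-c * f(lambda)), with
# f(Hbeta) = 0. 2.86 is the Case B intrinsic Halpha/Hbeta ratio; F_HALPHA is
# an approximate Galactic reddening-function value at Halpha.

F_HALPHA = -0.34

def c_hbeta(observed_ha_hb, intrinsic=2.86):
    """Logarithmic extinction at Hbeta from the observed Balmer decrement."""
    return math.log10(observed_ha_hb / intrinsic) / (-F_HALPHA)

def deredden(flux_rel_hb, f_lambda, c):
    """Correct a line flux (relative to Hbeta) for interstellar reddening."""
    return flux_rel_hb * 10 ** (c * f_lambda)
```

ELSA's sophistication lies in iterating this with temperature- and density-dependent intrinsic ratios and removing the He II contribution to the Balmer lines before deriving c(Hbeta).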

  1. Learning a Health Knowledge Graph from Electronic Medical Records.

    PubMed

    Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David

    2017-07-20

    Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
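The best-performing model above, the noisy OR gate, has a simple closed form: a symptom is absent only if the leak and every active disease independently fail to produce it. A minimal sketch (the probabilities in the test are invented, not learned parameters from the study):

```python
# Noisy-OR gate for a symptom given a set of active diseases:
# P(symptom present) = 1 - (1 - leak) * product(1 - p_i),
# where p_i is the probability that disease i alone produces the symptom
# and `leak` captures causes outside the model.

def noisy_or(active_disease_probs, leak=0.01):
    prob_absent = 1.0 - leak
    for p in active_disease_probs:
        prob_absent *= 1.0 - p
    return 1.0 - prob_absent
```

Because each disease contributes an independent factor, maximum likelihood estimation decomposes per edge, which is what makes learning the full disease-symptom graph from 273,174 records tractable.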

  2. Accuracy and Spatial Variability in GPS Surveying for Landslide Mapping on Road Inventories at a Semi-Detailed Scale: the Case in Colombia

    NASA Astrophysics Data System (ADS)

    Murillo Feo, C. A.; Martínez Martinez, L. J.; Correa Muñoz, N. A.

    2016-06-01

    The accuracy of locating attributes on topographic surfaces, when using GPS in mountainous areas, is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluate the accuracy and spatial distribution of the horizontal error in GPS positioning in the tertiary road network of six municipalities located in mountainous areas in the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and partnerships between the variables, and an analysis of the spatial variability and calculus of accuracy, considering the effect of non-Gaussian distribution errors. The evaluation of the internal validity of the data provided metrics with a confidence level of 95% between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20000, a level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.
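The empirical semi-variogram underlying that spatial-variability analysis is the half-mean of squared differences between all point pairs, binned by separation distance. A minimal sketch on synthetic coordinates and error values (not the study's survey data):

```python
import numpy as np

# Empirical semi-variogram of a spatial variable (e.g. horizontal GPS error):
# gamma(h) = 1 / (2 N(h)) * sum of (z_i - z_j)^2 over the N(h) point pairs
# whose separation distance falls in bin h.

def empirical_semivariogram(coords, values, bin_edges):
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    gamma = np.zeros(len(bin_edges) - 1)
    counts = np.zeros(len(bin_edges) - 1, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(coords[i] - coords[j])
            k = np.searchsorted(bin_edges, h) - 1
            if 0 <= k < len(gamma):
                gamma[k] += (values[i] - values[j]) ** 2
                counts[k] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(counts > 0, gamma / (2 * counts), np.nan)
```

Fitting a model (spherical, exponential, etc.) to these binned values is what yields the kriging prediction errors the abstract compares against the devices' external validity.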

  3. Automatic radiation dose monitoring for CT of trauma patients with different protocols: feasibility and accuracy.

    PubMed

    Higashigaito, K; Becker, A S; Sprengel, K; Simmen, H-P; Wanner, G; Alkadhi, H

    2016-09-01

    To demonstrate the feasibility and accuracy of automatic radiation dose monitoring software for computed tomography (CT) of trauma patients in a clinical setting over time, and to evaluate the potential of radiation dose reduction using iterative reconstruction (IR). In a time period of 18 months, data from 378 consecutive thoraco-abdominal CT examinations of trauma patients were extracted using automatic radiation dose monitoring software, and patients were split into three cohorts: cohort 1, 64-section CT with filtered back projection, 200 mAs tube current-time product; cohort 2, 128-section CT with IR and identical imaging protocol; cohort 3, 128-section CT with IR, 150 mAs tube current-time product. Radiation dose parameters from the software were compared with the individual patient protocols. Image noise was measured and image quality was semi-quantitatively determined. Automatic extraction of radiation dose metrics was feasible and accurate in all (100%) patients. All CT examinations were of diagnostic quality. There were no differences between cohorts 1 and 2 regarding volume CT dose index (CTDIvol; p=0.62), dose-length product (DLP), and effective dose (ED, both p=0.95), while noise was significantly lower (chest and abdomen, both -38%, p<0.017). Compared to cohort 1, CTDIvol, DLP, and ED in cohort 3 were significantly lower (all -25%, p<0.017), as was the noise in the chest (-32%) and abdomen (-27%, both p<0.017). Compared to cohort 2, CTDIvol (-28%), DLP, and ED (both -26%) in cohort 3 were significantly lower (all, p<0.017), while noise in the chest (+9%) and abdomen (+18%) was significantly higher (all, p<0.017). Automatic radiation dose monitoring software is feasible and accurate, and can be implemented in a clinical setting for evaluating the effects of lowering radiation doses of CT protocols over time. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
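The three dose metrics compared above are linked by standard relations: DLP is CTDIvol multiplied by scan length, and effective dose is approximated as ED = k × DLP with a region-specific conversion coefficient. A hedged sketch; the k value and the numbers in it are illustrative, not from the study:

```python
# Standard CT dose-metric relations (illustrative values, not study data):
# DLP [mGy*cm] = CTDIvol [mGy] * scan length [cm]
# ED  [mSv]   ~= k * DLP, where k is a region-specific conversion coefficient
# (0.015 mSv/(mGy*cm) is a commonly quoted trunk value, used here as an example).

def dlp(ctdi_vol_mgy, scan_length_cm):
    return ctdi_vol_mgy * scan_length_cm

def effective_dose_msv(dlp_mgycm, k=0.015):
    return dlp_mgycm * k

# A 25% reduction in CTDIvol at fixed scan length propagates directly to ED,
# which is why the cohort 3 reductions are near-identical across all three metrics:
baseline = effective_dose_msv(dlp(10.0, 60.0))
reduced = effective_dose_msv(dlp(7.5, 60.0))
```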

  4. Automatic segmentation of meningioma from non-contrasted brain MRI integrating fuzzy clustering and region growing.

    PubMed

    Hsieh, Thomas M; Liu, Yi-Min; Liao, Chun-Chih; Xiao, Furen; Chiang, I-Jen; Wong, Jau-Min

    2011-08-26

    In recent years, magnetic resonance imaging (MRI) has become important in brain tumor diagnosis. Using this modality, physicians can locate specific pathologies by analyzing differences in tissue character presented in different types of MR images. This paper uses an algorithm integrating fuzzy c-means (FCM) and region growing techniques for automated tumor image segmentation from patients with meningioma. Only non-contrasted T1- and T2-weighted MR images are included in the analysis. The study's aims are to correctly locate tumors in the images, and to detect those situated in the midline position of the brain. The study used non-contrasted T1- and T2-weighted MR images from 29 patients with meningioma. After FCM clustering, 32 groups of images from each patient group were put through the region-growing procedure for pixel aggregation. Later, using knowledge-based information, the system selected tumor-containing images from these groups and merged them into one tumor image. An alternative semi-supervised method was added at this stage for comparison with the automatic method. Finally, the tumor image was optimized by a morphology operator. Results from automatic segmentation were compared to the "ground truth" (GT) on a pixel level. Overall data were then evaluated using a quantified system. The quantified parameters, including the "percent match" (PM) and "correlation ratio" (CR), suggested a high match between GT and the present study's system, as well as a fair level of correspondence. The results were compatible with those from other related studies. The system successfully detected all of the tumors situated at the midline of the brain. Six cases failed in the automatic group. One also failed in the semi-supervised alternative. The remaining five cases presented noticeable edema inside the brain. In the 23 successful cases, the PM and CR values in the two groups were highly related. 
Results indicated that, even when using only two sets of non-contrasted MR images, the system is a reliable and efficient method of brain-tumor detection. With further development the system demonstrates high potential for practical clinical use.
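The FCM step that opens this pipeline alternates between updating cluster centers and fuzzy memberships. The sketch below is a minimal 1-D intensity version for illustration only; the function name and parameters are not from the paper.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means for 1-D intensity data.

    x: (N,) array of pixel intensities.
    Returns (centers, memberships); memberships has shape (N, n_clusters).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        # Centers are membership-weighted means of the intensities.
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

Hard cluster labels (for the subsequent region growing) can then be taken as the index of the largest membership per pixel.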

  5. Web platform using digital image processing and geographic information system tools: a Brazilian case study on dengue.

    PubMed

    Brasil, Lourdes M; Gomes, Marília M F; Miosso, Cristiano J; da Silva, Marlete M; Amvame-Nze, Georges D

    2015-07-16

Dengue fever is endemic in Asia, the Americas, the East of the Mediterranean and the Western Pacific. According to the World Health Organization, it is one of the diseases of greatest impact on health, affecting millions of people each year worldwide. Fast detection of increases in populations of the transmitting vector, the Aedes aegypti mosquito, is essential to avoid dengue outbreaks. Unfortunately, in several countries, such as Brazil, the current methods for detecting population changes and disseminating this information are too slow to allow efficient allocation of resources to fight outbreaks. To reduce the delay in providing information on A. aegypti population changes, we propose, develop, and evaluate a system for counting the eggs found in special traps and for providing the collected data through a web structure with geographical location resources. One of the most useful tools for the detection and surveillance of arthropods is the ovitrap, a special trap built to collect mosquito eggs. This enables an egg counting process, which is still usually performed manually in countries such as Brazil. We implement and evaluate a novel system for automatically counting the eggs found in the ovitraps' cardboards. The system we propose is based on digital image processing (DIP) techniques, as well as a Web-based Semi-Automatic Counting System (SCSA-WEB). All data collected are geographically referenced in a geographic information system (GIS) and made available on a Web platform. The work was developed in Gama's administrative region, in Brasília/Brazil, with the aid of the Environmental Surveillance Directory (DIVAL-Gama) and Brasília's Board of Health (SSDF), in partnership with the University of Brasília (UnB). The system was built based on a three-month field survey carried out by health professionals. These professionals provided 84 cardboards from 84 ovitraps, sized 15 × 5 cm.
In developing the system, we conducted the following steps: i. Obtain images of the eggs on an ovitrap's cardboards with a microscope. ii. Apply the proposed image-processing-based semi-automatic counting system. The system we developed uses the Java programming language and the Java Server Faces technology, a framework suite for web application development. This approach allows simple migration to any operating system platform and future applications on mobile devices. iii. Collect and store all data in a database (DB) and then georeference them in a GIS. The database management system used to develop the DB is based on PostgreSQL. The GIS assists in the visualization and spatial analysis of digital maps, allowing the location of dengue outbreaks in the region of study. This also facilitates the planning, analysis, and evaluation of temporal and spatial epidemiology, as required by the Brazilian Health Care Control Center. iv. Deploy the SCSA-WEB, DB and GIS on a single Web platform. The statistical results obtained by DIP were satisfactory when compared with the SCSA-WEB's semi-automated egg count. The results also indicate that the time spent in manual counting is considerably reduced when using our fully automated DIP algorithm and semi-automated SCSA-WEB. The developed georeferencing Web platform proves to be of great support for future visualization with statistical and trace analysis of the disease. The analyses suggest the efficiency of our algorithm for automatic egg counting in terms of expediting the work of the laboratory technician, considerably reducing counting time and error rates. We believe that this kind of integrated platform and tools can simplify the decision-making process of the Brazilian Health Care Control Center.
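The paper does not list its DIP pipeline in detail, but a typical automatic egg count of the kind described reduces to thresholding dark blobs and counting connected components. A minimal sketch, with illustrative threshold and size parameters:

```python
import numpy as np
from scipy import ndimage

def count_eggs(gray, thresh=0.5, min_area=5):
    """Count dark egg-like blobs in a normalized grayscale image.

    gray: 2-D float array in [0, 1]; eggs assumed darker than the cardboard.
    Components smaller than `min_area` pixels are discarded as noise.
    """
    mask = gray < thresh                       # eggs darker than background
    labels, n = ndimage.label(mask)            # 4-connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int((areas >= min_area).sum())
```

In a semi-automatic workflow such as SCSA-WEB, the technician would then review and correct this count rather than tally every egg by hand.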

  6. A multiparametric assay for quantitative nerve regeneration evaluation.

    PubMed

    Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G

    2005-08-01

We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheath thickness). The assay's performance and correlation of the automatically computed data with visually obtained data are determined on a set of 140 semithin sections from the distal part of a rat tibial nerve from four different experimental treatment groups (control, sham, sutured, cut) taken at seven different time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.

  7. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    PubMed

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

Today the health of the ocean is in greater danger than ever before, mainly due to man-made pollution. Operational activities show regular occurrences of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar (SAR), represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters were extracted from each segmented dark spot for oil spill and 'look-alike' classification and ranked according to their importance. The classification algorithm is based on a two-stage processing chain that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between results from the proposed algorithm and a human analyst was estimated at between 85% and 93% for ENVISAT and RADARSAT, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.
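The two-stage structure described (crisp tree-style rules first, fuzzy scoring second) can be sketched as follows. The features, thresholds and membership ramps here are purely illustrative, not the 41 ranked features of the paper:

```python
def classify_dark_spot(area_km2, border_gradient, shape_complexity):
    """Two-stage sketch: tree-style pruning, then fuzzy scoring.

    Returns a score in [0, 1]; higher means more oil-like.
    All feature names and thresholds are hypothetical.
    """
    # Stage 1: classification-tree-style rules prune obvious look-alikes.
    if area_km2 < 0.05 or border_gradient < 0.1:
        return 0.0

    # Stage 2: fuzzy memberships combined with a simple t-norm (min).
    def ramp(v, lo, hi):
        return min(1.0, max(0.0, (v - lo) / (hi - lo)))

    mu_grad = ramp(border_gradient, 0.1, 0.6)            # sharp borders favour oil
    mu_shape = 1.0 - ramp(shape_complexity, 0.5, 2.0)    # very ragged shapes favour look-alikes
    return min(mu_grad, mu_shape)
```

A final hard decision would compare the score against an operating threshold tuned on annotated spills.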

  8. Appareillage automatisé de mesure simultanée du pouvoir thermoélectrique et de la conductivité électrique. Application à l'étude de couches polymères semi-conductrices

    NASA Astrophysics Data System (ADS)

    Moliton, A.; Ratier, B.; Moreau, C.; Froyer, G.

    1991-05-01

In this paper, we present an automated system for the simultaneous measurement of conductivity σ and thermoelectric power S: measurements are possible for temperatures ranging from 130 K to 360 K on brittle semiconductor layers. As an example application, results obtained in the case of polymer (PPP) layers implanted with Na ions are presented: with high-energy implantation (E = 250 keV) we observe only a p-type defect semiconduction, while at low energy (E = 30 keV) an n-type electronic conduction appears.

  9. Quadrature formula for evaluating left bounded Hadamard type hypersingular integrals

    NASA Astrophysics Data System (ADS)

    Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Nik Long, N. M. A.; Okhunov, Abdurahim

    2014-12-01

Left semi-bounded Hadamard-type hypersingular integrals (HSI) of the form H(h,x) = \frac{1}{\pi}\sqrt{\frac{1+x}{1-x}} \int_{-1}^{1} \sqrt{\frac{1-t}{1+t}}\, \frac{h(t)}{(t-x)^{2}}\, dt, x ∈ (-1,1), where h(t) is a smooth function, are considered. The automatic quadrature scheme (AQS) is constructed by approximating the density function h(t) by truncated Chebyshev polynomials of the fourth kind. Numerical results revealed that the proposed AQS is highly accurate when h(t) is chosen to be a polynomial or rational function. The results are in line with the theoretical findings.
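For reference, the fourth-kind Chebyshev polynomials W_n used to expand the density h(t) are exactly orthogonal with respect to the weight appearing in the integral, which is what makes the quadrature scheme natural here:

```latex
W_n(\cos\theta) = \frac{\sin\!\big((n+\tfrac{1}{2})\theta\big)}{\sin(\theta/2)},
\qquad
\int_{-1}^{1} \sqrt{\frac{1-t}{1+t}}\; W_m(t)\, W_n(t)\, dt = \pi\,\delta_{mn},
```

with the three-term recurrence $W_0(t)=1$, $W_1(t)=2t+1$, $W_{n+1}(t)=2t\,W_n(t)-W_{n-1}(t)$.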

  10. Automatic rocks detection and classification on high resolution images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Aboudan, A.; Pacifici, A.; Murana, A.; Cannarsa, F.; Ori, G. G.; Dell'Arciprete, I.; Allemand, P.; Grandjean, P.; Portigliotti, S.; Marcer, A.; Lorenzoni, L.

    2013-12-01

High-resolution images can be used to obtain rock locations and sizes on planetary surfaces. In particular, the rock size-frequency distribution is a key parameter to evaluate surface roughness, to investigate the geologic processes that formed the surface, and to assess the hazards related to spacecraft landing. The manual search for rocks on high-resolution images (even for small areas) can be very labor-intensive. An automatic or semi-automatic algorithm to identify rocks is mandatory to enable further processing, such as determining rock presence, size, height (by means of shadows) and spatial distribution over an area of interest. Accurate localization of rock and shadow contours is the key step for rock detection. An approach to contour detection based on morphological operators and statistical thresholding is presented in this work. The identified contours are then fitted using a proper geometric model of the rocks or shadows and used to estimate salient rock parameters (position, size, area, height). The performance of this approach has been evaluated both on images of a Martian analogue area in the Morocco desert and on HiRISE images. Results have been compared with ground truth obtained by means of manual rock mapping and proved the effectiveness of the algorithm. The rock abundance and rock size-frequency distributions derived from selected HiRISE images have been compared with the results of similar analyses performed for the landing site certification of Mars landers (Viking, Pathfinder, MER, MSL) and with available thermal data from IRTM and TES.
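Two of the derived quantities mentioned above are straightforward to sketch: height from shadow length (assuming flat terrain and a known solar elevation), and the cumulative size-frequency distribution. A minimal illustration:

```python
import math

def rock_height_from_shadow(shadow_len_m, sun_elevation_deg):
    """Estimate rock height from its shadow, assuming flat terrain:
    h = L * tan(solar elevation)."""
    return shadow_len_m * math.tan(math.radians(sun_elevation_deg))

def cumulative_size_frequency(diameters_m, area_m2):
    """Cumulative rock size-frequency: number of rocks per m^2 with
    diameter >= D, evaluated at each observed diameter (descending)."""
    d = sorted(diameters_m, reverse=True)
    return [(dia, (i + 1) / area_m2) for i, dia in enumerate(d)]
```

Landing-site studies typically compare such cumulative curves against exponential rock-abundance models, but that comparison is outside this sketch.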

  11. Exploring cognitive support use and preference by college students with TBI: A mixed-methods study.

    PubMed

    Brown, Jessica; Hux, Karen; Hey, Morgan; Murphy, Madeline

    2017-01-01

Many college students with TBI rely on external strategies and supports to compensate for persistent memory, organization, and planning deficits that interfere with recalling and executing daily tasks. Practitioners know little, however, about the supports students with TBI choose for this purpose, the reasoning behind their choice, or preferred features of selected supports. The purpose of this study was to explore these issues. We collected and analyzed quantitative and qualitative data from eight college students with TBI for completion of a concurrent triangulation mixed-methods design. Data analysis included evaluation and triangulation of participant demographic information, survey responses about persistent post-injury symptoms, transcripts from semi-structured interviews about cognitive support devices and strategies, and ranking results about specific compensatory tools. Results suggest that college students with TBI prefer high-tech external supports, sometimes supplemented with low-tech paper supports, to assist them in managing daily tasks. This preference related to features of portability, accessibility, and automatic reminders. An electronic calendar was the most-preferred high-tech support, and a paper checklist was the most-preferred low-tech support. Rehabilitation professionals should consider implementing high-tech supports with preferred characteristics during treatment given the preferences of students with TBI and the consequent likelihood of their continued long-term use following reintegration to community settings.

  12. Morphometric synaptology of a whole neuron profile using a semiautomatic interactive computer system.

    PubMed

    Saito, K; Niki, K

    1983-07-01

We propose a new method of dealing with morphometric synaptology that processes all synapses and boutons around the HRP-marked neuron on a large composite electron micrograph, rather than a qualitative or piecemeal quantitative study of a particular synapse and/or bouton that is not positioned on the surface of the neuron. This approach requires the development of both neuroanatomical procedures, by which a specific whole neuronal profile is identified, and specialized tools that support the collection and analysis of a great volume of morphometric data from composite electron micrographs, in order to reduce the burden on the morphologist. The present report is also concerned with a complete and reliable semi-automatic interactive computer system for gathering and analyzing morphometric data that has been under development in our laboratory. A morphologist performs the pattern recognition portion by using a large-sized tablet digitizer and a menu-sheet command, and the system registers the various morphometric values of many different neurons and performs statistical analysis. Some examples of morphometric measurements and analysis show the usefulness and efficiency of the proposed system and method.

  13. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computer tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  14. Symbolic Algebra Development for Higher-Order Electron Propagator Formulation and Implementation.

    PubMed

    Tamayo-Mendoza, Teresa; Flores-Moreno, Roberto

    2014-06-10

    Through the use of symbolic algebra, implemented in a program, the algebraic expression of the elements of the self-energy matrix for the electron propagator to different orders were obtained. In addition, a module for the software package Lowdin was automatically generated. Second- and third-order electron propagator results have been calculated to test the correct operation of the program. It was found that the Fortran 90 modules obtained automatically with our algorithm succeeded in calculating ionization energies with the second- and third-order electron propagator in the diagonal approximation. The strategy for the development of this symbolic algebra program is described in detail. This represents a solid starting point for the automatic derivation and implementation of higher-order electron propagator methods.

  15. Method for stitching microbial images using a neural network

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.

    2017-05-01

Currently the analog microscope is widely used in the following fields: medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture and others. An automatic method is preferred because it greatly reduces the work involved. Stepper motors are used to move the microscope slide and to adjust the focus in semi-automatic or automatic mode, while images of microbiological objects are transferred from the eyepiece of the microscope to the computer screen. Scene analysis allows regions with pronounced abnormalities to be located, focusing the specialist's attention. This paper considers a method for stitching microbial images obtained from a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. Object search is based on analysis of the data located in the camera's field of view. We propose to use a neural network for boundary search. The stitching boundary is obtained from analysis of the object borders. For autofocus, we use the criterion of minimum thickness of the object boundary lines. Analysis is performed on the object located on the focal axis of the camera. We use a method of object border recovery and a projective transform for the boundaries of objects that are shifted relative to the focal axis. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
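The autofocus criterion above (thin, sharp object boundaries at best focus) can be approximated by a generic gradient-energy focus score: sharper focus means steeper edges and a larger score. This is a stand-in sketch, not the paper's exact criterion:

```python
import numpy as np

def focus_score(gray):
    """Sharpness proxy: mean squared intensity gradient.

    Sharper focus -> thinner object boundaries -> steeper edges ->
    larger score. A generic autofocus measure for illustration.
    """
    gy, gx = np.gradient(gray.astype(float))
    return float((gx ** 2 + gy ** 2).mean())
```

An autofocus loop would step the focus motor and keep the position maximizing this score.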

  16. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adal, Kedir M.; Sidebe, Desire; Ali, Sharib

    2014-01-07

Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images still remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images.
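The standard way to find blobs with an automatically selected local scale is the scale-normalized Laplacian of Gaussian: the response sigma^2 * LoG peaks at the scale matching the blob size. A minimal sketch of that idea (not the paper's exact detector):

```python
import numpy as np
from scipy import ndimage

def log_blob_response(gray, sigma):
    """Scale-normalized Laplacian-of-Gaussian response.

    Dark blobs (like MAs against the retina) give positive peaks in
    sigma**2 * LoG; the characteristic scale maximizes this response.
    """
    smoothed = ndimage.gaussian_filter(gray.astype(float), sigma)
    return sigma ** 2 * ndimage.laplace(smoothed)

def best_scale(gray, y, x, sigmas=(1, 2, 3, 4, 5)):
    """Pick the scale with the strongest response at pixel (y, x)."""
    return max(sigmas, key=lambda s: log_blob_response(gray, s)[y, x])
```

Scale-adapted descriptors would then be computed from a window whose size is tied to the selected scale.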

  17. Effect of mixing time and speed on experimental baking and dough testing with a 200g pin-mixer

    USDA-ARS?s Scientific Manuscript database

Under-mixing or over-mixing the dough results in varied experimental loaf volumes. Bread preparation requires a trained baker to evaluate dough development and determine the stop point of the mixer. Instrumentation and electronic control of the dough mixer would allow for automatic mixing. This study us...

  18. Groping for quantitative digital 3-D image analysis: an approach to quantitative fluorescence in situ hybridization in thick tissue sections of prostate carcinoma.

    PubMed

    Rodenacker, K; Aubele, M; Hutzler, P; Adiga, P S

    1997-01-01

In molecular pathology numerical chromosome aberrations have been found to be decisive for the prognosis of malignancy in tumours. The existence of such aberrations can be detected by interphase fluorescence in situ hybridization (FISH). The gain or loss of certain base sequences in the deoxyribonucleic acid (DNA) can be estimated by counting the number of FISH signals per cell nucleus. The quantitative evaluation of such events is a necessary condition for a prospective use in diagnostic pathology. To avoid occlusions of signals, the cell nucleus has to be analyzed in three dimensions. Confocal laser scanning microscopy is the means to obtain series of optical thin sections from fluorescence-stained or marked material to fulfill the conditions mentioned above. A graphical user interface (GUI) to a software package for display, inspection, counting and (semi-)automatic analysis of 3-D images for pathologists is outlined, including the underlying methods of 3-D image interaction and segmentation developed. The preparative methods are briefly described. Main emphasis is given to the methodical questions of computer-aided analysis of large 3-D image data sets for pathologists. Several automated analysis steps can be performed for segmentation and subsequent quantification. However, in contrast to isolated or cultured cells, tumour material is difficult even for visual inspection. At present, a fully automated digital image analysis of 3-D data is not in sight. A semi-automatic segmentation method is thus presented here.
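The core argument for 3-D analysis is that signals stacked along the optical axis occlude each other in a 2-D projection but separate cleanly in the voxel volume. A minimal sketch of counting spots in a confocal stack by 3-D connected-component labeling (threshold and size parameters illustrative):

```python
import numpy as np
from scipy import ndimage

def count_fish_signals(stack, thresh=0.5, min_voxels=4):
    """Count FISH spots in a 3-D confocal stack (z, y, x).

    Thresholding plus 26-connected labeling, so spots that overlap in a
    2-D projection but are separated in depth are counted separately.
    """
    mask = stack > thresh
    structure = np.ones((3, 3, 3), dtype=bool)   # 26-connectivity
    labels, n = ndimage.label(mask, structure=structure)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int((sizes >= min_voxels).sum())
```

In the test below, two spots share the same (y, x) position and would merge in a projection, yet are counted as two in 3-D.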

  19. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

Obtaining accurate information on rock mass discontinuities is important for deformation analysis and the evaluation of rock mass stability. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-processed prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal, dip/direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
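The per-subset step described above (best-fit plane, then dip and dip direction from its normal) can be sketched directly. The coordinate convention below (x = east, y = north, z = up) is an assumption for illustration:

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the best-fit plane through an (N, 3) point subset,
    via SVD of the centered coordinates (total least squares)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]                          # right singular vector of smallest singular value
    return n if n[2] >= 0 else -n       # orient upward for consistent dips

def dip_and_direction(n):
    """Dip angle and dip direction (degrees) from an upward unit normal,
    assuming x = east, y = north, z = up."""
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_dir
```

Clustering these normals (e.g. by FCM, as in the paper) then groups planes into discontinuity sets.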

  20. High temperature electronic excitation and ionization rates in gases

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick

    1991-01-01

The relaxation times for electronic excitation due to electron bombardment of atoms were found to be quite short, so that the electron kinetic temperature (T_e) and the electron excitation temperature (T*) should equilibrate quickly whenever electrons are present. However, once equilibrium has been achieved, further energy for the excited electronic states and for the kinetic energy of free electrons must be fed in by collisions with heavy particles that cause vibrational and electronic state transitions. The rate coefficients for excitation of electronic states by heavy-particle collisions have not been well known. However, a relatively simple semi-classical theory has been developed here which is analytic up to the final integration over a Boltzmann distribution of collision energies; this integral can then be evaluated numerically by quadrature. Once the rate coefficients have been determined, the relaxation of electronic excitation energy can be evaluated and compared with the relaxation rates of vibrational excitation. The relative importance of these two factors, electronic excitation and vibrational excitation by heavy-particle collisions, in the transfer of energy to free electron motion can then be assessed.
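The final step mentioned, integrating over a Boltzmann distribution of collision energies by quadrature, follows the standard flux-weighted average k(T) = sqrt(8/(pi*mu)) * (kT)^(-3/2) * integral of sigma(E) E exp(-E/kT) dE. A generic numerical sketch (the model cross section and the reduced mass of 1 amu are illustrative, not from the paper):

```python
import numpy as np

def rate_coefficient(sigma, T, threshold_eV, n_points=2000):
    """Thermal rate coefficient k(T) = <sigma * v> by trapezoidal quadrature
    over a Maxwell-Boltzmann energy distribution.

    sigma: callable, cross section [m^2] as a function of energy [eV].
    Reduced mass is taken as 1 amu purely for illustration.
    """
    kB_eV = 8.617333e-5                  # Boltzmann constant [eV/K]
    amu = 1.660539e-27                   # [kg]
    e = 1.602177e-19                     # [J/eV]
    kT = kB_eV * T
    E = np.linspace(threshold_eV, threshold_eV + 30 * kT, n_points)
    integrand = sigma(E) * E * np.exp(-E / kT)
    dE = E[1] - E[0]
    integral = dE * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    mean_speed = np.sqrt(8 * kT * e / (np.pi * amu))   # sqrt(8kT/(pi*mu))
    return mean_speed * integral / kT ** 2             # = sqrt(8/(pi mu)) (kT)^-3/2 * integral
```

For a constant cross section with zero threshold the integral equals sigma0*(kT)^2, so k reduces to sigma0 times the mean thermal speed, which makes a convenient sanity check.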

  1. Electronic labelling in recycling of manufactured articles.

    PubMed

    Olejnik, Lech; Krammer, Alfred

    2002-12-01

    The concept of a recycling system aiming at the recovery of resources from manufactured articles is proposed. The system integrates electronic labels for product identification and internet for global data exchange. A prototype for the recycling of electric motors has been developed, which implements a condition-based recycling decision system to automatically select the environmentally and economically appropriate recycling strategy, thereby opening a potential market for second-hand motors and creating a profitable recycling process itself. The project has been designed to evaluate the feasibility of electronic identification applied on a large number of motors and to validate the system in real field conditions.

  2. 70. VIEW OF PARTIALLY COMPLETED FLUME BELOW THE AUTOMATIC SPILL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    70. VIEW OF PARTIALLY COMPLETED FLUME BELOW THE AUTOMATIC SPILL AT THE RESERVOIR, SHOWING MOUNT RAINIER IN THE DISTANCE, Print No. 192, December 1903 - Electron Hydroelectric Project, Along Puyallup River, Electron, Pierce County, WA

  3. The evaluation of a semi-automated procedure for classifying corn and soybeans without ground data

    NASA Technical Reports Server (NTRS)

    Metzler, M. D.; Cicone, R. C.; Johnson, K. I.

    1982-01-01

Since the launch of Landsat 1 in 1972, research has been conducted with the objective of developing technology that would make it possible to achieve large-area crop estimates on the basis of Landsat Multispectral Scanner (MSS) data without the benefit of ground-observed training data. The present investigation is concerned with the evaluation of a technology developed to produce estimates of corn and soybean acreage in the central U.S. Corn Belt (Iowa, Illinois, and Indiana). A description of the technique employed is provided and details regarding the test of the developed technology are discussed. The results show that considerable progress has been made toward creating an automatic, self-adapting procedure with favorable bias and variance characteristics.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.

Energy efficient control of a cooling system cooling an electronic system is provided. The control includes automatically determining at least one adjusted control setting for at least one adjustable cooling component of a cooling system cooling the electronic system. The automatically determining is based, at least in part, on power being consumed by the cooling system and temperature of a heat sink to which heat extracted by the cooling system is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on one or more experimentally obtained models relating the targeted temperature and power consumption of the one or more adjustable cooling components of the cooling system.

  5. A green approach to prepare silver nanoparticles loaded gum acacia/poly(acrylate) hydrogels.

    PubMed

    Bajpai, S K; Kumari, Mamta

    2015-09-01

In this work, gum acacia (GA)/poly(sodium acrylate) semi-interpenetrating polymer networks (semi-IPNs) have been fabricated via free-radical-initiated aqueous polymerization of the monomer sodium acrylate (SA) in the presence of dissolved gum acacia, using N,N'-methylenebisacrylamide (MB) as cross-linker and potassium persulphate (KPS) as initiator. The synthesized semi-IPNs were characterized by various techniques such as X-ray diffraction (XRD), thermogravimetric analysis (TGA) and Fourier transform infrared (FTIR) spectroscopy. The dynamic water uptake behavior of the semi-IPNs was investigated and the data were interpreted by various kinetic models. The equilibrium swelling data were used to evaluate various network parameters. The semi-IPNs were used as templates for the in situ preparation of silver nanoparticles using extract of Syzygium aromaticum (clove). The formation of silver nanoparticles was confirmed by surface plasmon resonance (SPR), XRD and transmission electron microscopy (TEM). Finally, the antibacterial activity of GA/poly(SA)/silver nanocomposites was tested against E. coli. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Using SAR Interferograms and Coherence Images for Object-Based Delineation of Unstable Slopes

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Holbling, Daniel

    2015-05-01

    This study uses synthetic aperture radar (SAR) interferometric products for the semi-automated identification and delineation of unstable slopes and active landslides. Single-pair interferograms and coherence images are therefore segmented and classified in an object-based image analysis (OBIA) framework. The rule-based classification approach has been applied to landslide-prone areas located in Taiwan and Southern Germany. The semi-automatically obtained results were validated against landslide polygons derived from manual interpretation.

  7. Analysis of the Parameters Required for Performance Monitoring and Assessment of Military Communications Systems by Military Technical Controller

    DTIC Science & Technology

    1975-12-01

APPENDIX A: BASIC CONCEPT OF MILITARY TECHNICAL CONTROL ... APPENDIX E: TEST EQUIPMENT REQUIRED FOR MEASUREMENT OF PARAMETERS ... Control (SATEC) Automatic Facilities Report; Army Automated Quality Monitoring Reporting System (AQMPS); Army Automated Technical Control-Semi (ATC-Semi) ... Technical control then becomes equipment status monitoring. All the major equipment in a system would have internal sensors with properly selected parameters

  8. Validation of 2 noninvasive, markerless reconstruction techniques in biplane high-speed fluoroscopy for 3-dimensional research of bovine distal limb kinematics.

    PubMed

    Weiss, M; Reich, E; Grund, S; Mülling, C K W; Geiger, S M

    2017-10-01

    Lameness severely impairs cattle's locomotion, and it is among the most important threats to animal welfare, performance, and productivity in the modern dairy industry. However, insight into the pathological alterations of claw biomechanics leading to lameness and an understanding of the biomechanics behind development of claw lesions causing lameness are limited. Biplane high-speed fluoroscopic kinematography is a new approach for the analysis of skeletal motion. Biplane high-speed videos in combination with bone scans can be used for 3-dimensional (3D) animations of bones moving in 3D space. The gold standard, marker-based animation, requires implantation of radio-opaque markers into bones, which impairs the practicability for lameness research in live animals. Therefore, the purpose of this study was to evaluate the comparative accuracy of 2 noninvasive, markerless animation techniques (semi-automatic and manual) in 3D animation of the bovine distal limb. Tantalum markers were implanted into each of the distal, middle, and proximal phalanges of 5 isolated bovine distal forelimbs, and biplane high-speed x-ray videos of each limb were recorded to capture the simulation of one step. The limbs were scanned by computed tomography to create bone models of the 6 digital bones, and 3D animation of the bones' movements were subsequently reconstructed using the marker-based, the semi-automatic, and the manual animation techniques. Manual animation translational bias and precision varied from 0.63 ± 0.26 mm to 0.80 ± 0.49 mm, and rotational bias and precision ranged from 2.41 ± 1.43° to 6.75 ± 4.67°. Semi-automatic translational values for bias and precision ranged from 1.26 ± 1.28 mm to 2.75 ± 2.17 mm, and rotational values varied from 3.81 ± 2.78° to 11.7 ± 8.11°. In our study, we demonstrated the successful application of biplane high-speed fluoroscopic kinematography to gait analysis of bovine distal limb. 
Using the manual animation technique, kinematics can be measured with sub-millimeter accuracy without the need for invasive marker implantation. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
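    The accuracy figures above pair a bias with a precision for each animation technique. As an illustrative sketch (the paper's exact convention is not given here), bias can be taken as the mean absolute error against the marker-based gold standard and precision as the sample standard deviation of those errors; the error values below are hypothetical:

```python
import math

def bias_precision(measured, gold):
    """Bias = mean absolute error against the gold standard;
    precision = sample standard deviation of those errors."""
    errors = [abs(a - b) for a, b in zip(measured, gold)]
    n = len(errors)
    bias = sum(errors) / n
    precision = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))
    return bias, precision

# hypothetical translational errors (mm) of a markerless animation vs. markers
manual = [0.4, 0.9, 0.6, 0.8, 0.5]
gold = [0.0, 0.0, 0.0, 0.0, 0.0]
b, p = bias_precision(manual, gold)
```

A result would be reported in the paper's "bias ± precision" form, e.g. 0.64 ± 0.21 mm for these hypothetical values.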

  9. Impact of model structure on flow simulation and hydrological realism: from a lumped to a semi-distributed approach

    NASA Astrophysics Data System (ADS)

    Garavaglia, Federico; Le Lay, Matthieu; Gottardi, Fréderic; Garçon, Rémy; Gailhard, Joël; Paquet, Emmanuel; Mathevet, Thibault

    2017-08-01

    Model intercomparison experiments are widely used to investigate and improve hydrological model performance. However, a study based only on runoff simulation is not sufficient to discriminate between different model structures. Hence, there is a need to improve hydrological models for specific streamflow signatures (e.g., low and high flow) and multi-variable predictions (e.g., soil moisture, snow and groundwater). This study assesses the impact of model structure on flow simulation and hydrological realism using three versions of a hydrological model called MORDOR: the historical lumped structure and a revisited formulation available in both lumped and semi-distributed structures. In particular, the main goal of this paper is to investigate the relative impact of model equations and spatial discretization on flow simulation, snowpack representation and evapotranspiration estimation. Comparison of the models is based on an extensive dataset composed of 50 catchments located in French mountainous regions. The evaluation framework is founded on a multi-criterion split-sample strategy. All models were calibrated using an automatic optimization method based on an efficient genetic algorithm. The evaluation framework is enriched by the assessment of snow and evapotranspiration modeling against in situ and satellite data. The results showed that the new model formulations perform significantly better than the initial one in terms of the various streamflow signatures, snow and evapotranspiration predictions. The semi-distributed approach provides better calibration-validation performance for the snow cover area, snow water equivalent and runoff simulation, especially for nival catchments.
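    Multi-criterion evaluations of this kind score simulated against observed streamflow with composite metrics. As an illustrative sketch (not necessarily one of the study's actual criteria), the widely used Kling-Gupta efficiency combines correlation, variability ratio and mean bias into a single score, with 1 indicating a perfect fit:

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: combines correlation (r), variability
    ratio (alpha) and mean bias (beta) into one score (1 = perfect)."""
    n = len(obs)
    mu_s, mu_o = sum(sim) / n, sum(obs) / n
    sd_s = math.sqrt(sum((x - mu_s) ** 2 for x in sim) / n)
    sd_o = math.sqrt(sum((x - mu_o) ** 2 for x in obs) / n)
    cov = sum((s - mu_s) * (o - mu_o) for s, o in zip(sim, obs)) / n
    r = cov / (sd_s * sd_o)
    alpha, beta = sd_s / sd_o, mu_s / mu_o
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# hypothetical observed and simulated discharge series
obs = [1.0, 2.0, 4.0, 3.0, 2.5]
sim = [1.1, 1.9, 3.8, 3.2, 2.4]
score = kge(sim, obs)
```

A genetic-algorithm calibration, as used in the study, would search the parameter space to maximize such a score on the calibration period before validating on the held-out split.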

  10. Breadboard activities for advanced protein crystal growth

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Banish, Michael

    1993-01-01

    The proposed work entails the design, assembly, testing, and delivery of a turn-key system for the semi-automated determination of protein solubilities as a function of temperature. The system will utilize optical scintillation as a means of detecting and monitoring nucleation and crystallite growth during temperature lowering (or raising, with retrograde solubility systems). The deliverables of this contract are: (1) turn-key scintillation system for the semi-automatic determination of protein solubilities as a function of temperature, (2) instructions and software package for the operation of the scintillation system, and (3) one semi-annual and one final report including the test results obtained for ovostatin with the above scintillation system.

  11. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  12. Preliminary Investigation on the Effects of Shockwaves on Water Samples Using a Portable Semi-Automatic Shocktube

    NASA Astrophysics Data System (ADS)

    Wessley, G. Jims John

    2017-10-01

    The propagation of a shock wave through any medium results in an instantaneous increase in pressure and temperature behind the shock wave. The scope for utilizing this sudden rise in pressure and temperature in new industrial, biological and commercial areas has been explored, and the opportunities are tremendous. This paper presents the design and testing of a portable semi-automatic shock tube on water samples mixed with salt. The preliminary analysis shows encouraging results, as the salinity of the water samples was reduced by up to 5% when bombarded with 250 shocks generated using a pressure ratio of 2.5. Ordinary printing paper was used as the diaphragm to generate the shocks. Shocks of much higher intensity, obtained using different diaphragms, are expected to reduce the salinity of sea water further, leading to the production of potable water from saline water, which is the need of the hour.
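    The instantaneous pressure and temperature rise behind a shock can be estimated from the perfect-gas normal-shock (Rankine-Hugoniot) relations. A minimal sketch, independent of the specific apparatus above, for an assumed shock Mach number in air:

```python
GAMMA = 1.4  # ratio of specific heats for air, treated as a perfect gas

def normal_shock(M1):
    """Static pressure and temperature ratios across a normal shock
    of upstream Mach number M1 (perfect-gas relations)."""
    g = GAMMA
    p_ratio = 1 + 2 * g / (g + 1) * (M1 ** 2 - 1)
    rho_ratio = (g + 1) * M1 ** 2 / ((g - 1) * M1 ** 2 + 2)
    t_ratio = p_ratio / rho_ratio  # T2/T1 = (p2/p1) / (rho2/rho1)
    return p_ratio, t_ratio

# a modest shock, of the kind produced by low diaphragm pressure ratios
p, t = normal_shock(1.5)
```

For M1 = 1.5 this gives roughly a 2.46-fold pressure rise and a 1.32-fold temperature rise; the shock Mach number actually achieved for a given diaphragm pressure ratio depends on the shock-tube relation and the driver/driven gases.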

  13. GMP-conformant on-site manufacturing of a CD133+ stem cell product for cardiovascular regeneration.

    PubMed

    Skorska, Anna; Müller, Paula; Gaebel, Ralf; Große, Jana; Lemcke, Heiko; Lux, Cornelia A; Bastian, Manuela; Hausburg, Frauke; Zarniko, Nicole; Bubritzki, Sandra; Ruch, Ulrike; Tiedemann, Gudrun; David, Robert; Steinhoff, Gustav

    2017-02-10

    CD133+ stem cells represent a promising subpopulation for innovative cell-based therapies in cardiovascular regeneration. Several clinical trials have shown remarkable beneficial effects following their intramyocardial transplantation. Yet, the purification of CD133+ stem cells is typically performed in centralized clean room facilities using semi-automatic manufacturing processes based on magnetic cell sorting (MACS®). However, this requires time-consuming and cost-intensive logistics. CD133+ stem cells were purified from patient-derived sternal bone marrow using the recently developed automatic CliniMACS Prodigy® BM-133 System (Prodigy). The entire manufacturing process, as well as the subsequent quality control of the final cell product (CP), were realized on-site and in compliance with EU guidelines for Good Manufacturing Practice. The biological activity of automatically isolated CD133+ cells was evaluated and compared to manually isolated CD133+ cells via functional assays as well as immunofluorescence microscopy. In addition, the regenerative potential of purified stem cells was assessed 3 weeks after transplantation in immunodeficient mice which had been subjected to experimental myocardial infarction. We established for the first time an on-site manufacturing procedure for stem CPs intended for the treatment of ischemic heart diseases using an automated system. On average, 0.88 × 10^6 viable CD133+ cells with a mean log10 depletion of 3.23 ± 0.19 of non-target cells were isolated. Furthermore, we demonstrated that these automatically isolated cells bear proliferation and differentiation capacities comparable to manually isolated cells in vitro. Moreover, the automatically generated CP shows equal cardiac regeneration potential in vivo. Our results indicate that the Prodigy is a powerful system for automatic manufacturing of a CD133+ CP within a few hours. 
Compared to conventional manufacturing processes, future clinical application of this system offers multiple benefits including stable CP quality and on-site purification under reduced clean room requirements. This will allow saving of time, reduced logistics and diminished costs.
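    The log10 depletion reported above is a standard purity measure for magnetic cell sorting. A minimal sketch with hypothetical counts (a value near 3.2 corresponds to roughly a 1700-fold reduction of non-target cells):

```python
import math

def log10_depletion(non_target_before, non_target_after):
    """Log10 depletion of non-target cells achieved by the sort:
    3.0 means a 1000-fold reduction."""
    return math.log10(non_target_before / non_target_after)

# hypothetical counts: 1.7e9 non-target cells before sorting, 1e6 after
d = log10_depletion(1.7e9, 1.0e6)
```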

  14. 40 CFR 49.4166 - Monitoring requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...

  15. 40 CFR 49.4166 - Monitoring requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...

  16. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  17. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    NASA Astrophysics Data System (ADS)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    The increased application of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, increases the need for automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation with repeated imagery in environmental management studies, such as those of climate-related change, together with increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis along with improving the accuracy of results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
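    SOM training is competitive: each input is assigned to its best-matching unit, and that unit and its grid neighbors are pulled toward the input. A minimal NumPy sketch with hypothetical three-band pixel features (not the study's code or data; the decay schedules are simple illustrative choices):

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: competitive learning on a 2-D grid.
    Returns the trained weight grid of shape (grid_h, grid_w, n_features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for x in rng.permutation(data):
            # best-matching unit: node whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby nodes toward the input
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            nb = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * nb[..., None] * (x - weights)
    return weights

# hypothetical pixel features from two spectrally distinct surface classes
data = np.vstack([np.full((20, 3), 0.1), np.full((20, 3), 0.9)])
data += np.random.default_rng(1).normal(0, 0.02, data.shape)
som = train_som(data)
```

After training, the node weights cluster around the two input classes, so labeling the nodes yields a semi-automatic classification of new pixels; the study's quantization error of 0.001 is the mean distance from each input to its best-matching unit.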

  18. Trust, control strategies and allocation of function in human-machine systems.

    PubMed

    Lee, J; Moray, N

    1992-10-01

    As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.

  19. A semi-analytical study of positive corona discharge in wire-plane electrode configuration

    NASA Astrophysics Data System (ADS)

    Yanallah, K.; Pontiga, F.; Chen, J. H.

    2013-08-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables.

  20. Automatic Measurement of Fetal Brain Development from Magnetic Resonance Imaging: New Reference Data.

    PubMed

    Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna

    2018-01-01

    Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
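    The core of Seeded Region Growing is a flood fill outward from a user-placed seed under an intensity-similarity test. A simplified sketch on a toy 2-D grid (this variant compares against the seed intensity rather than a running region mean, and omits the study's MRI-specific processing):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Seeded Region Growing on a 2-D intensity grid: flood outward from
    the seed, absorbing 4-connected pixels within `tol` of the seed value."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    ref = image[sr][sc]
    mask = [[False] * cols for _ in range(rows)]
    mask[sr][sc] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc] \
                    and abs(image[nr][nc] - ref) <= tol:
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# toy slice: a bright object (intensities 8-9) on a dark background
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 8, 0],
       [0, 0, 0, 0]]
mask = region_grow(img, seed=(1, 1), tol=2)
area = sum(sum(row) for row in mask)  # segmented area in pixels
```

In 3-D, summing the segmented voxels per slice and multiplying by the voxel volume gives the brain-volume estimate used for the growth charts.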

  1. Automatic assessment of dynamic contrast-enhanced MRI in an ischemic rat hindlimb model: an exploratory study of transplanted multipotent progenitor cells.

    PubMed

    Hsu, Li-Yueh; Wragg, Andrew; Anderson, Stasia A; Balaban, Robert S; Boehm, Manfred; Arai, Andrew E

    2008-02-01

    This study presents computerized automatic image analysis for quantitatively evaluating dynamic contrast-enhanced MRI in an ischemic rat hindlimb model. MRI at 7 T was performed on animals in a blinded placebo-controlled experiment comparing multipotent adult progenitor cell-derived progenitor cell (MDPC)-treated, phosphate buffered saline (PBS)-injected, and sham-operated rats. Ischemic and non-ischemic limb regions of interest were automatically segmented from time-series images for detecting changes in perfusion and late enhancement. In correlation analysis of the time-signal intensity histograms, the MDPC-treated limbs correlated well with their corresponding non-ischemic limbs. However, the correlation coefficient of the PBS control group was significantly lower than that of the MDPC-treated and sham-operated groups. In semi-quantitative parametric maps of contrast enhancement, there was no significant difference in hypo-enhanced area between the MDPC and PBS groups at early perfusion-dependent time frames. However, the late-enhancement area was significantly larger in the PBS than the MDPC group. The results of this exploratory study show that MDPC-treated rats could be objectively distinguished from PBS controls. The differences were primarily determined by late contrast enhancement of PBS-treated limbs. These computerized methods appear promising for assessing perfusion and late enhancement in dynamic contrast-enhanced MRI.

  2. ECG artifact cancellation in surface EMG signals by fractional order calculus application.

    PubMed

    Miljković, Nadica; Popović, Nenad; Djordjević, Olivera; Konstantinović, Ljubica; Šekara, Tomislav B

    2017-03-01

    New aspects for automatic electrocardiography artifact removal from surface electromyography signals by application of fractional order calculus in combination with linear and nonlinear moving window filters are explored. Surface electromyography recordings of skeletal trunk muscles are commonly contaminated with spike-shaped artifacts. This artifact originates from electrical heart activity, recorded by electrocardiography, commonly present in the surface electromyography signals recorded in heart proximity. For appropriate assessment of neuromuscular changes by means of surface electromyography, application of a proper filtering technique of electrocardiography artifact is crucial. A novel method for automatic artifact cancellation in surface electromyography signals by applying fractional order calculus and a nonlinear median filter is introduced. The proposed method is compared with the linear moving average filter, with and without prior application of fractional order calculus. 3D graphs for assessment of window lengths of the filters, crest factors, root mean square differences, and fractional calculus orders (called WFC and WRC graphs) have been introduced. For an appropriate quantitative filtering evaluation, the synthetic electrocardiography signal and analogous semi-synthetic dataset have been generated. The examples of noise removal in 10 able-bodied subjects and in one patient with muscle dystrophy are presented for qualitative analysis. The crest factors, correlation coefficients, and root mean square differences of the recorded and semi-synthetic electromyography datasets showed that the most successful method was the median filter in combination with fractional order calculus of the order 0.9. A significantly greater (p < 0.001) ECG peak reduction was obtained with the median filter than with the moving average filter in cases where the amplitude of muscle contraction was low relative to the ECG spikes. 
The presented results suggest that the novel method combining a median filter and fractional order calculus can be used for automatic filtering of electrocardiography artifacts in the surface electromyography signal envelopes recorded in trunk muscles. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
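    The best-performing combination above, a median filter applied together with fractional differentiation of order 0.9, can be sketched in simplified form. The Grünwald-Letnikov construction and the toy signal below are illustrative, not the authors' implementation:

```python
def gl_fractional_diff(signal, alpha, n_terms=20):
    """Grünwald-Letnikov fractional differentiation of order `alpha`,
    truncated to `n_terms` history samples."""
    coeffs = [1.0]
    for k in range(1, n_terms):
        coeffs.append(coeffs[-1] * (k - 1 - alpha) / k)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if i - k < 0:
                break
            acc += c * signal[i - k]
        out.append(acc)
    return out

def median_filter(signal, window=5):
    """Moving-window median: suppresses spike-shaped (ECG-like) artifacts."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        w = signal[max(0, i - half):i + half + 1]
        out.append(sorted(w)[len(w) // 2])
    return out

# hypothetical EMG envelope with one ECG-like spike at index 10
emg = [1.0] * 20
emg[10] = 8.0
cleaned = median_filter(gl_fractional_diff(emg, alpha=0.9), window=5)
```

The fractional derivative sharpens the spike relative to the slowly varying EMG envelope, and the median filter then rejects it; the residual at the spike location is far below the original artifact amplitude.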

  3. StandFood: Standardization of Foods Using a Semi-Automatic System for Classifying and Describing Foods According to FoodEx2

    PubMed Central

    Eftimov, Tome; Korošec, Peter; Koroušić Seljak, Barbara

    2017-01-01

    The European Food Safety Authority has developed a standardized food classification and description system called FoodEx2. It uses facets to describe food properties and aspects from various perspectives, making it easier to compare food consumption data from different sources and perform more detailed data analyses. However, both food composition data and food consumption data, which need to be linked, are lacking in FoodEx2 because the process of classification and description has to be manually performed—a process that is laborious and requires good knowledge of the system and also good knowledge of food (composition, processing, marketing, etc.). In this paper, we introduce a semi-automatic system for classifying and describing foods according to FoodEx2, which consists of three parts. The first involves a machine learning approach and classifies foods into four FoodEx2 categories, with two for single foods: raw (r) and derivatives (d), and two for composite foods: simple (s) and aggregated (c). The second uses a natural language processing approach and probability theory to describe foods. The third combines the result from the first and the second part by defining post-processing rules in order to improve the result for the classification part. We tested the system using a set of food items (from Slovenia) manually-coded according to FoodEx2. The new semi-automatic system obtained an accuracy of 89% for the classification part and 79% for the description part, or an overall result of 79% for the whole system. PMID:28587103

  4. StandFood: Standardization of Foods Using a Semi-Automatic System for Classifying and Describing Foods According to FoodEx2.

    PubMed

    Eftimov, Tome; Korošec, Peter; Koroušić Seljak, Barbara

    2017-05-26

    The European Food Safety Authority has developed a standardized food classification and description system called FoodEx2. It uses facets to describe food properties and aspects from various perspectives, making it easier to compare food consumption data from different sources and perform more detailed data analyses. However, both food composition data and food consumption data, which need to be linked, are lacking in FoodEx2 because the process of classification and description has to be manually performed, a process that is laborious and requires good knowledge of the system and also good knowledge of food (composition, processing, marketing, etc.). In this paper, we introduce a semi-automatic system for classifying and describing foods according to FoodEx2, which consists of three parts. The first involves a machine learning approach and classifies foods into four FoodEx2 categories, with two for single foods: raw (r) and derivatives (d), and two for composite foods: simple (s) and aggregated (c). The second uses a natural language processing approach and probability theory to describe foods. The third combines the result from the first and the second part by defining post-processing rules in order to improve the result for the classification part. We tested the system using a set of food items (from Slovenia) manually-coded according to FoodEx2. The new semi-automatic system obtained an accuracy of 89% for the classification part and 79% for the description part, or an overall result of 79% for the whole system.

  5. Data mining for multiagent rules, strategies, and fuzzy decision tree structure

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Rhyne, Robert D., II; Fisher, Kristin

    2002-03-01

    A fuzzy logic based resource manager (RM) has been developed that automatically allocates electronic attack resources in real-time over many dissimilar platforms. Two different data mining algorithms have been developed to determine rules, strategies, and fuzzy decision tree structure. The first data mining algorithm uses a genetic algorithm as a data mining function and is called from an electronic game. The game allows a human expert to play against the resource manager in a simulated battlespace with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem. The game automatically creates a database reflecting the domain expert's knowledge. It calls a data mining function, a genetic algorithm, for data mining of the database as required and allows easy evaluation of the information mined in the second step. The criterion for re-optimization is discussed as well as experimental results. Then a second data mining algorithm that uses a genetic program as a data mining function is introduced to automatically discover fuzzy decision tree structures. Finally, a fuzzy decision tree generated through this process is discussed.
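    A genetic algorithm used as a data-mining function follows the usual select-crossover-mutate loop. A minimal, self-contained sketch on a toy bit-string problem (the actual resource-manager encoding and fitness function are not described here, so the target rule below is purely hypothetical):

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=80, p_mut=0.02, seed=7):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover and bit-flip mutation over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy rule-mining stand-in: fitness counts bits matching a target rule
target = [1, 0] * 8
best = evolve(lambda ind: sum(1 for g, t in zip(ind, target) if g == t))
```

In the paper's setting, the fitness would instead score a candidate rule set against the expert-play database recorded by the game.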

  6. Semi-automatic assessment of skin capillary density: proof of principle and validation.

    PubMed

    Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M

    2013-11-01

    Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since this is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-)automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs), and differences in analysis time were assessed. We found a good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P<0.001) and a Deming regression coefficient of 1.01 (95%CI: 0.91; 1.10). In addition, we found no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm². The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6% respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min versus 80 and 95 min for the manual counting procedure. 
We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the classic manual counting procedure, is time-saving, and has better reproducibility. As a result, the use of skin capillaroscopy becomes feasible in large-scale studies, which importantly extends the possibilities for microcirculation research in humans. © 2013.
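    The agreement statistics used above can be reproduced in a few lines. A sketch of the Bland-Altman computation, with hypothetical capillary-density pairs rather than the study's data:

```python
import math

def bland_altman(a, b):
    """Bland-Altman agreement statistics: mean difference (bias) and
    95% limits of agreement (bias ± 1.96 SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical densities (capillaries/mm²): semi-automatic vs. manual counts
auto = [52, 61, 48, 70, 65, 58]
manual = [50, 63, 45, 69, 67, 55]
bias, lo, hi = bland_altman(auto, manual)
```

Good agreement corresponds to a bias near zero with limits of agreement narrow enough to be clinically acceptable, as in the study's bias of 2.0 (-13.5; 18.4) capillaries/mm².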

  7. Remote sensing monitoring of land restoration interventions in semi-arid environments with a before-after control-impact statistical design

    NASA Astrophysics Data System (ADS)

    Meroni, Michele; Schucknecht, Anne; Fasbender, Dominique; Rembold, Felix; Fava, Francesco; Mauclaire, Margaux; Goffner, Deborah; Di Lucchio, Luisa M.; Leonardi, Ugo

    2017-07-01

    Restoration interventions to combat land degradation are carried out in arid and semi-arid areas to improve vegetation cover and land productivity. Evaluating the success of an intervention over time is challenging due to various constraints (e.g. difficult-to-access areas, lack of long-term records) and the lack of standardised and affordable methodologies. We propose a semi-automatic methodology that uses remote sensing data to provide a rapid, standardised and objective assessment of the biophysical impact, in terms of vegetation cover, of restoration interventions. The Normalised Difference Vegetation Index (NDVI) is used as a proxy for vegetation cover. Recognising that changes in vegetation cover are naturally due to environmental factors such as seasonality and inter-annual climate variability, conclusions about the success of the intervention cannot be drawn by focussing on the intervention area only. We therefore use a comparative method that analyses the temporal variations (before and after the intervention) of the NDVI of the intervention area with respect to multiple control sites that are automatically and randomly selected from a set of candidates that are similar to the intervention area. Similarity is defined in terms of class composition as derived from an ISODATA classification of the imagery before the intervention. The method provides an estimate of the magnitude and significance of the difference in greenness change between the intervention area and control areas. As a case study, the methodology is applied to 15 restoration interventions carried out in Senegal. The impact of the interventions is analysed using 250-m MODIS and 30-m Landsat data. Results show that a significant improvement in vegetation cover was detectable only in one third of the analysed interventions, which is consistent with independent qualitative assessments based on field observations and visual analysis of high resolution imagery. 
Rural development agencies may potentially use the proposed method for a first screening of restoration interventions.
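    The before-after control-impact logic reduces to a simple contrast: the NDVI change at the intervention site minus the average change over matched control sites. A sketch with hypothetical values (the method additionally assesses significance by drawing many random control sets):

```python
def baci_effect(impact_before, impact_after, controls):
    """BACI contrast on mean NDVI: change at the intervention site minus
    the mean change over control sites. Positive values suggest greening
    beyond the regional (seasonal/climate-driven) signal."""
    impact_change = impact_after - impact_before
    control_changes = [after - before for before, after in controls]
    mean_control = sum(control_changes) / len(control_changes)
    return impact_change - mean_control

# hypothetical seasonal-mean NDVI values: (before, after) per control site
controls = [(0.30, 0.33), (0.28, 0.30), (0.31, 0.35)]
effect = baci_effect(0.29, 0.38, controls)
```

Here the intervention area greened by 0.09 NDVI while the controls greened by 0.03 on average, leaving a 0.06 effect attributable to the intervention rather than to inter-annual variability.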

  8. Data processing software suite SITENNO for coherent X-ray diffraction imaging using the X-ray free-electron laser SACLA.

    PubMed

    Sekiguchi, Yuki; Oroguchi, Tomotaka; Takayama, Yuki; Nakasako, Masayoshi

    2014-05-01

    Coherent X-ray diffraction imaging is a promising technique for visualizing the structures of non-crystalline particles with dimensions of micrometers to sub-micrometers. Recently, X-ray free-electron laser sources have enabled efficient experiments in the 'diffraction before destruction' scheme. Diffraction experiments have been conducted at SPring-8 Angstrom Compact free-electron LAser (SACLA) using the custom-made diffraction apparatus KOTOBUKI-1 and two multiport CCD detectors. In the experiments, tens of thousands of single-shot diffraction patterns can be collected within several hours. Then, diffraction patterns with significant levels of intensity suitable for structural analysis must be found, direct-beam positions in diffraction patterns determined, diffraction patterns from the two CCD detectors merged, and phase-retrieval calculations for structural analyses performed. A software suite named SITENNO has been developed to semi-automatically apply the four-step processing to a huge number of diffraction data. Here, details of the algorithm used in the suite are described, and its performance on approximately 9000 diffraction patterns collected from cuboid-shaped copper oxide particles is reported. Using the SITENNO suite, it is possible to conduct experiments with data processing immediately after the data collection, and to characterize the size distribution and internal structures of the non-crystalline particles.

  9. Data processing software suite SITENNO for coherent X-ray diffraction imaging using the X-ray free-electron laser SACLA

    PubMed Central

    Sekiguchi, Yuki; Oroguchi, Tomotaka; Takayama, Yuki; Nakasako, Masayoshi

    2014-01-01

    Coherent X-ray diffraction imaging is a promising technique for visualizing the structures of non-crystalline particles with dimensions of micrometers to sub-micrometers. Recently, X-ray free-electron laser sources have enabled efficient experiments in the ‘diffraction before destruction’ scheme. Diffraction experiments have been conducted at the SPring-8 Angstrom Compact free-electron LAser (SACLA) using the custom-made diffraction apparatus KOTOBUKI-1 and two multiport CCD detectors. In the experiments, tens of thousands of single-shot diffraction patterns can be collected within several hours. Then, diffraction patterns with significant levels of intensity suitable for structural analysis must be found, direct-beam positions in the diffraction patterns determined, diffraction patterns from the two CCD detectors merged, and phase-retrieval calculations for structural analyses performed. A software suite named SITENNO has been developed to semi-automatically apply this four-step processing to a huge number of diffraction patterns. Here, details of the algorithms used in the suite are described, and the performance for approximately 9000 diffraction patterns collected from cuboid-shaped copper oxide particles is reported. Using the SITENNO suite, it is possible to conduct experiments with data processing immediately after the data collection, and to characterize the size distribution and internal structures of the non-crystalline particles. PMID:24763651
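    The four-step SITENNO pipeline (pattern screening, beam-center determination, detector merging, phase retrieval) can be illustrated with a minimal sketch of the first three steps. The function names, the intensity-weighted-centroid beam finder, and the zero-filled detector gap are simplifying assumptions for illustration, not SITENNO's actual implementation:

```python
import numpy as np

def screen_patterns(patterns, min_total_intensity):
    """Step 1: keep only shots whose summed scattering intensity is high
    enough to be useful for structural analysis."""
    return [p for p in patterns if p.sum() >= min_total_intensity]

def estimate_beam_center(pattern):
    """Step 2: estimate the direct-beam position as the intensity-weighted
    centroid of the pattern (a crude stand-in for a centrosymmetry-based
    search)."""
    total = pattern.sum()
    ys, xs = np.indices(pattern.shape)
    return (float((ys * pattern).sum() / total),
            float((xs * pattern).sum() / total))

def merge_detectors(upper, lower, gap_rows):
    """Step 3: stack the two CCD halves with a band of unmeasured rows
    (filled with zeros here) between them."""
    gap = np.zeros((gap_rows, upper.shape[1]))
    return np.vstack([upper, gap, lower])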

  10. Changes to Workflow and Process Measures in the PICU During Transition From Semi to Full Electronic Health Record.

    PubMed

    Salib, Mina; Hoffmann, Raymond G; Dasgupta, Mahua; Zimmerman, Haydee; Hanson, Sheila

    2015-10-01

    Studies showing the changes in workflow during the transition from semi to full electronic medical records are lacking. The objective of this study was to identify the changes in workflow in the PICU during the transition from a semi to a full electronic health record. Prospective observational study. The Children's Hospital of Wisconsin Institutional Review Board waived the need for approval, so this study was institutional review board exempt. This study measured clinical workflow variables at a 72-bed PICU during different phases of the transition to a full electronic health record, which occurred on November 4, 2012. The phases of the electronic health record transition were defined as follows: pre-electronic health record (baseline data prior to the transition), transition phase (3 wk after the electronic health record transition), and stabilization (6 mo after the transition). Data were analyzed for the three phases using the Mann-Whitney U test, with a two-sided p value of less than 0.05 considered significant. All patients in the PICU were included during the study periods. Five hundred sixty-four patients with 2,355 patient days were evaluated across the three phases. The duration of rounds decreased from a median of 9 minutes per patient pre-electronic health record to 7 minutes per patient post electronic health record. Time to final note decreased from 2.06 days to 0.5 days, and time to first medication administration after admission decreased from 33 minutes to 7 minutes. Time to medication reconciliation was significantly lower, and the percentage of medication reconciliation completion significantly higher, post electronic health record than pre-electronic health record. There was no significant change in the time between placement of the discharge order and physical transfer from the unit. In summary, the transition to a full electronic health record changes clinical workflow in a PICU, with decreased duration of rounds, time to final note, time to medication administration, and time to medication reconciliation, and with no change in the duration from discharge order to physical transfer.
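    The phase comparisons above rely on the Mann-Whitney U test. A minimal pure-Python sketch of the U statistic (without the normal-approximation p value the study would also need, for which a library such as SciPy is the usual choice) looks like this:

```python
def mann_whitney_u(xs, ys):
    """Count, over all pairs, how often an x outranks a y (ties count 1/2).
    Returns (U_x, U_y); U_x + U_y == len(xs) * len(ys)."""
    u_x = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u_x += 1.0
            elif x == y:
                u_x += 0.5
    return u_x, len(xs) * len(ys) - u_x
```

For example, round durations of [9, 10, 8] minutes pre-transition against [7, 6, 7] post-transition give U = (9, 0): every pre value outranks every post value.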

  11. Utilization of Automatic Tagging Using Web Information to Datamining

    NASA Astrophysics Data System (ADS)

    Sugimura, Hiroshi; Matsumoto, Kazunori

    This paper proposes a data annotation system using an automatic tagging approach. Although annotations are useful for deep analysis and mining of data, the cost of providing them becomes huge in most cases. To solve this problem, we develop a semi-automatic method that consists of two stages. In the first stage, the system searches the Web for related information and discovers candidate annotations. The second stage uses the knowledge of a human user: the candidates are investigated and refined by the user, and then they become annotations. In this paper we focus on time-series data and show the effectiveness of a GUI tool that supports the above process.

  12. Application of Semantic Tagging to Generate Superimposed Information on a Digital Encyclopedia

    NASA Astrophysics Data System (ADS)

    Garrido, Piedad; Tramullas, Jesus; Martinez, Francisco J.

    Several works in the literature address the automatic or semi-automatic processing of textual documents with historical information using free software technologies. However, more research is needed to integrate the analysis of the context and to cover the peculiarities of the Spanish language from a semantic point of view. This work proposes a novel knowledge-based strategy that combines subject-centric computing, a topic-oriented approach, and superimposed information. Its subsequent combination with artificial intelligence techniques led to an automatic analysis, after implementing a made-to-measure interpreted algorithm which, in turn, produced a good number of associations and events with 90% reliability.

  13. A numerical algorithm with preference statements to evaluate the performance of scientists.

    PubMed

    Ricker, Martin

    Academic evaluation committees have become increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, using relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data from 73 scientists of the Biology Institute of Mexico's National University are analyzed, and it is shown that reward assignation and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
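    The core of such a price-based scheme can be sketched as follows. The data layout, the proportional split of the budget, and the product cap are illustrative assumptions rather than the algorithm actually used in Mexico's system:

```python
def allocate_rewards(scientists, prices, budget, max_items=None):
    """scientists: {name: {product_type: count}}; prices: committee-set
    relative prices per product type. Each scientist's score sums
    price * count, optionally capped at max_items products (keeping the
    highest-valued ones) to favour quality over quantity. The budget is
    then split in proportion to the scores."""
    scores = {}
    for name, products in scientists.items():
        # Value each product individually so a cap keeps the best ones.
        values = sorted((prices[p] for p, n in products.items()
                         for _ in range(n)), reverse=True)
        if max_items is not None:
            values = values[:max_items]
        scores[name] = sum(values)
    total = sum(scores.values())
    return {name: budget * s / total for name, s in scores.items()}
```

With a cap of one product per scientist, two scientists with one and two identical articles receive equal rewards, illustrating how the cap shifts the incentive from quantity toward quality.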

  14. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for the comparison of Ultrasound Computer Tomography images to other modalities, as well as for their use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approximately 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be reduced significantly, by a factor of four, compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated usage of our segmentation approach.
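    As a drastically simplified 2-D stand-in for the per-slice boundary detection and surface fitting described above, one can least-squares-fit a circle to edge points detected in a coronal slice (the algebraic Kåsa fit); the actual method fits a full 3-D surface to boundaries from roughly 10% of the slices:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solves the linear system
    2*x*a + 2*y*b + c = x^2 + y^2 for the centre (a, b), where
    c = r^2 - a^2 - b^2 recovers the radius."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (float(a), float(b)), float(np.sqrt(c + a ** 2 + b ** 2))
```

Fitting one such curve per slice and interpolating the parameters across slices is one simple way to obtain a smooth boundary surface from a sparse subset of slices.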

  15. Investigation of thermoelectricity in KScSn half-Heusler compound

    NASA Astrophysics Data System (ADS)

    Shrivastava, Deepika; Acharya, Nikita; Sanyal, Sankar P.

    2018-05-01

    The electronic and transport properties of the KScSn half-Heusler (HH) compound have been investigated using first-principles density functional theory and semiclassical Boltzmann transport theory. The electronic band structure and density of states (total and partial) show the semiconducting nature of KScSn, with a band gap of 0.48 eV, which agrees well with previously reported results. Transport coefficients such as the electrical conductivity, Seebeck coefficient, electronic thermal conductivity and power factor are evaluated as a function of chemical potential. KScSn has a high power factor for p-type doping and is a potential candidate for thermoelectric applications.

  16. A repeated-measures analysis of the effects of soft tissues on wrist range of motion in the extant phylogenetic bracket of dinosaurs: Implications for the functional origins of an automatic wrist folding mechanism in Crocodilia.

    PubMed

    Hutson, Joel David; Hutson, Kelda Nadine

    2014-07-01

    A recent study hypothesized that avian-like wrist folding in quadrupedal dinosaurs could have aided their distinctive style of locomotion with semi-pronated and therefore medially facing palms. However, soft tissues that automatically guide avian wrist folding rarely fossilize, and automatic wrist folding of unknown function in extant crocodilians has not been used to test this hypothesis. Therefore, an investigation of the relative contributions of soft tissues to wrist range of motion (ROM) in the extant phylogenetic bracket of dinosaurs, and the quadrupedal function of crocodilian wrist folding, could inform these questions. Here, we repeatedly measured wrist ROM in degrees through fully fleshed, skinned, minus muscles/tendons, minus ligaments, and skeletonized stages in the American alligator Alligator mississippiensis and the ostrich Struthio camelus. The effects of dissection treatment and observer were statistically significant for alligator wrist folding and ostrich wrist flexion, but not ostrich wrist folding. Final skeletonized wrist folding ROM was higher than (ostrich) or equivalent to (alligator) initial fully fleshed ROM, while final ROM was lower than initial ROM for ostrich wrist flexion. These findings suggest that, unlike the hinge/ball and socket-type elbow and shoulder joints in these archosaurs, ROM within gliding/planar diarthrotic joints is more restricted to the extent of articular surfaces. The alligator data indicate that the crocodilian wrist mechanism functions to automatically lock their semi-pronated palms into a rigid column, which supports the hypothesis that this palmar orientation necessitated soft tissue stiffening mechanisms in certain dinosaurs, although ROM-restricted articulations argue against the presence of an extensive automatic mechanism. Anat Rec, 297:1228-1249, 2014. © 2014 Wiley Periodicals, Inc.

  17. Workspace definition for navigated control functional endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Gessat, Michael; Hofer, Mathias; Audette, Michael; Dietz, Andreas; Meixensberger, Jürgen; Stauß, Gero; Burgert, Oliver

    2007-03-01

    For the pre-operative definition of a surgical workspace for Navigated Control® Functional Endoscopic Sinus Surgery (FESS), we developed a semi-automatic image processing system. Based on observations of surgeons using a manual system, we implemented a workflow-based engineering process that led us to develop a system reducing the time and workload spent on the workspace definition. The system uses a feature based on local curvature to align the vertices of a polygonal outline along the bone structures defining the cavities of the inner nose. An anisotropic morphological operator was developed to solve problems arising from noise artifacts and partial volume effects. We used time measurements and NASA's TLX questionnaire to evaluate our system.

  18. A Case Study on Sepsis Using PubMed and Deep Learning for Ontology Learning.

    PubMed

    Arguello Casteleiro, Mercedes; Maseda Fernandez, Diego; Demetriou, George; Read, Warren; Fernandez Prieto, Maria Jesus; Des Diz, Julio; Nenadic, Goran; Keane, John; Stevens, Robert

    2017-01-01

    We investigate the application of distributional semantics models for facilitating the unsupervised extraction of biomedical terms from unannotated corpora. Term extraction is used as the first step of an ontology learning process that aims at the (semi-)automatic annotation of biomedical concepts and relations from more than 300K PubMed titles and abstracts. We experimented with traditional distributional semantics methods such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA), as well as with the neural language models CBOW and Skip-gram from Deep Learning. The evaluation concentrates on sepsis, a major life-threatening condition, and shows that the Deep Learning models outperform LSA and LDA with much higher precision.

  19. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
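    The correspondence problem RAPTOR solves can be illustrated, in drastically simplified form, by greedy nearest-neighbour linking of fiducial detections between consecutive tilt images. RAPTOR's MRF formulation replaces this purely local rule with a jointly optimized, contextual one, which is what makes it robust at the low SNR of cryo electron microscopy; the sketch below is only a baseline for comparison:

```python
import math

def link_features(prev_pts, next_pts, max_dist):
    """Greedily link each detection in the previous tilt image to its
    nearest unclaimed detection in the next image, rejecting links that
    jump further than max_dist so spurious noise detections stay
    unlinked. Returns {index in prev_pts: index in next_pts}."""
    links, taken = {}, set()
    for i, (px, py) in enumerate(prev_pts):
        best, best_d = None, max_dist
        for j, (qx, qy) in enumerate(next_pts):
            if j in taken:
                continue
            d = math.hypot(px - qx, py - qy)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            taken.add(best)
    return links
```

Chaining such links across the whole tilt series yields marker trajectories, from which the projection model can then be estimated by robust optimization.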

  20. Definition and classification of evaluation units for tertiary structure prediction in CASP12 facilitated through semi-automated metrics.

    PubMed

    Abriata, Luciano A; Kinch, Lisa N; Tamò, Giorgio E; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Dal Peraro, Matteo

    2018-03-01

    For assessment purposes, CASP targets are split into evaluation units. We herein present the official definition of CASP12 evaluation units (EUs) and their classification into difficulty categories. Each target can be evaluated as one EU (the whole target) and/or several EUs (separate structural domains or groups of structural domains). The specific scenario for a target split is determined by the domain organization of available templates, the difference in server performance on separate domains versus combinations of the domains, and visual inspection. In the end, 71 targets were split into 96 EUs. Classification of the EUs into difficulty categories was done semi-automatically with the assistance of metrics provided by the Prediction Center. These metrics account for sequence and structural similarities of the EUs to potential structural templates from the Protein Data Bank, and for the baseline performance of automated server predictions. The metrics readily separate the 96 EUs into 38 EUs that should be straightforward for template-based modeling (TBM) and 39 that are expected to be hard for homology modeling and are thus left for free modeling (FM). The remaining 19 borderline evaluation units were dubbed FM/TBM and were inspected case by case. The article also overviews structural and evolutionary features of selected targets relevant to our accompanying article presenting the assessment of FM and FM/TBM predictions, as well as structural features of the hardest evaluation units from the FM category. We finally suggest improvements for the EU definition and classification procedures. © 2017 Wiley Periodicals, Inc.
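    A toy version of such a semi-automated triage might look like the following; the two input metrics and all thresholds are hypothetical placeholders, not the actual CASP12 cutoffs, and the borderline branch mirrors the paper's case-by-case FM/TBM inspection:

```python
def classify_eu(template_similarity, server_gdt):
    """Triage an evaluation unit by a 0-1 template-similarity score and a
    baseline server GDT_TS score. Clear cases go to template-based
    modeling (TBM) or free modeling (FM); everything in between is
    flagged FM/TBM for manual, case-by-case inspection. All thresholds
    are illustrative only."""
    if template_similarity >= 0.7 and server_gdt >= 50:
        return "TBM"
    if template_similarity <= 0.3 and server_gdt <= 30:
        return "FM"
    return "FM/TBM"
```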

  1. Use of the QR Reader to Provide Real-Time Evaluation of Residents' Skills Following Surgical Procedures.

    PubMed

    Reynolds, Kellin; Barnhill, Danny; Sias, Jamie; Young, Amy; Polite, Florencia Greer

    2014-12-01

    A portable electronic method of providing instructional feedback and recording an evaluation of resident competency immediately following surgical procedures has not previously been documented in obstetrics and gynecology. This report presents a unique electronic format that documents resident competency and encourages verbal communication between faculty and residents immediately following operative procedures. The Microsoft Tag system and SurveyMonkey platform were linked by a 2-D QR code using Microsoft QR code generator. Each resident was given a unique code (TAG) embedded onto an ID card. An evaluation form was attached to each resident's file in SurveyMonkey. Postoperatively, supervising faculty scanned the resident's TAG with a smartphone and completed the brief evaluation using the phone's screen. The evaluation was reviewed with the resident and automatically submitted to the resident's educational file. The evaluation system was quickly accepted by residents and faculty. Of 43 residents and faculty in the study, 38 (88%) responded to a survey 8 weeks after institution of the electronic evaluation system. Thirty (79%) of the 38 indicated it was superior to the previously used handwritten format. The electronic system demonstrated improved utilization compared with paper evaluations, with a mean of 23 electronic evaluations submitted per resident during a 6-month period versus 14 paper assessments per resident during an earlier period of 6 months. This streamlined portable electronic evaluation is an effective tool for direct, formative feedback for residents, and it creates a longitudinal record of resident progress. Satisfaction with, and use of, this evaluation system was high.

  2. Use of the QR Reader to Provide Real-Time Evaluation of Residents' Skills Following Surgical Procedures

    PubMed Central

    Reynolds, Kellin; Barnhill, Danny; Sias, Jamie; Young, Amy; Polite, Florencia Greer

    2014-01-01

    Background A portable electronic method of providing instructional feedback and recording an evaluation of resident competency immediately following surgical procedures has not previously been documented in obstetrics and gynecology. Objective This report presents a unique electronic format that documents resident competency and encourages verbal communication between faculty and residents immediately following operative procedures. Methods The Microsoft Tag system and SurveyMonkey platform were linked by a 2-D QR code using Microsoft QR code generator. Each resident was given a unique code (TAG) embedded onto an ID card. An evaluation form was attached to each resident's file in SurveyMonkey. Postoperatively, supervising faculty scanned the resident's TAG with a smartphone and completed the brief evaluation using the phone's screen. The evaluation was reviewed with the resident and automatically submitted to the resident's educational file. Results The evaluation system was quickly accepted by residents and faculty. Of 43 residents and faculty in the study, 38 (88%) responded to a survey 8 weeks after institution of the electronic evaluation system. Thirty (79%) of the 38 indicated it was superior to the previously used handwritten format. The electronic system demonstrated improved utilization compared with paper evaluations, with a mean of 23 electronic evaluations submitted per resident during a 6-month period versus 14 paper assessments per resident during an earlier period of 6 months. Conclusions This streamlined portable electronic evaluation is an effective tool for direct, formative feedback for residents, and it creates a longitudinal record of resident progress. Satisfaction with, and use of, this evaluation system was high. PMID:26140128

  3. Automatic spatiotemporal matching of detected pleural thickenings

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas

    2014-01-01

    Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis, including CT imaging, can detect aggressive malignant pleural mesothelioma at an early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, automatic spatiotemporal matching techniques for the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening: the sensitivity improved from 42.19% to 98.46%, while the accuracy of the feature-based mapping is only slightly higher (84.38% versus 76.19%).
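    The feature-based baseline mapping can be sketched as follows. The data layout and thresholds are illustrative assumptions, and the paper's better-performing variant matches thickenings via principal component analysis instead:

```python
import math

def match_thickenings(baseline, followup, max_centroid_dist, max_hu_diff):
    """Each entry is (centroid_xyz, mean_hu). A baseline thickening is
    matched to the first unclaimed follow-up thickening whose centroid
    lies within max_centroid_dist and whose mean Hounsfield value differs
    by no more than max_hu_diff. Returns {baseline index: follow-up
    index}; unmatched thickenings (new or resolved findings) are simply
    absent from the result."""
    matches, used = {}, set()
    for i, (c0, hu0) in enumerate(baseline):
        for j, (c1, hu1) in enumerate(followup):
            if j in used:
                continue
            if (math.dist(c0, c1) <= max_centroid_dist
                    and abs(hu0 - hu1) <= max_hu_diff):
                matches[i] = j
                used.add(j)
                break
    return matches
```

Once the same thickening is identified at both time points, its change in volume and thickness can be computed directly.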

  4. Flight evaluation results for a digital electronic engine control in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Myers, L. P.; Walsh, K. R.

    1983-01-01

    A digital electronic engine control (DEEC) system on an F100 engine in an F-15 airplane was evaluated in flight. Thirty flights were flown in a four-phase program from June 1981 to February 1983. Significant improvements in the operability and performance of the F100 engine were developed as a result of the flight evaluation: the augmentor envelope was increased by 15,000 ft, the airstart envelope was improved by 75 knots, and the need to periodically trim the engine was eliminated. The hydromechanical backup control performance was evaluated and was found to be satisfactory. Two system failures were encountered in the test program; both were detected and accommodated successfully. No transfers to the backup control system were required, and no automatic transfers occurred. As a result of the successful DEEC flight evaluation, the DEEC system has entered the full-scale development phase.

  5. Effect of fat types on the structural and textural properties of dough and semi-sweet biscuit.

    PubMed

    Mamat, Hasmadi; Hill, Sandra E

    2014-09-01

    Fat is an important ingredient in baked products and plays many roles in providing their desirable textural properties, particularly in biscuits. In this study, the effect of fat type on dough rheological properties and the quality of semi-sweet biscuits (rich tea type) was investigated using various techniques. Texture profile and extensibility analyses were used to study the dough rheology, while a three-point bend test and scanning electron microscopy were used to analyse the textural characteristics of the final product. TPA results showed that the type of fat significantly influenced dough textural properties. Biscuits produced with a higher-solid-fat oil showed a higher breaking force, but this difference was not significant when evaluated by a sensory panel. Scanning electron microscopy showed that biscuits produced with palm mid-fraction had an open internal microstructure and heterogeneous air cells compared to the other samples.

  6. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The tool was originally developed within the K-wf Grid project for the GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the tool to the gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge contained in that information by means of intelligent agents; and, finally, reuse the joint knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g., GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite will allow EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from ES clusters.

  7. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    PubMed Central

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-squares support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST), developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. Voxel intensities and spatial positions are selected as the two feature groups for training and testing. The SVM, as a powerful discriminator, is able to handle nonlinear classification problems; however, it cannot provide posterior probabilities. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) data generated with the Brainweb MRI simulator and from real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing them to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to the corresponding ground truth. PMID:24696800
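    The Dice and Jaccard coefficients used for validation are straightforward to compute; a minimal sketch, with each binary mask represented as a set of voxel coordinates:

```python
def dice_jaccard(mask_a, mask_b):
    """Overlap scores between two binary masks given as collections of
    voxel coordinates: Dice = 2|A∩B| / (|A| + |B|) and
    Jaccard = |A∩B| / |A∪B|. Both equal 1.0 for identical masks and
    0.0 for disjoint ones."""
    a, b = set(mask_a), set(mask_b)
    inter = len(a & b)
    dice = 2 * inter / (len(a) + len(b))
    jaccard = inter / len(a | b)
    return dice, jaccard
```

Computing these per tissue class (CSF, GM, WM) against the ground-truth masks gives the kind of quantitative validation reported above.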

  8. Assessing the Three-North Shelter Forest Program in China by a novel framework for characterizing vegetation changes

    NASA Astrophysics Data System (ADS)

    Qiu, Bingwen; Chen, Gong; Tang, Zhenghong; Lu, Difei; Wang, Zhuangzhuang; Chen, Chongchen

    2017-11-01

    The Three-North Shelter Forest Program (TNSFP) in China has received intense investment for approximately 40 years. However, the efficacy of the TNSFP has been debated due to the spatiotemporal complexity of vegetation changes. A novel framework, Combining Trend and Temporal Similarity trajectory (COTTS), is proposed for characterizing vegetation changes in the TNSFP region. This framework can automatically and continuously address the fundamental questions of where, what, how and when vegetation changes have occurred. The vegetation trend was measured by a non-parametric method. The temporal similarity trajectory was tracked by the Jeffries-Matusita (JM) distance of the inter-annual vegetation index temporal profiles and modeled using the logistic function. The COTTS approach was applied to examine the afforestation efforts of the TNSFP using 500 m, 8-day composite MODIS datasets from 2001 to 2015. Accuracy assessment on 1109 reference sites reveals that COTTS is capable of automatically determining vegetation dynamic patterns, with an overall accuracy of 90.08% and a kappa coefficient of 0.8688. The efficacy of the TNSFP was evaluated through comprehensive consideration of vegetation, soil and wetness: around 45.78% of the area showed an increasing vegetation trend, 2.96% showed a decline in bare soil, and 4.50% exhibited increasing surface wetness, while 4.49% was under vegetation degradation and desertification. Spatiotemporal heterogeneity in the efficacy of the TNSFP was revealed: great vegetation gain through the abrupt dynamic pattern in the semi-humid and humid regions, bare soil decline and potential efficacy in the semi-arid region, and remarkable efficacy in the functional region of Eastern Ordos.
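    A common non-parametric trend measure for inter-annual vegetation-index series is the Mann-Kendall S statistic; the abstract does not name the exact estimator used, so the following is an illustrative sketch only:

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: over all later-minus-earlier pairs,
    count +1 for an increase and -1 for a decrease (ties contribute 0).
    A positive S indicates an increasing (greening) trend, a negative S
    a decreasing one."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if series[j] > series[i]:
                s += 1
            elif series[j] < series[i]:
                s -= 1
    return s
```

Because S depends only on the sign of pairwise differences, it is robust to outliers and to the non-normal distributions typical of annual vegetation indices.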

  9. Automatic Association of News Items.

    ERIC Educational Resources Information Center

    Carrick, Christina; Watters, Carolyn

    1997-01-01

    Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)

  10. A semi-automatic method for quantification and classification of erythrocytes infected with malaria parasites in microscopic images.

    PubMed

    Díaz, Gloria; González, Fabio A; Romero, Eduardo

    2009-04-01

    Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for the quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences; a segmentation step, which uses the normalized RGB color space to classify pixels as either erythrocyte or background, followed by an inclusion-tree representation that structures the pixel information into objects, from which erythrocytes are found; and a two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and an average specificity of 91.2%.
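    The segmentation step's normalized RGB representation, and a pixel rule built on it, can be sketched as follows; the threshold is a hypothetical placeholder, whereas the paper trains the pixel classifier on labelled data:

```python
def normalized_rgb(pixel):
    """Map an (R, G, B) pixel to chromaticity coordinates that are
    insensitive to overall brightness: r = R/(R+G+B), and likewise for
    g and b (so r + g + b == 1 for any non-black pixel)."""
    r, g, b = pixel
    total = r + g + b
    if total == 0:
        return (0.0, 0.0, 0.0)
    return (r / total, g / total, b / total)

def is_erythrocyte(pixel, min_r=0.40):
    """Hypothetical rule: call a pixel 'erythrocyte' when its red
    chromaticity dominates. Brightness normalization is what makes such
    a rule tolerate the luminance differences the preprocessing step
    corrects for."""
    r, _, _ = normalized_rgb(pixel)
    return r >= min_r
```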

  11. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method that enables high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting, followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capture starts and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image captured at the initial position. The constructed full view of the initial position, combined with the view of the secondary (current) position, forms the complete binocular pair during real-time video shooting. Subjective evaluation results indicate that the proposed system delivers convincing depth perception.

  12. Tuned grid generation with ICEM CFD

    NASA Technical Reports Server (NTRS)

    Wulf, Armin; Akdag, Vedat

    1995-01-01

    ICEM CFD is a CAD-based grid generation package that supports multiblock structured, unstructured tetrahedral and unstructured hexahedral grids. Major development efforts have been spent to extend ICEM CFD's multiblock structured and hexahedral unstructured grid generation capabilities. The modules added are a parametric grid generation module and a semi-automatic hexahedral grid generation module. A fully automatic version of the hexahedral grid generation module for grids around a set of predefined objects in rectilinear enclosures has also been developed. These modules are presented, the procedures they use are described, and examples are discussed.

  13. MedSynDiKATe--design considerations for an ontology-based medical text understanding system.

    PubMed Central

    Hahn, U.; Romacker, M.; Schulz, S.

    2000-01-01

    MedSynDiKATe is a natural language processor for automatically acquiring knowledge from medical finding reports. The content of these documents is transferred to formal representation structures, which constitute a corresponding text knowledge base. The general system architecture we present integrates requirements from the analysis of single sentences, as well as those of referentially linked sentences forming cohesive texts. The strong demands MedSynDiKATe places on the availability of expressive knowledge sources are met by two alternative approaches to (semi-)automatic ontology engineering. PMID:11079899

  14. Vegetation survey in Amazonia using LANDSAT data. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Shimabukuro, Y. E.; Dossantos, J. R.; Deaquino, L. C. S.

    1982-01-01

    Automatic Image-100 analysis of LANDSAT data was performed using the MAXVER classification algorithm. In the pilot area, four vegetation units were mapped automatically, in addition to the areas occupied by agricultural activities. The Image-100 classification results, together with a soil map and information from RADAR images, permitted the establishment of the final legend with six classes: semi-deciduous tropical forest, lowland evergreen tropical forest, secondary vegetation, tropical forest of humid areas, predominant pastureland and flood plains. Two water types were identified based on their sediments, indicating different geological and geomorphological aspects.

  15. Fully automatic region of interest selection in glomerular filtration rate estimation from 99mTc-DTPA renogram.

    PubMed

    Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng

    2011-12-01

    Glomerular filtration rate (GFR) is a commonly accepted standard estimate of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used; of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis in which a region of interest (ROI) is manually drawn over each kidney. The GFR value can then be computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time-consuming, and highly dependent on operator skill. Thus, we developed a fully automatic renal ROI estimation system based on temporal changes in intensity counts, an intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations, which can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were used. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours and included for area error and boundary error analyses. There was high correlation between two physicians' manual contours and the contours obtained by our approach. For the area error analysis, the mean true positive area overlap is 91%, the mean false negative rate is 13.4%, and the mean false positive rate is 9.3%. The boundary error is 1.6 pixels. The GFR calculated using this automatic computer-aided approach is reproducible and may be applied to help nuclear medicine physicians in clinical practice.
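The adaptive-thresholding and morphology stages can be sketched with numpy alone. An Otsu-style histogram search and a 3x3 opening are illustrative stand-ins for the paper's full pipeline (which additionally exploits temporal intensity changes and contrast enhancement):

```python
import numpy as np

def otsu_threshold(img):
    """Adaptive threshold in the Otsu sense: pick the intensity cut that
    maximizes the between-class variance of the histogram (0..255)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    hist = hist.astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = cum_mean[t - 1] / cum[t - 1]
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binary_open(mask):
    """3x3 binary opening (erosion then dilation) to remove speckle
    around the thresholded kidney candidate; edges wrap via np.roll."""
    def neighbors(m):
        return np.stack([np.roll(np.roll(m, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    eroded = neighbors(mask).all(axis=0)   # erosion
    return neighbors(eroded).any(axis=0)   # dilation
```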

  16. Antibiogramj: A tool for analysing images from disk diffusion tests.

    PubMed

    Alonso, C A; Domínguez, C; Heras, J; Mata, E; Pascual, V; Torres, C; Zarazaga, M

    2017-05-01

    Disk diffusion testing, known as antibiogram, is widely applied in microbiology to determine the antimicrobial susceptibility of microorganisms. The measurement of the diameter of the zone of growth inhibition of microorganisms around the antimicrobial disks in the antibiogram is frequently performed manually by specialists using a ruler. This is a time-consuming and error-prone task that might be simplified using automated or semi-automated inhibition zone readers. However, most readers are expensive instruments with embedded software that require significant changes in laboratory design and workflow. Based on the workflow employed by specialists to determine the antimicrobial susceptibility of microorganisms, we have designed a software tool that semi-automatises the process from images of disk diffusion tests. Standard computer vision techniques are employed to achieve such automatisation. We present AntibiogramJ, a user-friendly and open-source software tool to semi-automatically determine, measure and categorise inhibition zones in images from disk diffusion tests. AntibiogramJ is implemented in Java and deals with images captured with any device that incorporates a camera, including digital cameras and mobile phones. The fully automatic procedure of AntibiogramJ for measuring inhibition zones achieves an overall agreement of 87% with an expert microbiologist; moreover, AntibiogramJ includes features to easily detect when the automatic reading is not correct and fix it manually to obtain the correct result. AntibiogramJ is a user-friendly, platform-independent, open-source and free tool that, to the best of our knowledge, is the most complete software tool for antibiogram analysis that does not require any investment in new equipment or changes in the laboratory. Copyright © 2017 Elsevier B.V. All rights reserved.
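A hedged sketch of the core measurement: given a binary growth mask and a known disk centre, the inhibition-zone diameter follows from the nearest growth pixel. The function name and the calibration factor are assumptions for illustration, not AntibiogramJ's API:

```python
import numpy as np

def inhibition_diameter(growth_mask, center, pixel_mm=0.1):
    """Estimate the inhibition-zone diameter around an antimicrobial disk:
    the smallest distance from the disk centre to a pixel classified as
    bacterial growth defines the zone radius. growth_mask is a boolean
    H x W array; pixel_mm is an assumed pixel-to-millimetre calibration."""
    ys, xs = np.nonzero(growth_mask)
    if len(ys) == 0:
        return float('inf')  # no growth found: confluent inhibition
    cy, cx = center
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).min()
    return 2.0 * r * pixel_mm
```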

  17. Semi-Automatic Normalization of Multitemporal Remote Images Based on Vegetative Pseudo-Invariant Features

    PubMed Central

    Garcia-Torres, Luis; Caballero-Novella, Juan J.; Gómez-Candón, David; De-Castro, Ana Isabel

    2014-01-01

    A procedure called ARIN was developed to achieve semi-automatic relative normalization of multitemporal remote images of an agricultural scene, using the following steps: 1) defining the same parcel of selected vegetative pseudo-invariant features (VPIFs) in each multitemporal image; 2) extracting the VPIF spectral band data from each image; 3) calculating the correction factors (CFs) for each image band to fit each band to the average value of the image series; and 4) obtaining the normalized images by linear transformation of each original image band through the corresponding CF. ARIN software was developed to perform the ARIN procedure semi-automatically. We validated ARIN using seven GeoEye-1 satellite images taken over the same location in Southern Spain from early April to October 2010 at intervals of approximately 3 to 4 weeks. The following three VPIFs were chosen: citrus orchards (CIT), olive orchards (OLI) and poplar groves (POP). In the ARIN-normalized images, the range, standard deviation (s.d.) and root mean square error (RMSE) of the spectral bands and vegetation indices were considerably reduced compared to the original images, regardless of the VPIF or the combination of VPIFs selected for normalization, which demonstrates the method's efficacy. The correlation coefficients between the CFs among VPIFs for any spectral band (and all bands overall) were calculated to be at least 0.85 and were significant at P = 0.95, indicating that the normalization procedure performed comparably regardless of the VPIF chosen. The ARIN method was designed only for agricultural and forestry landscapes where VPIFs can be identified. PMID:24604031
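Steps 3) and 4) amount to a linear rescaling toward the series average. A single-band sketch, taking per-image mean VPIF band values as input (a simplification for brevity):

```python
import numpy as np

def arin_normalize(series):
    """Relative normalization in the spirit of ARIN: for each image,
    compute a correction factor CF = (series average of the VPIF band
    value) / (this image's VPIF band value), then rescale the band
    linearly through CF. `series` holds one mean VPIF value per image
    for a single band."""
    series = np.asarray(series, dtype=float)
    target = series.mean()        # fit every date to the series average
    cf = target / series          # per-image correction factor
    return cf, series * cf        # normalized values all equal the target
```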

  18. Quantitative analysis of hyperpolarized 129Xe ventilation imaging in healthy volunteers and subjects with chronic obstructive pulmonary disease

    PubMed Central

    Virgincar, Rohan S.; Cleveland, Zackary I.; Kaushik, S. Sivaram; Freeman, Matthew S.; Nouls, John; Cofer, Gary P.; Martinez-Jimenez, Santiago; He, Mu; Kraft, Monica; Wolber, Jan; McAdams, H. Page; Driehuys, Bastiaan

    2013-01-01

    In this study, hyperpolarized (HP) 129Xe MR ventilation and 1H anatomical images were obtained from 3 subject groups: young healthy volunteers (HV), subjects with chronic obstructive pulmonary disease (COPD), and age-matched control subjects (AMC). Ventilation images were quantified by 2 methods: an expert reader-based ventilation defect score percentage (VDS%) and a semi-automatic segmentation-based ventilation defect percentage (VDP). Reader-based values were assigned by two experienced radiologists and resolved by consensus. In the semi-automatic analysis, 1H anatomical images and 129Xe ventilation images were both segmented following registration, to obtain the thoracic cavity volume (TCV) and ventilated volume (VV), respectively, which were then expressed as a ratio to obtain the VDP. Ventilation images were also characterized by generating signal intensity histograms from voxels within the TCV, and heterogeneity was analyzed using the coefficient of variation (CV). The reader-based VDS% correlated strongly with the semi-automatically generated VDP (r = 0.97, p < 0.0001), and with CV (r = 0.82, p < 0.0001). Both 129Xe ventilation defect scoring metrics readily separated the 3 groups from one another and correlated significantly with FEV1 (VDS%: r = -0.78, p = 0.0002; VDP: r = -0.79, p = 0.0003; CV: r = -0.66, p = 0.0059) and other pulmonary function tests. In the healthy subject groups (HV and AMC), the prevalence of ventilation defects also increased with age (VDS%: r = 0.61, p = 0.0002; VDP: r = 0.63, p = 0.0002). Moreover, ventilation histograms and their associated CVs distinguished between COPD subjects with similar ventilation defect scores but visibly different ventilation patterns. PMID:23065808
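The semi-automatic VDP and the histogram CV can be computed as below. The fixed signal cutoff is an illustrative stand-in for the registration-plus-segmentation step of the study:

```python
import numpy as np

def vdp_and_cv(ventilation, thoracic_mask, signal_cutoff):
    """Compute the ventilation defect percentage (VDP) and coefficient of
    variation (CV) from a 129Xe ventilation image. Voxels inside the
    thoracic cavity with signal below `signal_cutoff` count as defect;
    CV = std/mean of the in-cavity signal. The cutoff is an assumed
    stand-in for the paper's segmentation of the ventilated volume."""
    cavity = ventilation[thoracic_mask]
    ventilated = cavity >= signal_cutoff
    vdp = 100.0 * (1.0 - ventilated.sum() / cavity.size)
    cv = cavity.std() / cavity.mean()
    return vdp, cv
```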

  19. Neural Network for Nanoscience Scanning Electron Microscope Image Recognition.

    PubMed

    Modarres, Mohammad Hadi; Aversa, Rossella; Cozzini, Stefano; Ciancio, Regina; Leto, Angelo; Brandino, Giuseppe Piero

    2017-10-16

    In this paper we applied transfer learning techniques for image recognition, automatic categorization, and labeling of nanoscience images obtained by scanning electron microscope (SEM). Roughly 20,000 SEM images were manually classified into 10 categories to form a labeled training set, which can be used as a reference set for future applications of deep learning enhanced algorithms in the nanoscience domain. The categories chosen spanned the range of 0-dimensional (0D) objects such as particles, 1D nanowires and fibres, 2D films and coated surfaces, and 3D patterned surfaces such as pillars. The training set was used to retrain several convolutional neural network models (Inception-v3, Inception-v4, ResNet) on the SEM dataset and to compare them; compatible results were obtained by performing feature extraction with the different models on the same dataset. We performed additional analysis of the classifier on a second test set to further investigate the results, both on particular cases and from a statistical point of view. Our algorithm was able to successfully classify around 90% of a test dataset consisting of SEM images, while reduced accuracy was found for images at the boundary between two categories or containing elements of multiple categories. In these cases, the image classification did not identify a predominant category with a high score. We used the statistical outcomes from testing to deploy a semi-automatic workflow able to classify and label images generated by the SEM. Finally, a separate training was performed to determine the volume fraction of coherently aligned nanowires in SEM images, and the results were compared with those obtained using the Local Gradient Orientation method. This example demonstrates the versatility and the potential of transfer learning to address specific tasks of interest in nanoscience applications.
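Transfer learning by feature extraction can be illustrated without a deep-learning framework: treat pretrained-CNN features (e.g. a network's penultimate-layer activations) as given vectors and fit a light classifier on top. The nearest-centroid rule below is a deliberate simplification of the paper's retrained top layers:

```python
import numpy as np

def nearest_centroid_fit(features, labels):
    """Fit one centroid per category in the extracted-feature space.
    `features` is an N x D array of feature vectors (here plain arrays
    standing in for CNN activations); `labels` is a list of N category names."""
    classes = sorted(set(labels))
    labels = np.array(labels)
    cents = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, cents

def nearest_centroid_predict(features, classes, cents):
    """Assign each feature vector to the closest category centroid."""
    d = ((features[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return [classes[i] for i in d.argmin(axis=1)]
```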

  20. Automatic analysis of microscopic images of red blood cell aggregates

    NASA Astrophysics Data System (ADS)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells, commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Image processing and analysis for the characterization of RBC aggregation have frequently been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be attractive for adoption as a routine in hemorheological and clinical biochemistry laboratories because the automatic method is rapid, efficient and economical and, at the same time, independent of the user performing the analysis, ensuring repeatability.
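Quantifying aggregates typically starts from connected components of the segmented cell mask. A numpy-only sketch (4-connectivity; size-based typing of the components as rouleaux versus single cells is left to the caller and is not the paper's exact criterion):

```python
import numpy as np

def label_aggregates(mask):
    """Label connected groups of segmented red blood cells (4-connectivity)
    and return the label image plus the size of each group; large groups can
    then be counted as aggregates and isolated ones as single cells."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = [], 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack, size = [(i, j)], 0
                labels[i, j] = current
                while stack:                      # flood fill one component
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                sizes.append(size)
    return labels, sizes
```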

  1. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation of damaged pavement sections to maintain a high level of service. The importance of this performance-based infrastructure asset management built on actual inspection data is globally recognized. For inspection of the road pavement surface, semi-automatic measurement systems using inspection vehicles to measure surface deterioration indexes such as cracking, rutting and IRI have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection with automatic measurement vehicles is costly, depending on instrument specifications and inspection interval; consequently, implementing road maintenance work cost-effectively is difficult, especially for local governments. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and assessment of damaged pavement sections using 3D point cloud data acquired to build urban 3D models. The simplified evaluation results of the road surface provide useful information for road administrators to identify pavement sections requiring detailed examination or immediate repair. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models to extract sections where a structural change in coordinate values was prominent. Finally, the validity of the methodology was investigated in a case study using actual inspection data from local roads.
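The structural-change screening can be sketched as a Chow test on a simple linear model of the coordinate sequence; the paper's exact regression model may differ:

```python
import numpy as np

def chow_f_statistic(y, x, split):
    """Chow test sketch for a structural break in y = a + b*x at index
    `split`: fit OLS on the pooled data and on each segment, then
    F = ((RSS_pool - RSS_1 - RSS_2) / k) / ((RSS_1 + RSS_2) / (n - 2k))
    with k = 2 parameters. A large F flags a candidate break, as used
    here to pick out sections with abrupt coordinate change."""
    def rss(yv, xv):
        A = np.column_stack([np.ones_like(xv), xv])
        coef, *_ = np.linalg.lstsq(A, yv, rcond=None)
        resid = yv - A @ coef
        return float(resid @ resid)
    n, k = len(y), 2
    s_pool = rss(y, x)
    s1 = rss(y[:split], x[:split])
    s2 = rss(y[split:], x[split:])
    return ((s_pool - s1 - s2) / k) / ((s1 + s2) / (n - 2 * k))
```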

  2. Communication: Recovering the flat-plane condition in electronic structure theory at semi-local DFT cost

    NASA Astrophysics Data System (ADS)

    Bajaj, Akash; Janet, Jon Paul; Kulik, Heather J.

    2017-11-01

    The flat-plane condition is the union of two exact constraints in electronic structure theory: (i) energetic piecewise linearity with fractional electron removal or addition and (ii) invariant energetics with change in electron spin in a half-filled orbital. Semi-local density functional theory (DFT) fails to recover the flat plane, exhibiting convex fractional charge errors (FCE) and concave fractional spin errors (FSE) that are related to delocalization and static correlation errors. We previously showed that DFT+U eliminates FCE but now demonstrate that, like other widely employed corrections (i.e., Hartree-Fock exchange), it worsens FSE. To find an alternative strategy, we examine the shape of semi-local DFT deviations from the exact flat plane and find this shape to be remarkably consistent across ions and molecules. We introduce the judiciously modified DFT (jmDFT) approach, wherein corrections are constructed from few-parameter, low-order functional forms that fit the shape of semi-local DFT errors. We select one such physically intuitive form and incorporate it self-consistently to correct semi-local DFT. We demonstrate on model systems that jmDFT represents the first easy-to-implement, no-overhead approach to recovering the flat plane from semi-local DFT.
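The two constraints that make up the flat plane can be written compactly; a sketch in standard notation (E is the ground-state energy, omega a fractional electron number, n the spin-channel occupations):

```latex
% (i) Piecewise linearity under fractional electron addition/removal:
E(N_0 + \omega) = (1 - \omega)\, E(N_0) + \omega\, E(N_0 + 1),
\qquad 0 \le \omega \le 1 .
% (ii) Constancy under fractional spin: the energy depends only on the
% total occupation of a half-filled level, not on its spin partition,
E\bigl[n_\uparrow, n_\downarrow\bigr] = E\bigl[n_\uparrow + n_\downarrow\bigr]
\quad \text{at fixed } n_\uparrow + n_\downarrow .
% The flat plane is the union of (i) and (ii): convex deviations in
% \omega signal FCE (delocalization error); concave deviations in the
% spin partition signal FSE (static correlation error).
```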

  3. Advances in snow cover distributed modelling via ensemble simulations and assimilation of satellite data

    NASA Astrophysics Data System (ADS)

    Revuelto, J.; Dumont, M.; Tuzet, F.; Vionnet, V.; Lafaysse, M.; Lecourt, G.; Vernay, M.; Morin, S.; Cosme, E.; Six, D.; Rabatel, A.

    2017-12-01

    Snowpack models nowadays show good capability in simulating the evolution of snow in mountain areas. However, singular deviations of the meteorological forcing and shortcomings in the modelling of snow physical processes, when accumulated over a snow season, can produce large deviations from the real snowpack state. These deviations are usually assessed with on-site observations from automatic weather stations. Nevertheless, the location of these stations can strongly influence the results of such evaluations, since local topography may have a marked influence on snowpack evolution. Although evaluations of snowpack models against automatic weather stations usually show good results, large-scale evaluations of simulation results over heterogeneous alpine terrain subject to local topographic effects are lacking. This work first presents a complete evaluation of the detailed snowpack model Crocus over an extended mountain area, the upper Arve catchment (western European Alps). This catchment has a wide elevation range, with a large area above 2000 m a.s.l. and/or glaciated. The evaluation compares results obtained with distributed and semi-distributed simulations (the latter currently used in operational forecasting). Daily observations of snow-covered area from the MODIS satellite sensor, the seasonal glacier surface mass balance measured at more than 65 locations, and the glaciers' annual equilibrium-line altitude from Landsat/SPOT/ASTER satellites were used for model evaluation. Additionally, the latest advances in producing ensemble snowpack simulations for assimilating satellite reflectance data over extended areas are presented. These advances comprise the generation of an ensemble of downscaled high-resolution meteorological forcings from meso-scale meteorological models and the application of a particle filter scheme for assimilating satellite observations. Although the results are preliminary, they show good potential for improving snowpack forecasting capabilities.

  4. RFA-cut: Semi-automatic segmentation of radiofrequency ablation zones with and without needles via optimal s-t-cuts.

    PubMed

    Egger, Jan; Busse, Harald; Brandmaier, Philipp; Seider, Daniel; Gawlitza, Matthias; Strocka, Steffen; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Kainz, Bernhard; Chen, Xiaojun; Hann, Alexander; Boechat, Pedro; Yu, Wei; Freisleben, Bernd; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Schmalstieg, Dieter

    2015-01-01

    In this contribution, we present a semi-automatic segmentation algorithm for radiofrequency ablation (RFA) zones via optimal s-t-cuts. Our interactive graph-based approach builds upon a polyhedron to construct the graph and was specifically designed for computed tomography (CT) acquisitions from patients who had RFA treatments of hepatocellular carcinomas (HCC). For evaluation, we used twelve post-interventional CT datasets from the clinical routine, and as evaluation metric we utilized the Dice Similarity Coefficient (DSC), which is commonly accepted for judging computer-aided medical segmentation tasks. Compared with purely manual slice-by-slice expert segmentations from interventional radiologists, we were able to achieve a DSC of about eighty percent, which is sufficient for our clinical needs. Moreover, our approach was able to handle images containing (DSC = 75.9%) and not containing (78.1%) the RFA needles still in place. Additionally, we found no statistically significant difference (p < 0.423) between the segmentation results of the subgroups in a Mann-Whitney test. Finally, to the best of our knowledge, this is the first time a segmentation approach for CT scans including the RFA needles has been reported, and we show why another state-of-the-art segmentation method fails for these cases. Intraoperative scans including an RFA probe are very critical in clinical practice and need very careful segmentation and inspection to avoid under-treatment, which may result in tumor recurrence (up to 40%). If the decision can be made during the intervention, an additional ablation can be performed without removing the needle. This decreases patient stress and the associated risks and costs of a separate intervention at a later date. Ultimately, the segmented ablation zone containing the RFA needle can be used for a precise ablation simulation, as the real needle position is known.
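The optimal s-t-cut at the core of such approaches is a max-flow/min-cut computation: source and sink seeds mark object and background, and the minimum cut separates the ablation zone. A compact Edmonds-Karp sketch on a toy graph (the paper's polyhedron-based graph construction is not reproduced here; `capacity` must list reverse edges with capacity 0):

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts graph; returns the max-flow
    value and the source side of the minimum s-t cut (the 'object' nodes)."""
    flow = {u: {v: 0 for v in capacity[u]} for u in capacity}

    def bfs():  # shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap in capacity[u].items():
                if v not in parent and cap - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    total = 0
    while (parent := bfs()) is not None:
        v, bottleneck = t, float('inf')
        while parent[v] is not None:          # find bottleneck capacity
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while parent[v] is not None:          # augment along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # source side of the cut: nodes still reachable in the residual graph
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, cap in capacity[u].items():
            if v not in reach and cap - flow[u][v] > 0:
                reach.add(v)
                q.append(v)
    return total, reach
```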

  5. Quantitative micro-CT based coronary artery profiling using interactive local thresholding and cylindrical coordinates.

    PubMed

    Panetta, Daniele; Pelosi, Gualtiero; Viglione, Federica; Kusmic, Claudia; Terreni, Marianna; Belcari, Nicola; Guerra, Alberto Del; Athanasiou, Lambros; Exarchos, Themistoklis; Fotiadis, Dimitrios I; Filipovic, Nenad; Trivella, Maria Giovanna; Salvadori, Piero A; Parodi, Oberdan

    2015-01-01

    Micro-CT is an established imaging technique for high-resolution, non-destructive assessment of vascular samples, which is gaining growing interest for investigations of atherosclerotic arteries both in humans and in animal models. However, there is still a lack of micro-CT image metrics suitable for comprehensive evaluation and quantification of features of interest in the field of experimental atherosclerosis (ATS). A novel approach to micro-CT image processing for profiling of coronary ATS is described, providing comprehensive visualization and quantification of contrast-agent-free 3D high-resolution reconstructions of full-length artery walls. Accelerated coronary ATS was induced by a high-fat, cholesterol-enriched diet in swine, and the left coronary artery (LCA) harvested en bloc for micro-CT scanning and histologic processing. A cylindrical coordinate system was defined on the image space after curved multiplanar reformation of the coronary vessel for comprehensive visualization of the main vessel features, such as wall thickening and calcium content. A novel semi-automatic segmentation procedure based on 2D histograms was implemented and the quantitative results validated by histology. The ability of attenuation-based micro-CT at low kV to reliably separate arterial wall layers from adjacent tissue, as well as to identify wall and plaque contours and major tissue components, was validated by histology. Morphometric indexes were derived from histological data corresponding to several micro-CT slices (double-observer evaluation at different coronary ATS stages), and highly significant correlations (R2 > 0.90) were evidenced. Semi-automatic morphometry was validated against double-observer manual morphometry of micro-CT slices, and highly significant correlations were found (R2 > 0.92). The micro-CT methodology described represents a handy and reliable tool for quantitative, high-resolution, contrast-agent-free profiling of the full-length coronary wall, able to assist morphometry of atherosclerotic vessels in a preclinical experimental model of coronary ATS and providing a link between in vivo imaging and histology.
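The cylindrical coordinate system can be sketched for a straight centreline (curved multiplanar reformation reduces the curved vessel to approximately this case); the axis point and unit direction are assumed inputs, not the paper's reformation procedure:

```python
import numpy as np

def to_cylindrical(points, axis_point, axis_dir):
    """Map N x 3 wall points into cylindrical coordinates (r, theta, z)
    about a straight vessel axis: z along the centreline, r the distance
    from it, theta the angular position around the wall. axis_dir must
    be a unit vector."""
    d = np.asarray(points, dtype=float) - axis_point
    z = d @ axis_dir
    radial = d - np.outer(z, axis_dir)       # component perpendicular to axis
    r = np.linalg.norm(radial, axis=1)
    # build an orthonormal in-plane basis (e1, e2) to measure theta
    ref = np.array([1.0, 0.0, 0.0])
    if abs(ref @ axis_dir) > 0.9:            # avoid near-parallel reference
        ref = np.array([0.0, 1.0, 0.0])
    e1 = ref - (ref @ axis_dir) * axis_dir
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis_dir, e1)
    theta = np.arctan2(radial @ e2, radial @ e1)
    return r, theta, z
```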

  6. Guar gum based biodegradable, antibacterial and electrically conductive hydrogels.

    PubMed

    Kaith, Balbir S; Sharma, Reena; Kalia, Susheel

    2015-04-01

    Guar gum-polyacrylic acid-polyaniline based biodegradable, electrically conductive interpenetrating network (IPN) structures were prepared through a two-step aqueous polymerization. Hexamine and ammonium persulfate (APS) were used as a crosslinker-initiator system to crosslink the poly(AA) chains on the Guar gum (Ggum) backbone. Optimum reaction conditions for maximum percentage swelling (7470.23%) were: time = 60 min; vacuum = 450 mmHg; pH = 7.0; solvent = 27.5 mL; [APS] = 0.306 × 10⁻¹ mol L⁻¹; [AA] = 0.291 × 10⁻³ mol L⁻¹; and [hexamine] = 0.356 × 10⁻¹ mol L⁻¹. The semi-interpenetrating networks (semi-IPNs) were converted into IPNs through impregnation of polyaniline chains under acidic and neutral conditions. Fourier transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA) and scanning electron microscopy (SEM) techniques were used to characterize the semi-IPNs and IPNs. The synthesized semi-IPNs and IPNs were further evaluated for moisture retention in different soils, antibacterial activity and biodegradation behavior. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Use of Semi-Autonomous Tools for ISS Commanding and Monitoring

    NASA Technical Reports Server (NTRS)

    Brzezinski, Amy S.

    2014-01-01

    As the International Space Station (ISS) has moved into a utilization phase, operations have shifted to become more ground-based with fewer mission control personnel monitoring and commanding multiple ISS systems. This shift to fewer people monitoring more systems has prompted use of semi-autonomous console tools in the ISS Mission Control Center (MCC) to help flight controllers command and monitor the ISS. These console tools perform routine operational procedures while keeping the human operator "in the loop" to monitor and intervene when off-nominal events arise. Two such tools, the Pre-positioned Load (PPL) Loader and Automatic Operators Recorder Manager (AutoORM), are used by the ISS Communications RF Onboard Networks Utilization Specialist (CRONUS) flight control position. CRONUS is responsible for simultaneously commanding and monitoring the ISS Command & Data Handling (C&DH) and Communications and Tracking (C&T) systems. PPL Loader is used to uplink small pieces of frequently changed software data tables, called PPLs, to ISS computers to support different ISS operations. In order to uplink a PPL, a data load command must be built that contains multiple user-input fields. Next, a multiple step commanding and verification procedure must be performed to enable an onboard computer for software uplink, uplink the PPL, verify the PPL has incorporated correctly, and disable the computer for software uplink. PPL Loader provides different levels of automation in both building and uplinking these commands. In its manual mode, PPL Loader automatically builds the PPL data load commands but allows the flight controller to verify and save the commands for future uplink. In its auto mode, PPL Loader automatically builds the PPL data load commands for flight controller verification, but automatically performs the PPL uplink procedure by sending commands and performing verification checks while notifying CRONUS of procedure step completion. 
If an off-nominal condition occurs during procedure execution, PPL Loader notifies CRONUS through popup messages, allowing CRONUS to examine the situation and choose an option of how PPL Loader should proceed with the procedure. The use of PPL Loader to perform frequent, routine PPL uplinks offloads CRONUS to better monitor two ISS systems. It also reduces procedure performance time and decreases the risk of command errors. AutoORM identifies ISS communication outage periods and builds commands to lock, playback, and unlock ISS Operations Recorder files. Operations Recorder files are circular buffer files of continually recorded ISS telemetry data. Sections of these files can be locked from further writing, be played back to capture telemetry data that occurred during an ISS loss of signal (LOS) period, and then be unlocked for future recording use. Downlinked Operations Recorder files are used by mission support teams for data analysis, especially if failures occur during LOS. The commands to lock, playback, and unlock Operations Recorder files are encompassed in three different operational procedures and contain multiple user-input fields. AutoORM provides different levels of automation for building and uplinking the commands to lock, playback, and unlock Operations Recorder files. In its automatic mode, AutoORM automatically detects ISS LOS periods, then generates and uplinks the commands to lock, playback, and unlock Operations Recorder files when MCC regains signal with ISS. AutoORM also features semi-autonomous and manual modes which integrate CRONUS more into the command verification and uplink process. AutoORM's ability to automatically detect ISS LOS periods and build the necessary commands to preserve, playback, and release recorded telemetry data greatly offloads CRONUS to perform more high-level cognitive tasks, such as mission planning and anomaly troubleshooting. 
Additionally, since Operations Recorder commands contain numerical time input fields which are tedious for a human to manually build, AutoORM's ability to automatically build commands reduces operational command errors. PPL Loader and AutoORM demonstrate principles of semi-autonomous operational tools that will benefit future space mission operations. Both tools employ different levels of automation to perform simple and routine procedures, thereby offloading human operators to perform higher-level cognitive tasks. Because both tools provide procedure execution status and highlight off-nominal indications, the flight controller is able to intervene during procedure execution if needed. Semi-autonomous tools and systems that can perform routine procedures, yet keep human operators informed of execution, will be essential in future long-duration missions where the onboard crew will be solely responsible for spacecraft monitoring and control.

  8. Thermoelectric properties of fully hydrogenated graphene: Semi-classical Boltzmann theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reshak, A. H., E-mail: maalidph@yahoo.co.uk; Center of Excellence Geopolymer and Green Technology, School of Material Engineering, University Malaysia Perlis, 01007 Kangar, Perlis

    2015-06-14

    Based on the calculated band structure, the electronic transport coefficients of chair-/boat-like graphane were evaluated by using the semi-classical Boltzmann theory and the rigid band model. The maximum value of electrical conductivity for chair (boat)-like graphane, about 1.4 (0.6) × 10¹⁹ (Ωms)⁻¹, is achieved at 600 K. The charge carrier concentration and the electrical conductivity increase linearly with increasing temperature, in agreement with experimental work on graphene. The investigated materials exhibit the highest value of the Seebeck coefficient at 300 K. We should emphasize that for chemical potentials between ∓0.125 eV the investigated materials exhibit a minimum value of electronic thermal conductivity and therefore maximum efficiency. As the temperature increases, the electronic thermal conductivity increases exponentially, in agreement with the experimental data for graphene. We also calculated the power factor of chair-/boat-like graphane at 300 and 600 K as a function of the chemical potential between ∓0.25 eV.
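    For context, the semi-classical transport coefficients referred to in this abstract are conventionally obtained (e.g. in BoltzTraP-style rigid-band calculations) from integrals of the form below; the notation is the standard one for this method and is not reproduced from the paper itself:

```latex
% Standard semi-classical (rigid-band, constant relaxation time)
% transport integrals over the transport distribution
% \bar{\sigma}_{\alpha\beta}(\varepsilon) from the band structure:
\sigma_{\alpha\beta}(T;\mu) = \frac{1}{\Omega}\int
  \bar{\sigma}_{\alpha\beta}(\varepsilon)
  \left[-\frac{\partial f_{\mu}(T;\varepsilon)}{\partial\varepsilon}\right]
  \mathrm{d}\varepsilon ,
\qquad
\nu_{\alpha\beta}(T;\mu) = \frac{1}{eT\,\Omega}\int
  \bar{\sigma}_{\alpha\beta}(\varepsilon)\,(\varepsilon-\mu)
  \left[-\frac{\partial f_{\mu}(T;\varepsilon)}{\partial\varepsilon}\right]
  \mathrm{d}\varepsilon ,
\qquad
S = \sigma^{-1}\nu .
```

    The power factor reported at 300 and 600 K is then \(S^{2}\sigma\) as a function of the chemical potential \(\mu\).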

  9. Semi-automated potentiometric titration method for uranium characterization.

    PubMed

    Cristiano, B F G; Delgado, J U; da Silva, J W S; de Barros, P D; de Araújo, R M S; Lopes, R T

    2012-07-01

    The manual version of the potentiometric titration method has been used for certification and characterization of uranium compounds. In order to reduce the analysis time and the influence of the analyst, a semi-automatic version of the method was developed at the Brazilian Nuclear Energy Commission. The method was applied with traceability assured by using a potassium dichromate primary standard. The combined standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Soil Moisture Estimate Under Forest Using a Semi-Empirical Model at P-Band

    NASA Technical Reports Server (NTRS)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2013-01-01

    Here we present the results of a semi-empirical inversion model for soil moisture retrieval using the three backscattering coefficients: sigma(sub HH), sigma(sub VV) and sigma(sub HV). In this paper we focus on the soil moisture estimate and use the biomass as an ancillary parameter, estimated automatically by the algorithm and used as a validation parameter. We first recall the model's analytical formulation. Then we show some results obtained with real SAR data and compare them to ground estimates.

  11. ASSIST: User's manual

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1986-01-01

    Semi-Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. The ASSIST program allows the user to describe the semi-Markov model in a high-level language. Instead of specifying the individual states of the model, the user specifies the rules governing the behavior of the system and these are used by ASSIST to automatically generate the model. The ASSIST program is described and illustrated by examples.
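    The rule-based state-space generation that ASSIST performs can be illustrated with a minimal sketch; the rule interface below is a hypothetical stand-in, not ASSIST's actual input language:

```python
def generate_markov_model(initial_state, rules):
    """Expand a (semi-)Markov state space from behavior rules instead
    of enumerating states by hand. Each rule maps a state to a list of
    (next_state, rate_label) transitions; hypothetical interface."""
    states, transitions = {initial_state}, []
    frontier = [initial_state]
    while frontier:
        state = frontier.pop()
        for rule in rules:
            for nxt, rate in rule(state):
                transitions.append((state, nxt, rate))
                if nxt not in states:
                    states.add(nxt)
                    frontier.append(nxt)
    return states, transitions

# Example rule for a 3-processor system: any working processor may fail.
def fault_rule(state):
    working = state[0]
    return [((working - 1,), "lambda")] if working > 0 else []
```

    Generating the model this way is exactly what makes the approach suitable for inclusion in an automatic model generator: the analyst writes a handful of rules rather than hundreds of states.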

  12. Mosaic construction, processing, and review of very large electron micrograph composites

    NASA Astrophysics Data System (ADS)

    Vogt, Robert C., III; Trenkle, John M.; Harmon, Laurel A.

    1996-11-01

    A system of programs is described for acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5-megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy--one which broadens the scope of questions that this imaging modality can be used to answer.
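    Pairwise registration of overlapping frames, the first step of such a mosaicking pipeline, is commonly done by phase correlation; the following is an illustrative sketch under that assumption, not the system's actual implementation:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (row, col) translation s such that
    moving ~= np.roll(ref, s, axis=(0, 1)), via phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F2 * np.conj(F1)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the signed range [-N/2, N/2)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

    Chaining such pairwise shifts across a grid of frames yields the global placement needed to assemble the virtual composite.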

  13. Radiotherapy in the management of keloids. Clinical experience with electron beam irradiation and comparison with X-ray therapy.

    PubMed

    Maarouf, Mohammad; Schleicher, Ursula; Schmachtenberg, Axel; Ammon, Jürgen

    2002-06-01

    The aim of this study was to evaluate the advantages of electron beam irradiation compared to kilovoltage X-ray therapy in the treatment of keloids. Furthermore, the risk of developing malignancy following keloid radiotherapy was assessed. An automatic water phantom was used to evaluate the dose distribution in tissue. Furthermore, a series of measurements was done on the patients using thermoluminescence dosimeters (TLD) to estimate the doses absorbed by the organs at risk. We also report our clinical experience with electron beam radiation of 134 keloids following surgical excision. Electron beam irradiation offers a high control rate (84%) with minimal side effects for keloids. Electron irradiation provides better dose distribution in tissue, and therefore less radiation burden to the organs at risk. After a mean follow-up period of 7.2 years, no severe side effects or malignancies were observed after keloid radiotherapy. Electron radiation therapy is superior to kilovoltage irradiation for treating keloids due to better dose distribution in tissue. In agreement with the literature, no cases of malignancy were observed after keloid irradiation.

  14. Development of a non-piston MR suspension rod for variable mass systems

    NASA Astrophysics Data System (ADS)

    Deng, Huaxia; Han, Guanghui; Zhang, Jin; Wang, Mingxian; Ma, Mengchao; Zhong, Xiang; Yu, Liandong

    2018-06-01

    Semi-active suspension systems for variable mass systems require a long work stroke and variable damping, while the current piston structure limits the work stroke of magnetorheological (MR) dampers. The main work of this paper is to design a semi-active non-piston MR (NPMR) suspension rod for the reduction of the vibration of an automatic impeller washing machine, which is a typical variable mass system. The designed suspension rod is located in the suspension system that links the internal tub to the washing machine cabinet. The NPMR suspension rod includes an MR part and an air part. The MR part can provide a low initial damping force and an unlimited work stroke compared with the piston MR damper. Hysteretic response tests and vibration performance evaluations with different loadings are conducted to verify the dynamic performance of the designed rod. The measured damping force of the MR part varies from 5 to 20 N. Dehydration mode experiments on the washing machine indicate that its vibration acceleration with the NPMR suspension rods can be reduced to half that of the original passive ones in certain conditions.

  15. Physiological typing of Pseudallescheria and Scedosporium strains using Taxa Profile, a semi-automated, 384-well microtitre system.

    PubMed

    Horré, R; Schaal, K P; Marklein, G; de Hoog, G S; Reiffert, S-M

    2011-10-01

    During the last few decades, Pseudallescheria and Scedosporium infections in humans have been noted with increasing frequency. Multi-drug resistance commonly occurring in this species complex interferes with adequate therapy. Rapid and correct identification of clinical isolates is of paramount significance for optimal treatment in the early stages of infection, while strain typing is necessary for epidemiological purposes. In view of the development of physiological diagnostic parameters, 570 physiological reactions were evaluated using the Taxa Profile Micronaut system, a semi-automatic, computer-assisted, 384-well microtitre platform. Thirty-two strains of the Pseudallescheria and Scedosporium complex were analysed after molecular verification of correct species attribution. Of the compounds tested, 254 proved to be polymorphic. Cluster analysis was performed with the Micronaut profile software, which is linked to the NTSYSpc program. The systemic opportunist S. prolificans was unambiguously separated from the remaining species. Within the P. boydii/P. apiosperma complex, differentiation was noted at the level of individual strains, but no unambiguous parameters for species recognition were revealed. © 2011 Blackwell Verlag GmbH.

  16. Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.

    PubMed

    Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice

    2014-04-01

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images still remains an open issue. This is due to the subtle appearance of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    NASA Astrophysics Data System (ADS)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
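    The SGM cost-aggregation recurrence mentioned above can be sketched for a single (left-to-right) path; the penalties P1/P2 and the toy cost volume here are illustrative:

```python
import numpy as np

def sgm_aggregate_left_to_right(cost, P1=1.0, P2=8.0):
    """One-direction Semi-Global Matching aggregation over a cost
    volume of shape (width, ndisp). For each pixel, the aggregated
    cost takes the cheapest predecessor at the same disparity, an
    adjacent disparity (penalty P1), or any disparity (penalty P2)."""
    w, d = cost.shape
    agg = np.empty((w, d), dtype=float)
    agg[0] = cost[0]
    for x in range(1, w):
        prev = agg[x - 1]
        best_prev = prev.min()
        up = np.roll(prev, 1)
        up[0] = np.inf          # no disparity d-1 at d == 0
        down = np.roll(prev, -1)
        down[-1] = np.inf       # no disparity d+1 at d == ndisp-1
        agg[x] = cost[x] + np.minimum.reduce([
            prev, up + P1, down + P1, np.full(d, best_prev + P2)
        ]) - best_prev          # subtraction keeps values bounded
    return agg
```

    A full SGM implementation sums such aggregations over 8 or 16 path directions before taking the per-pixel minimum-cost disparity.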

  18. A semi-automatic method for extracting thin line structures in images as rooted tree network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting in minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, the geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
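    The minimum-cost-path ingredient of such a method can be illustrated with a plain Dijkstra search on a cost grid (isotropic for simplicity; the paper's metric is anisotropic, derived from the gradient structure tensor):

```python
import heapq

def min_cost_path(cost, seed, target):
    """Dijkstra shortest path on a 2D cost grid: the path cost is the
    sum of pixel costs visited, so the path hugs low-cost (dark,
    vessel-like) pixels. Illustrative sketch only."""
    h, w = len(cost), len(cost[0])
    dist = {seed: cost[seed[0]][seed[1]]}
    prev = {}
    heap = [(dist[seed], seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [target], target
    while node != seed:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]
```

    Replacing the uniform 4-neighbour step cost with a direction-dependent one is what turns this into the anisotropic geodesic propagation described in the abstract.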

  19. Semi-automatic recognition of marine debris on beaches

    NASA Astrophysics Data System (ADS)

    Ge, Zhenpeng; Shi, Huahong; Mei, Xuefei; Dai, Zhijun; Li, Daoji

    2016-05-01

    An increasing amount of anthropogenic marine debris is pervading the Earth's environmental systems, resulting in an enormous threat to living organisms. Additionally, the large amount of marine debris around the world has been investigated mostly through tedious manual methods. Therefore, we propose the use of a new technique, light detection and ranging (LIDAR), for the semi-automatic recognition of marine debris on a beach because of its substantially more efficient role in comparison with other more laborious methods. Our results revealed that LIDAR can be used for the classification of marine debris into plastic, paper, cloth and metal. Additionally, we reconstructed a 3-dimensional model of different types of debris on a beach with a high validity of debris revivification using LIDAR-based individual separation. These findings demonstrate that the availability of this new technique enables detailed observations to be made of debris on a large beach that were previously not possible. It is strongly suggested that LIDAR could be implemented as an appropriate monitoring tool for marine debris by global researchers and governments.

  20. Conceptual design of semi-automatic wheelbarrow to overcome ergonomics problems among palm oil plantation workers

    NASA Astrophysics Data System (ADS)

    Nawik, N. S. M.; Deros, B. M.; Rahman, M. N. A.; Sukadarin, E. H.; Nordin, N.; Tamrin, S. B. M.; Bakar, S. A.; Norzan, M. L.

    2015-12-01

    An ergonomics problem is one of the main issues faced by palm oil plantation workers, especially during harvesting and collecting of fresh fruit bunches (FFB). The intensive manual handling and labour activities involved have been associated with a high prevalence of musculoskeletal disorders (MSDs) among palm oil plantation workers. New and safe technology for machines and equipment in palm oil plantations is very important in order to help workers reduce risks and injuries while working. The aim of this research is to improve the design of a wheelbarrow so that it is suitable for workers in small oil palm plantations. The wheelbarrow design was drawn using CATIA ergonomic features. The ergonomics assessment was performed by comparison with the existing wheelbarrow design. The conceptual design was developed based on the problems that had been reported by workers. From the analysis of these problems, a concept design for an ergonomic semi-automatic wheelbarrow, safe and suitable for palm oil plantation workers, was finally developed.

  1. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound is easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.

  2. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.

  3. Bound States and the Third Harmonic Generation in an Electric Field Biased Semi-parabolic Quantum Well

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Xie, Hong-Jing

    2003-11-01

    Within the framework of the compact density matrix approach, the third-harmonic generation (THG) in an electric-field-biased semi-parabolic quantum well (QW) has been deduced and investigated. Via a variant of displaced harmonic oscillation, the exact electronic states in the semi-parabolic QW with an applied electric field have also been obtained and discussed. Numerical results for typical GaAs material reveal that electric fields and the confined potential frequency of the semi-parabolic QW have obvious influences on the energy levels of the electronic states and on the THG in semi-parabolic QW systems. The project was supported in part by the Guangdong Provincial Natural Science Foundation of China.

  4. Accuracy and reproducibility of aortic annular measurements obtained from echocardiographic 3D manual and semi-automated software analyses in patients referred for transcatheter aortic valve implantation: implication for prosthesis size selection.

    PubMed

    Stella, Stefano; Italia, Leonardo; Geremia, Giulia; Rosa, Isabella; Ancona, Francesco; Marini, Claudia; Capogrosso, Cristina; Giglio, Manuela; Montorfano, Matteo; Latib, Azeem; Margonato, Alberto; Colombo, Antonio; Agricola, Eustachio

    2018-02-06

    A 3D transoesophageal echocardiography (3D-TOE) reconstruction tool has recently been introduced. The system automatically configures a geometric model of the aortic root and performs quantitative analysis of these structures. We compared the measurements of the aortic annulus (AA) obtained by semi-automated 3D-TOE quantitative software and by manual analysis vs. multislice computed tomography (MSCT) ones. One hundred and seventy-five patients (mean age 81.3 ± 6.3 years, 77 men) who underwent both MSCT and 3D-TOE for annulus assessment before transcatheter aortic valve implantation were analysed. Hypothetical prosthetic valve sizing was evaluated using the 3D manual and semi-automated measurements, with the manufacturer-recommended CT-based sizing algorithm as the gold standard. Good correlation between both 3D-TOE methods and the MSCT measurements was found, but the semi-automated analysis demonstrated slightly better correlations for AA major diameter (r = 0.89), perimeter (r = 0.89), and area (r = 0.85) (all P < 0.0001) than the manual one. Both 3D methods underestimated the MSCT measurements, but the semi-automated measurements showed narrower limits of agreement and less bias than the manual measurements for most AA parameters. On average, the 3D-TOE semi-automated major diameter, area, and perimeter underestimated the respective MSCT measurements by 7.4%, 3.5%, and 4.4%, respectively, whereas the minor diameter was overestimated by 0.3%. Moderate agreement for valve sizing was found for both 3D-TOE techniques: Kappa agreement 0.5 for both the semi-automated and the manual analysis. Interobserver and intraobserver agreements for the AA measurements were excellent for both techniques (intraclass correlation coefficients for all parameters >0.80). The 3D-TOE semi-automated analysis of AA is feasible and reliable and can be used in clinical practice as an alternative to MSCT for AA assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author(s) 2018.
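    The limits-of-agreement comparison reported above follows the standard Bland-Altman construction (mean difference ± 1.96 SD), sketched below for illustration; this is the generic formula, not the study's own code:

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement
    methods applied to the same subjects: the mean of the pairwise
    differences and the interval bias +/- 1.96 * SD(differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    Narrower limits of agreement, as found for the semi-automated measurements, indicate that the two methods disagree less from patient to patient even when the average bias is similar.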

  5. Automatic Electronic Oxygen Supply

    PubMed Central

    Ford, Patricia; Hoodless, D. J.

    1971-01-01

    An automatic electronic oxygen system has been devised to supply an intensive care unit with a “fail-safe” supply of continuous oxygen. All parts of the system are fitted with alarms, as the oxygen powers gas-driven ventilators. Since the system is cheap it can be installed in hospitals where finance is limited. PMID:5278618

  6. An automatic lightning detection and photographic system

    NASA Technical Reports Server (NTRS)

    Wojtasinski, R. J.; Holley, L. D.; Gray, J. L.; Hoover, R. B.

    1973-01-01

    Conventional 35-mm camera is activated by an electronic signal every time lightning strikes in the general vicinity. Electronic circuit detects lightning by means of antenna which picks up atmospheric radio disturbances. Camera is equipped with fish-eye lens, automatic shutter advance, and small 24-hour clock to indicate time when exposures are made.

  7. An automatic method for segmentation of fission tracks in epidote crystal photomicrographs

    NASA Astrophysics Data System (ADS)

    de Siqueira, Alexandre Fioravante; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Tello Saenz, Carlos Alberto; Job, Aldo Eloizo

    2014-08-01

    Manual identification of fission tracks has practical problems, such as variation due to observer efficiency. An automatic processing method that could identify fission tracks in a photomicrograph would solve this problem and improve the speed of track counting. However, separation of nontrivial images is one of the most difficult tasks in image processing. Several commercial and free software packages are available, but they are designed for specific types of images. In this paper, an automatic method based on starlet wavelets is presented in order to separate fission tracks in mineral photomicrographs. Automation is obtained by the Matthews correlation coefficient, and results are evaluated by precision, recall and accuracy. This technique is an improvement of a method aimed at segmentation of scanning electron microscopy images. The method is applied to photomicrographs of epidote phenocrysts, in which accuracy higher than 89% was obtained in fission track segmentation, even for difficult images. Algorithms corresponding to the proposed method are available for download. Using the method presented here, a user can easily determine fission tracks in photomicrographs of mineral samples.
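    Parameter selection driven by the Matthews correlation coefficient, the automation mechanism named above, can be sketched as follows; the score/threshold interface is an illustrative assumption, not the paper's code:

```python
import math

def matthews_cc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary segmentation;
    returns 0 when any marginal count is zero (undefined case)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def best_threshold(scores, truth, thresholds):
    """Pick the binarisation threshold that maximises the MCC against
    a ground-truth mask (illustrative MCC-driven automation)."""
    def mcc_at(t):
        pred = [s >= t for s in scores]
        tp = sum(p and g for p, g in zip(pred, truth))
        tn = sum((not p) and (not g) for p, g in zip(pred, truth))
        fp = sum(p and (not g) for p, g in zip(pred, truth))
        fn = sum((not p) and g for p, g in zip(pred, truth))
        return matthews_cc(tp, tn, fp, fn)
    return max(thresholds, key=mcc_at)
```

    Because the MCC balances all four confusion-matrix cells, it remains informative even when track pixels are a small fraction of the image.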

  8. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.

    PubMed

    Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.

  9. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    PubMed

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task in order to quantify arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, which is combined with the dynamic directional convolution vector field. Next, an active contour model is utilized for the propagation of the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization

    PubMed Central

    Blenkmann, Alejandro O.; Phillips, Holly N.; Princich, Juan P.; Rowe, James B.; Bekinschtein, Tristan A.; Muravchik, Carlos H.; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2–3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions. PMID:28303098

  11. Automatic measurement of prosody in behavioral variant FTD.

    PubMed

    Nevler, Naomi; Ash, Sharon; Jester, Charles; Irwin, David J; Liberman, Mark; Grossman, Murray

    2017-08-15

    To help understand speech changes in behavioral variant frontotemporal dementia (bvFTD), we developed and implemented automatic methods of speech analysis for quantification of prosody, and evaluated clinical and anatomical correlations. We analyzed semi-structured, digitized speech samples from 32 patients with bvFTD (21 male, mean age 63 ± 8.5, mean disease duration 4 ± 3.1 years) and 17 matched healthy controls (HC). We automatically extracted fundamental frequency (f0, the physical property of sound most closely correlating with perceived pitch) and computed pitch range on a logarithmic scale (semitone) that controls for individual and sex differences. We correlated f0 range with neuropsychiatric tests, and related f0 range to gray matter (GM) atrophy using 3T T1 MRI. We found significantly reduced f0 range in patients with bvFTD (mean 4.3 ± 1.8 ST) compared to HC (5.8 ± 2.1 ST; p = 0.03). Regression related reduced f0 range in bvFTD to GM atrophy in bilateral inferior and dorsomedial frontal as well as left anterior cingulate and anterior insular regions. Reduced f0 range reflects impaired prosody in bvFTD. This is associated with neuroanatomic networks implicated in language production and social disorders centered in the frontal lobe. These findings support the feasibility of automated speech analysis in frontotemporal dementia and other disorders. © 2017 American Academy of Neurology.
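    Computing an f0 range on the semitone scale, as in the study above, reduces to a log-ratio between an upper and a lower point of the f0 distribution; the percentile choice in this sketch is an assumption for illustration, not the paper's exact definition:

```python
import math

def f0_range_semitones(f0_values, lo_pct=10, hi_pct=90):
    """Pitch range in semitones: 12 * log2 of the ratio between an
    upper and a lower percentile of the f0 track (Hz). The log scale
    controls for individual and sex differences in baseline pitch."""
    xs = sorted(f0_values)
    def pct(p):  # linear-interpolation percentile
        k = (len(xs) - 1) * p / 100
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (k - f) * (xs[c] - xs[f])
    return 12 * math.log2(pct(hi_pct) / pct(lo_pct))
```

    A doubling of f0 is by definition 12 semitones (one octave), so the reported group means of roughly 4-6 ST correspond to a range well under half an octave.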

  12. The casting and mechanism of formation of semi-permeable polymer membranes in a microgravity environment

    NASA Astrophysics Data System (ADS)

    Vera, I.

    The National Electric Company of Venezuela, C.A.D.A.F.E., is sponsoring the development of this experiment, which represents Venezuela's first scientific experiment in space. The apparatus for the automatic casting of polymer thin films will be contained in NASA's payload No. G-559 of the Get Away Special program for a future orbital space flight in the U.S. Space Shuttle. Semi-permeable polymer membranes have important applications in a variety of fields, such as medicine, energy, and pharmaceuticals, and in general fluid separation processes such as reverse osmosis, ultra-filtration, and electro-dialysis. The casting of semi-permeable membranes in space will help to identify the roles of convection in determining the structure of these membranes.

  13. A novel colourimetric technique to assess chewing function using two-coloured specimens: Validation and application.

    PubMed

    Schimmel, Martin; Christou, Panagiotis; Miyazaki, Hideo; Halazonetis, Demetrios; Herrmann, François R; Müller, Frauke

    2015-08-01

    Chewing efficiency may be evaluated using cohesive specimens, especially in elderly or dysphagic patients. The aim of this study was to evaluate three two-coloured chewing gums for a colour-mixing ability test and to validate new purpose-built software (ViewGum©). Dentate participants (dentate-group) and edentulous patients with mandibular two-implant overdentures (IOD-group) were recruited. First, the dentate-group chewed three different types of two-coloured gum (gum1-gum3) for 5, 10, 20, 30 and 50 chewing cycles. Subsequently, the number of chewing cycles with the highest intra- and inter-rater agreement was determined visually by applying a scale (SA) and opto-electronically (ViewGum©, Bland-Altman analysis). The ViewGum© software semi-automatically determines the variance of hue (VOH); inadequate mixing presents with a larger VOH than complete mixing. Secondly, the dentate-group and the IOD-group were compared. The dentate-group comprised 20 participants (10 female, 30.3±6.7 years); the IOD-group 15 participants (10 female, 74.6±8.3 years). Intra-rater and inter-rater agreement (SA) was very high at 20 chewing cycles (95.00-98.75%). Gums 1-3 showed different colour-mixing characteristics as a function of chewing cycles: gum1 showed a logarithmic association, whereas gum2 and gum3 demonstrated more linear behaviour. However, the number of chewing cycles could be predicted in all specimens from VOH (all p<0.0001, mixed linear regression models). Both analyses proved discriminative with respect to the dental state. ViewGum© proved to be a reliable and discriminative tool for opto-electronic assessment of chewing efficiency, provided an elastic specimen is chewed for 20 cycles, and can be recommended for the evaluation of chewing efficiency in clinical and research settings. Chewing is a complex function of the oro-facial structures and the central nervous system. The application of the proposed assessments of chewing function in geriatrics or special care dentistry could help to visualise oro-functional or dental comorbidities in dysphagic patients or those suffering from protein-energy malnutrition. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
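
    The variance-of-hue idea can be illustrated with a circular-variance sketch; the actual ViewGum© algorithm is not reproduced here, so this is only a plausible stand-in, with made-up pixel data:

```python
import colorsys
import math

def variance_of_hue(rgb_pixels):
    """Circular variance of hue (0 = one uniform hue, up to 1 = maximal spread).
    Poorly mixed two-coloured gum keeps two distinct hue clusters -> large VOH;
    well-mixed gum converges on one intermediate hue -> small VOH."""
    angles = []
    for r, g, b in rgb_pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        angles.append(2.0 * math.pi * h)  # hue as an angle on the colour circle
    n = len(angles)
    c = sum(math.cos(a) for a in angles) / n
    s = sum(math.sin(a) for a in angles) / n
    return 1.0 - math.hypot(c, s)  # 1 minus mean resultant length

unmixed = [(255, 0, 0)] * 50 + [(0, 0, 255)] * 50  # two separate colour fields
mixed = [(128, 0, 128)] * 100                      # one intermediate hue
```

    With these synthetic pixels the unmixed specimen scores a clearly larger VOH than the fully mixed one.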

  14. Chemical vapor deposition for automatic processing of integrated circuits

    NASA Technical Reports Server (NTRS)

    Kennedy, B. W.

    1980-01-01

    Chemical vapor deposition for automatic processing of integrated circuits is discussed, including the wafer carrier, loading from a receiving air track into automatic furnaces, and unloading onto a sending air track. Passivation using electron-beam-deposited quartz is also considered.

  15. A semi-automatic microextraction in packed sorbent, using a digitally controlled syringe, combined with ultra-high pressure liquid chromatography as a new and ultra-fast approach for the determination of prenylflavonoids in beers.

    PubMed

    Gonçalves, João L; Alves, Vera L; Rodrigues, Fátima P; Figueira, José A; Câmara, José S

    2013-08-23

    In this work a highly selective and sensitive analytical procedure based on the semi-automatic microextraction by packed sorbents (MEPS) technique, using a new digitally controlled syringe (eVol(®)) combined with ultra-high pressure liquid chromatography (UHPLC), is proposed to determine the prenylated chalcone derived from the hop (Humulus lupulus L.), xanthohumol (XN), and its isomeric flavanone isoxanthohumol (IXN) in beers. Extraction and UHPLC parameters were carefully optimized to achieve the highest recoveries and to enhance the analytical characteristics of the method. Important parameters affecting MEPS performance, namely the type of sorbent material (C2, C8, C18, SIL, and M1), elution solvent system, number of extraction cycles (extract-discard), sample volume, elution volume, and sample pH, were evaluated. The optimal experimental conditions involve loading 500 μL of sample through a C18 sorbent in a MEPS syringe placed in the semi-automatic eVol(®) syringe, followed by elution using 250 μL of acetonitrile (ACN) in a 10-cycle extraction (about 5 min for the entire sample preparation step). The obtained extract is directly analyzed in the UHPLC system using a binary mobile phase composed of aqueous 0.1% formic acid (eluent A) and ACN (eluent B) in gradient elution mode (10 min total analysis). Under optimized conditions good results were obtained in terms of linearity within the established concentration range, with correlation coefficient (R) values higher than 0.986 and a residual deviation for each calibration point below 12%. The limits of detection (LOD) and quantification (LOQ) were 0.4 ng mL(-1) and 1.0 ng mL(-1) for IXN, and 0.9 ng mL(-1) and 3.0 ng mL(-1) for XN, respectively. Precision was lower than 4.6% for IXN and 8.4% for XN. Typical recoveries ranged between 67.1% and 99.3% for IXN and between 74.2% and 99.9% for XN, with relative standard deviations (%RSD) no larger than 8%. Application of the proposed analytical procedure to commercial beers revealed the presence of both target prenylflavonoids in all samples, with IXN the most abundant at concentrations between 0.126 and 0.200 μg mL(-1). Copyright © 2013 Elsevier B.V. All rights reserved.
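
    The calibration figures of merit reported above (R, LOD, LOQ) follow standard formulas; below is a minimal sketch, assuming the common ICH convention LOD = 3.3σ/S and LOQ = 10σ/S, with invented calibration points rather than the paper's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares calibration line y = a + b*x, returning the
    intercept, slope and correlation coefficient R."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b, sxy / (sxx * syy) ** 0.5

def lod_loq(sigma_blank, slope):
    """ICH-style limits: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope

# Hypothetical calibration points: concentration (ng/mL) vs. peak area
conc = [1.0, 5.0, 10.0, 50.0, 100.0]
area = [12.0, 58.0, 121.0, 595.0, 1190.0]
intercept, slope, r = linear_fit(conc, area)
lod, loq = lod_loq(sigma_blank=1.4, slope=slope)
```

    By construction LOQ is always roughly three times LOD under this convention, consistent with the ratios reported in the abstract.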

  16. The “2T” ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, Jim Michael

    2016-10-05

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write the xRAGE input deck that matches the boundary conditions imposed by the solution, and into determining what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent within xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.

  17. Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study

    PubMed Central

    Thomaz, Edison; Zhang, Cheng; Essa, Irfan; Abowd, Gregory D.

    2015-01-01

    Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in the wild, where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors. PMID:25859566
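
    The person-dependent F-score can be reproduced from a confusion matrix; the counts below are hypothetical, chosen only so the result lands near the reported 79.8%:

```python
def f_score(tp, fp, fn, beta=1.0):
    """F-measure from confusion counts: weighted harmonic mean of
    precision and recall (beta = 1 gives the usual F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical confusion counts for meal-eating detection
score = f_score(tp=83, fp=21, fn=21)
```

    When false positives and false negatives are equal, precision equals recall and F1 equals both, which is why a single balanced example suffices here.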

  18. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.

    2015-03-01

    We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically, collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against a moderate anatomical deformation.

  19. Novel semi-automated kidney volume measurements in autosomal dominant polycystic kidney disease.

    PubMed

    Muto, Satoru; Kawano, Haruna; Isotani, Shuji; Ide, Hisamitsu; Horie, Shigeo

    2018-06-01

    We assessed the effectiveness and convenience of SYNAPSE VINCENT® (Fuji Medical Systems, Tokyo, Japan), a novel semi-automatic high-speed 3D image analysis system for measuring kidney volume (KV), in autosomal dominant polycystic kidney disease (ADPKD) patients. We developed novel semi-automated KV measurement software for patients with ADPKD to be included in the imaging analysis software SYNAPSE VINCENT®. The software extracts renal regions using image recognition and measures KV (VINCENT KV). The algorithm was designed to work with the manual designation of a long axis of a kidney including cysts. After using the software to assess the predictive accuracy of the VINCENT method, we performed an external validation study and compared accurate KV and ellipsoid KV based on geometric modeling by linear regression analysis and Bland-Altman analysis. Median eGFR was 46.9 ml/min/1.73 m². Median accurate KV, VINCENT KV and ellipsoid KV were 627.7 ml, 619.4 ml (IQR 431.5-947.0) and 694.0 ml (IQR 488.1-1107.4), respectively. Compared with ellipsoid KV (r = 0.9504), VINCENT KV correlated strongly with accurate KV (r = 0.9968), without systematic underestimation or overestimation (ellipsoid KV: 14.2 ± 22.0%; VINCENT KV: -0.6 ± 6.0%). There were no significant slice-thickness-specific differences (p = 0.2980). The VINCENT method is an accurate and convenient semi-automatic method to measure KV in patients with ADPKD compared with the conventional ellipsoid method.
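
    The conventional ellipsoid estimate the study compares against is the standard geometric formula V = (π/6)·L·W·D; a minimal sketch with invented kidney axes (the percent-deviation helper mirrors the Bland-Altman-style comparison above):

```python
import math

def ellipsoid_kv(length_cm, width_cm, depth_cm):
    """Conventional ellipsoid estimate V = (pi/6) * L * W * D.
    Axes in cm give a volume in cm^3, i.e. mL."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

def percent_deviation(estimate_ml, reference_ml):
    """Signed percentage deviation from the reference (accurate) volume."""
    return 100.0 * (estimate_ml - reference_ml) / reference_ml

# Hypothetical polycystic kidney axes (cm)
v_ellipsoid = ellipsoid_kv(18.0, 10.0, 7.0)
```

    Plugging the median volumes reported above into `percent_deviation` shows the ellipsoid method's positive bias, consistent with the abstract's 14.2% figure.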

  20. Nonlinear model for thermal effects in free-electron lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peter, E., E-mail: peterpeter@uol.com.br; Endler, A., E-mail: aendler@if.ufrgs.br; Rizzato, F. B., E-mail: rizzato@if.ufrgs.br

    2014-11-15

    In the present work, we extend results of a previous paper [Peter et al., Phys. Plasmas 20, 123104 (2013)] and develop a semi-analytical model to account for thermal effects on the nonlinear dynamics of the electron beam in free-electron lasers. We relax the condition of a cold electron beam but still use the concept of compressibility, now associated with a warm beam model, to evaluate the time scale for saturation and the peak laser intensity in high-gain regimes. Although vanishing compressibilities and the associated divergent densities are absent in warm models, a series of discontinuities in the electron density precede the saturation process. We show that full wave-particle simulations agree well with the predictions of the model.

  1. Increasing Special Library Collection Use in Very Computer Intensive Environments: Automatic Bibliographic Compilation and the Dissemination of Electronic Newsletters.

    ERIC Educational Resources Information Center

    Sanchez, James Joseph

    This paper describes the development and implementation of an automatic bibliographic facility and an electronic newsletter created for a special collection of aerospace and mechanical engineering monographs and articles at the University of Arizona. The project included the development of an online catalog, increasing the depth of bibliographic…

  2. Electron Source Brightness and Illumination Semi-Angle Distribution Measurement in a Transmission Electron Microscope.

    PubMed

    Börrnert, Felix; Renner, Julian; Kaiser, Ute

    2018-05-21

    The electron source brightness is an important parameter in an electron microscope. Reliable and easy brightness measurement routes are not easily found. A determination method for the illumination semi-angle distribution in transmission electron microscopy is even less well documented. Herein, we report a simple measurement route for both entities and demonstrate it on a state-of-the-art instrument. The reduced axial brightness of the FEI X-FEG with a monochromator was determined to be larger than 10⁸ A/(m² sr V).
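
    Reduced axial brightness is conventionally defined as beam current per unit area, solid angle, and accelerating voltage, B_r = I/(A·Ω·U). The sketch below uses illustrative numbers, not the measured values from the paper:

```python
import math

def reduced_axial_brightness(current_a, diameter_m, semi_angle_rad, voltage_v):
    """Reduced brightness B_r = I / (A * Omega * U), with A = pi*(d/2)^2 the
    probe area and Omega ~ pi*alpha^2 the solid angle for a small
    illumination semi-angle alpha. Result in A/(m^2 sr V)."""
    area = math.pi * (diameter_m / 2.0) ** 2
    solid_angle = math.pi * semi_angle_rad ** 2
    return current_a / (area * solid_angle * voltage_v)

# Illustrative operating point: 1 nA into a 1 nm probe at 5 mrad and 80 kV
br = reduced_axial_brightness(current_a=1e-9, diameter_m=1e-9,
                              semi_angle_rad=5e-3, voltage_v=8.0e4)
```

    Dividing out the accelerating voltage is what makes the quantity comparable across instruments operated at different high tensions.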

  3. Neurodegenerative changes in Alzheimer's disease: a comparative study of manual, semi-automated, and fully automated assessment using MRI

    NASA Astrophysics Data System (ADS)

    Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter

    2008-03-01

    Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is the evaluation of a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps including state-of-the-art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10⁻⁵ < p < 10⁻⁴) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10⁻⁶) and semi-automated (r = -0.83, p < 10⁻¹³) measurements. It achieved high accuracy while maximizing observer independence and reducing time, and is thus useful for clinical routine.

  4. An evaluation of semi-automated methods for collecting ecosystem-level data in temperate marine systems.

    PubMed

    Griffin, Kingsley J; Hedge, Luke H; González-Rivero, Manuel; Hoegh-Guldberg, Ove I; Johnston, Emma L

    2017-07-01

    Historically, marine ecologists have lacked efficient tools that are capable of capturing detailed species distribution data over large areas. Emerging technologies such as high-resolution imaging and associated machine-learning image-scoring software are providing new tools to map species over large areas in the ocean. Here, we combine a novel diver propulsion vehicle (DPV) imaging system with free-to-use machine-learning software to semi-automatically generate dense and widespread abundance records of a habitat-forming alga over ~5,000 m² of temperate reef. We employ replicable spatial techniques to test the effectiveness of traditional diver-based sampling, and better understand the distribution and spatial arrangement of one key algal species. We found that the effectiveness of a traditional survey depended on the level of spatial structuring, and generally 10-20 transects (50 × 1 m) were required to obtain reliable results. This represents 2-20 times greater replication than has been collected in previous studies. Furthermore, we demonstrate the usefulness of fine-resolution distribution modeling for understanding patterns in canopy algae cover at multiple spatial scales, and discuss applications to other marine habitats. Our analyses demonstrate that semi-automated methods of data gathering and processing provide more accurate results than traditional methods for describing habitat structure at seascape scales, and therefore represent vastly improved techniques for understanding and managing marine seascapes.

  5. An efficient scheme for automatic web pages categorization using the support vector machine

    NASA Astrophysics Data System (ADS)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages within a fraction of a second. To achieve this goal, an efficient categorization of web page contents is required. Manual categorization of these billions of web pages with high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic, and a high level of accuracy cannot be achieved using them. To achieve these goals, this paper proposes automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, feature extraction and evaluation are performed first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keyword lists developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of keyword IDs. Also, stemming of keywords and tag text is applied to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy in different categories of web pages.
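
    The keyword-weighting step can be sketched as a weighted bag-of-keywords extractor. Everything here is a simplified illustration with invented keyword lists: prefix matching stands in for the stemming step, and a linear score stands in for the SVM classifier:

```python
def extract_features(page_text, keyword_weights):
    """Weighted bag-of-keywords vector for one page: count each domain
    keyword (crude prefix matching approximates stemming) times its weight."""
    tokens = page_text.lower().split()
    return {kw: w * sum(1 for t in tokens if t.startswith(kw))
            for kw, w in keyword_weights.items()}

def categorize(page_text, domain_keywords):
    """Assign the domain with the largest total weighted keyword score
    (a linear stand-in for the SVM used in the paper)."""
    scores = {dom: sum(extract_features(page_text, kws).values())
              for dom, kws in domain_keywords.items()}
    return max(scores, key=scores.get)

# Invented domain keyword lists with weights
domains = {
    "sports": {"match": 2.0, "team": 1.5, "score": 1.0},
    "finance": {"stock": 2.0, "market": 1.5, "bank": 1.0},
}
label = categorize("The team won the match with a record score", domains)
```

    A real implementation would replace the linear score with a trained SVM over a much larger feature set, as the abstract describes.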

  6. Statistical colour models: an automated digital image analysis method for quantification of histological biomarkers.

    PubMed

    Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad

    2016-04-27

    Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and chose 128 histogram bins, which proved to be the optimal number. A maximum likelihood classifier is used to automatically classify pixels in digital slides into positively or negatively stained pixels. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The purpose of the evaluation was to compare the computer model with human evaluation. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results demonstrated that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model shows little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variations in results.
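
    A toy version of the described pipeline: drop the Y channel of YCbCr, bin the chroma values into 128 histogram bins per axis, and classify pixels by maximum likelihood. The BT.601 conversion is standard; the training pixels below are invented, not from the paper's datasets:

```python
def rgb_to_cbcr(r, g, b):
    """ITU-R BT.601 chroma channels; the luminance (Y) channel is discarded."""
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

BINS = 128  # bin count reported as optimal for the statistical colour model

def train_histogram(pixels):
    """Normalised 2D CbCr histogram acting as a class-conditional likelihood."""
    hist = {}
    for r, g, b in pixels:
        cb, cr = rgb_to_cbcr(r, g, b)
        key = (int(cb * BINS / 256), int(cr * BINS / 256))
        hist[key] = hist.get(key, 0) + 1
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}

def classify(pixel, hist_pos, hist_neg):
    """Maximum-likelihood decision between positively and negatively
    stained classes (tiny floor avoids zero likelihoods)."""
    cb, cr = rgb_to_cbcr(*pixel)
    key = (int(cb * BINS / 256), int(cr * BINS / 256))
    return "positive" if hist_pos.get(key, 1e-9) >= hist_neg.get(key, 1e-9) else "negative"

# Invented training pixels: brown DAB-like stain vs. blue counterstain
brown = [(150, 90, 40)] * 20
blue = [(60, 80, 160)] * 20
hp, hn = train_histogram(brown), train_histogram(blue)
```

    Discarding Y makes the model insensitive to staining intensity and illumination, which is what allows a single chroma histogram per class.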

  7. ClinicalTrials.gov as a data source for semi-automated point-of-care trial eligibility screening.

    PubMed

    Pfiffner, Pascal B; Oh, JiWon; Miller, Timothy A; Mandl, Kenneth D

    2014-01-01

    Implementing semi-automated processes to efficiently match patients to clinical trials at the point of care requires both detailed patient data and authoritative information about open studies. To evaluate the utility of the ClinicalTrials.gov registry as a data source for semi-automated trial eligibility screening, eligibility criteria and metadata for 437 trials open for recruitment in four different clinical domains were identified in ClinicalTrials.gov. Trials were evaluated for up-to-date recruitment status, and eligibility criteria were evaluated for obstacles to automated interpretation. Finally, phone or email outreach to coordinators at a subset of the trials was made to assess the accuracy of contact details and recruitment status. 24% (104 of 437) of trials declaring an open recruitment status list a study completion date in the past, indicating out-of-date records. Substantial barriers to automated interpretation of free-form eligibility text are present in 81% to 94% of all trials. We were unable to contact coordinators at 31% (45 of 146) of the trials in the subset, either by phone or by email. Only 53% (74 of 146) would confirm that they were still recruiting patients. Because ClinicalTrials.gov has entries on most US and many international trials, the registry could be repurposed as a comprehensive trial-matching data source. Semi-automated point-of-care recruitment would be facilitated by matching the registry's eligibility criteria against clinical data from electronic health records, but the current entries fall short. Ultimately, improved techniques in natural language processing will facilitate semi-automated complex matching. As immediate next steps, we recommend augmenting ClinicalTrials.gov data entry forms to capture key eligibility criteria in a simple, structured format.
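
    The out-of-date-record check described above reduces to comparing a trial's declared status against its completion date; a sketch with simplified field names and invented trial records (the real registry schema is richer than this):

```python
from datetime import date

def is_stale(record, today=date(2014, 1, 1)):
    """A record declaring open recruitment whose completion date already
    lies in the past is treated as out of date."""
    return (record["overall_status"] == "Recruiting"
            and record["completion_date"] < today)

# Invented records; field names are a simplification of registry fields
trials = [
    {"nct_id": "NCT-A", "overall_status": "Recruiting", "completion_date": date(2012, 6, 1)},
    {"nct_id": "NCT-B", "overall_status": "Recruiting", "completion_date": date(2015, 3, 1)},
    {"nct_id": "NCT-C", "overall_status": "Completed", "completion_date": date(2011, 1, 1)},
]
stale = [t["nct_id"] for t in trials if is_stale(t)]
```

    Applying this rule across a registry snapshot is how a figure like the 24% stale-record rate above can be computed automatically.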

  8. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  9. Semi-Automatic Assembly of Learning Resources

    ERIC Educational Resources Information Center

    Verbert, K.; Ochoa, X.; Derntl, M.; Wolpers, M.; Pardo, A.; Duval, E.

    2012-01-01

    Technology Enhanced Learning is a research field that has matured considerably over the last decade. Many technical solutions to support design, authoring and use of learning activities and resources have been developed. The first datasets that reflect the tracking of actual use of these tools in real-life settings are beginning to become…

  10. 78 FR 37520 - Order Denying Export Privileges

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... DEPARTMENT OF COMMERCE Bureau of Industry and Security Order Denying Export Privileges In the... Molina, Jr. (``Molina'') was convicted of violating Section 38 of the Arms Export Control Act (22 U.S.C... attempting to export and causing to be exported from the United States to Mexico two AK47 semi-automatic...

  11. Wheelchair ergometer. Development of a prototype with electronic braking.

    PubMed

    Forchheimer, F; Lundberg, A

    1986-01-01

    A new wheelchair ergometer is described, which compensates for the pulsating character of the work via an automatic control system. This makes it possible to maintain a constant level of power during wheelchair work. An automatic control system has been integrated into an electronically braked bicycle ergometer, and a pedal unit from the Rodby Electronic bicycle ergometer RE 820 has been coupled to a modified test wheelchair. With this device, physical working capacity under submaximal conditions can be tested in handicapped persons.

  12. Construction of exchange-correlation functionals through interpolation between the non-interacting and the strong-correlation limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yongxi; Ernzerhof, Matthias, E-mail: Matthias.Ernzerhof@UMontreal.ca; Bahmann, Hilke

    Drawing on the adiabatic connection of density functional theory, exchange-correlation functionals of Kohn-Sham density functional theory are constructed which interpolate between the extreme limits of the electron-electron interaction strength. The first limit is the non-interacting one, where there is only exchange. The second is the strongly correlated limit, characterized as the minimum of the electron-electron repulsion energy. The exchange-correlation energy in the strong-correlation limit is approximated through a model for the exchange-correlation hole that is referred to as the nonlocal-radius model [L. O. Wagner and P. Gori-Giorgi, Phys. Rev. A 90, 052512 (2014)]. Using the non-interacting and strongly correlated extremes, various interpolation schemes are presented that yield new approximations to the adiabatic connection and thus to the exchange-correlation energy. Some of them rely on empiricism while others do not. Several of the proposed approximations yield the exact exchange-correlation energy for one-electron systems, where local and semi-local approximations often fail badly. Other proposed approximations generalize existing global hybrids by using a fraction of the exchange-correlation energy in the strong-correlation limit to replace an equal fraction of the semi-local approximation to the exchange-correlation energy. The performance of the proposed approximations is evaluated for molecular atomization energies, total atomic energies, and ionization potentials.

  13. Estimating ice albedo from fine debris cover quantified by a semi-automatic method: the case study of Forni Glacier, Italian Alps

    NASA Astrophysics Data System (ADS)

    Azzoni, Roberto Sergio; Senese, Antonella; Zerboni, Andrea; Maugeri, Maurizio; Smiraglia, Claudio; Diolaiuti, Guglielmina Adele

    2016-03-01

    In spite of the quite abundant literature focusing on fine debris deposition over glacier accumulation areas, less attention has been paid to the melting glacier surface. Accordingly, we proposed a novel method based on semi-automatic image analysis to estimate ice albedo from fine debris coverage (d). Our procedure was tested on the surface of a wide Alpine valley glacier (the Forni Glacier, Italy) in summer 2011, 2012 and 2013, acquiring parallel data sets of in situ measurements of ice albedo and high-resolution surface images. Analysis of 51 images yielded d values ranging from 0.01 to 0.63, and albedo was found to vary from 0.06 to 0.32. The estimated d values are in a linear relation with the natural logarithm of measured ice albedo (R = -0.84). The robustness of our approach in evaluating d was analyzed through five sensitivity tests, and we found that it is largely replicable. On the Forni Glacier, we also quantified a mean debris coverage rate (Cr) equal to 6 g m⁻² per day during the ablation season of 2013, thus supporting previous studies that describe ongoing darkening phenomena at the surface of Alpine debris-free glaciers. In addition to debris coverage, we also considered the impact of water (both from melt and rainfall) as a factor that tunes albedo: meltwater occurs during the central hours of the day, decreasing the albedo due to its lower reflectivity; rainfall, instead, causes a subsequent mean daily albedo increase of slightly more than 20%, although it is short-lasting (from 1 to 4 days).
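
    The reported linear relation between d and ln(albedo) can be recovered with an ordinary least-squares fit on the log of albedo; the sample values below are invented but span the ranges reported in the abstract:

```python
import math

def fit_log_albedo(d_values, albedo_values):
    """Least-squares fit of ln(albedo) = a + b*d, i.e. the linear relation
    between fine-debris coverage d and the log of measured ice albedo."""
    x = d_values
    y = [math.log(al) for al in albedo_values]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b  # intercept a, slope b

# Invented samples spanning the reported ranges (d: 0.01-0.63, albedo: 0.06-0.32)
d = [0.01, 0.10, 0.25, 0.40, 0.63]
alb = [0.32, 0.24, 0.15, 0.10, 0.06]
a, b = fit_log_albedo(d, alb)
```

    The fitted slope is negative, matching the sign of the reported R = -0.84, and exp(a) recovers the clean-ice albedo at d ≈ 0.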

  14. The Reliability of Technical and Tactical Tagging Analysis Conducted by a Semi-Automatic VTS in Soccer.

    PubMed

    Beato, Marco; Jamil, Mikael; Devereux, Gavin

    2018-06-01

    The multiple-camera Video Tracking System (VTS) is a technology that records two-dimensional position data (x and y) at high sampling rates (over 25 Hz). The VTS is of great interest because it can record external load variables as well as collect technical and tactical parameters. Performance analysis is mainly focused on physical demands, yet less attention has been afforded to technical and tactical factors. Digital.Stadium® VTS is a performance analysis device widely used at national and international levels (i.e. Italian Serie A, Euro 2016), and a reliability evaluation of its technical tagging analysis (e.g. shots, passes, assists, set pieces) could be paramount for its application in elite-level competitions, as well as in research studies. Two professional soccer teams, with 30 male players (age 23 ± 5 years, body mass 78.3 ± 6.9 kg, body height 1.81 ± 0.06 m), were monitored in the 2016 season during a friendly match, and data analysis was performed immediately after the game ended. This process was then replicated a week later (4 operators conducted the data analysis in each week). This study reports a near-perfect relationship between the Match and its Replication: R2 coefficients were highly significant for each of the technical variables considered (p < 0.001). In particular, a high intraclass correlation coefficient and a small coefficient of variation were reported. This study reports negligible differences between the Match and its Replication (intra-day reliability). We conclude that the semi-automatic process behind the Digital.Stadium® VTS records technical tagging data accurately.
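
    The intraclass correlation for Match/Replication pairs can be sketched with the one-way random-effects form ICC(1,1); the tagged event counts below are invented, not the study's data:

```python
def icc_1_1(pairs):
    """One-way random-effects intraclass correlation ICC(1,1) for paired
    measurements (e.g., Match vs. Replication of each technical variable):
    (MSB - MSW) / (MSB + (k - 1) * MSW) with k = 2 measurements per subject."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    row_means = [(a + b) / 2.0 for a, b in pairs]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, row_means)) / n
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented pass counts tagged for the same match in week 1 and week 2
counts = [(410, 412), (388, 385), (401, 404), (379, 381), (395, 393)]
icc = icc_1_1(counts)
```

    With replication errors of only a few counts against large between-match spread, the ICC lands close to 1, the pattern the abstract describes.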

  15. Semi-automatic forensic approach using mandibular midline lingual structures as fingerprint: a pilot study.

    PubMed

    Shaheen, E; Mowafy, B; Politis, C; Jacobs, R

    2017-12-01

    Previous research proposed the use of the mandibular midline neurovascular canal structures as a forensic fingerprint. In that observer study, an average correct identification rate of 95% was reached, which triggered the present study. The aims were to present a semi-automatic computer recognition approach to replace the observers and to validate the accuracy of this newly proposed method. Imaging data from Computed Tomography (CT) and Cone Beam Computed Tomography (CBCT) of mandibles scanned at two different moments were collected to simulate an ante-mortem (AM) and post-mortem (PM) situation, where the first scan represented the AM data and the second scan was used to simulate the PM data. Ten cases with 20 scans were used to build a classifier which relies on voxel-based matching and classifies each case into one of two groups: "Unmatched" and "Matched". This protocol was then tested using five other scans from the database. Unpaired t-testing was applied and the accuracy of the computerized approach was determined. A significant difference was found between the "Unmatched" and "Matched" classes, with means of 0.41 and 0.86, respectively. Furthermore, the testing phase showed an accuracy of 100%. The validation of this method pushes this protocol further toward a fully automatic identification procedure for victim identification based on the mandibular midline canal structures alone in cases with available AM and PM CBCT/CT data.
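
    Voxel-based matching can be illustrated with a normalised cross-correlation score, with high scores mapping to "Matched" and low scores to "Unmatched"; this is only a stand-in for the study's classifier, with invented intensity patches:

```python
def ncc(a, b):
    """Normalised cross-correlation between two flattened voxel intensity
    arrays; scores near 1 suggest the same mandible (Matched), low scores
    suggest different subjects (Unmatched)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

# Invented voxel intensity patches around the midline canal region
am = [10.0, 40.0, 80.0, 120.0, 60.0, 20.0]       # ante-mortem scan patch
pm_same = [12.0, 41.0, 78.0, 119.0, 62.0, 18.0]  # same subject rescanned
pm_other = [100.0, 20.0, 15.0, 70.0, 10.0, 90.0] # different subject
```

    Thresholding such a score between the two reported class means (0.41 vs. 0.86) would reproduce the Matched/Unmatched decision in spirit.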

  16. Assessment of Electronic Government Information Products

    DTIC Science & Technology

    1999-03-30

    Center for Environmental Info. & Statistics CM Consumer Handbook for Reducing Solid Waste CM Envirofacts Warehouse CM EPA Online Library System (OLS) CP...Hazardous Waste Site Query (CERCLIS Data) CM Surf Your Watershed CM Test Methods for Evaluating Solid Waste : Physical/Chemical Methods (SW-846) CM...SGML because they consider it "intelligent data" that can automatically generate other formats (e.g., web, BBS, Fax on Demand) through templates and

  17. A video-angiometer for simultaneous and continuous measurement of inner and outer vessel diameters. Technical report.

    PubMed

    Assmann, R; Henrich, H

    1978-09-29

    A system is described for continuously measuring vessel diameters. It is based on evaluating differences in the video signal of a video camera, induced by light-intensity differences (grey levels) caused by the vascular wall structures. The system is electronically linear and measures automatically, with additional visual monitoring by the human observer; the inaccuracy does not exceed 5%.

  18. Manufacturing polymer thin films in a micro-gravity environment

    NASA Technical Reports Server (NTRS)

    Vera, Ivan

    1987-01-01

    This project represents Venezuela's first scientific experiment in space. The apparatus for the automatic casting of two polymer thin films will be contained in NASA's Payload No. G-559 of the Get Away Special program for a future orbital space flight in the U.S. Space Shuttle. Semi-permeable polymer membranes have important applications in a variety of fields, such as medicine, energy, and pharmaceuticals and in general fluid separation processes, such as reverse osmosis, ultrafiltration, and electrodialysis. The casting of semi-permeable membranes in space will help to identify the roles of convection in determining the structure of these membranes.

  19. Application of automatic image analysis in wood science

    Treesearch

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  20. Adding Automatic Evaluation to Interactive Virtual Labs

    ERIC Educational Resources Information Center

    Farias, Gonzalo; Muñoz de la Peña, David; Gómez-Estern, Fabio; De la Torre, Luis; Sánchez, Carlos; Dormido, Sebastián

    2016-01-01

    Automatic evaluation is a challenging field that has been addressed by the academic community in order to reduce the assessment workload. In this work we present a new element for the authoring tool Easy Java Simulations (EJS). This element, which is named automatic evaluation element (AEE), provides automatic evaluation to virtual and remote…

  1. An automatic multi-atlas prostate segmentation in MRI using a multiscale representation and a label fusion strategy

    NASA Astrophysics Data System (ADS)

    Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Pelvic Magnetic Resonance (MR) images are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing measurement times while preserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MR image using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are first non-rigidly registered towards the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with expert manual segmentation, under a leave-one-out scheme with the training database.
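
    The fusion step, a linear combination of registered templates, can be sketched as follows (a minimal illustration assuming binary masks and precomputed similarity weights; the SURF-based template selection and non-rigid registration are outside this sketch):

```python
import numpy as np

def fuse_labels(atlas_masks, weights):
    """Linear combination of registered atlas masks into one segmentation.

    atlas_masks: binary arrays already warped onto the target image;
    weights: per-template similarity scores (e.g. from SURF-based template
    selection), normalized here to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    prob = sum(wi * m.astype(float) for wi, m in zip(w, atlas_masks))
    return prob >= 0.5  # weighted majority vote

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two good atlases agree, a third (low-weight) one is empty.
m1 = np.zeros((8, 8), bool); m1[2:6, 2:6] = True
m2 = np.zeros((8, 8), bool); m2[2:6, 2:7] = True
m3 = np.zeros((8, 8), bool)
seg = fuse_labels([m1, m2, m3], weights=[0.5, 0.4, 0.1])
print(round(dice(seg, m1), 2))  # fused result matches the consensus region
```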

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.

    Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web-based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.

  3. Personalised Care Plan Management Utilizing Guideline-Driven Clinical Decision Support Systems.

    PubMed

    Laleci Erturkmen, Gokce Banu; Yuksel, Mustafa; Sarigul, Bunyamin; Lilja, Mikael; Chen, Rong; Arvanitis, Theodoros N

    2018-01-01

    Older age is associated with an increased accumulation of multiple chronic conditions. The clinical management of patients suffering from multiple chronic conditions is very complex, disconnected and time-consuming in traditional care settings. Integrated care is a means to address the growing demand for improved patient experience and health outcomes of multimorbid and long-term care patients. Care planning is a prevalent approach in integrated care, where the aim is to deliver more personalized and targeted care by creating shared care plans that clearly articulate the role of each provider and patient in the care process. In this paper, we present a method and corresponding implementation of a semi-automatic care plan management tool, integrated with clinical decision support services, which can seamlessly access and assess the electronic health records (EHRs) of the patient against evidence-based clinical guidelines to suggest personalized recommendations for goals and interventions to be added to the individualized care plans.

  4. Three dimensional quantitative characterization of magnetite nanoparticles embedded in mesoporous silicon: local curvature, demagnetizing factors and magnetic Monte Carlo simulations.

    PubMed

    Uusimäki, Toni; Margaris, Georgios; Trohidou, Kalliopi; Granitzer, Petra; Rumpf, Klemens; Sezen, Meltem; Kothleitner, Gerald

    2013-12-07

    Magnetite nanoparticles embedded within the pores of a mesoporous silicon template have been characterized using electron tomography. Linear least squares optimization was used to fit an arbitrary ellipsoid to each segmented particle from the three dimensional reconstruction. It was then possible to calculate the demagnetizing factors and the direction of the shape anisotropy easy axis for every particle. The demagnetizing factors, along with the knowledge of spatial and volume distribution of the superparamagnetic nanoparticles, were used as a model for magnetic Monte Carlo simulations, yielding zero field cooling/field cooling and magnetic hysteresis curves, which were compared to the measured ones. Additionally, the local curvature of the magnetite particles' docking site within the mesoporous silicon's surface was obtained in two different ways and a comparison will be given. A new iterative semi-automatic image alignment program was written and the importance of image segmentation for a truly objective analysis is also addressed.

  5. Selvester scoring in patients with strict LBBB using the QUARESS software.

    PubMed

    Xia, Xiaojuan; Chaudhry, Uzma; Wieslander, Björn; Borgquist, Rasmus; Wagner, Galen S; Strauss, David G; Platonov, Pyotr; Ugander, Martin; Couderc, Jean-Philippe

    2015-01-01

    Estimation of infarct size from body-surface ECGs in post-myocardial infarction patients has become possible using the Selvester scoring method. Automation of this scoring has been proposed in order to speed up measurement and reduce the inter-observer variability of a score whose computation requires strong expertise in electrocardiography. In this work, we evaluated the ability of the QuAReSS software to deliver correct Selvester scoring in a set of standard 12-lead ECGs. Standard 12-lead ECGs were recorded in 105 post-MI patients prescribed implantation of an implantable cardioverter-defibrillator (ICD). Amongst the 105 patients with standard clinical left bundle branch block (LBBB) patterns, 67 had a LBBB pattern meeting the strict criteria. The QuAReSS software was applied to these 67 tracings by two independent groups of cardiologists (from a clinical group and an ECG core laboratory) to measure the Selvester score semi-automatically. Using various levels of agreement metrics, we compared the scores between groups and with those measured automatically by the software. The average absolute difference in Selvester scores measured by the two independent groups was 1.4±1.5 score points, whereas the differences between the automatic method and the two manual adjudications were 1.2±1.2 and 1.3±1.2 points. Eighty-two percent score agreement was observed between the two independent measurements when scores differed by no more than two points, while 90% and 84% score agreement was reached by the automatic method compared to the two manual adjudications. The study confirms that the QuAReSS software provides valid measurements of the Selvester score in patients with strict LBBB with minimal correction from cardiologists. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    PubMed

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need atlas information, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method overcomes all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Evaluation of Markov-Decision Model for Instructional Sequence Optimization. Semi-Annual Technical Report for the period 1 July-31 December 1975. Technical Report No. 76.

    ERIC Educational Resources Information Center

    Wollmer, Richard D.; Bond, Nicholas A.

    Two computer-assisted instruction programs were written in electronics and trigonometry to test the Wollmer Markov Model for optimizing hierarchical learning; calibration samples totalling 110 students completed these programs. Since the model postulated that transfer effects would be a function of the amount of practice, half of the students were…

  8. A comparison between semi-spheroid- and dome-shaped quantum dots coupled to wetting layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahzadeh, Mohammadreza; Sabaeian, Mohammad, E-mail: Sabaeian@scu.ac.ir

    2014-06-15

    During the epitaxial growth method, self-assembled semi-spheroid-shaped quantum dots (QDs) are formed on the wetting layer (WL). However, for the sake of simplicity, researchers sometimes assume semi-spheroid-shaped QDs to be dome-shaped (hemispherical). In this work, a detailed and comprehensive study of the differences between the electronic and transition properties of dome- and semi-spheroid-shaped quantum dots is presented. We explain why the P-to-S intersubband transition behaves the way it does. The calculated results for intersubband P-to-S transition properties of quantum dots show two different trends for dome-shaped and semi-spheroid-shaped quantum dots. The results are interpreted using the probability of finding the electron inside the dome/spheroid region, with emphasis on the effects of the wetting layer. It is shown that dome-shaped and semi-spheroid-shaped quantum dots feature different electronic and transition properties, arising from the difference in lateral dimensions between dome- and semi-spheroid-shaped QDs. Moreover, an analogy is presented between the bound S-states in the quantum dots and a simple 3D quantum mechanical particle in a box, and effective sizes are calculated. The results of this work will help researchers present more realistic models of coupled QD/WL systems and explain their properties more precisely.

  9. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images

    PubMed Central

    Hou, Bin; Wang, Yunhong; Liu, Qingjie

    2016-01-01

    Characterizing up-to-date information about the Earth’s surface is an important application, providing insights for urban planning, resource monitoring and environmental studies. A large number of change detection (CD) methods have been developed to address this task by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further poses challenges to traditional CD methods and provides opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes with HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI), extracted on difference images, are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying a random forest (RF). Most of the important changes are detected by the proposed method in our experiments. The method was checked for effectiveness using both visual and numerical evaluation. PMID:27618903

  10. Semi-Supervised Geographical Feature Detection

    NASA Astrophysics Data System (ADS)

    Yu, H.; Yu, L.; Kuo, K. S.

    2016-12-01

    Extracting and tracking geographical features is a fundamental requirement in many geoscience fields. However, this operation has become an increasingly challenging task for domain scientists when tackling large amounts of geoscience data. Although domain scientists may have a relatively clear definition of features, it is difficult to capture the presence of features in an accurate and efficient fashion. We propose a semi-supervised approach to address large-scale geographical feature detection. Our approach has two main components. First, we represent heterogeneous geoscience data in a unified high-dimensional space, which facilitates evaluating the similarity of data points with respect to geolocation, time, and variable values. We characterize the data using these measures, and use a set of hash functions to parameterize the initial knowledge of the data. Second, for any user query, our approach can automatically extract initial results based on the hash functions. To improve querying accuracy, our approach provides a visualization interface to display the querying results and allow users to interactively explore and refine them. The user feedback is used to enhance our knowledge base in an iterative manner. In our implementation, we use high-performance computing techniques to accelerate the construction of the hash functions. Our design facilitates a parallelization scheme for feature detection and extraction, which is a traditionally challenging problem for large-scale data. We evaluate our approach and demonstrate its effectiveness using both synthetic and real-world datasets.
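
    A minimal sketch of the hash-function idea, using random-hyperplane signatures (one common way to parameterize similarity in a high-dimensional space; the abstract does not specify the actual hash construction, so this is an assumption):

```python
import numpy as np

def make_planes(dim, n_bits, seed=0):
    """Random hyperplanes: nearby points share most of their sign bits."""
    return np.random.default_rng(seed).normal(size=(n_bits, dim))

def signature(planes, x):
    """Bit signature of a point (geolocation, time and variable values
    stacked into one feature vector)."""
    return (planes @ x > 0).astype(int)

def query(planes, data, q, max_mismatch=2):
    """Initial result set: points whose signature nearly matches the query's."""
    qs = signature(planes, q)
    return [i for i, x in enumerate(data)
            if np.sum(signature(planes, x) != qs) <= max_mismatch]

# Toy dataset of 8-dimensional feature vectors.
rng = np.random.default_rng(1)
data = [rng.normal(size=8) for _ in range(6)]
planes = make_planes(8, n_bits=16)
q = data[2]
candidates = query(planes, data + [-q], q)
print(candidates)  # contains index 2; the antipodal point (index 6) is excluded
```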

  11. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images.

    PubMed

    Hou, Bin; Wang, Yunhong; Liu, Qingjie

    2016-08-27

    Characterizations of up to date information of the Earth's surface are an important application providing insights to urban planning, resources monitoring and environmental studies. A large number of change detection (CD) methods have been developed to solve them by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further provides challenges to traditional CD methods and opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes with HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, the hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. This study was checked for effectiveness using visual evaluation and numerical evaluation.

  12. Analysis of the Variation of Energetic Electron Flux with Respect to Longitude and Distance Normal to the Magnetic Equatorial Plane for Galileo Energetic Particle Detector Data

    NASA Technical Reports Server (NTRS)

    Swimm, Randall; Garrett, Henry B.; Jun, Insoo; Evans, Robin W.

    2004-01-01

    In this study we examine ten-minute omni-directional averages of energetic electron data measured by the Galileo spacecraft Energetic Particle Detector (EPD). Count rates from electron channels B1, DC2, and DC3 are evaluated using a power law model to yield estimates of the differential electron fluxes from 1 MeV to 11 MeV at distances between 8 and 51 Jupiter radii. Whereas the orbit of the Galileo spacecraft remained close to the rotational equatorial plane of Jupiter, the approximately 11 degree tilt of the magnetic axis of Jupiter relative to its rotational axis allowed the EPD instrument to sample high energy electrons at limited distances normal to the magnetic equatorial plane. We present a Fourier analysis of the semi-diurnal variation of electron fluxes with longitude.
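
    The power-law step can be illustrated with a toy fit (the channel energies and fluxes below are invented, and real channel responses are broader than single energies; this only shows the log-log regression):

```python
import numpy as np

def fit_power_law(energies_mev, fluxes):
    """Least-squares fit of j(E) = A * E**(-gamma) in log-log space."""
    slope, intercept = np.polyfit(np.log(energies_mev), np.log(fluxes), 1)
    return np.exp(intercept), -slope  # A, gamma

# Synthetic fluxes from a known spectrum (A = 1e6, gamma = 2.5) sampled at
# three invented channel energies.
E = np.array([1.5, 2.0, 11.0])
j = 1e6 * E ** -2.5
A, gamma = fit_power_law(E, j)
print(round(gamma, 2))  # recovers 2.5 on noise-free data
```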

  13. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with classifications from human-amended versions of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
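
    Character-accuracy figures like the 99.12% above are commonly computed as one minus a normalized edit distance; a minimal sketch (the vendor's exact definition may differ, and the sample text is invented):

```python
def levenshtein(a, b):
    """Edit distance by dynamic programming (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def char_accuracy(reference, ocr_output):
    """1 - normalized edit distance, a standard character-accuracy measure."""
    return 1.0 - levenshtein(reference, ocr_output) / max(len(reference), 1)

ref = "adenocarcinoma of the prostate"
ocr = "adenocarcin0ma of the prostate"  # one substituted character
print(round(char_accuracy(ref, ocr), 3))  # 29/30 characters correct -> 0.967
```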

  14. Functional-to-form mapping for assembly design automation

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.

    2017-11-01

    Assembly-level function-to-form mapping is the most effective procedure towards design automation. The research work mainly includes: assembly-level function definitions, a product network model and a two-step mapping mechanism. The function-to-form mapping is divided into two steps: mapping function to behavior (the first-step mapping) and mapping behavior to structure (the second-step mapping). After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively, but quite difficult to complete automatically. Therefore, manual, semi-automatic, automatic and interactive modification of the mapping model are studied. A function-to-form (F-F) mapping process for a mechanical hand is illustrated to verify the design methodology.

  15. Semi-Automatic Methods of Knowledge Enhancement

    DTIC Science & Technology

    1988-12-05

    pL . Response was patchy. Apparently awed by the complexity of the problem only 3 GM’s responded and all asked for no public use to be made of their...by the SERC . Thanks are due to the Turing Institute and Edinburgh University Ai department for resource and facilities. We would also like to thank

  16. Resolving carbonate platform geometries on the Island of Bonaire, Caribbean Netherlands through semi-automatic GPR facies classification

    NASA Astrophysics Data System (ADS)

    Bowling, R. D.; Laya, J. C.; Everett, M. E.

    2018-07-01

    The study of exposed carbonate platforms provides observational constraints on regional tectonics and sea-level history. In this work Miocene-aged carbonate platform units of the Seroe Domi Formation are investigated on the island of Bonaire, located in the Southern Caribbean. Ground penetrating radar (GPR) was used to probe near-surface structural geometries associated with these lithologies. The single cross-island transect described herein allowed for continuous mapping of geologic structures on kilometre length scales. Numerical analysis was applied to the data in the form of k-means clustering of structure-parallel vectors derived from image structure tensors. This methodology enables radar facies along the survey transect to be semi-automatically mapped. The results provide subsurface evidence to support previous surficial and outcrop observations, and reveal complex stratigraphy within the platform. From the GPR data analysis, progradational clinoform geometries were observed on the northeast side of the island which support the tectonics and depositional trends of the region. Furthermore, several leeward-side radar facies are identified which correlate to environments of deposition conducive to dolomitization via reflux mechanisms.
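
    The clustering step, k-means over structure-parallel vectors, can be sketched with a toy example (plain Lloyd iterations on synthetic dip vectors; the structure-tensor computation itself is omitted, and the two dip populations are invented):

```python
import numpy as np

def kmeans(X, init_centers, iters=20):
    """Plain Lloyd's algorithm; rows of X are structure-parallel vectors."""
    centers = np.array(init_centers, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(len(centers)):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Synthetic dip vectors on the unit circle: sub-horizontal reflectors
# (angles near 0 rad) versus steeper clinoform dips (angles near 0.6 rad).
rng = np.random.default_rng(42)
angles = np.r_[rng.normal(0.0, 0.05, 40), rng.normal(0.6, 0.05, 40)]
X = np.column_stack([np.cos(angles), np.sin(angles)])
labels = kmeans(X, init_centers=[X[0], X[-1]])
print(labels[:40].max(), labels[40:].min())  # the two dip families separate
```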

  17. Semi-automatic recognition of marine debris on beaches

    PubMed Central

    Ge, Zhenpeng; Shi, Huahong; Mei, Xuefei; Dai, Zhijun; Li, Daoji

    2016-01-01

    An increasing amount of anthropogenic marine debris is pervading the earth’s environmental systems, resulting in an enormous threat to living organisms. Additionally, the large amount of marine debris around the world has been investigated mostly through tedious manual methods. Therefore, we propose the use of a new technique, light detection and ranging (LIDAR), for the semi-automatic recognition of marine debris on a beach because of its substantially more efficient role in comparison with other more laborious methods. Our results revealed that LIDAR should be used for the classification of marine debris into plastic, paper, cloth and metal. Additionally, we reconstructed a 3-dimensional model of different types of debris on a beach with a high validity of debris revivification using LIDAR-based individual separation. These findings demonstrate that the availability of this new technique enables detailed observations to be made of debris on a large beach that was previously not possible. It is strongly suggested that LIDAR could be implemented as an appropriate monitoring tool for marine debris by global researchers and governments. PMID:27156433

  18. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) and the background is estimated by the k-nearest neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and is optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can obtain satisfactory results even when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
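
    The probability-estimation step can be sketched as a k-NN vote under a learned linear metric (neighborhood components analysis learns a transform of this linear form; the matrix L below is just the identity for illustration, the toy features are invented, and the graph-cut optimization is omitted):

```python
import numpy as np

def knn_foreground_prob(query_feats, train_feats, train_labels, L, k=3):
    """Tumor probability per pixel from a k-NN vote, with distances measured
    after the linear map x -> L @ x (the form of metric NCA learns)."""
    Xq = query_feats @ L.T
    Xt = train_feats @ L.T
    d = ((Xq[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared distances
    nn = np.argsort(d, axis=1)[:, :k]                     # k nearest neighbors
    return train_labels[nn].mean(axis=1)                  # fraction of tumor votes

# Toy training pixels: three tumor (label 1) and three background (label 0)
# feature vectors; L is the identity here for illustration.
train = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([1, 1, 1, 0, 0, 0])
L = np.eye(2)
p = knn_foreground_prob(np.array([[0.05, 0.05], [5.05, 5.05]]), train, labels, L)
print(p)  # first query pixel votes tumor, second votes background
```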

  19. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
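
    The voxel-element construction can be sketched as a simple occupancy-grid voxelization (a minimal illustration, not the authors' procedure; the stacked-section logic and the finite-element export are omitted, and the wall geometry is invented):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Occupancy grid from a 3-D point cloud: every voxel containing at
    least one point becomes a solid (candidate hexahedral element)."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Toy "wall": two parallel point-sampled faces (inner and outer surfaces)
# separated by 0.75 m, voxelized at 0.25 m resolution.
rng = np.random.default_rng(7)
outer = np.column_stack([rng.uniform(0, 2, 500), rng.uniform(0, 1, 500), np.zeros(500)])
inner = outer + np.array([0.0, 0.0, 0.75])
grid = voxelize(np.vstack([outer, inner]), voxel_size=0.25)
print(grid.shape)  # the two faces occupy the first and last z-layers
```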

  20. Semi-automatic delineation of the spino-laminar junction curve on lateral x-ray radiographs of the cervical spine

    NASA Astrophysics Data System (ADS)

    Narang, Benjamin; Phillips, Michael; Knapp, Karen; Appelboam, Andy; Reuben, Adam; Slabaugh, Greg

    2015-03-01

    Assessment of the cervical spine using x-ray radiography is an important task when providing emergency room care to trauma patients suspected of a cervical spine injury. In routine clinical practice, a physician will inspect the alignment of the cervical spine vertebrae by mentally tracing three alignment curves along the anterior and posterior sides of the cervical vertebral bodies, as well as one along the spinolaminar junction. In this paper, we propose an algorithm to semi-automatically delineate the spinolaminar junction curve, given a single reference point and the corners of each vertebral body. From the reference point, our method extracts a region of interest and performs template matching using normalized cross-correlation to find matching regions along the spinolaminar junction. Matching points are then fit with a third-order spline, producing an interpolating curve. Experimental results are promising, producing on average a modified Hausdorff distance of 1.8 mm, validated on a dataset of 29 patients, including those with degenerative change, retrolisthesis, and fracture.
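
    The matching-and-fitting pipeline can be sketched in miniature: slide a 1-D template down each image column, keep the best normalized cross-correlation position, then fit a smooth curve (a cubic polynomial here stands in for the paper's third-order spline; the image and template profile are invented):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length vectors."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def match_center_row(column, template):
    """Slide a 1-D template down an image column; return the center row of
    the best-scoring window."""
    n = len(template)
    scores = [ncc(column[r:r + n], template) for r in range(len(column) - n + 1)]
    return int(np.argmax(scores)) + n // 2

# Toy radiograph: a one-pixel-wide bright curve following a cubic path.
h, w = 48, 40
rows_true = np.round(24 + 0.002 * (np.arange(w) - 20.0) ** 3).astype(int)
image = np.zeros((h, w))
image[rows_true, np.arange(w)] = 1.0

template = np.array([0.2, 1.0, 0.2])  # invented blurred junction profile
cols = np.arange(0, w, 4)
matched = np.array([match_center_row(image[:, c], template) for c in cols])
coeffs = np.polyfit(cols, matched, 3)  # cubic fit; the paper fits a cubic spline
print((matched == rows_true[cols]).all())
```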

  1. Resolving Carbonate Platform Geometries on the Island of Bonaire, Caribbean Netherlands through Semi-Automatic GPR Facies Classification

    NASA Astrophysics Data System (ADS)

    Bowling, R. D.; Laya, J. C.; Everett, M. E.

    2018-05-01

    The study of exposed carbonate platforms provides observational constraints on regional tectonics and sea-level history. In this work Miocene-aged carbonate platform units of the Seroe Domi Formation are investigated on the island of Bonaire, located in the Southern Caribbean. Ground penetrating radar (GPR) was used to probe near-surface structural geometries associated with these lithologies. The single cross-island transect described herein allowed for continuous mapping of geologic structures on kilometer length scales. Numerical analysis was applied to the data in the form of k-means clustering of structure-parallel vectors derived from image structure tensors. This methodology enables radar facies along the survey transect to be semi-automatically mapped. The results provide subsurface evidence to support previous surficial and outcrop observations, and reveal complex stratigraphy within the platform. From the GPR data analysis, progradational clinoform geometries were observed on the northeast side of the island which support the tectonics and depositional trends of the region. Furthermore, several leeward-side radar facies are identified which correlate to environments of deposition conducive to dolomitization via reflux mechanisms.

  2. Smartphone based automatic organ validation in ultrasound video.

    PubMed

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves the transmission of ultrasound video from remote areas to doctors for diagnosis. Owing to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information a physician requires. Rather than transmitting video indiscriminately with standard methods, mHealth-driven systems need to be developed that transmit valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates the ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow computationally demanding medical image processing to be performed on the device. In this paper we have developed an application (APP) for a smartphone which automatically detects the valid frames (with clear organ visibility) in an ultrasound video, ignores the invalid frames (with no organ visibility), and produces a compressed video. This is done by extracting GIST features from the region of interest (ROI) of each frame and classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
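As a rough illustration of classification with a quadratic kernel: the paper uses an SVM on GIST features, while the sketch below substitutes a dual-form kernel perceptron on XOR-style toy data, which only a non-linear kernel can separate:

```python
import numpy as np

def quad_kernel(A, B):
    """Inhomogeneous quadratic kernel (x.y + 1)^2."""
    return (A @ B.T + 1.0) ** 2

def kernel_perceptron(X, y, epochs=20):
    """Dual-form perceptron with a quadratic kernel; y in {-1, +1}."""
    alpha = np.zeros(len(X))
    K = quad_kernel(X, X)
    for _ in range(epochs):
        for i in range(len(X)):
            if np.sign((alpha * y) @ K[:, i]) != y[i]:
                alpha[i] += 1
    return alpha

def predict(alpha, X, y, X_new):
    return np.sign((alpha * y) @ quad_kernel(X, X_new))

# XOR-style toy data: not linearly separable, but separable with this kernel.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])
alpha = kernel_perceptron(X, y)
print(predict(alpha, X, y, X))  # reproduces y
```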

  3. Automated synthesis, insertion and detection of polyps for CT colonography

    NASA Astrophysics Data System (ADS)

    Sezille, Nicolas; Sadleir, Robert J. T.; Whelan, Paul F.

    2003-03-01

    CT colonography (CTC) is a new non-invasive colon imaging technique which has the potential to replace conventional colonoscopy for colorectal cancer screening. A novel system which facilitates automated detection of colorectal polyps at CTC is introduced. As exhaustive testing of such a system using real patient data is not feasible, more complete testing is achieved through the synthesis of artificial polyps and their insertion into real datasets. The polyp insertion is semi-automatic: candidate points are manually selected using a custom GUI, and suitable points are then determined automatically from an analysis of the local neighborhood surrounding each candidate point. Local density and orientation information are used to generate polyps based on an elliptical model. Anomalies are identified from the modified dataset by analyzing the axial images. Detected anomalies are classified as potential polyps or natural features using 3D morphological techniques. The final results are flagged for review. The system was evaluated using 15 scenarios. The sensitivity of the system was found to be 65%, with 34% false positive detections. Automated diagnosis at CTC is possible, and thorough testing is facilitated by augmenting real patient data with computer-generated polyps. Ultimately, automated diagnosis will enhance standard CTC and increase performance.

  4. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.

  5. A dorsolateral prefrontal cortex semi-automatic segmenter

    NASA Astrophysics Data System (ADS)

    Al-Hakim, Ramsey; Fallon, James; Nain, Delphine; Melonakos, John; Tannenbaum, Allen

    2006-03-01

    Structural, functional, and clinical studies in schizophrenia have, for several decades, consistently implicated dysfunction of the prefrontal cortex in the etiology of the disease. Functional and structural imaging studies, combined with clinical, psychometric, and genetic analyses in schizophrenia, have confirmed the key roles played by the prefrontal cortex and closely linked "prefrontal system" structures such as the striatum, amygdala, mediodorsal thalamus, substantia nigra-ventral tegmental area, and anterior cingulate cortices. The nodal structure of the prefrontal system circuit is the dorsolateral prefrontal cortex (DLPFC), or Brodmann area 46, which also appears to be the most commonly studied and cited brain area with respect to schizophrenia [1-4]. In 1986, Weinberger et al. tied cerebral blood flow in the DLPFC to schizophrenia [1]. In 2001, Perlstein et al. demonstrated that DLPFC activation is essential for working memory tasks commonly deficient in schizophrenia [2]. More recently, groups have linked morphological changes due to gene deletion and increased DLPFC glutamate concentration to schizophrenia [3, 4]. Despite the experimental and clinical focus on the DLPFC in structural and functional imaging, the variability of the location of this area, differences in opinion on exactly what constitutes the DLPFC, and inherent difficulties in segmenting this highly convoluted cortical region have contributed to a lack of widely used standards for manual or semi-automated segmentation programs. Given these implications, we developed a semi-automatic tool to segment the DLPFC from brain MRI scans in a reproducible way to conduct further morphological and statistical studies. The segmenter is based on expert neuroanatomist rules (Fallon-Kindermann rules), inspired by cytoarchitectonic data and reconstructions presented by Rajkowska and Goldman-Rakic [5]. It is semi-automated to provide essential user interactivity. 
We present our results and provide details on our DLPFC open-source tool.

  6. Calculations of absorbed fractions in small water spheres for low-energy monoenergetic electrons and the Auger-emitting radionuclides (123)I and (125)I.

    PubMed

    Bousis, Christos; Emfietzoglou, Dimitris; Nikjoo, Hooshang

    2012-12-01

    To calculate the absorbed fraction (AF) of low energy electrons in small tissue-equivalent spherical volumes by Monte Carlo (MC) track structure simulation and assess the influence of phase (liquid water versus density-scaled water vapor) and of the continuous-slowing-down approximation (CSDA) used in semi-analytic calculations. An event-by-event MC code simulating the transport of electrons in both the vapor and liquid phase of water using appropriate electron-water interaction cross sections was used to quantify the energy deposition of low-energy electrons in spherical volumes. Semi-analytic calculations within the CSDA using a convolution integral of the Howell range-energy expressions are also presented for comparison. The AF for spherical volumes of radii from 10-1000 nm are presented for monoenergetic electrons over the energy range 100-10,000 eV and the two Auger-emitting radionuclides (125)I and (123)I. The MC calculated AF for the liquid phase are found to be smaller than those of the (density scaled) gas phase by up to 10-20% for the monoenergetic electrons and 10% for the two Auger-emitters. Differences between the liquid-phase MC results and the semi-analytic CSDA calculations are up to ∼ 55% for the monoenergetic electrons and up to ∼ 35% for the two Auger-emitters. Condensed-phase effects in the inelastic interaction of low-energy electrons with water have a noticeable but relatively small impact on the AF for the energy range and target sizes examined. Depending on the electron energies, the semi-analytic approach may lead to sizeable errors for target sizes with linear dimensions below 1 micron.

  7. Affective Evaluations of Exercising: The Role of Automatic-Reflective Evaluation Discrepancy.

    PubMed

    Brand, Ralf; Antoniewicz, Franziska

    2016-12-01

    Sometimes our automatic evaluations do not correspond well with those we can reflect on and articulate. We present a novel approach to the assessment of automatic and reflective affective evaluations of exercising. Based on the assumptions of the associative-propositional processes in evaluation model, we measured participants' automatic evaluations of exercise, then shared this information with them, asked them to reflect on it, and had them rate any discrepancy between their reflective evaluation and the assessment of their automatic evaluation. We found that the mismatch between self-reported ideal exercise frequency and actual exercise frequency over the previous 14 weeks could be regressed on the discrepancy between a relatively negative automatic and a more positive reflective evaluation. This study illustrates the potential of a dual-process approach to the measurement of evaluative responses and suggests that mistrusting one's negative spontaneous reaction to exercise and asserting a very positive reflective evaluation instead leads to the adoption of inflated exercise goals.

  8. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scan or a laser scanner, the resulting 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to compute the registration automatically from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images; concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
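A minimal closed-form analogue of the rigid-registration step is the Kabsch/Procrustes solution, which minimizes the squared alignment error over rotations and translations; this sketch uses invented synthetic landmarks and omits the paper's 2D/3D projection-matrix optimization:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Invented landmarks, rotated 10 degrees about z and translated.
rng = np.random.default_rng(2)
P = rng.normal(size=(6, 3))
ang = np.deg2rad(10)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(P, Q)
print(np.abs(P @ R.T + t - Q).max() < 1e-8)  # True: exact recovery
```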

  9. On the use of unshielded cables in ionization chamber dosimetry for total-skin electron therapy.

    PubMed

    Chen, Z; Agostinelli, A; Nath, R

    1998-03-01

    The dosimetry of total-skin electron therapy (TSET) usually requires ionization chamber measurements in a large electron beam (up to 120 cm x 200 cm). Exposing the chamber's electric cable, its connector and part of the extension cable to the large electron beam will introduce unwanted electronic signals that may lead to inaccurate dosimetry results. While the best strategy to minimize the cable-induced electronic signal is to shield the cables and its connector from the primary electrons, as has been recommended by the AAPM Task Group Report 23 on TSET, cables without additional shielding are often used in TSET dosimetry measurements for logistic reasons, for example when an automatic scanning dosimetry is used. This paper systematically investigates the consequences and the acceptability of using an unshielded cable in ionization chamber dosimetry in a large TSET electron beam. In this paper, we separate cable-induced signals into two types. The type-I signal includes all charges induced which do not change sign upon switching the chamber polarity, and type II includes all those that do. The type-I signal is easily cancelled by the polarity averaging method. The type-II cable-induced signal is independent of the depth of the chamber in a phantom and its magnitude relative to the true signal determines the acceptability of a cable for use under unshielded conditions. Three different cables were evaluated in two different TSET beams in this investigation. For dosimetry near the depth of maximum buildup, the cable-induced dosimetry error was found to be less than 0.2% when the two-polarity averaging technique was applied. At greater depths, the relative dosimetry error was found to increase at a rate approximately equal to the inverse of the electron depth dose. 
    Since the application of the two-polarity averaging technique requires a constant-irradiation condition, it was demonstrated that an additional error of up to 4% could be introduced if the unshielded cable's spatial configuration were altered during the two-polarity measurements. This suggests that automatic scanning systems with unshielded cables should not be used in TSET ionization chamber dosimetry. However, the data did show that an unshielded cable may be used in TSET ionization chamber dosimetry if the size of the cable-induced error in a given TSET beam is pre-evaluated and the measurement is carefully conducted. When such an evaluation has not been performed, additional shielding should be applied to the cable being used, making measurements at multiple points difficult.
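The two-polarity averaging that cancels the type-I signal is simple arithmetic; a sketch with invented readings:

```python
# Illustrative (invented) numbers, in nC: the true ionization signal flips
# sign when the chamber polarity is switched; the type-I cable signal does not.
true_signal = 10.0
type1_cable = 0.8
m_pos = +true_signal + type1_cable   # reading at positive polarity
m_neg = -true_signal + type1_cable   # reading at negative polarity
corrected = (m_pos - m_neg) / 2.0    # two-polarity average
print(corrected)                     # the type-I contribution cancels
```

The type-II signal, which does flip sign with polarity, survives this average, which is why its magnitude relative to the true signal governs whether an unshielded cable is acceptable.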

  10. Investigating helmet promotion for cyclists: results from a randomised study with observation of behaviour, using a semi-automatic video system.

    PubMed

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.

  11. Investigating Helmet Promotion for Cyclists: Results from a Randomised Study with Observation of Behaviour, Using a Semi-Automatic Video System

    PubMed Central

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Introduction Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. Methods We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18–75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of “helmet only”, “helmet and information” or “information only”, and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Results Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the “helmet only” group (OR = 7.73 [2.09–28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. Conclusion Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure. PMID:22355384

  12. Performance of a semi-automated approach for risk estimation using a common data model for longitudinal healthcare databases.

    PubMed

    Van Le, Hoa; Beach, Kathleen J; Powell, Gregory; Pattishall, Ed; Ryan, Patrick; Mera, Robertino M

    2013-02-01

    Different structures and coding schemes may limit the rapid evaluation of a large pool of potential drug safety signals using multiple longitudinal healthcare databases. To overcome this restriction, a semi-automated approach utilising a common data model (CDM) and robust pharmacoepidemiologic methods was developed; however, its performance needed to be evaluated. Twenty-three established drug-safety associations from publications were reproduced in a healthcare claims database, and four of these were also repeated in electronic health records. Concordance and discrepancy of pairwise estimates were assessed between the results derived from the publications and results from this approach. For all 27 pairs, the observed agreement between the published results and the results from the semi-automated approach was greater than 85%, and the Kappa coefficient was 0.61, 95% CI: 0.19-1.00. Ln(IRR) differed by less than 50% for 13/27 pairs, and the IRR varied less than 2-fold for 19/27 pairs. Reproducibility based on the intra-class correlation coefficient was 0.54. Most covariates (>90%) in the publications were available for inclusion in the models. Once the study populations and inclusion/exclusion criteria were obtained from the literature, the analysis could be completed in 2-8 h. The semi-automated methodology using a CDM produced risk estimates consistent with the published findings for most selected drug-outcome associations, regardless of the original study designs, databases, medications and outcomes. Further assessment of this approach will be useful for understanding its role, strengths and limitations in rapidly evaluating safety signals.
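For reference, the chance-corrected agreement statistic quoted above (Cohen's kappa) can be computed as follows; the 2x2 table is invented for illustration and does not reproduce the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square inter-rater agreement table."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(k)) / n        # observed agreement
    p_exp = sum(sum(table[i]) * sum(row[i] for row in table)
                for i in range(k)) / n ** 2               # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Invented 2x2 counts: signal found / not found, published vs semi-automated.
table = [[20, 2],
         [2, 3]]
print(round(cohens_kappa(table), 2))  # 0.51, i.e. moderate agreement
```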

  13. Semi-quantitative assessment of pulmonary perfusion in children using dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Thong, William E.; Ou, Phalla

    2013-03-01

    This paper addresses the semi-quantitative assessment of pulmonary perfusion acquired from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a study population mainly composed of children with pulmonary malformations. The proposed automatic analysis approach is based on the indicator-dilution theory introduced in 1954. First, a robust method is developed to segment the pulmonary artery and the lungs from anatomical MRI data, exploiting 2D and 3D mathematical morphology operators. Second, the time-dependent contrast signal of the lung regions is deconvolved by the arterial input function to assess the local hemodynamic parameters, i.e., mean transit time, pulmonary blood volume and pulmonary blood flow. The discrete deconvolution method implemented here uses a truncated singular value decomposition (tSVD). Parametric images for the entire lungs are generated as additional elements for diagnosis and quantitative follow-up. The preliminary results attest to the feasibility of perfusion quantification in pulmonary DCE-MRI and open an interesting alternative to scintigraphy for this type of evaluation, at least as a preliminary diagnostic step, given the wide availability of the technique and its non-invasive nature.
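The tSVD deconvolution can be sketched in NumPy. This is a generic reconstruction of indicator-dilution deconvolution, not the authors' implementation; the arterial input function (AIF) and residue function below are invented:

```python
import numpy as np

def conv_matrix(aif, dt):
    """Lower-triangular discrete convolution matrix built from the AIF."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1] * dt
    return A

def tsvd_deconvolve(aif, tissue, dt, rel_thresh=0.1):
    """Deconvolve the tissue curve by the AIF, truncating small singular values."""
    U, s, Vt = np.linalg.svd(conv_matrix(aif, dt))
    s_inv = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Synthetic check: forward-convolve a known residue function, then recover it.
dt = 0.5
t = np.arange(40) * dt
aif = np.exp(-t / 2.0)                 # invented arterial input function
k_true = np.exp(-t / 4.0)              # invented residue function
tissue = conv_matrix(aif, dt) @ k_true
k_est = tsvd_deconvolve(aif, tissue, dt, rel_thresh=1e-8)
print(np.abs(k_est - k_true).max() < 1e-6)  # True on noiseless data
```

With noisy measurements the truncation threshold is typically kept around 0.1-0.2 of the largest singular value to stabilize the inversion; mean transit time then follows from the recovered residue function.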

  14. An immune-inspired semi-supervised algorithm for breast cancer diagnosis.

    PubMed

    Peng, Lingxi; Chen, Wenbin; Zhou, Wubai; Li, Fufang; Yang, Jin; Zhang, Jiandong

    2016-10-01

    Breast cancer is the most frequently diagnosed life-threatening cancer in women worldwide and the leading cause of cancer death among women. Early, accurate diagnosis is a significant advantage in treating breast cancer. Researchers have approached this problem using various data mining and machine learning techniques such as support vector machines and artificial neural networks. Computational immunology is another intelligent method, inspired by the biological immune system, which has been successfully applied in pattern recognition, combinatorial optimization, machine learning, etc. However, most of these diagnosis methods are supervised, and obtaining labeled data in biology and medicine is very expensive. In this paper, we integrate state-of-the-art research on life science with artificial intelligence and propose a semi-supervised learning algorithm to reduce the need for labeled data. We use two well-known benchmark breast cancer datasets in our study, acquired from the UCI machine learning repository. Extensive experiments are conducted and evaluated on these two datasets. Our experimental results demonstrate the effectiveness and efficiency of the proposed algorithm, suggesting that it is a promising automatic diagnosis method for breast cancer. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
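As a schematic of the semi-supervised idea of reducing the need for labels (not the paper's immune-inspired algorithm), here is a generic nearest-centroid self-training loop on invented synthetic data:

```python
import numpy as np

def centroids(Xl, yl):
    return np.array([Xl[yl == c].mean(axis=0) for c in (0, 1)])

def self_train(Xl, yl, Xu, rounds=5):
    """Iteratively pseudo-label the most confident unlabeled points."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    for _ in range(rounds):
        if len(Xu) == 0:
            break
        d = np.linalg.norm(Xu[:, None] - centroids(Xl, yl)[None], axis=2)
        pred = d.argmin(axis=1)
        conf = np.abs(d[:, 0] - d[:, 1])       # margin between the classes
        take = conf >= np.median(conf)         # absorb the confident half
        Xl = np.vstack([Xl, Xu[take]])
        yl = np.r_[yl, pred[take]]
        Xu = Xu[~take]
    return Xl, yl

# Two Gaussian blobs; only two labeled points per class.
rng = np.random.default_rng(4)
Xu = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                rng.normal([3, 3], 0.3, (50, 2))])
Xl = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.0, 3.1]])
yl = np.array([0, 0, 1, 1])
Xl2, yl2 = self_train(Xl, yl, Xu)
truth = np.r_[np.zeros(50, int), np.ones(50, int)]
d = np.linalg.norm(Xu[:, None] - centroids(Xl2, yl2)[None], axis=2)
acc = (d.argmin(axis=1) == truth).mean()
print(acc)
```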

  15. Semi-continuous ultrasonic sounding and changes of ultrasonic signal characteristics as a sensitive tool for the evaluation of ongoing microstructural changes of experimental mortar bars tested for their ASR potential.

    PubMed

    Lokajíček, T; Kuchařová, A; Petružálek, M; Šachlová, Š; Svitek, T; Přikryl, R

    2016-09-01

    Semi-continuous ultrasonic sounding of experimental mortar bars used in the accelerated alkali-silica reactivity laboratory test (ASTM C1260) is proposed as a supplementary measurement technique providing data that are highly sensitive to minor changes in the microstructure of hardening/deteriorating concrete mixtures. A newly designed, patent-pending heating chamber was constructed, allowing ultrasonic sounding of mortar bars stored in accelerating solution without the need to remove the test specimens from the bath during the measurement. Subsequent automatic analysis of the recorded ultrasonic signals showed a high correlation with the measured length changes (expansion) and a high sensitivity to microstructural changes. The changes in P-wave velocity and in the energy, amplitude, and frequency of the ultrasonic signal were in the range of 10-80%, compared to a 0.51% change in length. The results presented in this study thus show that ultrasonic sounding appears to be more sensitive to ongoing deterioration of the concrete microstructure by the alkali-silica reaction than dimensional changes are. Copyright © 2016. Published by Elsevier B.V.

  16. Collecting and registering sexual health information in the context of HIV risk in the electronic medical record of general practitioners: a qualitative exploration of the preference of general practitioners in urban communities in Flanders (Belgium).

    PubMed

    Vos, Jolien; Pype, Peter; Deblonde, Jessika; Van den Eynde, Sandra; Aelbrecht, Karolien; Deveugele, Myriam; Avonts, Dirk

    2016-07-01

    Background and aim: Current health-care delivery requires increasingly proactive and inter-professional work. Therefore, collecting patient information and knowledge management is of paramount importance. General practitioners (GPs) are well placed to lead these evolving models of care delivery. However, it is unclear how they are handling these changes. To gain insight into this matter, the HIV epidemic was chosen as a test case. Data were collected and analysed from 13 semi-structured interviews with GPs working in urban communities in Flanders. Findings: GPs use various types of patient information to estimate patients' risk of HIV. The way in which sexual health information is collected and registered depends on the type of information under discussion. General patient information and medical history data are often automatically collected and registered. Proactively collecting sexual health information is uncommon. Moreover, the registration of the latter is not obvious, mostly owing to insufficient space in the electronic medical record (EMR). GPs seem willing to systematically collect and register sexual health information, in particular about HIV-risk factors. They expressed a need for guidance together with practical adjustments of the EMR to adequately capture and share this information.

  17. Automatic Whistler Detector and Analyzer system: Implementation of the analyzer algorithm

    NASA Astrophysics Data System (ADS)

    Lichtenberger, JáNos; Ferencz, Csaba; Hamar, Daniel; Steinbach, Peter; Rodger, Craig J.; Clilverd, Mark A.; Collier, Andrew B.

    2010-12-01

    The full potential of whistlers for monitoring plasmaspheric electron density variations has not yet been realized. The primary reason is the vast human effort required for the analysis of whistler traces. Recently, the first part of a complete whistler analysis procedure was successfully automated, i.e., the automatic detection of whistler traces from the raw broadband VLF signal was achieved. This study describes a new algorithm developed to determine plasmaspheric electron density measurements from whistler traces, based on a Virtual (Whistler) Trace Transformation using a 2-D fast Fourier transform. This algorithm can be automated and can thus form the final step of a complete Automatic Whistler Detector and Analyzer (AWDA) system. In this second AWDA paper, the practical implementation of the Automatic Whistler Analyzer (AWA) algorithm is discussed and a feasible solution is presented. The implementation is able to track the variations of the plasmasphere in quasi real time on a PC cluster with 100 CPU cores. The electron densities obtained by the AWA method can be used in investigations such as plasmasphere dynamics, ionosphere-plasmasphere coupling, or space weather models.
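The classical relation underlying whistler analysis is the Eckersley dispersion law, under which arrival time varies roughly as 1/sqrt(frequency), making dispersion estimation a linear least-squares fit. A synthetic sketch with invented values (not the AWA's Virtual Trace Transformation):

```python
import numpy as np

# Eckersley dispersion law: arrival time t(f) ~ t0 + D / sqrt(f).
f = np.linspace(2e3, 10e3, 50)                  # frequency samples, Hz
D_true, t0_true = 80.0, 0.05                    # s*sqrt(Hz), s (invented)
rng = np.random.default_rng(3)
t = t0_true + D_true / np.sqrt(f) + rng.normal(0.0, 1e-4, f.size)
A = np.c_[np.ones_like(f), 1.0 / np.sqrt(f)]    # linear model in (t0, D)
(t0_est, D_est), *_ = np.linalg.lstsq(A, t, rcond=None)
print(round(D_est, 1))  # ~80.0: the dispersion is recovered
```

The fitted dispersion D is the quantity that, through a plasmaspheric model, yields the electron density estimates mentioned in the abstract.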

  18. Comparison of SAM and OBIA as Tools for Lava Morphology Classification - A Case Study in Krafla, NE Iceland

    NASA Astrophysics Data System (ADS)

    Aufaristama, Muhammad; Hölbling, Daniel; Höskuldsson, Ármann; Jónsdóttir, Ingibjörg

    2017-04-01

    The Krafla volcanic system is part of the Icelandic North Volcanic Zone (NVZ). During the Holocene, two eruptive events occurred in Krafla, in 1724-1729 and 1975-1984. The last eruptive episode (1975-1984), known as the "Krafla Fires", comprised nine volcanic eruption episodes. The total area covered by the lavas from this eruptive episode is 36 km2 and the volume is about 0.25-0.3 km3. Lava morphology refers to the characteristics of the surface of a lava flow after solidification. The typical morphology of lava can be used as a primary basis for the classification of lava flows when rheological properties cannot be directly observed during emplacement, and also for better understanding the behavior of lava flow models. Although mapping of lava flows in the field is relatively accurate, such traditional methods are time-consuming, especially when the lava covers large areas, as is the case in Krafla. Semi-automatic mapping methods that make use of satellite remote sensing data allow for efficient and fast mapping of lava morphology. In this study, two semi-automatic methods for lava morphology classification are presented and compared using Landsat 8 (30 m spatial resolution) and SPOT-5 (10 m spatial resolution) satellite images. For assessing the classification accuracy, the results from semi-automatic mapping were compared to the respective results from visual interpretation. On the one hand, the Spectral Angle Mapper (SAM) classification method was used. With this method an image is classified according to the spectral similarity between the image reflectance spectra and reference reflectance spectra. SAM successfully produced detailed lava surface morphology maps. However, the pixel-based approach partly leads to a salt-and-pepper effect. On the other hand, we applied the Random Forest (RF) classification method within an object-based image analysis (OBIA) framework. 
This statistical classifier uses a randomly selected subset of training samples to produce multiple decision trees. For final classification of pixels or - in the present case - image objects, the average of the class assignments probability predicted by the different decision trees is used. While the resulting OBIA classification of lava morphology types shows a high coincidence with the reference data, the approach is sensitive to the segmentation-derived image objects that constitute the base units for classification. Both semi-automatic methods produce reasonable results in the Krafla lava field, even if the identification of different pahoehoe and aa types of lava appeared to be difficult. The use of satellite remote sensing data shows a high potential for fast and efficient classification of lava morphology, particularly over large and inaccessible areas.
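The SAM decision rule itself is compact: assign each pixel to the reference spectrum with the smallest spectral angle. A minimal sketch with invented endmember spectra:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference."""
    cosang = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def sam_classify(pixel, references):
    """Assign the pixel to the reference spectrum with the smallest angle."""
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))

# Two hypothetical lava endmembers (invented 4-band reflectance spectra).
refs = [np.array([0.10, 0.20, 0.35, 0.40]),
        np.array([0.30, 0.25, 0.15, 0.10])]
pixel = 2.5 * refs[0]             # same spectral shape, different brightness
print(sam_classify(pixel, refs))  # 0: SAM is insensitive to illumination scale
```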

  19. Performance Evaluation of Strain Gauge Printed Using Automatic Fluid Dispensing System on Conformal Substrates

    NASA Astrophysics Data System (ADS)

    Khairilhijra Khirotdin, Rd.; Faridzuan Ngadiron, Mohamad; Adzeem Mahadzir, Muhammad; Hassan, Nurhafizzah

    2017-08-01

    Smart textiles require flexible electronics that can withstand daily stresses such as bends and stretches. Printing with conductive inks provides the required flexibility, but current printing techniques suffer from ink incompatibility, a limited range of printable substrates, and incompatibility with conformal substrates owing to their rigidity and low flexibility. An alternative printing technique based on an automatic fluid dispensing system is proposed, and its performance in printing strain gauges on conformal substrates was evaluated to determine its feasibility. The process parameters studied include printing speed, deposition height, curing time and curing temperature. The printed strain gauge proved functional as expected, since different strains were induced when it was bent at various angles and curvature radii on designated bending fixtures. On average, the resistance changed by a factor of two before the strain gauge began to break. The printed strain gauges also exhibited excellent elasticity, withstanding bending up to a 70° angle and a 3 mm curvature radius.
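The reported resistance changes map to strain through the standard gauge-factor relation; a minimal sketch with invented values:

```python
# Gauge-factor relation: GF = (dR/R0) / strain. The numbers here are
# illustrative only (a typical metallic-gauge GF of 2; printed inks may differ).
def strain_from_resistance(r0, r, gauge_factor):
    return (r - r0) / r0 / gauge_factor

r0 = 120.0   # ohms, unstrained (invented)
r = 121.2    # ohms, under bending (invented)
gf = 2.0
eps = strain_from_resistance(r0, r, gf)
print(round(eps, 6))  # 0.005 strain, i.e. 0.5%
```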

  20. ASSIST user manual

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.; Boerschlein, David P.

    1995-01-01

    Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all the states and transitions in a complex system model can be devastatingly tedious and error prone. The Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST) computer program allows the user to describe the semi-Markov model in a high-level language. Instead of listing the individual model states, the user specifies the rules governing the behavior of the system, and these are used to generate the model automatically. A few statements in the abstract language can describe a very large, complex model. Because no assumptions are made about the system being modeled, ASSIST can be used to generate models describing the behavior of any system. The ASSIST program and its input language are described and illustrated by examples.
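As a rough illustration of the idea behind ASSIST (this is not its actual input language), a single high-level failure rule can be expanded mechanically into explicit states and transitions; the processor count and failure rate below are assumed:

```python
# Illustrative sketch: expand the rule "any working processor can fail"
# into an explicit Markov state space, the way ASSIST expands high-level
# rules into a full model instead of listing states by hand.
N = 4          # number of processors (assumed)
LAMBDA = 1e-4  # per-processor failure rate (assumed)

states = list(range(N + 1))          # state = number of failed processors
transitions = []
for failed in range(N):              # from each non-absorbing state...
    working = N - failed
    # ...a failure of any working processor moves to the next state,
    # with aggregate rate (number of working units) * (unit failure rate)
    transitions.append((failed, failed + 1, working * LAMBDA))

print(len(states), len(transitions))
```

Even this toy rule generates the whole chain automatically; ASSIST's value is that much richer rules (reconfiguration, coverage, repair) are expanded the same way.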

  1. Model Development for Graphene Spintronics

    DTIC Science & Technology

    2015-09-21

structure near the Dirac point. Scattering was evaluated in the Born approximation. Screening and transport were treated semi-classically...requested and granted by the cognizant office, and the program ran through 25 May 2015. Graphene is a promising material for electronic and spintronic...of an insulating material such as Al2O3, to enable efficient spin injection. The graphene layer is beneath the tunnel barrier, followed by SiO2 (on

  2. X-ray microscope for solidification studies

    NASA Technical Reports Server (NTRS)

    Kaukler, William

    1995-01-01

    This report covers the second 6 month period for the year March 1, 1994 to February 28, 1995. The material outlined in this semi-annual report continues from the previous semi-annual report. The Fein Focus Inc. x-ray source was delivered in September and coincides with the beginning of the second 6 month effort. As a result, and as outlined in the statement of work, this period was dedicated to the evaluation, testing and calibration of the x-ray source. In addition, in this period the modeling effort was continued and extended by the Tiger series of Monte-Carlo simulation programs for photon and electron interactions with materials obtained from the Oak Ridge RISC Library. Some further calculations were also made with the absorption model.

  3. X-ray microscope for solidification studies

    NASA Astrophysics Data System (ADS)

    Kaukler, William

    1995-02-01

    This report covers the second 6 month period for the year March 1, 1994 to February 28, 1995. The material outlined in this semi-annual report continues from the previous semi-annual report. The Fein Focus Inc. x-ray source was delivered in September and coincides with the beginning of the second 6 month effort. As a result, and as outlined in the statement of work, this period was dedicated to the evaluation, testing and calibration of the x-ray source. In addition, in this period the modeling effort was continued and extended by the Tiger series of Monte-Carlo simulation programs for photon and electron interactions with materials obtained from the Oak Ridge RISC Library. Some further calculations were also made with the absorption model.

  4. The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data.

    PubMed

    Puccio, Benjamin; Pooley, James P; Pellman, John S; Taverna, Elise C; Craddock, R Cameron

    2016-10-25

    Skull-stripping is the procedure of removing non-brain tissue from anatomical MRI data. This procedure can be useful for calculating brain volume and for improving the quality of other image processing steps. Developing new skull-stripping algorithms and evaluating their performance requires gold standard data from a variety of different scanners and acquisition methods. We complement existing repositories with manually corrected brain masks for 125 T1-weighted anatomical scans from the Nathan Kline Institute Enhanced Rockland Sample Neurofeedback Study. Skull-stripped images were obtained using a semi-automated procedure that involved skull-stripping the data using the brain extraction based on nonlocal segmentation technique (BEaST) software, and manually correcting the worst results. Corrected brain masks were added into the BEaST library and the procedure was repeated until acceptable brain masks were available for all images. In total, 85 of the skull-stripped images were hand-edited and 40 were deemed to not need editing. The results are brain masks for the 125 images along with a BEaST library for automatically skull-stripping other data. Skull-stripped anatomical images from the Neurofeedback sample are available for download from the Preprocessed Connectomes Project. The resulting brain masks can be used by researchers to improve preprocessing of the Neurofeedback data, as training and testing data for developing new skull-stripping algorithms, and for evaluating the impact on other aspects of MRI preprocessing. We have illustrated the utility of these data as a reference for comparing various automatic methods and evaluated the performance of the newly created library on independent data.
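A common way to compare a candidate brain mask against a manually corrected reference, for example when evaluating skull-stripping results such as these, is the Dice similarity coefficient; the toy 2D masks below are illustrative only:

```python
import numpy as np

# Toy 2D "brain masks": True = brain voxel, False = background.
reference = np.zeros((8, 8), dtype=bool)
reference[2:6, 2:6] = True           # 16 voxels
candidate = np.zeros((8, 8), dtype=bool)
candidate[3:7, 2:6] = True           # 16 voxels, shifted by one row

# Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)
intersection = np.logical_and(reference, candidate).sum()
dice = 2.0 * intersection / (reference.sum() + candidate.sum())
print(dice)
```

A value of 1.0 means perfect overlap; the one-row shift here drops the score to 0.75, which is why hand-edited gold-standard masks matter for ranking algorithms.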

  5. Charge exchange cross sections in slow collisions of Si3+ with Hydrogen atom

    NASA Astrophysics Data System (ADS)

    Joseph, Dwayne; Quashie, Edwin; Saha, Bidhan

    2011-05-01

In recent years, both experimental and theoretical studies of electron transfer in ion-atom collisions have progressed considerably. Accurate determination of the cross sections and an understanding of the dynamics of the electron-capture process by multiply charged ions from atomic hydrogen over a wide range of projectile velocities are important in various fields ranging from fusion plasma to astrophysics. The soft X-ray emission from comets has been explained by charge transfer of solar wind ions, among them Si3+, with neutrals in the cometary gas vapor. The cross sections are evaluated using the (a) full quantum and (b) semi-classical molecular orbital close coupling (MOCC) methods. Adiabatic potentials and wave functions for the relevant singlet and triplet states are generated using the MRDCI structure codes. Details will be presented at the conference. Work supported by NSF CREST project (grant #0630370).

  6. Towards Automated Screening of Two-dimensional Crystals

    PubMed Central

    Cheng, Anchi; Leung, Albert; Fellmann, Denis; Quispe, Joel; Suloway, Christian; Pulokas, James; Carragher, Bridget; Potter, Clinton S.

    2007-01-01

    Screening trials to determine the presence of two-dimensional (2D) protein crystals suitable for three-dimensional structure determination using electron crystallography is a very labor-intensive process. Methods compatible with fully automated screening have been developed for the process of crystal production by dialysis and for producing negatively stained grids of the resulting trials. Further automation via robotic handling of the EM grids, and semi-automated transmission electron microscopic imaging and evaluation of the trial grids is also possible. We, and others, have developed working prototypes for several of these tools and tested and evaluated them in a simple screen of 24 crystallization conditions. While further development of these tools is certainly required for a turn-key system, the goal of fully automated screening appears to be within reach. PMID:17977016

  7. Comparative study of the compensated semi-metals LaBi and LuBi: a first-principles approach.

    PubMed

    Dey, Urmimala

    2018-05-23

    We have investigated the electronic structures of LaBi and LuBi, employing the full-potential all electron method as implemented in Wien2k. Using this, we have studied in detail both the bulk and the surface states of these materials. From our band structure calculations we find that LuBi, like LaBi, is a compensated semi-metal with almost equal and sizable electron and hole pockets. In analogy with experimental evidence in LaBi, we thus predict that LuBi will also be a candidate for extremely large magneto-resistance (XMR), which should be of immense technological interest. Our calculations reveal that LaBi, despite being gapless in the bulk spectrum, displays the characteristic features of a [Formula: see text] topological semi-metal, resulting in gapless Dirac cones on the surface, whereas LuBi only shows avoided band inversion in the bulk and is thus a conventional compensated semi-metal with extremely large magneto-resistance.

  8. Comparative study of the compensated semi-metals LaBi and LuBi: a first-principles approach

    NASA Astrophysics Data System (ADS)

    Dey, Urmimala

    2018-05-01

    We have investigated the electronic structures of LaBi and LuBi, employing the full-potential all electron method as implemented in Wien2k. Using this, we have studied in detail both the bulk and the surface states of these materials. From our band structure calculations we find that LuBi, like LaBi, is a compensated semi-metal with almost equal and sizable electron and hole pockets. In analogy with experimental evidence in LaBi, we thus predict that LuBi will also be a candidate for extremely large magneto-resistance (XMR), which should be of immense technological interest. Our calculations reveal that LaBi, despite being gapless in the bulk spectrum, displays the characteristic features of a topological semi-metal, resulting in gapless Dirac cones on the surface, whereas LuBi only shows avoided band inversion in the bulk and is thus a conventional compensated semi-metal with extremely large magneto-resistance.

  9. Electron transport in doped fullerene molecular junctions

    NASA Astrophysics Data System (ADS)

    Kaur, Milanpreet; Sawhney, Ravinder Singh; Engles, Derick

The effect of doping on the electron transport of molecular junctions is analyzed in this paper. The doped fullerene molecules are attached to two semi-infinite gold electrodes, and these device configurations are analyzed under equilibrium and nonequilibrium conditions. The analysis uses the nonequilibrium Green's function (NEGF)-density functional theory (DFT) approach to evaluate the density of states (DOS), transmission coefficient, molecular orbitals, electron density, charge transfer, current, and conductance. We conclude from the results that the Au-C16Li4-Au and Au-C16Ne4-Au devices behave as an ordinary p-n junction diode and a Zener diode, respectively. Moreover, these doped fullerene molecules do not lose their metallic nature when sandwiched between the pair of gold electrodes.
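Once a transmission coefficient T(E) is available from an NEGF-DFT calculation, the current through such a two-terminal junction is conventionally obtained from the Landauer formula. Below is a minimal numerical sketch with an assumed flat transmission and an assumed bias; it is not the authors' calculation:

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge, C
H_PLANCK = 6.62607015e-34    # Planck constant, J*s
KT = 0.025                   # thermal energy, eV (room temperature, assumed)

def fermi(energy_ev, mu_ev):
    """Fermi-Dirac occupation at chemical potential mu_ev."""
    return 1.0 / (1.0 + np.exp((energy_ev - mu_ev) / KT))

# Assumed flat transmission T(E) = 1 around the Fermi level, 0.1 V bias.
energies = np.linspace(-0.5, 0.5, 2001)   # eV, relative to the Fermi level
transmission = np.ones_like(energies)
bias = 0.1                                 # V (assumed)

# Landauer formula: I = (2e/h) * integral of T(E) [f_L(E) - f_R(E)] dE
window = fermi(energies, +bias / 2) - fermi(energies, -bias / 2)
d_e = energies[1] - energies[0]
# the trailing E_CHARGE converts the eV-integrated result to joules
current = (2 * E_CHARGE / H_PLANCK) * (transmission * window).sum() * d_e * E_CHARGE
print(current)   # about 7.7e-6 A: the conductance quantum times 0.1 V
```

For perfect transmission the result reduces to the conductance quantum (2e^2/h) times the bias, a useful sanity check for any transmission calculation.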

  10. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction, and fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme makes two main contributions. First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to resolve the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm that solves the tracking of fiducial markers by reducing it to an incomplete point set registration problem. Because the point set registration is solved by global optimization, the result of our tracking is independent of the initial image position in the tilt series, allowing for robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves accurate tracking, almost identical to the current best result obtained with IMOD's semi-automatic scheme. Furthermore, our scheme is fully automatic, depends on fewer parameters (it only requires a rough value of the marker diameter) and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.
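As a much-simplified illustration of point-set-based alignment (the paper's actual method handles incomplete correspondences via global optimization), a pure translation between two fully matched marker sets can be recovered from the centroid difference; the coordinates below are invented:

```python
import numpy as np

# Hypothetical fiducial-marker coordinates detected in two tilt images.
markers_a = np.array([[10.0, 20.0], [40.0, 55.0], [70.0, 15.0]])
shift = np.array([3.0, -2.0])                 # unknown in practice
markers_b = markers_a + shift                 # same markers, translated

# Crude stand-in for point-set registration: when the two sets match
# one-to-one, aligning their centroids recovers a pure translation.
estimated_shift = markers_b.mean(axis=0) - markers_a.mean(axis=0)
print(estimated_shift)
```

The hard part the paper addresses is precisely what this sketch assumes away: markers missing from some views and unknown correspondences.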

  11. Visual vs Fully Automatic Histogram-Based Assessment of Idiopathic Pulmonary Fibrosis (IPF) Progression Using Sequential Multidetector Computed Tomography (MDCT)

    PubMed Central

    Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola

    2015-01-01

Objectives To describe changes over time in the extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT), assessed by semi-quantitative visual scores (VSs) and by fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods Forty IPF patients (median age: 70 y, interquartile range: 62-75 y; M:F, 33:7) who underwent two MDCT examinations at different time points with a median interval of 13 months (interquartile range: 10-17 months) were retrospectively evaluated. The in-house software YACTA automatically quantified the lung density histogram (10th-90th percentile, in 5th-percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained for 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results In follow-up MDCT, the visual overall extent of parenchymal abnormalities (OE) increased by a median of 5 %/year (interquartile range: 0 %/y; +11 %/y). A substantial difference was found between treated and untreated patients in the HU changes of the 40th and 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed a higher correlation between the changes (Δ) in OE and Δ 40th percentile (r=0.69; p<0.001) than with Δ 80th percentile (r=0.58; p<0.001); a closer correlation was found between Δ ground-glass extent and Δ 40th percentile (r=0.66, p<0.001) than with Δ 80th percentile (r=0.47, p=0.002), while Δ reticulations correlated better with Δ 80th percentile (r=0.56, p<0.001) than with Δ 40th percentile (r=0.43, p=0.003). 
Conclusions There is a relevant and fully automatically measurable difference at MDCT in VSs and in histogram analysis at one year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung abnormalities, notably of ground-glass pattern; furthermore Δ 80th percentile might reveal the course of reticular opacities. PMID:26110421
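The histogram-percentile readout used by this kind of software can be sketched with synthetic data; the Hounsfield-unit distributions below are invented and only illustrate how a global density increase shifts every percentile upward:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic lung-attenuation samples in Hounsfield units (illustrative).
baseline = rng.normal(-800, 120, 10_000)
followup = rng.normal(-760, 120, 10_000)      # globally denser lung

# YACTA-style readout: attenuation-histogram percentiles from the
# 10th to the 90th in 5-percentile steps, compared across time points.
steps = np.arange(10, 95, 5)
delta = np.percentile(followup, steps) - np.percentile(baseline, steps)

# A global density increase shifts the whole histogram to higher HU,
# so every percentile change is positive here.
print(delta[steps == 40], delta[steps == 80])
```

In real IPF data the 40th- and 80th-percentile changes diverge because ground-glass and reticular patterns affect different parts of the histogram, which is the study's key observation.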

  12. Magsat investigation. [Canadian shield

    NASA Technical Reports Server (NTRS)

    Hall, D. H. (Principal Investigator)

    1980-01-01

    A computer program was prepared for modeling segments of the Earth's crust allowing for heterogeneity in magnetization in calculating the Earth's field at Magsat heights. This permits investigation of a large number of possible models in assessing the magnetic signatures of subprovinces of the Canadian shield. The fit between the model field and observed fields is optimized in a semi-automatic procedure.

  13. Radiation hardness study of semi-insulating GaAs detectors against 5 MeV electrons

    NASA Astrophysics Data System (ADS)

    Šagátová, A.; Zaťko, B.; Nečas, V.; Sedlačková, K.; Boháček, P.; Fülöp, M.; Pavlovič, M.

    2018-01-01

A radiation hardness study of semi-insulating (SI) GaAs detectors against 5 MeV electrons is described in this paper. The influence of two parameters, the accumulated absorbed dose (from 1 to 200 kGy) and the applied dose rate (20, 40 or 80 kGy/h), on the detectors' spectrometric properties was studied. The accumulated dose influenced all evaluated spectrometric properties and negatively affected the detector CCE (Charge Collection Efficiency): we observed a systematic reduction from an initial 79% before irradiation down to about 51% at the maximum dose of 200 kGy. Relative energy resolution was also influenced by electron irradiation; its degradation was evident in the dose range from 24 kGy up to the maximum dose of 200 kGy, where an increase from 19% up to 31% at 200 V reverse voltage was observed. On the other hand, a global increase of detection efficiency with accumulated absorbed dose was observed for all samples. Given the observed detector degradation, we estimate that the tested SI GaAs detectors will be able to operate up to a dose of at least 300 kGy when irradiated by 5 MeV electrons. The second investigated irradiation parameter, the dose rate over the chosen range, did not greatly alter the spectrometric properties of the studied detectors.

  14. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans.

    PubMed

    Mendrik, Adriënne M; Vincken, Koen L; Kuijf, Hugo J; Breeuwer, Marcel; Bouvy, Willem H; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Persson, Mikael; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A; Vrooman, Henri A; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.

  15. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    PubMed Central

    Mendrik, Adriënne M.; Vincken, Koen L.; Kuijf, Hugo J.; Breeuwer, Marcel; Bouvy, Willem H.; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R.; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A.; Vrooman, Henri A.; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A.

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand. PMID:26759553

  16. POPCORN: a Supervisory Control Simulation for Workload and Performance Research

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Battiste, V.; Lester, P. T.

    1984-01-01

A multi-task simulation of a semi-automatic supervisory control system was developed to provide an environment in which training, operator strategy development, failure detection and resolution, levels of automation, and operator workload can be investigated. The goal was to develop a well-defined, but realistically complex, task that would lend itself to model-based analysis. The name of the task (POPCORN) reflects the visual display, which depicts different task elements milling around, waiting to be released and pop out to be performed. The operator's task was to complete each of 100 task elements, represented by different symbols, by selecting a target task and entering the desired command. The simulated automatic system then completed the selected function automatically. Highly significant differences in performance, strategy, and rated workload were found as a function of all experimental manipulations (except reward/penalty).

  17. Novel Automatic Electrochemical-mechanical Polishing (ECMP) of Metals for Scanning Electron Microscopy (Postprint)

    DTIC Science & Technology

    2010-03-23

Micron 41 (2010) 615–621. Fig. 4. XPS binding energy (eV) versus sputtering time (s) results for the Ti 2p peaks for the titanium samples: (a...improved the IQ values. 4. Conclusions: The electrochemical-mechanical polishing system (ECMP) removed material from titanium and nickel alloys at a...March 2014 4. TITLE AND SUBTITLE: NOVEL AUTOMATIC ELECTROCHEMICAL-MECHANICAL POLISHING (ECMP) OF METALS FOR SCANNING ELECTRON MICROSCOPY

  18. Semi Automated Land Cover Layer Updating Process Utilizing Spectral Analysis and GIS Data Fusion

    NASA Astrophysics Data System (ADS)

    Cohen, L.; Keinan, E.; Yaniv, M.; Tal, Y.; Felus, A.; Regev, R.

    2018-04-01

Technological improvements in mass data gathering and analysis made in recent years have influenced the traditional methods of updating and forming the national topographic database, bringing a significant increase in the number of use cases and in the demand for detailed geo-information. Processes intended to replace traditional data collection methods have been developed in many National Mapping and Cadaster Agencies, and there has been significant progress in semi-automated methodologies aiming to facilitate the updating of a national topographic geodatabase. Their implementation is expected to allow a considerable reduction in updating costs and operation times. Our previous activity focused on automatic extraction (Keinan, Zilberstein et al., 2015). Before semi-automatic updating methods, it was common for the interpreter's identification to be as detailed as possible in order to maintain the most reliable database. With semi-automatic updating methodologies, the ability to incorporate human insight and knowledge is limited. Our motivation was therefore to narrow this gap by allowing end users to add their data inputs to the basic geometric database. In this article, we present a simple land cover database updating method that combines insights extracted from the analyzed image with given spatial data from vector layers. The main stages of the proposed practice are multispectral image segmentation and supervised classification, together with geometric fusion of the given vector data, while keeping the required shape-editing work low. All coding was done using open source software components.

  19. Semi-automated intra-operative fluoroscopy guidance for osteotomy and external-fixator.

    PubMed

    Lin, Hong; Samchukov, Mikhail L; Birch, John G; Cherkashin, Alexander

    2006-01-01

This paper outlines a semi-automated intra-operative fluoroscopy guidance and monitoring approach for osteotomy and external-fixator application in orthopedic surgery. The Intra-operative Guidance module is one component of the "LegPerfect Suite" developed to assist the surgical correction of lower-extremity angular deformity. The module uses information from the preoperative surgical planning module as a guideline, semi-automatically overlaying (registering) its bone outline with the bone edge from the real-time fluoroscopic C-Arm X-ray image in the operating room. In the registration process, the scaling factor is obtained automatically by matching a fiducial template in the fluoroscopic image with a marker in the module. A triangular metal plate placed on the operating table is used as the fiducial template. The area of the template image within the viewing area of the fluoroscopy machine is obtained with image processing techniques such as edge detection and the Hough transform, which separate the template from other objects in the fluoroscopic image. The area of the fiducial template from the fluoroscopic image is then compared with the area of the marker from the planning to obtain the scaling factor. Once the scaling factor is known, the user can shift and rotate the preoperative plan with simple mouse operations to overlay the planned bone outline on the bone edge from the fluoroscopy image. In this way, osteotomy levels and external-fixator positioning on the limb can be guided by the computerized preoperative plan.
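The scaling-factor step can be illustrated with a small calculation: since area scales with the square of the linear magnification, the linear scale between plan and image follows from the square root of the area ratio. The areas below are assumed values, not figures from the paper:

```python
import math

# Hypothetical areas of the triangular fiducial template: as extracted
# from the fluoroscopic image (pixels^2) and as drawn in the plan (mm^2).
template_area_px = 5000.0      # px^2, from edge detection + Hough transform
marker_area_mm = 1250.0        # mm^2, from the preoperative plan

# Pixel area scales with the square of the linear magnification, so the
# linear scaling factor is the square root of the area ratio.
pixels_per_mm = math.sqrt(template_area_px / marker_area_mm)
print(pixels_per_mm)
```

Using an area ratio rather than a single measured length makes the estimate less sensitive to noise along any one edge of the template.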

  20. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

    NASA Astrophysics Data System (ADS)

    Lassen, B. C.; Jacobs, C.; Kuhnigk, J.-M.; van Ginneken, B.; van Rikxoort, E. M.

    2015-02-01

The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still at an early stage, it is important to detect the growth rate as soon as possible; however, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for the segmentation of subsolid nodules. As its only user interaction, the method expects a user-drawn stroke across the largest diameter of the nodule. First, a threshold-based region growing is performed, based on intensity analysis of the nodule region and the surrounding parenchyma. In the next step the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). 
The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of subsolid nodules in clinical routine.
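The first step of such a pipeline, threshold-based region growing from a seed supplied by the user stroke, can be sketched as follows; the toy image, seed, and threshold are assumptions for illustration:

```python
from collections import deque

import numpy as np

# Toy CT slab: a bright "nodule" (value 100) inside dark parenchyma (0).
image = np.zeros((10, 10), dtype=float)
image[3:7, 3:7] = 100.0
seed = (5, 5)                   # the user-drawn stroke supplies the seed
threshold = 50.0                # assumed intensity cut-off

# Plain 4-connected threshold-based region growing from the seed point.
grown = np.zeros_like(image, dtype=bool)
queue = deque([seed])
while queue:
    r, c = queue.popleft()
    if grown[r, c] or image[r, c] < threshold:
        continue
    grown[r, c] = True
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 10 and 0 <= nc < 10 and not grown[nr, nc]:
            queue.append((nr, nc))

print(grown.sum())
```

The 4x4 bright block is recovered exactly; in real subsolid nodules the threshold must be derived from the local intensity statistics, and the later chest-wall and vessel steps clean up what region growing alone cannot separate.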

  1. Automatic electronic fish tracking system

    NASA Technical Reports Server (NTRS)

    Osborne, P. W.; Hoffman, E.; Merriner, J. V.; Richards, C. E.; Lovelady, R. W.

    1976-01-01

A newly developed electronic fish tracking system that automatically monitors the movements and migratory habits of fish is reported. The system is aimed particularly at studies of the effects on fish life of industrial facilities that discharge their effluents into rivers or lakes. Fish locations are acquired by means of acoustic links from the fish to underwater Listening Stations and radio links that relay tracking information to a shore-based Data Base. Fish over 4 inches long may be tracked over a 5 x 5 mile area. The electronic fish tracking system provides the marine scientist with electronics that permit studies which were not practical in the past and which are cost-effective compared to manual methods.

  2. Method of automatic measurement and focus of an electron beam and apparatus therefore

    DOEpatents

    Giedt, W.H.; Campiotti, R.

    1996-01-09

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding is disclosed. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined. 12 figs.

  3. Method of automatic measurement and focus of an electron beam and apparatus therefor

    DOEpatents

    Giedt, Warren H.; Campiotti, Richard

    1996-01-01

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined.
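The parabolic-fit step described in both patent records can be sketched numerically: fit a quadratic to measured beam diameters versus focus-coil current and take the vertex as the optimum. The measurements below are invented for illustration:

```python
import numpy as np

# Hypothetical beam diameters (mm) measured by the slit-type beam trap
# at several focus-coil currents (A).
currents = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
diameters = np.array([0.90, 0.55, 0.40, 0.55, 0.90])

# Fit d(I) = a*I^2 + b*I + c; the vertex I* = -b / (2a) gives the
# focus-coil current that minimizes the beam diameter.
a, b, c = np.polyfit(currents, diameters, 2)
optimal_current = -b / (2.0 * a)
optimal_diameter = np.polyval([a, b, c], optimal_current)
print(optimal_current, optimal_diameter)
```

Because the invented data are symmetric about 5.0 A, the fitted vertex lands there; fitting a parabola rather than taking the smallest raw measurement suppresses noise in the individual slit-sweep profiles.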

  4. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
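    The scaled-Taylor matrix-exponential idea behind STEM can be illustrated on a toy pure-Markov reliability model. The sketch below is our own simplified construction, not SARA code: it scales the generator, sums a truncated Taylor series, squares the result back, and reads off the probability of the absorbing failure state. For this duplex model the exact answer is (1 - e^(-lambda*t))^2, which gives us a check.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(Q, t, terms=25):
    """exp(Q*t) via scaling-and-squaring with a truncated Taylor series."""
    n = len(Q)
    norm = max(sum(abs(x) for x in row) for row in Q) * abs(t)
    s = max(0, math.ceil(math.log2(norm))) if norm > 1.0 else 0
    A = [[Q[i][j] * t / 2 ** s for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]          # identity
    term = [row[:] for row in P]
    for k in range(1, terms + 1):
        term = [[x / k for x in row] for row in mat_mul(term, A)]      # A^k / k!
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):
        P = mat_mul(P, P)                                              # undo the scaling
    return P

# Duplex system: state 0 = both units up, 1 = one unit up, 2 = failed (absorbing).
lam = 0.001                        # per-unit failure rate, illustrative
Q = [[-2 * lam, 2 * lam, 0.0],
     [0.0,      -lam,    lam],
     [0.0,       0.0,    0.0]]     # generator matrix (rows sum to zero)
p_fail = expm(Q, 100.0)[0][2]      # probability of system failure within t = 100
```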

  5. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.

  6. A neural approach for improving the measurement capability of an electronic nose

    NASA Astrophysics Data System (ADS)

    Chimenti, M.; DeRossi, D.; Di Francesco, F.; Domenici, C.; Pieri, G.; Pioggia, G.; Salvetti, O.

    2003-06-01

    Electronic noses, instruments for automatic recognition of odours, are typically composed of an array of partially selective sensors, a sampling system, a data acquisition device and a data processing system. For the purpose of evaluating the quality of olive oil, an electronic nose based on an array of conducting polymer sensors capable of discriminating olive oil aromas was developed. The selection of suitable pattern recognition techniques for a particular application can enhance the performance of electronic noses. Therefore, an advanced neural recognition algorithm for improving the measurement capability of the device was designed and implemented. This method combines multivariate statistical analysis and a hierarchical neural-network architecture based on self-organizing maps and error back-propagation. The complete system was tested using samples composed of characteristic olive oil aromatic components in refined olive oil. The results obtained have shown that this approach is effective in grouping aromas into different categories representative of their chemical structure.

  7. Introduction To ITS/CVO Participant Manual, Course 1

    DOT National Transportation Integrated Search

    1999-08-01

    WEIGH-IN-MOTION OR WIM, COMMERCIAL VEHICLE INFORMATION SYSTEMS AND NETWORK OR CVISN, AUTOMATIC VEHICLE IDENTIFICATION OR AVI, AUTOMATIC VEHICLE LOCATION OR AVL, ELECTRONIC DATA INTERCHANGE OR EDI, GLOBAL POSITIONING SYSTEM OR GPS, INTERNET OR WORLD W...

  8. Search for exclusive or semi-exclusive γγ production and observation of exclusive and semi-exclusive e +e - production in pp collisions at $$ \\sqrt{s}=7 $$ TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.

    A search for exclusive or semi-exclusive photon pair production, pp to p(*) + photon pair + p(*) (where p(*) stands for a diffractively-dissociated proton), and the observation of exclusive and semi-exclusive electron pair production, pp to p(*) + ee + p(*), in proton-proton collisions at sqrt(s) = 7 TeV, are presented. The analysis is based on a data sample corresponding to an integrated luminosity of 36 inverse picobarns recorded by the CMS experiment at the LHC at low instantaneous luminosities. Candidate photon pair or electron pair events are selected by requiring the presence of two photons or a positron and an electron, each with transverse energy ET > 5.5 GeV and pseudorapidity abs(eta) < 2.5, and no other particles in the region abs(eta) < 5.2. No exclusive or semi-exclusive diphoton candidates are found in the data. An upper limit on the cross section for the reaction pp to p(*) + photon pair + p(*), within the above kinematic selections, is set at 1.18 pb at 95% confidence level. Seventeen exclusive or semi-exclusive dielectron candidates are observed, with an estimated background of 0.85 +/- 0.28 (stat.) events, in agreement with the QED-based prediction of 16.3 +/- 1.3 (syst.) events.

  9. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for the aerial camera, since it always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output, so that the image is better suited to viewing and analysis by human observers. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
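    The abstract does not spell out the gamma-correction rule, so the sketch below uses a common mid-gray heuristic as an assumption: choose the exponent so that the mean input intensity maps to mid-gray, which brightens dark frames (gamma < 1) and darkens bright ones (gamma > 1).

```python
import math

def auto_gamma(pixels, target=0.5):
    """Choose gamma so the mean normalized intensity maps to `target`,
    then apply out = in ** gamma to every 8-bit grayscale pixel.
    A generic heuristic, not the paper's (unspecified) algorithm."""
    norm = [p / 255.0 for p in pixels]
    mean = sum(norm) / len(norm)
    gamma = math.log(target) / math.log(mean)
    corrected = [round(255 * v ** gamma) for v in norm]
    return corrected, gamma

# A dark frame (mean well below mid-gray) gets gamma < 1, brightening it.
dark = [40, 50, 60, 70, 80]
corrected, gamma = auto_gamma(dark)
```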

  10. Development of a Cancer Care Summary Through the Electronic Health Record.

    PubMed

    Carr, Laurie L; Zelarney, Pearlanne; Meadows, Sarah; Kern, Jeffrey A; Long, M Bronwyn; Kern, Elizabeth

    2016-02-01

    Our objective was to improve communication concerning lung cancer patients by developing and distributing a Cancer Care Summary that would provide clinically useful information about the patient's diagnosis and care to providers in diverse settings. We designed structured, electronic forms for the electronic health record (EHR), detailing tumor staging, classification, and treatment. To ensure completeness and accuracy of the information, we implemented a data quality cycle, composed of reports that are reviewed by oncology clinicians. The data from the EHR forms are extracted into a structured query language database system on a daily basis, from which the Summaries are derived. We conducted focus groups regarding the utility, format, and content of the Summary. Cancer Care Summaries are automatically generated 4 months after a patient's date of diagnosis, then every 6 months for those receiving treatment, and on an as-needed basis for urgent care or hospital admission. The product of our improvement project is the Cancer Care Summary. To date, 102 individual patient Summaries have been generated. These documents are automatically entered into the National Jewish Health (NJH) EHR, attached to correspondence to primary care providers, available to patients as electronic documents on the NJH patient portal, and faxed to emergency departments and admitting physicians on patient evaluation. We developed a sustainable tool to improve cancer care communication. The Cancer Care Summary integrates information from the EHR in a timely manner and distributes the information through multiple avenues. Copyright © 2016 by American Society of Clinical Oncology.

  11. Automatic method for evaluating the activity of sourdough strains based on gas pressure measurements.

    PubMed

    Wick, M; Vanhoutte, J J; Adhemard, A; Turini, G; Lebeault, J M

    2001-04-01

    A new method is proposed for the evaluation of the activity of sourdough strains, based on gas pressure measurements in closed air-tight reactors. Gas pressure and pH were monitored on-line during the cultivation of commercial yeasts and heterofermentative lactic acid bacteria on a semi-synthetic medium with glucose as the major carbon source. Relative gas pressure evolution was compared both to glucose consumption and to acidification and growth. It became obvious that gas pressure evolution is related to glucose consumption kinetics. For each strain, a correlation was made between maximum gas pressure variation and amount of glucose consumed. The mass balance of CO2 in both liquid and gas phase demonstrated that around 90% of CO2 was recovered. Concerning biomass production, a linear relationship was found between log colony-forming units/ml and log pressure for both yeasts and bacteria during the exponential phase; and for yeasts, relative gas pressure evolution also followed optical density variation.
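    The reported linear relationship between log colony-forming units/ml and log pressure amounts to fitting a straight line in log-log space. A minimal ordinary-least-squares sketch, on invented data that follows such a power law exactly (the exponent and prefactor are not taken from the paper):

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept of y against x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical exponential-phase data: CFU/ml grows as pressure^1.5.
pressure = [10.0, 20.0, 40.0, 80.0]            # relative gas pressure
cfu = [1e5 * p ** 1.5 for p in pressure]       # colony-forming units/ml
slope, intercept = linear_fit([math.log10(p) for p in pressure],
                              [math.log10(c) for c in cfu])
```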

  12. Correlation and registration of ERTS multispectral imagery. [by a digital processing technique

    NASA Technical Reports Server (NTRS)

    Bonrud, L. O.; Henrikson, P. J.

    1974-01-01

    Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quarter frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.
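    The patch matching at the heart of such correlation-based registration can be sketched with zero-mean normalized cross-correlation (a standard choice; the actual ERTS processing chain is not described at this level of detail here). The control-point patch is slid over the search image and the position of the maximum score is taken as the match:

```python
def ncc(patch, window):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    flat_p = [v for row in patch for v in row]
    flat_w = [v for row in window for v in row]
    mp, mw = sum(flat_p) / len(flat_p), sum(flat_w) / len(flat_w)
    num = sum((p - mp) * (w - mw) for p, w in zip(flat_p, flat_w))
    dp = sum((p - mp) ** 2 for p in flat_p) ** 0.5
    dw = sum((w - mw) ** 2 for w in flat_w) ** 0.5
    return num / (dp * dw) if dp and dw else 0.0

def register(template, image):
    """Exhaustive NCC search; returns (row, col) of the best match."""
    th, tw = len(template), len(template[0])
    best, best_rc = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            window = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(template, window)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# Invented example: a distinctive 3x3 feature embedded at (2, 3) in a flat scene.
image = [[0.0] * 8 for _ in range(8)]
pattern = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
for r in range(3):
    for c in range(3):
        image[2 + r][3 + c] = pattern[r][c]
match = register(pattern, image)
```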

  13. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.

  14. Chosen postures during specific sitting activities.

    PubMed

    Kamp, Irene; Kilincsoy, Umit; Vink, Peter

    2011-11-01

    This research study analysed the interaction between people's postures and activities while in semi-public/leisure situations and during transportation (journey by train). In addition, the use of small electronic devices received particular emphasis. Video recordings in German trains and photographs in Dutch semi-public spaces were analysed using a variation of Branton and Grayson's (An evaluation of train seats by observation of sitting behaviour. Ergonomics, 10 (1), (1967), 35-51) postural targeting forms and photos. The analysis suggests a significant relationship between most activities and the position of the head, trunk and arms during transportation situations. The relationship during public situations is less straightforward. Watching, talking/discussing and reading were the most observed activities for the transportation and leisure situations combined. Surprisingly, differences in head, trunk, arm and leg postures were not significant when using small electronic devices. Important issues not considered in this study include the duration of the activities, the gender and age of observed subjects and the influence of the time of day. These are interesting issues to consider and include for future research. STATEMENT OF RELEVANCE: This study shows what activities people choose to carry out and their related postures when not forced to a specific task (e.g. driving). The results of this study can be used for designing comfortable seating in the transportation industry (car passenger, train, bus and aircraft seats) and semi-public/leisure spaces.

  15. Comprehensive automatic assessment of retinal vascular abnormalities for computer-assisted retinopathy grading.

    PubMed

    Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon

    2014-01-01

    One of the most important signs of systemic disease that presents on the retina is vascular abnormality, as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive reader interaction, thus limiting the software-aided efficiency. Automation therefore holds a twofold promise: first, decreased variability with increased accuracy; second, improved efficiency. In this paper we propose fully automated software, acting as a second-reader system, for comprehensive assessment of the retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology such as tortuosity and branching angles, and highlights areas with abnormalities such as artery-venous nicking, copper and silver wiring, and retinal emboli, so that the reader can make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make a computer-assisted vasculature assessment with high accuracy and consistency, at a reduced reading time.
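    Of the morphology measures mentioned, tortuosity has the simplest widely used definition: the ratio of a vessel centerline's arc length to its chord length. The sketch below uses that standard metric as an illustration; the paper's exact formulation is not given in the abstract.

```python
import math

def tortuosity(points):
    """Arc-over-chord ratio of a vessel centerline given as (x, y) samples.
    Equals 1.0 for a straight vessel and grows as the vessel winds."""
    arc = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    return arc / math.dist(points[0], points[-1])

straight = [(0, 0), (1, 0), (2, 0)]
bent = [(0, 0), (1, 0), (1, 1)]   # right-angle path: arc length 2, chord sqrt(2)
```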

  16. Electronic properties of in-plane phase engineered 1T'/2H/1T' MoS2

    NASA Astrophysics Data System (ADS)

    Thakur, Rajesh; Sharma, Munish; Ahluwalia, P. K.; Sharma, Raman

    2018-04-01

    We present first-principles studies of semi-infinite phase-engineered MoS2 along the zigzag direction. The semiconducting (2H) and semi-metallic (1T') phases are known to be stable in thin-film MoS2. We describe the electronic and structural properties of an infinite 1T'/2H/1T' array. It is found that the 1T' phase induces semi-metallic character in the 2H phase beyond the interface, but only the Mo atoms in the 2H domain contribute to the semi-metallic nature, while the S atoms retain a semiconducting state. The 1T'/2H/1T' system can act as a typical n-p-n structure. In addition, a high hole concentration at the interfacial Mo layer provides further positive potential barriers.

  17. Systems and methods for data quality control and cleansing

    DOEpatents

    Wenzel, Michael; Boettcher, Andrew; Drees, Kirk; Kummer, James

    2016-05-31

    A method for detecting and cleansing suspect building automation system data is shown and described. The method includes using processing electronics to automatically determine which of a plurality of error detectors and which of a plurality of data cleansers to use with building automation system data. The method further includes using processing electronics to automatically detect errors in the data and cleanse the data using a subset of the error detectors and a subset of the cleansers.
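    The detect-then-cleanse flow described can be sketched with one toy error detector (a "stuck sensor" check, a common fault signature in building automation data) and one toy cleanser (hold the last good value). Both are illustrative stand-ins, not the patented detectors or cleansers:

```python
def detect_stuck(values, min_run=4):
    """Flag indices where the signal repeats unchanged for min_run samples."""
    bad, run = set(), 1
    for i in range(1, len(values)):
        run = run + 1 if values[i] == values[i - 1] else 1
        if run >= min_run:
            bad.update(range(i - run + 1, i + 1))
    return bad

def cleanse_hold(values, bad):
    """Replace flagged samples with the last preceding good value."""
    out, last_good = [], None
    for i, v in enumerate(values):
        if i in bad and last_good is not None:
            out.append(last_good)
        else:
            out.append(v)
            last_good = v
    return out

# Hypothetical zone-temperature trend with a stuck run of 21.0 readings.
temps = [20.1, 20.3, 21.0, 21.0, 21.0, 21.0, 20.8]
flags = detect_stuck(temps)
clean = cleanse_hold(temps, flags)
```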

  18. Understanding ITS/CVO Technology Applications, Student Manual, Course 3

    DOT National Transportation Integrated Search

    1999-01-01

    WEIGH-IN-MOTION OR WIM, COMMERCIAL VEHICLE INFORMATION SYSTEMS AND NETWORK OR CVISN, AUTOMATIC VEHICLE IDENTIFICATION OR AVI, AUTOMATIC VEHICLE LOCATION OR AVL, ELECTRONIC DATA INTERCHANGE OR EDI, GLOBAL POSITIONING SYSTEM OR GPS, INTERNET OR WORLD WIDE WEB...

  19. Measurement results obtained from air quality monitoring system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turzanski, P.K.; Beres, R.

    1995-12-31

    An automatic air pollution monitoring system has operated in Cracow since 1991. The organization, assembly, and start-up of the network are the result of joint efforts by the US Environmental Protection Agency and the Cracow environmental protection service. At present the automatic monitoring network is operated by the Provincial Inspection of Environmental Protection. In total, seven stationary stations situated in Cracow measure air pollution. These stations are continuously supported by one semi-mobile (transportable) station, which allows the area under investigation to be modified periodically, so that the 3-dimensional picture of the creation and distribution of air pollutants within the Cracow area can be made more intelligible.

  20. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  1. The cleaning and disinfection by heat of bedpans in automatic and semi-automatic machines.

    PubMed Central

    Mostafa, A. B.; Chackett, K. F.

    1976-01-01

    This work is concerned with the cleaning and disinfection by heat of stainless-steel and polypropylene bedpans, which had been soiled with either a biological contaminant, human serum albumin (HSA) labelled with technetium-99m (99mTc), or a bacteriological contaminant, Streptococcus faecalis mixed with 99mTc-labelled HSA. Results of cleaning and disinfection achieved with a Test Machine and those achieved by procedures adopted in eight different wards of a general hospital are reported. Bedpan washers installed in wards were found to be less efficient than the Test Machine, at least partly because of inadequate maintenance. Stainless-steel and polypropylene bedpans gave essentially the same results. PMID:6591

  2. Designed tools for analysis of lithography patterns and nanostructures

    NASA Astrophysics Data System (ADS)

    Dervillé, Alexandre; Baderot, Julien; Bernard, Guilhem; Foucher, Johann; Grönqvist, Hanna; Labrosse, Aurélien; Martinez, Sergio; Zimmermann, Yann

    2017-03-01

    We introduce a set of designed tools for the analysis of lithography patterns and nanostructures. The classical metrological analysis of these objects has the drawbacks of being time-consuming, requiring manual tuning, and lacking robustness and user-friendliness. With the goal of improving the current situation, we propose new image processing tools at different levels: semi-automatic, automatic, and machine-learning-enhanced tools. The complete set of tools has been integrated into a software platform designed to transform the lab into a virtual fab. The underlying idea is to master nano processes at the research and development level by accelerating access to knowledge and hence speeding up implementation in product lines.

  3. Generating Models of Surgical Procedures using UMLS Concepts and Multiple Sequence Alignment

    PubMed Central

    Meng, Frank; D’Avolio, Leonard W.; Chen, Andrew A.; Taira, Ricky K.; Kangarloo, Hooshang

    2005-01-01

    Surgical procedures can be viewed as a process composed of a sequence of steps performed on, by, or with the patient’s anatomy. This sequence is typically the pattern followed by surgeons when generating surgical report narratives for documenting surgical procedures. This paper describes a methodology for semi-automatically deriving a model of conducted surgeries, utilizing a sequence of derived Unified Medical Language System (UMLS) concepts for representing surgical procedures. A multiple sequence alignment was computed from a collection of such sequences and was used for generating the model. These models have the potential of being useful in a variety of informatics applications such as information retrieval and automatic document generation. PMID:16779094
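    The alignment machinery referenced here operates on sequences of concepts rather than characters. A minimal pairwise (not multiple) Needleman-Wunsch sketch over invented procedure-step tokens shows the principle; the actual work aligns UMLS concept sequences and builds a multiple alignment on top of such pairwise scores:

```python
def align(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment of two token sequences."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]     # dynamic-programming scores
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            S[i][j] = max(diag, S[i - 1][j] + gap, S[i][j - 1] + gap)
    out_a, out_b, i, j = [], [], n, m             # traceback from the corner
    while i or j:
        step = (match if i and j and a[i - 1] == b[j - 1] else mismatch)
        if i and j and S[i][j] == S[i - 1][j - 1] + step:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i, j = i - 1, j - 1
        elif i and S[i][j] == S[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return out_a[::-1], out_b[::-1]

# Hypothetical step sequences from two surgical reports.
aligned_a, aligned_b = align(["incision", "dissect", "suture"],
                             ["incision", "suture"])
```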

  4. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining.

    PubMed

    Hettne, Kristina M; Williams, Antony J; van Mulligen, Erik M; Kleinjans, Jos; Tkachenko, Valery; Kors, Jan A

    2010-03-23

    Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text based on a number of publicly available databases and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only a third to a quarter of the size of Chemlist, at around 300 k. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. We conclude the following: (1) The ChemSpider dictionary achieved the best precision but the Chemlist dictionary had a higher recall and the best F-score; (2) Rule-based filtering and disambiguation is necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. 
ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web at http://www.biosemantics.org/chemlist.
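The precision/recall figures above imply the F-score ranking stated in the conclusion; a minimal sketch (not part of the original study's code) that verifies it:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (F1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Post-filtering figures reported in the abstract.
chemspider_f1 = f_score(0.87, 0.19)  # high precision, low recall
chemlist_f1 = f_score(0.67, 0.40)    # lower precision, higher recall

print(round(chemspider_f1, 2))  # 0.31
print(round(chemlist_f1, 2))    # 0.50
```

The harmonic mean penalizes the imbalanced ChemSpider dictionary, which is why Chemlist's more even precision/recall pair wins on F-score despite lower precision.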

  5. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining

    PubMed Central

    2010-01-01

Background Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text, based on a number of publicly available databases, and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. Results We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only one-third to one-quarter the size of Chemlist, at around 300 k. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation, and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation, and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. Conclusions We conclude the following: (1) the ChemSpider dictionary achieved the best precision, but the Chemlist dictionary had a higher recall and the best F-score; (2) rule-based filtering and disambiguation are necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. 
ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web at http://www.biosemantics.org/chemlist. PMID:20331846

  6. 12 CFR 551.110 - May I provide a notice electronically?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... use other electronic communications if: (a) The parties agree to use electronic instead of hard copy notices; (b) The parties are able to print or download the notice; (c) Your electronic communications system cannot automatically delete the electronic notice; and (d) Both parties are able to receive...

  7. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines, a non-parametric modeling technique that makes use of compactly supported B-spline basis functions generated automatically from the observations. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for the best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using a local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
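The compactly supported B-spline basis functions at the core of SP-BMARS can be evaluated with the standard Cox-de Boor recursion; a minimal pure-Python sketch (illustrative only, not the authors' implementation) showing the partition-of-unity property on a clamped knot vector:

```python
def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the i-th B-spline of degree k
    over knot vector t, evaluated at x."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Quadratic B-splines on a clamped uniform knot vector; inside the valid
# domain [0, 4) the basis functions sum to one (partition of unity).
knots = [0, 0, 0, 1, 2, 3, 4, 4, 4]
degree = 2
n_basis = len(knots) - degree - 1  # 6 basis functions
total = sum(bspline_basis(i, degree, knots, 1.7) for i in range(n_basis))
print(round(total, 10))  # 1.0
```

In a model like the one described, VTEC would be expressed as a weighted sum of tensor products of such univariate bases, with the weights estimated from the GPS observables.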

  8. Confidence-based ensemble for GBM brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Jing; van Rikxoort, Eva M.; Okada, Kazunori; Kim, Hyun J.; Pope, Whitney; Goldin, Jonathan; Brown, Matthew

    2011-03-01

It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume that reduces the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final ensemble segmentation. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001). The CMA ensemble result is also more robust than the three individual methods.
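Confidence map averaging as described can be sketched as follows (a toy illustration with hypothetical flattened confidence maps, not the evaluated system):

```python
def cma_ensemble(confidence_maps, threshold=0.5):
    """Average per-voxel confidence maps from several segmenters and
    threshold the mean to obtain a binary ensemble segmentation."""
    n = len(confidence_maps)
    length = len(confidence_maps[0])
    mean = [sum(m[v] for m in confidence_maps) / n for v in range(length)]
    return [1 if c >= threshold else 0 for c in mean]

# Three hypothetical per-voxel tumor confidences (flattened maps), one per
# individual method (e.g. fuzzy connectedness, GrowCut, voxel classification).
fuzzy    = [0.9, 0.6, 0.2, 0.1]
growcut  = [0.8, 0.4, 0.3, 0.0]
voxelcls = [0.7, 0.7, 0.1, 0.2]
print(cma_ensemble([fuzzy, growcut, voxelcls]))  # [1, 1, 0, 0]
```

Averaging before thresholding lets a voxel that one method misses still be recovered when the other methods are confident, which is the intuition behind the ensemble's robustness.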

  9. Evaluation of Semantic Web Technologies for Storing Computable Definitions of Electronic Health Records Phenotyping Algorithms.

    PubMed

    Papež, Václav; Denaxas, Spiros; Hemingway, Harry

    2017-01-01

Electronic Health Records (EHR) are electronic data generated during or as a byproduct of routine patient care. Structured, semi-structured and unstructured EHR offer researchers unprecedented phenotypic breadth and depth and have the potential to accelerate the development of precision medicine approaches at scale. A main EHR use-case is defining phenotyping algorithms that identify disease status, onset and severity. Phenotyping algorithms utilize diagnoses, prescriptions, laboratory tests, symptoms and other elements in order to identify patients with or without a specific trait. No common standardized, structured, computable format exists for storing phenotyping algorithms. The majority of algorithms are stored as human-readable descriptive text documents, which makes their translation to code challenging due to their inherent complexity and hinders their sharing and re-use across the community. In this paper, we evaluate two key Semantic Web technologies, the Web Ontology Language (OWL) and the Resource Description Framework (RDF), for enabling computable representations of EHR-driven phenotyping algorithms.
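A phenotyping algorithm expressed as RDF-style subject-predicate-object triples might look like the following sketch (the prefixes, predicates and codes are hypothetical illustrations, not the paper's actual schema):

```python
# Minimal in-memory triple store standing in for RDF; names are illustrative.
triples = {
    ("pheno:T2DM_algo", "rdf:type", "pheno:PhenotypingAlgorithm"),
    ("pheno:T2DM_algo", "pheno:requiresDiagnosis", "icd10:E11"),
    ("pheno:T2DM_algo", "pheno:requiresPrescription", "atc:A10"),
    ("pheno:T2DM_algo", "pheno:requiresLabTest", "loinc:4548-4"),
}

def objects(store, subject, predicate):
    """All objects for a subject/predicate pair (a SPARQL-like lookup)."""
    return sorted(o for s, p, o in store if s == subject and p == predicate)

print(objects(triples, "pheno:T2DM_algo", "pheno:requiresDiagnosis"))  # ['icd10:E11']
```

The point of such a representation is that the algorithm's components become queryable data rather than free text, so tooling can validate, compare and execute them.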

  10. Relative Recency Judgments in Learning Disabled Children: A Semi-Automatic Process.

    ERIC Educational Resources Information Center

    Stein, Debra K.; And Others

    The ability of 20 learning disabled (LD) and 20 non-LD students (mean age of 9 years) to process temporal order information was assessed by employing a relative recency judgment task. Ss were administered lists composed of pictures of everyday objects and were then asked to indicate which item appeared latest on the list (that is, most recently).…

  11. Semi-automatic 10/20 Identification Method for MRI-Free Probe Placement in Transcranial Brain Mapping Techniques.

    PubMed

    Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-01-01

The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in an MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.
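Placing landmarks at fixed fractions of a measured arc is the geometric core of any 10/20 identification; a simplified 2D sketch (not the authors' reconstruction algorithm, with a toy straight-line "arc"):

```python
import math

def arc_landmarks(points, fractions):
    """Place landmarks at given fractions of the cumulative arc length
    along a polyline sampled on the scalp (e.g. nasion -> inion)."""
    # Cumulative distances along the polyline.
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    result = []
    for f in fractions:
        target = f * total
        # Find the segment containing the target length and interpolate.
        for i in range(1, len(cum)):
            if cum[i] >= target:
                seg = cum[i] - cum[i - 1]
                t = 0.0 if seg == 0 else (target - cum[i - 1]) / seg
                (x0, y0), (x1, y1) = points[i - 1], points[i]
                result.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
                break
    return result

# Midline 10/20 fractions (Fpz, Fz, Cz, Pz, Oz at 10/30/50/70/90%)
# on a toy straight arc of length 10.
arc = [(0.0, 0.0), (10.0, 0.0)]
print(arc_landmarks(arc, [0.1, 0.3, 0.5, 0.7, 0.9]))
```

On a real head the polyline would follow the reconstructed scalp surface between anatomical landmarks, but the fractional-arc-length logic is the same.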

  12. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml), imaged both by 2D echocardiography with off-line 3D reconstruction and by RT3DE. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows the user to navigate into the reconstructed volume and to display any section of it.
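Once a contour has been extracted per slice, volume estimation reduces to summing slice areas times slice spacing (the method of disks); a minimal sketch with hypothetical cavity areas, not the paper's actual numerics:

```python
def volume_from_slices(slice_areas_cm2, slice_thickness_cm):
    """Method-of-disks estimate: sum of per-slice contour areas
    multiplied by the slice thickness (cm^3 == ml)."""
    return sum(slice_areas_cm2) * slice_thickness_cm

# Hypothetical LV cavity areas (cm^2) on 10 short-axis slices, 1 cm apart.
areas = [2.0, 5.0, 8.0, 10.0, 11.0, 11.0, 10.0, 8.0, 5.0, 2.0]
print(volume_from_slices(areas, 1.0))  # 72.0 ml
```

Repeating this over every frame of the cardiac cycle yields the dynamic LV volume curve mentioned in the results.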

  13. A system of regional agricultural land use mapping tested against small scale Apollo 9 color infrared photography of the Imperial Valley (California)

    USGS Publications Warehouse

    Johnson, Claude W.; Browden, Leonard W.; Pease, Robert W.

    1969-01-01

Interpretation results of the small scale CIR photography of the Imperial Valley (California) taken on March 12, 1969 by the Apollo 9 earth orbiting satellite have shown that worldwide agricultural land use mapping can be accomplished from satellite CIR imagery if sufficient a priori information is available for the region being mapped. Correlation of results with actual data is encouraging, although the accuracy of identification of specific crops from the single image is poor. The poor results can be partly attributed to there being only one image, taken during mid-season when the three major crops were reflecting approximately the same, so that their CIR images appear to indicate the same crop type. However, some of the inaccuracy can be attributed to a lack of understanding of the subtle variations in visual and infrared color reflectance of vegetation and the surrounding environment. Analysis of integrated color variations of the vegetation and background environment recorded on CIR imagery is discussed. Problems associated with the color variations may be overcome by the development of a semi-automatic processing system which considers individual field units or cells. Design criteria for such a semi-automatic processing system are outlined.

  14. Methods for Ensuring High Quality of Coding of Cause of Death. The Mortality Register to Follow Southern Urals Populations Exposed to Radiation.

    PubMed

    Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A

    2015-01-01

To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, measures need to be in place to maintain quality and reduce the impact of individual coders as well as quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common procedure of coding showed good agreement, with, at the end of the coding process, 70-90% agreement for the three-digit ICD-9 rubrics. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
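Agreement between two parallel coders at the three-digit ICD-9 rubric level, the quality check described above, can be computed as follows (the codes are hypothetical, chosen only to illustrate the truncation-to-rubric comparison):

```python
def rubric_agreement(codes_a, codes_b):
    """Fraction of records where two independent coders agree on the
    three-digit ICD-9 rubric (code truncated to its first three characters)."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a[:3] == b[:3] for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

coder1 = ["410.9", "162.9", "431", "250.0", "486"]
coder2 = ["410.1", "162.8", "434", "250.0", "486"]
print(rubric_agreement(coder1, coder2))  # 0.8
```

Comparing at the rubric rather than the full four-digit level tolerates minor subcategory disagreements while still flagging substantive discrepancies for review.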

  15. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    PubMed Central

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
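Turning stacked point sections into voxel elements can be sketched as a simple binning step (an illustrative simplification of the procedure, with made-up coordinates and voxel size):

```python
def voxelize(points, voxel_size):
    """Map each 3D point to the integer index of the voxel containing it;
    the set of occupied voxels becomes the solid for the finite element mesh."""
    return {(int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            for x, y, z in points}

# Points from two stacked sections of a wall, binned into 0.5 m voxels.
cloud = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.1), (0.1, 0.1, 0.6), (1.4, 0.2, 0.6)]
print(sorted(voxelize(cloud, 0.5)))  # [(0, 0, 0), (0, 0, 1), (2, 0, 1)]
```

A real pipeline would additionally fill the interior between inner and outer surfaces and export each occupied voxel as a hexahedral element, but occupancy binning is the underlying idea.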

  16. Semi-automatic image personalization tool for variable text insertion and replacement

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-02-01

Image personalization is a widely used technique in personalized marketing [1], in which a vendor attempts to promote new products or retain customers by sending marketing collateral that is tailored to the customers' demographics, needs, and interests. With current solutions of which we are aware, such as XMPie [2], DirectSmile [3], and AlphaPicture [4], the image templates needed to produce this tailored marketing collateral must be created manually by graphic designers, involving complex grid manipulation and detailed geometric adjustments. Image template design is thus highly manual, skill-demanding, and costly, and is essentially the bottleneck for image personalization. We present a semi-automatic image personalization tool for designing image templates. Two scenarios are considered: text insertion and text replacement, with the text replacement option not offered in current solutions. The graphical user interface (GUI) of the tool is described in detail. Unlike current solutions, the tool renders the text in 3-D, which allows easy adjustment of the text. In particular, the tool has been implemented in Java, which introduces flexible deployment and eliminates the need for any special software or know-how on the part of the end user.

  17. Usefulness of model-based iterative reconstruction in semi-automatic volumetry for ground-glass nodules at ultra-low-dose CT: a phantom study.

    PubMed

    Maruyama, Shuki; Fukushima, Yasuhiro; Miyamae, Yuta; Koizumi, Koji

    2018-06-01

This study aimed to investigate the effects of parameter presets of the forward projected model-based iterative reconstruction solution (FIRST) on the accuracy of pulmonary nodule volume measurement. A torso phantom with simulated nodules [diameter: 5, 8, 10, and 12 mm; computed tomography (CT) density: -630 HU] was scanned with a multi-detector CT at tube currents of 10 mA (ultra-low-dose: UL-dose) and 270 mA (standard-dose: Std-dose). Images were reconstructed with filtered back projection [FBP; standard (Std-FBP), ultra-low-dose (UL-FBP)], FIRST Lung (UL-Lung), and FIRST Body (UL-Body), and analyzed with semi-automatic software. The error in the volume measurement was determined. The errors with UL-Lung and UL-Body were smaller than that with UL-FBP. The smallest error was 5.8 ± 0.3% for the 12-mm nodule with UL-Body (middle lung). Our results indicated that FIRST Body would be superior to FIRST Lung in terms of accuracy of nodule measurement with UL-dose CT.
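The percent-error metric used for the phantom nodules can be reproduced against an ideal sphere of known diameter (the measured value below is hypothetical, chosen only to illustrate the calculation):

```python
import math

def percent_volume_error(measured_ml, diameter_mm):
    """Relative error of a semi-automatic volume measurement against the
    volume of an ideal sphere of the given diameter."""
    r_mm = diameter_mm / 2.0
    true_ml = (4.0 / 3.0) * math.pi * r_mm ** 3 / 1000.0  # mm^3 -> ml
    return 100.0 * abs(measured_ml - true_ml) / true_ml

# A hypothetical 12-mm nodule measured at 0.957 ml vs its ~0.905 ml
# ideal sphere volume gives roughly a 5.8% error.
print(round(percent_volume_error(0.957, 12.0), 1))  # 5.8
```

With a phantom, the ground-truth volume is known by construction, which is what makes this absolute accuracy assessment possible.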

  18. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

Research on 3D urban modelling has been actively carried out for a long time. Recently, the need for 3D urban modelling has increased rapidly due to improved geo-web services and popularized smart devices. 3D urban models currently provided by, for example, Google Earth use aerial photos, but this approach has some limitations: immediate updates for changed building models are difficult, many buildings lack a 3D model and texture, and large resources for maintenance and updating are inevitable. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and we analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with actual measurements of real buildings by comparing the ratios of model edge lengths to measured lengths. The results showed an average length-ratio error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools or datasets.

  19. MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks.

    PubMed

    Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans; Swertz, Morris A

    2016-07-15

While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
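The kinds of generated transformation algorithms mentioned (unit conversion, BMI derivation) can be sketched as follows (the attribute names and target schema are hypothetical illustrations, not MOLGENIS/connect's actual DataSchema):

```python
def harmonize(record):
    """Map a hypothetical source record (weight in lb, height in cm) onto a
    target schema (kg, m) and derive BMI, as a generated algorithm might."""
    weight_kg = record["weight_lb"] * 0.45359237  # exact lb -> kg factor
    height_m = record["height_cm"] / 100.0
    return {
        "weight_kg": round(weight_kg, 1),
        "height_m": height_m,
        "bmi": round(weight_kg / height_m ** 2, 1),
    }

source = {"weight_lb": 154.0, "height_cm": 175.0}
print(harmonize(source))  # {'weight_kg': 69.9, 'height_m': 1.75, 'bmi': 22.8}
```

The value of auto-generating such mappings is that each source collection needs only its attribute-to-schema algorithms written once, after which pooled analysis works on the common target representation.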

  20. MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks

    PubMed Central

    Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K. Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans

    2016-01-01

Motivation: While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. Results: To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Availability and Implementation: Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153686
