Sample records for level computational identification

  1. Multi-level damage identification with response reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Chao-Dong; Xu, You-Lin

    2017-10-01

    Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving this inverse problem for complex civil structures is very challenging, since the number of potential damage parameters is often very large. Aside from the enormous computational effort required for iterative updating, the ill-conditioning and non-global identifiability of the inverse problem can hinder model-updating-based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures, and each substructure is further condensed into a macro element using the component mode synthesis (CMS) technique. Damage identification is performed at two levels: the first, at the macro-element level, locates the potentially damaged region; the second, over the suspicious substructures, further localizes the damage and quantifies its severity. At each level, the damage search space over which model updating is performed is notably narrowed, which both reduces the computational cost and increases damage identifiability. In addition, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction considerably improves the accuracy of damage localization and quantification.
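
    The Kalman-filter response reconstruction step above can be sketched on a toy linear substructure. The model matrices, noise levels, and the choice of reconstructing an unmeasured velocity from a measured displacement are all invented for the sketch, not taken from the paper, which works with condensed FE substructure models:

```python
import numpy as np

# Illustrative discrete-time substructure model (states: displacement, velocity).
# A, H, Q, R are made-up values for the sketch, not the paper's.
A = np.array([[0.99, 0.01],
              [-0.30, 0.98]])          # stable, lightly damped dynamics
H = np.array([[1.0, 0.0]])             # only the displacement is measured
Q = 1e-8 * np.eye(2)                   # process-noise covariance
R = np.array([[1e-4]])                 # measurement-noise covariance

rng = np.random.default_rng(0)
x_true = np.array([1e-2, 0.0])
x_hat, P = np.zeros(2), np.eye(2)
recon = []                             # reconstructed (unmeasured) velocity
for _ in range(500):
    x_true = A @ x_true                                  # free response
    y = H @ x_true + rng.normal(0.0, R[0, 0] ** 0.5)     # noisy measurement
    x_hat, P = A @ x_hat, A @ P @ A.T + Q                # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    x_hat = x_hat + (K @ (y - H @ x_hat)).ravel()        # update
    P = (np.eye(2) - K @ H) @ P
    recon.append(x_hat[1])

print(abs(recon[-1] - x_true[1]))      # reconstruction error at the final step
```

    The reconstructed velocity can then be compared against the substructure model's prediction for damage quantification, as in the paper's second identification level.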

  2. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is an ill-posed geometric inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. To address the shape optimization problem, we present a novel application of the iterative level-set framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a large-scale nonlinear system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is used to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We solve this system efficiently by means of the representer method. We present synthetic experiments that show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
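
    The level-set iteration can be illustrated in one dimension: a region encoded as {phi < 0} is evolved by a velocity chosen to decrease the misfit. The piecewise-constant "velocity" below is a crude stand-in for the shape derivative of the reservoir misfit functional; the target region and grid are invented:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
target = (x > 0.3) & (x < 0.7)        # "true" facies region (illustrative)
phi = np.abs(x - 0.5) - 0.1           # initial level set: region (0.4, 0.6)

dt = 0.05
for _ in range(200):
    current = phi < 0
    # Crude descent velocity standing in for the shape derivative of the
    # misfit: push the boundary outward where the target is missed,
    # inward where the current region overshoots it.
    v = target.astype(float) - current.astype(float)
    phi = phi - dt * v

recovered = phi < 0
miss = np.mean(recovered != target)   # fraction of misclassified grid points
print(miss)
```

    In the paper the velocity instead comes from an adjoint (GB) or Levenberg-Marquardt computation through the reservoir PDEs; the fixed point of the iteration is the same idea, a region on which the misfit gradient vanishes.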

  3. Application of gray level mapping in computed tomographic colonography: a pilot study to compare with traditional surface rendering method for identification and differentiation of endoluminal lesions

    PubMed Central

    Chen, Lih-Shyang; Hsu, Ta-Wen; Chang, Shu-Han; Lin, Chih-Wen; Chen, Yu-Ruei; Hsieh, Chin-Chiang; Han, Shu-Chen; Chang, Ku-Yaw; Hou, Chun-Ju

    2017-01-01

    Objective: In traditional surface rendering (SR) computed tomographic endoscopy, only the shape of an endoluminal lesion is depicted, without gray-level information, unless the volume rendering technique is used. However, volume rendering is relatively slow and complex in terms of computation time and parameter setting. We use computed tomographic colonography (CTC) images as examples and report a new visualization technique, three-dimensional gray level mapping (GM), to better identify and differentiate endoluminal lesions. Methods: A total of 33 endoluminal cases from 30 patients were evaluated in this clinical study. The cases were segmented using a gray-level threshold. The marching cubes algorithm was used to detect isosurfaces in the volumetric data sets. GM is applied using the surface gray level of CTC. Radiologists conducted the clinical evaluation of the SR and GM images. The Wilcoxon signed-rank test was used for data analysis. Results: Clinical evaluation confirms that GM is significantly superior to SR in terms of gray-level pattern and spatial shape presentation of endoluminal cases (p < 0.01) and significantly improves the confidence of identification and clinical classification of endoluminal lesions (p < 0.01). The specificity and diagnostic accuracy of GM are significantly better than those of SR in the diagnostic performance evaluation (p < 0.01). Conclusion: GM can reduce confusion in three-dimensional CTC and correlates CTC well with sectional images by location as well as gray-level value. Hence, GM improves identification and differentiation of endoluminal lesions and facilitates the diagnostic process. Advances in knowledge: GM significantly improves the traditional SR method by providing reliable gray-level information for the surface points and is helpful in identifying and differentiating endoluminal lesions according to their shape and density. PMID:27925483
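
    The pre-processing described here (gray-level threshold segmentation, then attaching gray values to the rendered surface) can be sketched on a synthetic volume. The lesion geometry, gray levels, and the 6-neighbour surface test below are illustrative assumptions; the paper itself extracts the surface with the marching cubes algorithm:

```python
import numpy as np

# Illustrative 3-D "CT" volume: a bright spherical lesion in a darker background.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
r = np.sqrt((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2)
volume = np.where(r < 8, 200.0, 50.0)       # made-up gray levels

# Gray-level threshold segmentation, as in the paper's pre-processing step.
threshold = 125.0
mask = volume > threshold

# Gray-level mapping idea: attach the local gray value to each surface voxel
# (surface = voxel in the mask with at least one 6-neighbour outside it).
pad = np.pad(mask, 1, constant_values=False)
interior = (pad[:-2, 1:-1, 1:-1] & pad[2:, 1:-1, 1:-1]
            & pad[1:-1, :-2, 1:-1] & pad[1:-1, 2:, 1:-1]
            & pad[1:-1, 1:-1, :-2] & pad[1:-1, 1:-1, 2:])
surface = mask & ~interior
surface_gray = volume[surface]              # values used to color the 3-D surface
print(surface.sum(), surface_gray.mean())
```

    In GM proper, these surface gray values are mapped onto the isosurface mesh, so density information survives the rendering step.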

  4. Identification of Computational and Experimental Reduced-Order Models

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.

    2003-01-01

    The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
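
    The ROM identification idea here (identify an impulse response, then predict the response to new inputs by convolution) can be sketched with a toy single-mode discrete system standing in for the CFD plant; the pole and gain are invented:

```python
import numpy as np

# Illustrative single-mode discrete-time system standing in for CFL3Dv6.0.
a, b = 0.95, 0.5                      # made-up pole and input gain

def simulate(u):
    """Direct simulation of x[k+1] = a*x[k] + b*u[k], output x."""
    x, out = 0.0, []
    for uk in u:
        x = a * x + b * uk
        out.append(x)
    return np.array(out)

# 1) Identify the impulse response (the "modal impulse response" step).
n = 200
h = simulate(np.eye(1, n).ravel())    # response to a unit impulse input

# 2) ROM prediction: convolve the identified h with a new input signal.
rng = np.random.default_rng(1)
u = rng.standard_normal(n)
y_rom = np.convolve(u, h)[:n]
y_full = simulate(u)
print(np.max(np.abs(y_rom - y_full)))
```

    For a linear system with zero initial conditions the convolution ROM reproduces the direct simulation exactly; the paper's gain comes from doing the expensive simulation once (per mode) and reusing the identified responses, e.g. in state-space form inside MATLAB/SIMULINK.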

  5. Human operator identification model and related computer programs

    NASA Technical Reports Server (NTRS)

    Kessler, K. M.; Mohr, J. N.

    1978-01-01

    Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open-loop identification code which operates on simulated data from TVOPT (or TVSR) or on real operator data from motion simulators.
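
    The TF program's conversion from a state-variable model to a frequency-domain transfer function amounts to evaluating H(s) = C(sI - A)^{-1}B + D along s = j*omega. A minimal sketch with an invented two-state model (not one of the report's models):

```python
import numpy as np

# Hypothetical continuous-time state-space model (values are illustrative):
# one lightly damped mode with natural frequency 2 rad/s.
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def transfer_function(s):
    """Evaluate H(s) = C (sI - A)^{-1} B + D at complex frequency s."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B) + D)[0, 0]

# Frequency-domain representation on a grid of s = j*omega.
omega = np.linspace(0.1, 10.0, 100)
H = np.array([transfer_function(1j * w) for w in omega])
peak = omega[np.argmax(np.abs(H))]
print(peak)   # resonant peak, near the 2 rad/s mode
```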

  6. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  7. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  8. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  9. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  10. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  11. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, using for the first time a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty; (2) the simulation approach avoids the difficult problem of estimating signal derivatives, as is common with other continuous-time methods; and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast sampling. Term selection is performed by judging parameter significance using the parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
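
    The ABC rejection step can be sketched for a one-parameter continuous-time model integrated by simulation. The model dx/dt = -theta*x, the uniform prior, and the tolerance below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, x0=1.0, dt=0.01, n=200):
    """Euler simulation of the illustrative model dx/dt = -theta * x."""
    k = np.arange(1, n + 1)
    return x0 * (1.0 - dt * theta) ** k   # closed form of the Euler recursion

theta_true = 0.5
data = simulate(theta_true) + rng.normal(0.0, 0.01, 200)   # noisy observations

# ABC rejection: keep parameter draws whose simulated output is close to data,
# so the accepted draws form an approximate posterior sample.
eps, accepted = 0.02, []
for _ in range(5000):
    theta = rng.uniform(0.0, 2.0)          # prior (illustrative)
    dist = np.sqrt(np.mean((simulate(theta) - data) ** 2))
    if dist < eps:
        accepted.append(theta)

posterior = np.array(accepted)
print(len(posterior), posterior.mean())
```

    Note the two advantages claimed in the abstract appear directly: the output is a parameter distribution, and no derivative of the noisy signal is ever estimated.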

  12. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  13. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  14. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  15. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  16. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  17. Occupational risk identification using hand-held or laptop computers.

    PubMed

    Naumanen, Paula; Savolainen, Heikki; Liesivuori, Jyrki

    2008-01-01

    This paper describes the Work Environment Profile (WEP) program and its use in computer-based risk identification. It is installed on a hand-held computer or a laptop to be used in risk identification during work site visits. A 5-category system is used to describe the identified risks in 7 groups, i.e., accidents, biological and physical hazards, ergonomic and psychosocial load, chemicals, and information technology hazards. Each group contains several qualifying factors. The 5 categories are colour-coded to aid visualization. Risk identification produces visual summary images whose interpretation is facilitated by the colours. The WEP program is a risk assessment tool that is easy to learn and use for both experts and nonprofessionals. It is especially well adapted for use in both small and larger enterprises. Considerable time is saved as no paper notes are needed.

  18. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer-aided procedure was developed which provides for the identification of boiler transfer functions from frequency response data. The method obtains satisfactory transfer functions for both high and low vapor-exit-quality data.
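
    Fitting a transfer function to frequency response data can be sketched as a linear least-squares problem. Assuming a first-order lag H(s) = K/(tau*s + 1) (an invented stand-in for a boiler transfer function), the relation H*j*omega*tau - K = -H is linear in (tau, K):

```python
import numpy as np

# Synthetic frequency response data from a first-order lag
# H(s) = K / (tau*s + 1); K_true and tau_true are made-up values to recover.
K_true, tau_true = 2.0, 5.0
omega = np.logspace(-2, 1, 40)
H = K_true / (tau_true * 1j * omega + 1)

# Linear least squares: H * (j*omega) * tau - K = -H at each frequency point,
# with real and imaginary parts stacked into a real system.
A = np.column_stack([1j * omega * H, -np.ones_like(H)])
Ari = np.vstack([A.real, A.imag])
bri = np.concatenate([(-H).real, (-H).imag])
tau_est, K_est = np.linalg.lstsq(Ari, bri, rcond=None)[0]
print(tau_est, K_est)
```

    With noisy measured responses, the same normal equations are solved iteratively with weighting, which is closer in spirit to the report's iterative procedure.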

  19. Identification of natural images and computer-generated graphics based on statistical and textural features.

    PubMed

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed, based on statistical and textural features. First, the differences between the two classes are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is extracted for identification. Then, LIBSVM is used for classification. Finally, the experimental results are presented. The results show that the scheme achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method performs well compared with existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
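
    The pipeline (statistical/textural features followed by a classifier) can be sketched with invented stand-ins: smoothed noise for natural images, flat blocks for computer graphics, three simple features instead of the paper's 31, and a nearest-centroid rule in place of LIBSVM:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    # Illustrative statistical/textural features (stand-ins for the paper's 31).
    gx, gy = np.diff(img, axis=1), np.diff(img, axis=0)
    return np.array([img.std(), np.abs(gx).mean(), np.abs(gy).mean()])

def natural_like():
    # Smoothed noise as a stand-in for a photographic image patch.
    img = rng.standard_normal((32, 32))
    for _ in range(3):
        img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3.0
    return img

def cg_like():
    # Flat blocks with sharp edges as a stand-in for computer graphics.
    return np.kron(rng.standard_normal((4, 4)), np.ones((8, 8)))

X = np.array([features(natural_like()) for _ in range(40)]
             + [features(cg_like()) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

# Nearest-centroid classifier on normalized features; a linear separator
# suffices for this toy version of the two-class problem.
mu, sd = X.mean(0), X.std(0)
Xn = (X - mu) / sd
c0, c1 = Xn[y == 0].mean(0), Xn[y == 1].mean(0)
pred = (np.linalg.norm(Xn - c1, axis=1) < np.linalg.norm(Xn - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(acc)
```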

  20. Dysregulation in level of goal and action identification across psychological disorders.

    PubMed

    Watkins, Edward

    2011-03-01

    Goals, events, and actions can be mentally represented within a hierarchical framework that ranges from more abstract to more concrete levels of identification. A more abstract level of identification involves general, superordinate, and decontextualized mental representations that convey the meaning of goals, events, and actions, "why" an action is performed, and its purpose, ends, and consequences. A more concrete level of identification involves specific and subordinate mental representations that include contextual details of goals, events, and actions, and the specific "how" details of an action. This review considers three lines of evidence for considering that dysregulation of level of goal/action identification may be a transdiagnostic process. First, there is evidence that different levels of identification have distinct functional consequences and that in non-clinical samples level of goal/action identification appears to be regulated in a flexible and adaptive way to match the level of goal/action identification to circumstances. Second, there is evidence that level of goal/action identification causally influences symptoms and processes involved in psychological disorders, including emotional response, repetitive thought, impulsivity, problem solving and procrastination. Third, there is evidence that the level of goal/action identification is biased and/or dysregulated in certain psychological disorders, with a bias towards more abstract identification for negative events in depression, GAD, PTSD, and social anxiety. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Dysregulation in level of goal and action identification across psychological disorders

    PubMed Central

    Watkins, Edward

    2011-01-01

    Goals, events, and actions can be mentally represented within a hierarchical framework that ranges from more abstract to more concrete levels of identification. A more abstract level of identification involves general, superordinate, and decontextualized mental representations that convey the meaning of goals, events, and actions, “why” an action is performed, and its purpose, ends, and consequences. A more concrete level of identification involves specific and subordinate mental representations that include contextual details of goals, events, and actions, and the specific “how” details of an action. This review considers three lines of evidence for considering that dysregulation of level of goal/action identification may be a transdiagnostic process. First, there is evidence that different levels of identification have distinct functional consequences and that in non-clinical samples level of goal/action identification appears to be regulated in a flexible and adaptive way to match the level of goal/action identification to circumstances. Second, there is evidence that level of goal/action identification causally influences symptoms and processes involved in psychological disorders, including emotional response, repetitive thought, impulsivity, problem solving and procrastination. Third, there is evidence that the level of goal/action identification is biased and/or dysregulated in certain psychological disorders, with a bias towards more abstract identification for negative events in depression, GAD, PTSD, and social anxiety. PMID:20579789

  2. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide-to-spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
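
    The target-decoy style of false discovery rate estimation surveyed here can be sketched with synthetic score distributions: decoy matches passing a score threshold estimate how many target matches passing it are incorrect. All distributions and counts below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative peptide-spectrum-match scores: correct matches score higher.
target = np.concatenate([rng.normal(3.0, 1.0, 700),    # true identifications
                         rng.normal(0.0, 1.0, 300)])   # incorrect target matches
decoy = rng.normal(0.0, 1.0, 1000)                     # known-wrong decoy matches

def fdr_at(threshold):
    # Target-decoy estimate: decoys passing ~ incorrect targets passing.
    t = (target >= threshold).sum()
    d = (decoy >= threshold).sum()
    return d / max(t, 1)

# Lowest score threshold with estimated FDR <= 1%.
grid = np.linspace(-2.0, 6.0, 400)
threshold = min(s for s in grid if fdr_at(s) <= 0.01)
print(threshold, (target >= threshold).sum())
```

    Posterior-probability methods refine this by modeling the two score populations explicitly rather than counting decoys.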

  3. A Teaching Exercise for the Identification of Bacteria Using An Interactive Computer Program.

    ERIC Educational Resources Information Center

    Bryant, Trevor N.; Smith, John E.

    1979-01-01

    Describes an interactive Fortran computer program which provides an exercise in the identification of bacteria. Provides a way of enhancing a student's approach to systematic bacteriology and numerical identification procedures. (Author/MA)

  4. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision.

    PubMed

    Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf

    2018-06-18

    In forensic odontology, the comparison of antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate the identification of unknown persons by comparing antemortem and postmortem PRs using computer vision. The study includes 43,467 PRs from 24,545 patients (46% female/54% male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) across the whole database. Of 40 randomly selected persons, 34 (85%) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person, and a maximum of 12 corresponding matching points for non-identical persons in the database. Hence, 12 matching points is the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past, and the system appears robust for large amounts of data. Key points: computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs for person identification; the method can find identical matching partners among huge datasets (big data) in a short computing time; the identification method is suitable even if dental characteristics were removed or added.
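
    The reported decision rule, declaring an identity only when the number of corresponding feature matches exceeds the 12-point ceiling observed for non-identical persons, can be sketched with set overlap standing in for SURF descriptor matching (real matching would use nearest-neighbour distances between descriptor vectors):

```python
# Threshold from the study: non-identical pairs topped out at 12 matching points.
MATCH_THRESHOLD = 12

def identify(query_features, database):
    """Return (best_id, score) if the best overlap exceeds the threshold,
    else (None, score). Features are modelled as hashable descriptors."""
    scores = {pid: len(query_features & feats) for pid, feats in database.items()}
    best = max(scores, key=scores.get)
    if scores[best] > MATCH_THRESHOLD:
        return (best, scores[best])
    return (None, scores[best])

# Toy database: person A shares 20 descriptors with the query, others none.
db = {"A": set(range(40)), "B": set(range(100, 140)), "C": set(range(200, 240))}
query = set(range(20)) | set(range(300, 310))
print(identify(query, db))   # → ('A', 20)
```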

  5. [Feasibility and acceptance of computer-based assessment for the identification of psychosocially distressed patients in routine clinical care].

    PubMed

    Sehlen, Susanne; Ott, Martin; Marten-Mittag, Birgitt; Haimerl, Wolfgang; Dinkel, Andreas; Duehmke, Eckhart; Klein, Christian; Schaefer, Christof; Herschbach, Peter

    2012-07-01

    This study investigated the feasibility and acceptance of computer-based assessment for the identification of psychosocial distress in routine radiotherapy care. A total of 155 cancer patients were assessed using the QSC-R10, PO-Bado-SF and Mach-9. The congruence between computerized tablet-PC and conventional paper assessment was analysed in 50 patients. The agreement between the 2 modes was high (ICC 0.869-0.980). Acceptance of computer-based assessment was very high (>95%). Sex, age, education, distress and Karnofsky performance status (KPS) did not influence acceptance. Computerized assessment was rated more difficult by older patients (p = 0.039) and patients with low KPS (p = 0.020). 75.5% of the respondents supported referral for psychosocial intervention for distressed patients. The prevalence of distress was 27.1% (QSC-R10). Computer-based assessment allows easy identification of distressed patients. The level of staff involvement is low, and the results are quickly available for care providers. © Georg Thieme Verlag KG Stuttgart · New York.

  6. mlCAF: Multi-Level Cross-Domain Semantic Context Fusioning for Behavior Identification.

    PubMed

    Razzaq, Muhammad Asif; Villalonga, Claudia; Lee, Sungyoung; Akhtar, Usman; Ali, Maqbool; Kim, Eun-Soo; Khattak, Asad Masood; Seung, Hyonwoo; Hur, Taeho; Bang, Jaehun; Kim, Dohyeong; Ali Khan, Wajahat

    2017-10-24

    The emerging research on automatic identification of a user's contexts from the cross-domain environment in ubiquitous and pervasive computing systems has proved successful. Monitoring a user's diversified contexts and behaviors can help in controlling the lifestyle associated with chronic diseases using context-aware applications. However, the availability of cross-domain heterogeneous contexts provides a challenging opportunity for their fusion to obtain abstract information for further analysis. This work demonstrates the extension of our previous work from a single domain (i.e., physical activity) to multiple domains (physical activity, nutrition and clinical) for context-awareness. We propose the multi-level Context-aware Framework (mlCAF), which fuses the multi-level cross-domain contexts in order to arbitrate richer behavioral contexts. This work explicitly focuses on key challenges linked to multi-level context modeling, reasoning and fusioning based on the mlCAF open-source ontology. More specifically, it addresses the interpretation of contexts from three different domains and their fusioning into richer contextual information. This paper contributes in terms of ontology evolution with additional domains, context definitions, rules and the inclusion of semantic queries. For the framework evaluation, multi-level cross-domain contexts collected from 20 users were used to ascertain abstract contexts, which served as the basis for behavior modeling and lifestyle identification. The experimental results indicate a context recognition average accuracy of around 92.65% for the collected cross-domain contexts.
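
    The fusion of low-level contexts into behavior labels can be sketched as a small rule base; mlCAF itself uses an ontology with semantic queries, so the domain values, rules, and labels below are purely illustrative:

```python
# Toy multi-level fusion: low-level contexts from three domains
# (physical activity, nutrition, clinical) map to a behavior label.
# All context values and rules here are invented for the sketch.

def fuse(activity, nutrition, clinical):
    """Map (activity, nutrition, clinical) contexts to a behavior label."""
    if clinical == "hypertensive" and activity == "sedentary":
        return "needs_intervention"
    if activity == "sedentary" and nutrition == "high_calorie":
        return "risky_lifestyle"
    if activity == "active" and nutrition == "balanced":
        return "healthy_lifestyle"
    return "neutral"

print(fuse("sedentary", "high_calorie", "normal"))   # → risky_lifestyle
print(fuse("active", "balanced", "normal"))          # → healthy_lifestyle
```

    An ontology-based system expresses the same mappings as reusable axioms and rules, which is what lets mlCAF grow to new domains without rewriting the fusion logic.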

  7. mlCAF: Multi-Level Cross-Domain Semantic Context Fusioning for Behavior Identification

    PubMed Central

    Villalonga, Claudia; Lee, Sungyoung; Akhtar, Usman; Ali, Maqbool; Kim, Eun-Soo; Khattak, Asad Masood; Seung, Hyonwoo; Hur, Taeho; Kim, Dohyeong; Ali Khan, Wajahat

    2017-01-01

    The emerging research on automatic identification of a user's contexts from the cross-domain environment in ubiquitous and pervasive computing systems has proved successful. Monitoring a user's diversified contexts and behaviors can help in controlling the lifestyle associated with chronic diseases using context-aware applications. However, the availability of cross-domain heterogeneous contexts provides a challenging opportunity for their fusion to obtain abstract information for further analysis. This work demonstrates the extension of our previous work from a single domain (i.e., physical activity) to multiple domains (physical activity, nutrition and clinical) for context-awareness. We propose the multi-level Context-aware Framework (mlCAF), which fuses the multi-level cross-domain contexts in order to arbitrate richer behavioral contexts. This work explicitly focuses on key challenges linked to multi-level context modeling, reasoning and fusioning based on the mlCAF open-source ontology. More specifically, it addresses the interpretation of contexts from three different domains and their fusioning into richer contextual information. This paper contributes in terms of ontology evolution with additional domains, context definitions, rules and the inclusion of semantic queries. For the framework evaluation, multi-level cross-domain contexts collected from 20 users were used to ascertain abstract contexts, which served as the basis for behavior modeling and lifestyle identification. The experimental results indicate a context recognition average accuracy of around 92.65% for the collected cross-domain contexts. PMID:29064459

  8. Postmortem computed tomography (PMCT) and disaster victim identification.

    PubMed

    Brough, A L; Morgan, B; Rutty, G N

    2015-09-01

    Radiography has been used for identification since 1927, and established a role in mass fatality investigations in 1949. More recently, postmortem computed tomography (PMCT) has been used for disaster victim identification (DVI). PMCT offers several advantages over fluoroscopy, plain film and dental X-rays, including speed and a reduction in the number of on-site personnel and imaging modalities required, making it potentially more efficient. However, there are limitations that inhibit the international adoption of PMCT into routine practice. One particular problem is that, because forensic radiology is a relatively new sub-speciality, there are no internationally established standards for image acquisition, interpretation and archiving. This is reflected by the current INTERPOL DVI form, which does not contain a PMCT section. The DVI working group of the International Society of Forensic Radiology and Imaging supports the use of imaging in mass fatality response and has published positional statements in this area. This review will discuss forensic radiology, PMCT, and its role in disaster victim identification.

  9. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    PubMed

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which can negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 produces candidate identification results, after which users can make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in an Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies, and it brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
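The record does not detail how the Gabor surface features are computed. As a rough, hypothetical illustration of Gabor-filter texture features (kernel sizes, wavelengths and the toy image below are invented, not AFIS1.0's actual parameters), one might collect orientation-wise response statistics like this:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """Real part of a Gabor filter kernel (a common texture descriptor)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_features(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean/std of filter responses at several orientations -> feature vector."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(ksize=15, sigma=3.0, theta=theta, lam=8.0)
        # FFT-based circular convolution with the kernel zero-padded to img size
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# toy image: horizontal stripes respond differently across orientations
img = np.sin(np.linspace(0, 8 * np.pi, 64))[:, None] * np.ones((64, 64))
vec = gabor_features(img)
```

A retrieval system would then rank database images by distance between such feature vectors; AFIS1.0's actual pipeline is considerably more elaborate.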

  10. Computed tomographic identification of calcified optic nerve drusen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramirez, H.; Blatt, E.S.; Hibri, N.S.

    1983-07-01

    Four cases of optic disk drusen were accurately diagnosed with orbital computed tomography (CT). The radiologist should be aware of the characteristic CT finding of discrete calcification within an otherwise normal optic disk. This benign process is easily differentiated from lesions such as calcific neoplastic processes of the posterior globe. CT identification of optic disk drusen is essential in the evaluation of visual field defects, migraine-like headaches, and pseudopapilledema.

  11. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using force degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  12. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds, the excellent localization of computer vision and the strong identity information provided by the radio system, and is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485

  13. When the ends outweigh the means: mood and level of identification in depression.

    PubMed

    Watkins, Edward R; Moberly, Nicholas J; Moulds, Michelle L

    2011-11-01

    Research in healthy controls has found that mood influences cognitive processing via level of action identification: happy moods are associated with global and abstract processing; sad moods are associated with local and concrete processing. However, this pattern seems inconsistent with the high level of abstract processing observed in depressed patients, leading Watkins (2008, 2010) to hypothesise that the association between mood and level of goal/action identification is impaired in depression. We tested this hypothesis by measuring level of identification on the Behavioural Identification Form after happy and sad mood inductions in never-depressed controls and currently depressed patients. Participants used increasingly concrete action identifications as they became sadder and less happy, but this effect was moderated by depression status. Consistent with Watkins' (2008) hypothesis, increases in sad mood and decreases in happiness were associated with shifts towards the use of more concrete action identifications in never-depressed individuals, but not in depressed patients. These findings suggest that the putatively adaptive association between mood and level of identification is impaired in major depression.

  14. When the ends outweigh the means: Mood and level of identification in depression

    PubMed Central

    Watkins, Edward R.; Moberly, Nicholas J.; Moulds, Michelle L.

    2011-01-01

    Research in healthy controls has found that mood influences cognitive processing via level of action identification: happy moods are associated with global and abstract processing; sad moods are associated with local and concrete processing. However, this pattern seems inconsistent with the high level of abstract processing observed in depressed patients, leading Watkins (2008, 2010) to hypothesise that the association between mood and level of goal/action identification is impaired in depression. We tested this hypothesis by measuring level of identification on the Behavioural Identification Form after happy and sad mood inductions in never-depressed controls and currently depressed patients. Participants used increasingly concrete action identifications as they became sadder and less happy, but this effect was moderated by depression status. Consistent with Watkins' (2008) hypothesis, increases in sad mood and decreases in happiness were associated with shifts towards the use of more concrete action identifications in never-depressed individuals, but not in depressed patients. These findings suggest that the putatively adaptive association between mood and level of identification is impaired in major depression. PMID:22017614

  15. Sculpting Computational-Level Models.

    PubMed

    Blokpoel, Mark

    2017-06-27

    In this commentary, I advocate for strict relations between Marr's levels of analysis. Under a strict relationship, each level is exactly implemented by the subordinate level. This yields two benefits. First, it brings consistency for multilevel explanations. Second, similar to how a sculptor chisels away superfluous marble, a modeler can chisel a computational-level model by applying constraints. By sculpting the model, one restricts the (potentially infinitely large) set of possible algorithmic- and implementational-level theories. Copyright © 2017 Cognitive Science Society, Inc.

  16. Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification

    DTIC Science & Technology

    2017-08-08

    Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification. Dr. Syed Adeel Ahmed, Xavier University ... virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In ... the differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods ...

  17. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within them. This paper presents the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of such algorithms for dental 2D imagery.

  18. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for these problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high-performance computing; its memory requirement also grows at a very rapid rate, so larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace, and it is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
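As a toy illustration of the least-squares flavor of damping identification (not the paper's algorithm; the system matrices and sampling scheme below are invented), a viscous damping matrix C can be recovered from sampled displacements, velocities and accelerations via the free-vibration equation of motion M a + C v + K x = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.diag([2.0, 1.0])                            # mass matrix (known)
K = np.array([[400.0, -200.0], [-200.0, 200.0]])   # stiffness matrix (known)
C_true = np.array([[3.0, -1.0], [-1.0, 2.0]])      # damping matrix (to identify)

# Synthetic measurements: random states, accelerations from the EOM
# M a + C v + K x = 0 (free vibration, rows are samples)
X = rng.normal(size=(50, 2))                       # displacements
V = rng.normal(size=(50, 2))                       # velocities
A = -(V @ C_true.T + X @ K.T) @ np.linalg.inv(M).T

# Least-squares identification: for each sample i, C v_i = -(M a_i + K x_i)
B = -(A @ M.T + X @ K.T)
C_est = np.linalg.lstsq(V, B, rcond=None)[0].T
```

With noise-free data the least-squares solve recovers C exactly; the memory and communication issues discussed in the record arise when the stacked system grows to hundreds of degrees of freedom.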

  19. Identification of control targets in Boolean molecular network models via computational algebra.

    PubMed

    Murrugarra, David; Veliz-Cuba, Alan; Aguilar, Boris; Laubenbacher, Reinhard

    2016-09-23

    Many problems in biomedicine and other areas of the life sciences can be characterized as control problems, with the goal of finding strategies to change a disease or otherwise undesirable state of a biological system into another, more desirable, state through an intervention, such as a drug or other therapeutic treatment. The identification of such strategies is typically based on a mathematical model of the process to be altered through targeted control inputs. This paper focuses on processes at the molecular level that determine the state of an individual cell, involving signaling or gene regulation. The mathematical model type considered is that of Boolean networks. The potential control targets can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. This paper presents a method for the identification of potential intervention targets in Boolean molecular network models using algebraic techniques. The approach exploits an algebraic representation of Boolean networks to encode the control candidates in the network wiring diagram as the solutions of a system of polynomial equations, and then uses computational algebra techniques to find such controllers. The control methods in this paper are validated through the identification of combinatorial interventions in the signaling pathways of previously reported control targets in two well-studied systems, a p53-mdm2 network and a blood T cell lymphocyte granular leukemia survival signaling network. Supplementary data are available online, and our code in Macaulay2 and Matlab is available via http://www.ms.uky.edu/~dmu228/ControlAlg . This paper presents a novel method for the identification of intervention targets in Boolean network models, and the results show that the proposed methods are useful and efficient for moderately large networks.
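A toy sketch of the underlying idea (the paper's actual implementation uses Macaulay2, and real networks require polynomial-system solving rather than enumeration): Boolean logic translates into polynomials over GF(2), a control candidate becomes an extra variable in the update rules, and one then searches for control values that force a desired steady state. The 3-node network below is invented for illustration:

```python
from itertools import product

# Toy 3-node Boolean network with one control input u (an edge-deletion switch).
# Over GF(2): AND -> x*y, OR -> x + y + x*y, NOT -> 1 + x (all mod 2).
def step(state, u):
    x1, x2, x3 = state
    nx1 = (x2 * x3) % 2                      # x1 <- x2 AND x3
    nx2 = (1 + x1) % 2                       # x2 <- NOT x1
    nx3 = (u * (x1 + x2 + x1 * x2)) % 2      # x3 <- (x1 OR x2), deleted when u = 0
    return (nx1, nx2, nx3)

def fixed_points(u):
    """All steady states of the controlled network for a given control value."""
    return [s for s in product((0, 1), repeat=3) if step(s, u) == s]

# Find control values whose only fixed point is the desired 'healthy' state.
desired = (0, 1, 0)
controls = [u for u in (0, 1) if fixed_points(u) == [desired]]
```

Here u = 0 (knocking out the regulation of x3) leaves the desired state as the unique fixed point, which is exactly the kind of edge intervention the algebraic encoding searches for at scale.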

  20. Multi-level hot zone identification for pedestrian safety.

    PubMed

    Lee, Jaeyoung; Abdel-Aty, Mohamed; Choi, Keechoo; Huang, Helai

    2015-03-01

    According to the National Highway Traffic Safety Administration (NHTSA), while fatalities from traffic crashes have decreased, the proportion of pedestrian fatalities has steadily increased from 11% to 14% over the past decade. This study aims to identify hot zones at two zonal levels: zones where pedestrian crashes occur, and zones from which crash-involved pedestrians originate. A Bayesian Poisson lognormal simultaneous equation spatial error model (BPLSESEM) was estimated and revealed significant factors for the two target variables. Then, potentials for safety improvement (PSIs) were computed using the model. Subsequently, a novel hot zone identification method was suggested that combines hot zones from which vulnerable pedestrians originate with hot zones where many pedestrian crashes occur. For the former zones, targeted safety education and awareness campaigns can be provided as countermeasures, whereas area-wide engineering treatments and enforcement may be effective safety treatments for the latter ones. Thus, it is expected that practitioners will be able to suggest appropriate safety treatments for pedestrian crashes using the method and results from this study. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Report: Unsupervised identification of malaria parasites using computer vision.

    PubMed

    Khan, Najeed Ahmed; Pervaz, Hassan; Latif, Arsalan; Musharaff, Ayesha

    2017-01-01

    Malaria in humans is a serious and potentially fatal tropical disease. It results from Anopheles mosquitoes that are infected by Plasmodium species. A clinical diagnosis of malaria based on history, symptoms and clinical findings must always be confirmed by laboratory diagnosis, which involves identification of the malaria parasite or its antigens/products in the blood of the patient. Manual diagnosis of the malaria parasite by pathologists has proven cumbersome, so there is a need for automatic, efficient and accurate identification of the parasite. In this paper, we propose a computer vision based approach to identify the malaria parasite in light microscopy images. This research deals with the challenges involved in the automatic detection of malaria parasite tissues. Our proposed method is pixel-based: we use K-means clustering (an unsupervised approach) for the segmentation that identifies malaria parasite tissues.
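A minimal sketch of the pixel-based K-means idea the record describes, using plain Lloyd's algorithm on invented 1-D intensity data (a real pipeline would cluster color features of stained-smear images and post-process the segments):

```python
import numpy as np

def kmeans(pixels, k=2, iters=20, seed=0):
    """Plain Lloyd's algorithm on per-pixel feature vectors."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# synthetic "slide": dark parasite-like pixels (~0.2) on a bright
# background (~0.9); features are 1-D intensities here for simplicity
rng = np.random.default_rng(1)
bg = rng.normal(0.9, 0.02, size=(900, 1))
blob = rng.normal(0.2, 0.02, size=(100, 1))
pixels = np.vstack([bg, blob])
labels, centers = kmeans(pixels, k=2)
```

Because the two intensity populations are well separated, the minority cluster recovers exactly the parasite-like pixels; real micrographs are far noisier, which is what motivates the paper's additional processing.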

  2. Blind source computer device identification from recorded VoIP calls for forensic investigation.

    PubMed

    Jahanirad, Mehdi; Anuar, Nor Badrul; Wahab, Ainuddin Wahid Abdul

    2017-03-01

    VoIP services provide fertile ground for criminal activity, so identifying the transmitting computer devices from recorded VoIP calls may help the forensic investigator reveal useful information; it also proves the authenticity of a call recording submitted to the court as evidence. This paper extended a previous study on the use of recorded VoIP calls for blind source computer device identification, whose initial results were promising but lacked a theoretical explanation. The study suggested computing the entropy of mel-frequency cepstrum coefficients (entropy-MFCC) from near-silent segments as an intrinsic feature set that captures the device response function due to the tolerances in the electronic components of individual computer devices. By applying the supervised learning techniques of naïve Bayesian, linear logistic regression, neural networks and support vector machines to the entropy-MFCC features, state-of-the-art identification accuracy of near 99.9% has been achieved on different sets of computer devices for both call recording and microphone recording scenarios. Furthermore, unsupervised learning techniques, including simple k-means, expectation-maximization and density-based spatial clustering of applications with noise (DBSCAN), provided promising results for the call recording dataset by assigning the majority of instances to their correct clusters. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
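A simplified, numpy-only rendition of the entropy-MFCC feature (hedged: the frame size, filter count and histogram-based entropy estimator here are illustrative choices, not the paper's exact recipe): frame the near-silent audio, compute mel-scale cepstral coefficients, then take the Shannon entropy of each coefficient across frames.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def entropy_mfcc(signal, sr=8000, frame=256, n_filters=20, n_coef=12, bins=16):
    """Shannon entropy of each cepstral coefficient across frames."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    power = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)) ** 2
    logmel = np.log(power @ mel_filterbank(n_filters, frame, sr).T + 1e-12)
    n = np.arange(n_filters)                     # DCT-II of the log-mel energies
    dct = np.cos(np.pi * np.outer(np.arange(n_coef), 2 * n + 1) / (2 * n_filters))
    mfcc = logmel @ dct.T                        # frames x n_coef
    ent = np.empty(n_coef)
    for j in range(n_coef):
        p, _ = np.histogram(mfcc[:, j], bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        ent[j] = -(p * np.log2(p)).sum()
    return ent

rng = np.random.default_rng(0)
feat = entropy_mfcc(rng.normal(0, 1e-3, 8000))   # stand-in near-silent segment
```

The resulting fixed-length vector is what the classifiers in the study would consume, one vector per recording.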

  3. Assigning unique identification numbers to new user accounts and groups in a computing environment with multiple registries

    DOEpatents

    DeRobertis, Christopher V.; Lu, Yantian T.

    2010-02-23

    A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
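The cross-registry check described above can be sketched as follows. This is a simplified model, not the patented method: each registry is reduced to a name-to-uid mapping, and the policy of retrying the next number on conflict is an assumption for illustration.

```python
def next_free_uid(registries, candidate, limit=65536):
    """Return the first uid >= candidate that is unused in every registry.

    `registries` is a list of {username: uid} mappings (e.g. local files,
    LDAP, NIS); a candidate is rejected if ANY registry already uses it.
    """
    taken = set()
    for reg in registries:
        taken.update(reg.values())
    uid = candidate
    while uid in taken:
        uid += 1
        if uid >= limit:
            raise RuntimeError("no free uid below limit")
    return uid

local = {"root": 0, "alice": 1001}
ldap = {"bob": 1002, "carol": 1001}   # 1001 is reused across registries
uid = next_free_uid([local, ldap], candidate=1001)
```

The essential point from the record is the ANY-registry rejection rule: a uid is only assigned once no configured registry claims it.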

  4. Apps for Angiosperms: The Usability of Mobile Computers and Printed Field Guides for UK Wild Flower and Winter Tree Identification

    ERIC Educational Resources Information Center

    Stagg, Bethan C.; Donkin, Maria E.

    2017-01-01

    We investigated the usability of mobile computers and field guide books with adult botanical novices for the identification of wildflowers and deciduous trees in winter. Identification accuracy was significantly higher for wildflowers using a mobile computer app than field guide books but significantly lower for deciduous trees. User preference…

  5. A Program for the Identification of the Enterobacteriaceae for Use in Teaching the Principles of Computer Identification of Bacteria.

    ERIC Educational Resources Information Center

    Hammonds, S. J.

    1990-01-01

    A technique for the numerical identification of bacteria using normalized likelihoods calculated from a probabilistic database is described, and the principles of the technique are explained. The listing of the computer program is included. Specimen results from the program, and examples of how they should be interpreted, are given. (KR)
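The normalized-likelihood technique for numerical identification is classical and easy to illustrate. The mini-database below is invented for teaching purposes: each taxon's likelihood is the product of per-test probabilities (p for a positive result, 1 - p for a negative), normalized across taxa so the scores sum to one.

```python
# Probabilistic database: P(positive result for each test | taxon).
# These taxa and probabilities are illustrative, not a real database.
DB = {
    "E. coli":       {"indole": 0.99, "citrate": 0.01, "urease": 0.01},
    "K. pneumoniae": {"indole": 0.01, "citrate": 0.98, "urease": 0.95},
    "P. mirabilis":  {"indole": 0.02, "citrate": 0.60, "urease": 0.98},
}

def normalized_likelihoods(results):
    """Score each taxon by the product of per-test probabilities, then
    normalize so the scores sum to 1 (the usual numerical-ID score)."""
    scores = {}
    for taxon, probs in DB.items():
        L = 1.0
        for test, positive in results.items():
            p = probs[test]
            L *= p if positive else (1.0 - p)
        scores[taxon] = L
    total = sum(scores.values())
    return {t: L / total for t, L in scores.items()}

ident = normalized_likelihoods({"indole": False, "citrate": True, "urease": True})
best = max(ident, key=ident.get)
```

A teaching program such as the one described would then compare the top score against an acceptance threshold before reporting an identification.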

  6. Computational Acoustic Beamforming for Noise Source Identification for Small Wind Turbines.

    PubMed

    Ma, Ping; Lien, Fue-Sang; Yee, Eugene

    2017-01-01

    This paper develops a computational acoustic beamforming (CAB) methodology for identification of sources of small wind turbine noise. This methodology is validated using the case of the NACA 0012 airfoil trailing edge noise. For this validation case, the predicted acoustic maps were in excellent conformance with the results of the measurements obtained from the acoustic beamforming experiment. Following this validation study, the CAB methodology was applied to the identification of noise sources generated by a commercial small wind turbine. The simulated acoustic maps revealed that the blade tower interaction and the wind turbine nacelle were the two primary mechanisms for sound generation for this small wind turbine at frequencies between 100 and 630 Hz.
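As an illustration of the acoustic beamforming principle behind such source maps (a conventional delay-and-sum beamformer on a synthetic monochromatic source, with an invented geometry, not the paper's CAB methodology), the steered response power peaks at the source location:

```python
import numpy as np

# Monochromatic point source in 2-D; microphones on a line array.
c, f = 343.0, 500.0                      # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c                    # wavenumber
mics = np.stack([np.linspace(-0.5, 0.5, 16), np.zeros(16)], axis=1)
src = np.array([0.3, 2.0])               # true source position (m)

# Complex pressure at each mic from the source (free-field Green's function)
r = np.linalg.norm(mics - src, axis=1)
p = np.exp(1j * k * r) / r

# Scan a grid of candidate positions; steer and sum (conventional beamformer)
xs = np.linspace(-1.0, 1.0, 81)
ys = np.linspace(1.0, 3.0, 81)
power = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(mics - np.array([x, y]), axis=1)
        steer = np.exp(1j * k * d) / d          # model propagation to this point
        steer /= np.linalg.norm(steer)
        power[iy, ix] = np.abs(np.vdot(steer, p)) ** 2

iy, ix = np.unravel_index(power.argmax(), power.shape)
estimate = (xs[ix], ys[iy])
```

Plotting `power` over the grid gives exactly the kind of acoustic map the study compares against measurement; here the map's maximum falls on the true source position.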

  7. Computational Acoustic Beamforming for Noise Source Identification for Small Wind Turbines

    PubMed Central

    Lien, Fue-Sang

    2017-01-01

    This paper develops a computational acoustic beamforming (CAB) methodology for identification of sources of small wind turbine noise. This methodology is validated using the case of the NACA 0012 airfoil trailing edge noise. For this validation case, the predicted acoustic maps were in excellent conformance with the results of the measurements obtained from the acoustic beamforming experiment. Following this validation study, the CAB methodology was applied to the identification of noise sources generated by a commercial small wind turbine. The simulated acoustic maps revealed that the blade tower interaction and the wind turbine nacelle were the two primary mechanisms for sound generation for this small wind turbine at frequencies between 100 and 630 Hz. PMID:28378012

  8. An intermediate level of abstraction for computational systems chemistry.

    PubMed

    Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F

    2017-12-28

    Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry and the conditions for that era are scarce. The exploration of large chemical reaction networks is a central aspect in this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).

  9. Computed gray levels in multislice and cone-beam computed tomography.

    PubMed

    Azeredo, Fabiane; de Menezes, Luciane Macedo; Enciso, Reyes; Weissheimer, Andre; de Oliveira, Rogério Belle

    2013-07-01

    Gray level is the range of shades of gray in the pixels, representing the x-ray attenuation coefficient that allows for tissue density assessments in computed tomography (CT). An in-vitro study was performed to investigate the relationship between computed gray levels in 3 cone-beam CT (CBCT) scanners and 1 multislice spiral CT device using 5 software programs. Six materials (air, water, wax, acrylic, plaster, and gutta-percha) were scanned with the CBCT and CT scanners, and the computed gray levels for each material at predetermined points were measured with OsiriX Medical Imaging software (Geneva, Switzerland), OnDemand3D (CyberMed International, Seoul, Korea), E-Film (Merge Healthcare, Milwaukee, Wis), Dolphin Imaging (Dolphin Imaging & Management Solutions, Chatsworth, Calif), and InVivo Dental Software (Anatomage, San Jose, Calif). The repeatability of these measurements was calculated with intraclass correlation coefficients, and the gray levels were averaged to represent each material. Repeated analysis of variance tests were used to assess the differences in gray levels among scanners and materials. There were no differences in mean gray levels with the different software programs. There were significant differences in gray levels between scanners for each material evaluated (P <0.001). The software programs were reliable and had no influence on the CT and CBCT gray level measurements. However, the gray levels might have discrepancies when different CT and CBCT scanners are used. Therefore, caution is essential when interpreting or evaluating CBCT images because of the significant differences in gray levels between different CBCT scanners, and between CBCT and CT values. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  10. Identification of Cichlid Fishes from Lake Malawi Using Computer Vision

    PubMed Central

    Joo, Deokjin; Kwan, Ye-seul; Song, Jongwoo; Pinho, Catarina; Hey, Jody; Won, Yong-Jin

    2013-01-01

    Background The explosively radiating evolution of cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at as many as 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in the shape of jaw and body. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automating species identification of cichlids. Methodology/Principal Findings Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated by the statistical classifiers Support Vector Machine and Random Forests. Both classifiers performed better when body shape information was added to the color and stripe features, boosting the accuracy of classification by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, in contrast to a mere 42% success rate by human eyes. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. Conclusions Computer vision showed notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids, although the information was not enough for errorless species identification. Our results indicate an apparently unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species. PMID:24204918

  11. Factors influencing exemplary science teachers' levels of computer use

    NASA Astrophysics Data System (ADS)

    Hakverdi, Meral

    This study examines exemplary science teachers' use of technology in science instruction, factors influencing their level of computer use, their level of knowledge/skills in using specific computer applications for science instruction, their use of computer-related applications/tools during their instruction, and their students' use of computer applications/tools in or for their science class. After a relevant review of the literature, certain variables were selected for analysis. These variables included personal self-efficacy in teaching with computers, outcome expectancy, pupil-control ideology, level of computer use, age, gender, teaching experience, personal computer use, professional computer use and science teachers' level of knowledge/skills in using specific computer applications for science instruction. The sample for this study comprises middle and high school science teachers who received the Presidential Award for Excellence in Science Teaching (sponsored by the White House and the National Science Foundation) between 1997 and 2003 from all 50 states and U.S. territories. Award-winning science teachers were contacted about the survey via e-mail or letter with an enclosed return envelope. Of the 334 award-winning science teachers, usable responses were received from 92, a response rate of 27.5%. Analysis of the survey responses indicated that exemplary science teachers have a variety of knowledge/skills in using computer-related applications/tools. The most commonly used computer applications/tools are information retrieval via the Internet, presentation tools, online communication, digital cameras, and data collection probes. Results of the study revealed that students' use of technology in their science classroom is highly correlated with the frequency of their science teachers' use of computer applications/tools.
The results of the multiple regression analysis revealed that personal self-efficacy related to

  12. Identify Skills and Proficiency Levels Necessary for Entry-Level Employment for All Vocational Programs Using Computers to Process Data. Final Report.

    ERIC Educational Resources Information Center

    Crowe, Jacquelyn

    This study investigated the computer and word processing operator skills necessary for employment in today's high-technology office. The study comprised seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…

  13. Evolutionary Computation for the Identification of Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2009-01-01

    Over the past several years the Center for Evolutionary Computation and Automated Design at the Jet Propulsion Laboratory has developed a technique based on Evolutionary Computational Methods (ECM) that allows for the automated optimization of complex computationally modeled systems. An important application of this technique is for the identification of emergent behaviors in autonomous systems. Mobility platforms such as rovers or airborne vehicles are now being designed with autonomous mission controllers that can find trajectories over a solution space that is larger than can reasonably be tested. It is critical to identify control behaviors that are not predicted and can have surprising results (both good and bad). These emergent behaviors need to be identified, characterized and either incorporated into or isolated from the acceptable range of control characteristics. We use cluster analysis of automatically retrieved solutions to identify isolated populations of solutions with divergent behaviors.

  14. Factors Influencing Exemplary Science Teachers' Levels of Computer Use

    ERIC Educational Resources Information Center

    Hakverdi, Meral; Dana, Thomas M.; Swain, Colleen

    2011-01-01

    The purpose of this study was to examine exemplary science teachers' use of technology in science instruction, factors influencing their level of computer use, their level of knowledge/skills in using specific computer applications for science instruction, their use of computer-related applications/tools during their instruction, and their…

  15. iProphet: Multi-level Integrative Analysis of Shotgun Proteomic Data Improves Peptide and Protein Identification Rates and Error Estimates*

    PubMed Central

    Shteynberg, David; Deutsch, Eric W.; Lam, Henry; Eng, Jimmy K.; Sun, Zhi; Tasman, Natalie; Mendoza, Luis; Moritz, Robert L.; Aebersold, Ruedi; Nesvizhskii, Alexey I.

    2011-01-01

    The combination of tandem mass spectrometry and sequence database searching is the method of choice for the identification of peptides and the mapping of proteomes. Over the last several years, the volume of data generated in proteomic studies has increased dramatically, which challenges the computational approaches previously developed for these data. Furthermore, a multitude of search engines have been developed that identify different, overlapping subsets of the sample peptides from a particular set of tandem mass spectrometry spectra. We present iProphet, a new addition to the widely used open-source suite of proteomic data analysis tools, the Trans-Proteomic Pipeline. Applied in tandem with PeptideProphet, it provides a more accurate representation of the multilevel nature of shotgun proteomic data. iProphet combines the evidence from multiple identifications of the same peptide sequences across different spectra, experiments, precursor ion charge states, and modified states. It also allows accurate and effective integration of the results from multiple database search engines applied to the same data. The use of iProphet in the Trans-Proteomic Pipeline increases the number of correctly identified peptides at a constant false discovery rate as compared with both PeptideProphet and another state-of-the-art tool, Percolator. As the main outcome, iProphet permits the calculation of accurate posterior probabilities and false discovery rate estimates at the level of sequence-identical peptide identifications, which in turn leads to more accurate probability estimates at the protein level. Fully integrated with the Trans-Proteomic Pipeline, it supports all commonly used MS instruments, search engines, and computer platforms.
The performance of iProphet is demonstrated on two publicly available data sets: data from a human whole cell lysate proteome profiling experiment representative of typical proteomic data sets, and from a set of Streptococcus pyogenes experiments

  16. Multi-level RF identification system

    DOEpatents

    Steele, Kerry D.; Anderson, Gordon A.; Gilbert, Ronald W.

    2004-07-20

    A radio frequency identification system having a radio frequency transceiver for generating a continuous wave RF interrogation signal that impinges upon an RF identification tag. An oscillation circuit in the RF identification tag modulates the interrogation signal with a subcarrier of a predetermined frequency and reflects the modulated signal back to the transmitting interrogator. The interrogator recovers and analyzes the subcarrier signal and determines its frequency. The interrogator generates an output indicative of the subcarrier frequency, thereby identifying the responding RFID tag as one of a "class" of RFID tags configured to respond with a subcarrier signal of a predetermined frequency.
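The interrogator-side step, determining which predetermined subcarrier frequency is present and mapping it to a tag class, can be sketched as follows. The candidate frequencies, tag classes, and sample rate are invented, and a naive single-bin DFT probe stands in for whatever demodulation the actual interrogator hardware performs.

```python
import math

def dominant_frequency(samples, rate, candidates):
    """Return the candidate frequency (Hz) with the most signal energy
    (naive single-bin DFT probe at each candidate)."""
    def power(f):
        re = sum(s * math.cos(2 * math.pi * f * k / rate) for k, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * k / rate) for k, s in enumerate(samples))
        return re * re + im * im
    return max(candidates, key=power)

# Hypothetical tag classes keyed by subcarrier frequency (Hz):
TAG_CLASSES = {1000: "class A", 2000: "class B", 4000: "class C"}
rate = 32000
# Simulated received subcarrier at 2 kHz:
sig = [math.sin(2 * math.pi * 2000 * k / rate) for k in range(256)]
f = dominant_frequency(sig, rate, TAG_CLASSES)
print(f, TAG_CLASSES[f])  # 2000 class B
```

With the window an integer number of cycles for each candidate, the off-frequency bins are orthogonal and the probe cleanly selects the responding class.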

  17. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregation of computer vision and radio-frequency identification to determine the current storage area. It describes the hardware design of a positioning system for industrial products on the territory of a plant based on a radio-frequency grid, the design of a positioning system based on computer vision methods, and the aggregation method that combines the two to determine the current storage area. Experimental studies conducted in laboratory and production conditions are also described.

  18. Particle Identification on an FPGA Accelerated Compute Platform for the LHCb Upgrade

    NASA Astrophysics Data System (ADS)

    Fäerber, Christian; Schwemmer, Rainer; Machen, Jonathan; Neufeld, Niko

    2017-07-01

    The current LHCb readout system will be upgraded in 2018 to a “triggerless” readout of the entire detector at the Large Hadron Collider collision rate of 40 MHz. The corresponding bandwidth from the detector down to the foreseen dedicated computing farm (event filter farm), which acts as the trigger, has to be increased by a factor of almost 100, from currently 500 Gb/s up to 40 Tb/s. The event filter farm will preanalyze the data and select events on an event-by-event basis, reducing the bandwidth to a manageable size for writing the interesting physics data to tape. The design of such a system is a challenging task, which is why different new technologies are being considered and have to be investigated for the different parts of the system. For use in the event building farm or in the event filter farm (trigger), an experimental field-programmable gate array (FPGA) accelerated computing platform is considered and tested. FPGA compute accelerators are increasingly used in standard servers, for example for Microsoft Bing search or Baidu search. The platform we use hosts a general Intel CPU and a high-performance FPGA linked via the high-speed Intel QuickPath Interconnect, with an accelerator implemented on the FPGA. It is very likely that these platforms, built in general for high-performance computing, are also very interesting for the high-energy physics community. First, performance results of smaller test cases are presented. Afterward, part of the existing LHCb RICH particle identification algorithm is ported to the experimental FPGA-accelerated platform and tested. We compare the performance of the LHCb RICH particle identification running on a normal CPU with the performance of the same algorithm running on the Xeon-FPGA compute accelerator platform.

  19. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously, this novel detection method had been evaluated only in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open-source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
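The idea of a telemetry "signature" can be illustrated with a nearest-centroid classifier: each program's training runs are averaged into a signature vector, and a new sample is attributed to the program with the closest signature. The abstract does not specify the classifier used, and the program names and telemetry features below are invented.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return tuple(sum(xs) / len(xs) for xs in zip(*vectors))

def classify(sample, signatures):
    """Attribute a telemetry sample to the program with the nearest centroid."""
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(sample, signatures[name]))
    return min(signatures, key=dist)

# Hypothetical per-program telemetry runs: (cpu_util, disk_req_per_s, net_bytes_per_s)
training = {
    "bzip2":  [(0.90, 5, 1.0e3), (0.85, 7, 1.2e3)],
    "rsync":  [(0.20, 90, 5.0e6), (0.25, 110, 4.5e6)],
    "crunch": [(1.00, 1, 10), (0.95, 2, 15)],
}
signatures = {name: centroid(runs) for name, runs in training.items()}
label = classify((0.22, 95, 4.8e6), signatures)
print(label)  # rsync
```

In practice the features would be the billed Ceilometer metrics, and accuracy would hinge on how distinctive those metrics are per program, exactly the dependency the abstract reports.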

  20. The Use of Computer-Assisted Identification of ARIMA Time-Series.

    ERIC Educational Resources Information Center

    Brown, Roger L.

    This study was conducted to determine the effects of using various levels of tutorial statistical software for the tentative identification of nonseasonal ARIMA models, a statistical technique proposed by Box and Jenkins for the interpretation of time-series data. The Box-Jenkins approach is an iterative process encompassing several stages of…

  1. Identification of oxygen-related midgap level in GaAs

    NASA Technical Reports Server (NTRS)

    Lagowski, J.; Lin, D. G.; Gatos, H. C.; Aoyama, T.

    1984-01-01

    An oxygen-related deep level ELO was identified in GaAs employing Bridgman-grown crystals with controlled oxygen doping. The activation energy of ELO is almost the same as that of the dominant midgap level: EL2. This fact impedes the identification of ELO by standard deep level transient spectroscopy. However, it was found that the electron capture cross section of ELO is about four times greater than that of EL2. This characteristic served as the basis for the separation and quantitative investigation of ELO employing detailed capacitance transient measurements in conjunction with reference measurements on crystals grown without oxygen doping and containing only EL2.

  2. New Possibilities of Substance Identification Based on THz Time Domain Spectroscopy Using a Cascade Mechanism of High Energy Level Excitation

    PubMed Central

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Zakharova, Irina G.; Zagursky, Dmitry Yu.

    2017-01-01

    Using an experiment with thin paper layers and computer simulation, we demonstrate the principal limitations of standard Time Domain Spectroscopy (TDS) based on using a broadband THz pulse for the detection and identification of a substance placed inside a disordered structure. We demonstrate the spectrum broadening of both transmitted and reflected pulses due to the cascade mechanism of high energy level excitation, considering, for example, a three-energy-level medium. The pulse spectrum in the range of high frequencies remains undisturbed in the presence of a disordered structure. To avoid the detection of false absorption frequencies, we apply the spectral dynamics analysis method (SDA-method) together with certain integral correlation criteria (ICC). PMID:29186849

  3. Identification of quasi-steady compressor characteristics from transient data

    NASA Technical Reports Server (NTRS)

    Nunes, K. B.; Rock, S. M.

    1984-01-01

    The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed. First was a lumped compressor rig model; second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.

  4. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan, because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute the looser bounds currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm whose measure of complexity is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints.
The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow
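The paper's algorithm achieves the envelope via shortest paths and maximum flows; purely as a concept illustration, the envelope of a toy flexible plan can be computed by brute-force enumeration of every fixed-time instantiation (exponential in general, which is precisely the cost the envelope algorithm avoids). The events, windows, and amounts below are invented.

```python
from itertools import product

# Toy flexible plan: each event changes the resource level by `amount` at a
# time chosen from its window [earliest, latest] (integer times, invented data).
events = [
    {"amount": +2, "window": (0, 2)},  # production event
    {"amount": -1, "window": (1, 3)},  # consumption event
    {"amount": -1, "window": (2, 4)},
]

horizon = range(0, 6)
lo = {t: float("inf") for t in horizon}   # envelope lower bound per time
hi = {t: float("-inf") for t in horizon}  # envelope upper bound per time

# Enumerate every fixed-time instantiation and track the level over time.
for times in product(*(range(e["window"][0], e["window"][1] + 1) for e in events)):
    for t in horizon:
        level = sum(e["amount"] for e, et in zip(events, times) if et <= t)
        lo[t] = min(lo[t], level)
        hi[t] = max(hi[t], level)

for t in horizon:
    print(t, lo[t], hi[t])
```

Each bound is attained by some concrete schedule (e.g. the minimum at t=1 comes from delaying the producer to t=2 while the first consumer fires at t=1), which is exactly the tightness property the envelope guarantees.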

  5. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  6. Towards large-scale FAME-based bacterial species identification using machine learning techniques.

    PubMed

    Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul

    2009-05-01

    In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have resulted in good species identification results for the three genera. For Bacillus, Paenibacillus and Pseudomonas, random forests have resulted in sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forests models outperform those of the other machine learning techniques. Moreover, our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species
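Sensitivity values like those reported above are per-class recall, TP / (TP + FN), computed from a confusion table of true versus predicted labels. A minimal sketch with invented species labels and counts:

```python
def sensitivity(confusion, label):
    """Sensitivity (recall) for one class: TP / (TP + FN)."""
    tp = confusion.get((label, label), 0)
    fn = sum(n for (true, pred), n in confusion.items()
             if true == label and pred != label)
    return tp / (tp + fn)

# Hypothetical (true, predicted) -> count for two species within one genus:
confusion = {
    ("sp_A", "sp_A"): 85, ("sp_A", "sp_B"): 15,
    ("sp_B", "sp_B"): 70, ("sp_B", "sp_A"): 30,
}
print(round(sensitivity(confusion, "sp_A"), 2))  # 0.85
print(round(sensitivity(confusion, "sp_B"), 2))  # 0.7
```

Averaging such per-species sensitivities over a genus yields summary figures comparable to the 0.847/0.901/0.708 values quoted for the random forest models.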

  7. Identification of double-yolked duck egg using computer vision.

    PubMed

    Ma, Long; Sun, Ke; Tu, Kang; Pan, Leiqing; Zhang, Wei

    2017-01-01

    The double-yolked (DY) egg is quite popular in some Asian countries because it is considered a sign of good luck; however, the double yolk is one of the reasons why these eggs fail to hatch. The use of automatic methods for identifying DY eggs can increase efficiency in the poultry industry by decreasing egg loss during incubation or improving sale proceeds. In this study, two methods for DY duck egg identification were developed using computer vision technology. Transmittance images of DY and single-yolked (SY) duck eggs were acquired by a CCD camera to identify them according to their shape features. A Fisher's linear discriminant (FLD) model equipped with a set of normalized Fourier descriptors (NFDs) extracted from the acquired images and a convolutional neural network (CNN) model using primary preprocessed images were built to recognize duck egg yolk types. The classification accuracies of the FLD model for SY and DY eggs were 100% and 93.2% respectively, while the classification accuracies of the CNN model for SY and DY eggs were 98% and 98.8% respectively. The CNN-based algorithm took about 0.12 s to recognize one sample image, slightly faster than the FLD-based algorithm (about 0.20 s). Finally, this work compared the two classification methods and provided the better method for DY egg identification.
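The FLD model's inputs are normalized Fourier descriptors of the egg's contour. A minimal sketch of computing NFDs from a closed contour via a naive DFT; the normalization used here (drop the DC term for translation invariance, scale by the first harmonic, keep magnitudes for rotation/start-point invariance) is one common variant and may differ in detail from the paper's.

```python
import cmath
import math

def normalized_fourier_descriptors(contour, n_desc=8):
    """Translation-, scale- and rotation-insensitive Fourier descriptors
    of a closed contour given as (x, y) points."""
    pts = [complex(x, y) for x, y in contour]
    n = len(pts)
    coeffs = [sum(p * cmath.exp(-2j * cmath.pi * k * m / n)
                  for m, p in enumerate(pts)) / n
              for k in range(n)]
    scale = abs(coeffs[1])  # first harmonic sets the scale
    return [abs(coeffs[k]) / scale for k in range(1, n_desc + 1)]

# Sanity check on an invented contour: a circle has all its energy in the
# first harmonic, so its descriptors are [1, 0, 0, ...].
circle = [(math.cos(2 * math.pi * i / 64), math.sin(2 * math.pi * i / 64))
          for i in range(64)]
d = normalized_fourier_descriptors(circle, 4)
print([round(x, 3) for x in d])  # [1.0, 0.0, 0.0, 0.0]
```

A double-yolked egg's more elongated, waisted silhouette spreads energy into higher harmonics, which is what gives a linear discriminant on these descriptors something to separate.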

  8. Identification of double-yolked duck egg using computer vision

    PubMed Central

    Ma, Long; Sun, Ke; Tu, Kang; Pan, Leiqing; Zhang, Wei

    2017-01-01

    The double-yolked (DY) egg is quite popular in some Asian countries because it is considered a sign of good luck; however, the double yolk is one of the reasons why these eggs fail to hatch. The use of automatic methods for identifying DY eggs can increase efficiency in the poultry industry by decreasing egg loss during incubation or improving sale proceeds. In this study, two methods for DY duck egg identification were developed using computer vision technology. Transmittance images of DY and single-yolked (SY) duck eggs were acquired by a CCD camera to identify them according to their shape features. A Fisher’s linear discriminant (FLD) model equipped with a set of normalized Fourier descriptors (NFDs) extracted from the acquired images and a convolutional neural network (CNN) model using primary preprocessed images were built to recognize duck egg yolk types. The classification accuracies of the FLD model for SY and DY eggs were 100% and 93.2% respectively, while the classification accuracies of the CNN model for SY and DY eggs were 98% and 98.8% respectively. The CNN-based algorithm took about 0.12 s to recognize one sample image, slightly faster than the FLD-based algorithm (about 0.20 s). Finally, this work compared the two classification methods and provided the better method for DY egg identification. PMID:29267387

  9. Computer aided identification of a Hevein-like antimicrobial peptide of bell pepper leaves for biotechnological use.

    PubMed

    Games, Patrícia Dias; daSilva, Elói Quintas Gonçalves; Barbosa, Meire de Oliveira; Almeida-Souza, Hebréia Oliveira; Fontes, Patrícia Pereira; deMagalhães, Marcos Jorge; Pereira, Paulo Roberto Gomes; Prates, Maura Vianna; Franco, Gloria Regina; Faria-Campos, Alessandra; Campos, Sérgio Vale Aguiar; Baracat-Pereira, Maria Cristina

    2016-12-15

    Antimicrobial peptides from plants present mechanisms of action that are different from those of conventional defense agents. They are under-explored but have potential as commercial antimicrobials. Bell pepper leaves ('Magali R') are discarded after harvesting the fruit and are sources of bioactive peptides. This work reports the isolation, by peptidomics tools, and the identification and partial characterization, by computational tools, of an antimicrobial peptide from bell pepper leaves, and evidences the usefulness of sequence records and in silico analysis for the study of plant peptides aimed at biotechnological uses. Aqueous extracts from leaves were enriched in peptides by salt fractionation and ultrafiltration. An antimicrobial peptide was isolated by tandem chromatographic procedures. Mass spectrometry, automated peptide sequencing and bioinformatics tools were used alternately for identification and partial characterization of the Hevein-like peptide, named HEV-CANN. The computational tools that assisted the identification of the peptide included BlastP, PSI-Blast, ClustalOmega, PeptideCutter, and ProtParam; conventional protein databases (DBs) such as Mascot, Protein-DB, GenBank-DB, RefSeq, Swiss-Prot, and UniProtKB; peptide-specific DBs such as Amper, APD2, CAMP, LAMPs, and PhytAMP; other tools included in ExPASy for Proteomics; The Bioactive Peptide Databases; and The Pepper Genome Database. The HEV-CANN sequence presented 40 amino acid residues, 4258.8 Da, a theoretical pI-value of 8.78, and four disulfide bonds. It was stable, and it inhibited the growth of phytopathogenic bacteria and a fungus. HEV-CANN presented a chitin-binding domain in its sequence. There was high identity and a positive alignment of the HEV-CANN sequence in various databases, but not complete identity, suggesting that HEV-CANN may be produced by ribosomal synthesis, which is in accordance with its constitutive nature. Computational tools for proteomics and databases are

  10. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We are interested in cyber security, information technology (IT), and SCADA control professionals reviewing the hardware level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline provides for background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permissions bits to each cache memory bank and memory address, the OSFA provides hardware level computer security.

  11. A hyperspectral X-ray computed tomography system for enhanced material identification

    NASA Astrophysics Data System (ADS)

    Wu, Xiaomei; Wang, Qian; Ma, Jinlei; Zhang, Wei; Li, Po; Fang, Zheng

    2017-08-01

    X-ray computed tomography (CT) can distinguish different materials according to their absorption characteristics. The hyperspectral X-ray CT (HXCT) system proposed in the present work reconstructs each voxel according to its X-ray absorption spectral characteristics. In contrast to a dual-energy or multi-energy CT system, HXCT employs cadmium telluride (CdTe) as the X-ray detector, which provides higher spectral resolution and separates spectral lines according to its photon-counting working principle. In this paper, a specimen containing ten different polymer materials, randomly arranged, was adopted for material identification by HXCT. The filtered back-projection algorithm was applied for image and spectral reconstruction. The first step was to sort the individual material components of the specimen according to their cross-sectional image intensity. The second step was to classify materials with similar intensities according to their reconstructed spectral characteristics. The results demonstrated the feasibility of the proposed material identification process and indicated that the proposed HXCT system has good prospects for a wide range of biomedical and industrial nondestructive testing applications.

  12. Computing NLTE Opacities -- Node Level Parallel Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holladay, Daniel

    Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities inline with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities. Study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.

  13. Genus- and species-level identification of dermatophyte fungi by surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Witkowska, Evelin; Jagielski, Tomasz; Kamińska, Agnieszka

    2018-03-01

    This paper demonstrates that surface-enhanced Raman spectroscopy (SERS) coupled with principal component analysis (PCA) can serve as a fast and reliable technique for detection and identification of dermatophyte fungi at both the genus and species level. Dermatophyte infections are the most common mycotic diseases worldwide, affecting a quarter of the human population. Currently, there is no optimal method for detection and identification of fungal diseases, as each has certain limitations. Here, for the first time, we have achieved highly accurate differentiation of dermatophytes representing three major genera, i.e. Trichophyton, Microsporum, and Epidermophyton. The first two principal components (PCs), PC-1 and PC-2, together account for 97% of the total variance. Additionally, species-level identification within the Trichophyton genus has been performed. PC-1 and PC-2, which are the most diagnostically significant, explain 98% of the variance in the data obtained from spectra of Trichophyton rubrum, Trichophyton mentagrophytes, Trichophyton interdigitale and Trichophyton tonsurans. This study offers a new diagnostic approach for the identification of dermatophytes. Being fast, reliable and cost-effective, it has the potential to be incorporated into clinical practice to improve diagnostics of medically important fungi.
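The PCA step that drives this separation can be sketched by extracting the first principal component via power iteration on the sample covariance matrix. The two-feature data below are invented stand-ins; real inputs would be high-dimensional SERS spectra.

```python
def first_pc(data, iters=100):
    """First principal component via power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]  # center
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # repeated multiplication converges to top eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two invented 'spectral' features varying together along the direction (1, 2):
data = [[i + 0.1 * (i % 2), 2.0 * i] for i in range(10)]
v = first_pc(data)
print(round(abs(v[1] / v[0]), 1))  # ~2.0: PC-1 aligns with the joint axis of variation
```

Projecting each spectrum onto the first two such components gives the PC-1/PC-2 coordinates in which the genera form separable clusters.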

  14. Seeing meaning in action: a bidirectional link between visual perspective and action identification level.

    PubMed

    Libby, Lisa K; Shaeffer, Eric M; Eibach, Richard P

    2009-11-01

    Actions do not have inherent meaning but rather can be interpreted in many ways. The interpretation a person adopts has important effects on a range of higher order cognitive processes. One dimension on which interpretations can vary is the extent to which actions are identified abstractly--in relation to broader goals, personal characteristics, or consequences--versus concretely, in terms of component processes. The present research investigated how visual perspective (own 1st-person vs. observer's 3rd-person) in action imagery is related to action identification level. A series of experiments measured and manipulated visual perspective in mental and photographic images to test the connection with action identification level. Results revealed a bidirectional causal relationship linking 3rd-person images and abstract action identifications. These findings highlight the functional role of visual imagery and have implications for understanding how perspective is involved in action perception at the social, cognitive, and neural levels. Copyright 2009 APA

  15. Comparison of conventional ultrasonography and ultrasonography-computed tomography fusion imaging for target identification using digital/real hybrid phantoms: a preliminary study.

    PubMed

    Soyama, Takeshi; Sakuhara, Yusuke; Kudo, Kohsuke; Abo, Daisuke; Wang, Jeff; Ito, Yoichi M; Hasegawa, Yu; Shirato, Hiroki

    2016-07-01

    This preliminary study compared ultrasonography-computed tomography (US-CT) fusion imaging and conventional ultrasonography (US) for accuracy and time required for target identification using a combination of real phantoms and sets of digitally modified computed tomography (CT) images (digital/real hybrid phantoms). In this randomized prospective study, 27 spheres visible on B-mode US were placed at depths of 3.5, 8.5, and 13.5 cm (nine spheres each). All 27 spheres were digitally erased from the CT images, and a radiopaque sphere was digitally placed at each of the 27 locations to create 27 different sets of CT images. Twenty clinicians were instructed to identify the sphere target using US alone and fusion imaging. The accuracy of target identification of the two methods was compared using McNemar's test. The mean time required for target identification and error distances were compared using paired t tests. At all three depths, target identification was more accurate and the mean time required for target identification was significantly less with US-CT fusion imaging than with US alone, and the mean error distances were also shorter with US-CT fusion imaging. US-CT fusion imaging was superior to US alone in terms of accurate and rapid identification of target lesions.
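McNemar's test, used above to compare the paired identification outcomes of the two methods, depends only on the two discordant-pair counts. A sketch with invented counts, using the common continuity correction:

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square statistic (with continuity correction) from the
    discordant counts: b = correct with fusion only, c = correct with US only."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical paired outcomes over a set of targets:
chi2 = mcnemar_chi2(b=14, c=3)
print(round(chi2, 2))  # 5.88, versus the chi-square(1 df) critical value 3.84 at alpha=0.05
```

A statistic above 3.84 would indicate a significant accuracy difference between the paired methods at the 5% level; the concordant pairs (both correct or both wrong) never enter the calculation.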

  16. A computational model to protect patient data from location-based re-identification.

    PubMed

    Malin, Bradley

    2007-07-01

    Health care organizations must preserve a patient's anonymity when disclosing personal data. Traditionally, patient identity has been protected by stripping identifiers from sensitive data such as DNA. However, simple automated methods can re-identify patient data using public information. In this paper, we present a solution to prevent a threat to patient anonymity that arises when multiple health care organizations disclose data. In this setting, a patient's location visit pattern, or "trail", can re-identify seemingly anonymous DNA to patient identity. This threat exists because health care organizations (1) cannot prevent the disclosure of certain types of patient information and (2) do not know how to systematically avoid trail re-identification. In this paper, we develop and evaluate computational methods that health care organizations can apply to disclose patient-specific DNA records that are impregnable to trail re-identification. To prevent trail re-identification, we introduce a formal model called k-unlinkability, which enables health care administrators to specify different degrees of patient anonymity. Specifically, k-unlinkability is satisfied when the trail of each DNA record is linkable to no less than k identified records. We present several algorithms that enable health care organizations to coordinate their data disclosure, so that they can determine which DNA records can be shared without violating k-unlinkability. We evaluate the algorithms with the trails of patient populations derived from publicly available hospital discharge databases. Algorithm efficacy is evaluated using metrics based on real world applications, including the number of suppressed records and the number of organizations that disclose records. Our experiments indicate that it is unnecessary to suppress all patient records that initially violate k-unlinkability. Rather, only portions of the trails need to be suppressed. For example, if each hospital discloses 100% of its
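The k-unlinkability check can be sketched under a simplified trail model: a DNA record's trail is the set of hospitals that disclosed it, and the record is considered linkable to any identity whose visit set contains that trail. The names, hospitals, and subset-based linkage rule below are invented simplifications of the paper's model.

```python
def linkable_identities(dna_trail, identified_trails):
    """Identities whose visit trail could have produced this DNA record's trail."""
    return {name for name, visits in identified_trails.items()
            if dna_trail <= visits}

def satisfies_k_unlinkability(dna_trails, identified_trails, k):
    """Every DNA trail must be linkable to at least k identified records."""
    return all(len(linkable_identities(t, identified_trails)) >= k
               for t in dna_trails)

# Hypothetical disclosures: the hospitals where each identified patient appeared.
identified = {"alice": {"H1", "H2", "H3"}, "bob": {"H1", "H2"}, "carol": {"H2", "H3"}}
dna = [{"H1", "H2"}, {"H2"}]
print(satisfies_k_unlinkability(dna, identified, k=2))  # True: each trail links to >= 2 people
```

Here {"H1", "H2"} is linkable to alice and bob, so 2-unlinkability holds, but 3-unlinkability fails; the coordination algorithms in the paper decide which disclosures to suppress to restore the desired k.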

  17. Evaluation of the Microbial Identification System for identification of clinically isolated yeasts.

    PubMed Central

    Crist, A E; Johnson, L M; Burke, P J

    1996-01-01

    The Microbial Identification System (MIS; Microbial ID, Inc., Newark, Del.) was evaluated for the identification of 550 clinically isolated yeasts. The organisms evaluated were fresh clinical isolates identified by methods routinely used in our laboratory (API 20C and conventional methods) and included Candida albicans (n = 294), C. glabrata (n = 145), C. tropicalis (n = 58), C. parapsilosis (n = 33), and other yeasts (n = 20). In preparation for fatty acid analysis, yeasts were inoculated onto Sabouraud dextrose agar and incubated at 28 degrees C for 24 h. Yeasts were harvested, saponified, derivatized, and extracted, and fatty acid analysis was performed according to the manufacturer's instructions. Fatty acid profiles were analyzed, and computer identifications were made with the Yeast Clinical Library (database version 3.8). Of the 550 isolates tested, 374 (68.0%) were correctly identified to the species level, with 87 (15.8%) being incorrectly identified and 89 (16.2%) giving no identification. Repeat testing of isolates giving no identification resulted in an additional 18 isolates being correctly identified. This gave the MIS an overall identification rate of 71.3%. The most frequently misidentified yeast was C. glabrata, which was identified as Saccharomyces cerevisiae 32.4% of the time. On the basis of these results, the MIS, with its current database, does not appear suitable for the routine identification of clinically important yeasts. PMID:8880489

  18. Genus- and species-level identification of dermatophyte fungi by surface-enhanced Raman spectroscopy.

    PubMed

    Witkowska, Evelin; Jagielski, Tomasz; Kamińska, Agnieszka

    2018-03-05

    This paper demonstrates that surface-enhanced Raman spectroscopy (SERS) coupled with principal component analysis (PCA) can serve as a fast and reliable technique for detection and identification of dermatophyte fungi at both the genus and species level. Dermatophyte infections are the most common mycotic diseases worldwide, affecting a quarter of the human population. Currently, there is no optimal method for detection and identification of fungal diseases, as each has certain limitations. Here, for the first time, we have achieved, with high accuracy, differentiation of dermatophytes representing three major genera, i.e. Trichophyton, Microsporum, and Epidermophyton. The first two principal components (PCs), PC-1 and PC-2, together account for 97% of the total variance. Additionally, species-level identification within the Trichophyton genus has been performed. PC-1 and PC-2, which are the most diagnostically significant, explain 98% of the variance in the data obtained from spectra of Trichophyton rubrum, Trichophyton mentagrophytes, Trichophyton interdigitale and Trichophyton tonsurans. This study offers a new diagnostic approach for the identification of dermatophytes. Being fast, reliable and cost-effective, it has the potential to be incorporated into clinical practice to improve diagnostics of medically important fungi. Copyright © 2017 Elsevier B.V. All rights reserved.
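    The PCA step used in this kind of spectral classification can be sketched numerically. The toy example below uses invented six-bin "spectra" (not SERS data): PC-1 is computed by power iteration on the covariance matrix, and the two classes separate cleanly along it.

```python
import math
import random

random.seed(0)

# Invented stand-ins for spectra of two genera: a distinct base spectrum
# per class plus measurement noise (20 samples each, 6 wavenumber bins).
base_a = [1.0, 0.2, 0.8, 0.1, 0.5, 0.9]
base_b = [0.2, 1.0, 0.1, 0.9, 0.4, 0.2]
spectra = ([[v + random.gauss(0, 0.05) for v in base_a] for _ in range(20)]
           + [[v + random.gauss(0, 0.05) for v in base_b] for _ in range(20)])

# Mean-center, form the covariance matrix, extract PC-1 by power iteration.
n, d = len(spectra), len(base_a)
mean = [sum(s[j] for s in spectra) / n for j in range(d)]
X = [[s[j] - mean[j] for j in range(d)] for s in spectra]
cov = [[sum(x[i] * x[j] for x in X) / n for j in range(d)] for i in range(d)]

pc1 = [1.0] * d
for _ in range(100):
    nxt = [sum(cov[i][j] * pc1[j] for j in range(d)) for i in range(d)]
    norm = math.sqrt(sum(v * v for v in nxt))
    pc1 = [v / norm for v in nxt]

# Projections onto PC-1: the two classes fall on opposite sides.
scores = [sum(x[j] * pc1[j] for j in range(d)) for x in X]
```

    Because between-class variance dominates the noise here, PC-1 alone separates the classes; in the paper, PC-1 and PC-2 play this role for real SERS spectra.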

  19. Optical correlation identification technology applied in underwater laser imaging target identification

    NASA Astrophysics Data System (ADS)

    Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long

    2012-01-01

    Underwater laser imaging is an effective method for detecting short-distance underwater targets and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, automatic underwater target identification has attracted increasing attention and remains a difficult research problem in underwater optical imaging information processing. Today, automatic underwater target identification based on optical imaging is usually realized in software on digital hardware, which makes algorithm implementation and control very flexible. However, optical imaging data are 2D or even 3D images, so the amount of information to process is large; purely digital processing therefore requires long identification times and struggles to meet real-time demands. Parallel computer processing can improve identification speed, but at the cost of increased complexity, size, and power consumption. This paper applies optical correlation identification to automatic underwater target identification. Optical correlation exploits the Fourier-transform property of a Fourier lens, which transforms image information on the nanosecond scale, and optical spatial interconnection computation, which is parallel, fast, high-capacity, and high-resolution; combined with the computational and control flexibility of digital circuits, this yields a hybrid optoelectronic identification scheme. We derive the theoretical formulation of correlation identification, analyze its principle, and write a MATLAB simulation program. 
We use single-frame images obtained by underwater range-gated laser imaging for identification, and through identifying and locating targets at different positions, we can improve
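    The underlying correlation principle is easy to emulate digitally. The sketch below runs a direct cross-correlation on a 1-D toy signal (the optical correlator performs the equivalent Fourier-plane multiplication at nanosecond speed); the correlation peak localizes the embedded target. Signal values and the target signature are invented.

```python
# Reference target signature and a noisy scene with the target embedded
# at offset 12 (all values illustrative).
template = [0.0, 1.0, 2.0, 1.0, 0.0]
scene = [0.1] * 30
for k, v in enumerate(template):
    scene[12 + k] += v

def cross_correlation(scene, template):
    # Sliding-dot-product correlation; an optical correlator computes the
    # same quantity via FFT(scene) * conj(FFT(template)) in the Fourier plane.
    return [sum(scene[s + j] * template[j] for j in range(len(template)))
            for s in range(len(scene) - len(template) + 1)]

corr = cross_correlation(scene, template)
target_offset = max(range(len(corr)), key=corr.__getitem__)
```

    The peak of `corr` recovers the target position, which is how correlation both identifies and locates targets in a range-gated frame.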

  20. Computational Prediction of Electron Ionization Mass Spectra to Assist in GC/MS Compound Identification.

    PubMed

    Allen, Felicity; Pon, Allison; Greiner, Russ; Wishart, David

    2016-08-02

    We describe a tool, competitive fragmentation modeling for electron ionization (CFM-EI) that, given a chemical structure (e.g., in SMILES or InChI format), computationally predicts an electron ionization mass spectrum (EI-MS) (i.e., the type of mass spectrum commonly generated by gas chromatography mass spectrometry). The predicted spectra produced by this tool can be used for putative compound identification, complementing measured spectra in reference databases by expanding the range of compounds able to be considered when availability of measured spectra is limited. The tool extends CFM-ESI, a recently developed method for computational prediction of electrospray tandem mass spectra (ESI-MS/MS), but unlike CFM-ESI, CFM-EI can handle odd-electron ions and isotopes and incorporates an artificial neural network. Tests on EI-MS data from the NIST database demonstrate that CFM-EI is able to model fragmentation likelihoods in low-resolution EI-MS data, producing predicted spectra whose dot product scores are significantly better than full enumeration "bar-code" spectra. CFM-EI also outperformed previously reported results for MetFrag, MOLGEN-MS, and Mass Frontier on one compound identification task. It also outperformed MetFrag in a range of other compound identification tasks involving a much larger data set, containing both derivatized and nonderivatized compounds. While replicate EI-MS measurements of chemical standards are still a more accurate point of comparison, CFM-EI's predictions provide a much-needed alternative when no reference standard is available for measurement. CFM-EI is available at https://sourceforge.net/projects/cfm-id/ for download and http://cfmid.wishartlab.com as a web service.
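    The dot product comparison used to score predicted against measured spectra can be sketched as follows. The weighting exponents below are common library-search defaults, not necessarily the ones used in the CFM-EI evaluation.

```python
import math

def dot_product_score(spec_a, spec_b, mz_power=1.0, intensity_power=0.5):
    """Weighted dot product between two stick spectra ({mz: intensity})."""
    def weights(spec):
        return {mz: (mz ** mz_power) * (i ** intensity_power)
                for mz, i in spec.items()}
    wa, wb = weights(spec_a), weights(spec_b)
    shared = set(wa) & set(wb)
    num = sum(wa[mz] * wb[mz] for mz in shared)
    den = (math.sqrt(sum(v * v for v in wa.values()))
           * math.sqrt(sum(v * v for v in wb.values())))
    return num / den if den else 0.0

# Invented stick spectra: a "measured" EI-MS spectrum and a prediction
# that matches the major peaks but adds one spurious low-intensity peak.
measured = {41: 30.0, 43: 100.0, 58: 60.0}
predicted = {41: 25.0, 43: 90.0, 58: 70.0, 71: 5.0}
identical_score = dot_product_score(measured, measured)
score = dot_product_score(measured, predicted)
```

    A self-match scores 1.0, and a prediction that reproduces the dominant fragments scores close to 1.0 despite small intensity errors, which is why dot product scores are a reasonable metric for predicted-spectrum quality.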

  1. Levels of Conformity to Islamic Values and the Process of Identification.

    ERIC Educational Resources Information Center

    Nassir, Balkis

    This study was conducted to measure the conformity levels and the identification process among university women students in an Islamic culture. Identity/conformity tests and costume identity tests were administered to 129 undergraduate female students at King Abdulaziz University in Saudi Arabia. The Photographic Costume Identity Test and the…

  2. Computer-assisted handwriting style identification system for questioned document examination

    NASA Astrophysics Data System (ADS)

    Cha, Sung-Hyuk; Yoon, Sungsoo; Tappert, Charles C.; Lee, Yillbyung

    2005-03-01

    Handwriting originates from a particular copybook style such as Palmer or Zaner-Bloser that one learns in childhood. Since questioned document examination plays an important investigative and forensic role in many types of crime, it is important to develop a system that helps objectively identify a questioned document's handwriting style. Here, we propose a computer vision system that can assist a document examiner in the identification of a writer's handwriting style and therefore the origin or nationality of an unknown writer of a questioned document. We collected 33 Roman alphabet copybook styles from 18 countries. Each character in a questioned document is segmented and matched against all of the 33 handwriting copybook styles. The more characters present in the questioned document, the higher the accuracy observed.
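    The per-character matching and document-level decision can be sketched as a nearest-template match followed by a majority vote, which also shows why accuracy rises with the number of characters. Style names, feature vectors, and distances below are all invented for illustration.

```python
from collections import Counter

def nearest_style(char_features, style_templates):
    # Match one segmented character to its closest copybook style
    # (squared Euclidean distance in an invented 2-D feature space).
    return min(style_templates,
               key=lambda s: sum((a - b) ** 2
                                 for a, b in zip(char_features,
                                                 style_templates[s])))

def identify_document_style(characters, style_templates):
    # Document-level decision: majority vote over per-character matches,
    # so isolated noisy characters get outvoted as the document grows.
    votes = Counter(nearest_style(c, style_templates) for c in characters)
    return votes.most_common(1)[0][0]

styles = {"Palmer": (0.2, 0.8), "Zaner-Bloser": (0.7, 0.3)}
# Five characters from a questioned document; one noisy outlier.
chars = [(0.25, 0.75), (0.15, 0.85), (0.3, 0.7), (0.65, 0.35), (0.2, 0.9)]
```

    Here four of five characters match Palmer, so the vote identifies the document as Palmer despite the outlier.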

  3. Computer-aided drug discovery.

    PubMed

    Bajorath, Jürgen

    2015-01-01

    Computational approaches are an integral part of interdisciplinary drug discovery research. Understanding the science behind computational tools, their opportunities, and limitations is essential to make a true impact on drug discovery at different levels. If applied in a scientifically meaningful way, computational methods improve the ability to identify and evaluate potential drug molecules, but there remain weaknesses in the methods that preclude naïve applications. Herein, current trends in computer-aided drug discovery are reviewed, and selected computational areas are discussed. Approaches are highlighted that aid in the identification and optimization of new drug candidates. Emphasis is put on the presentation and discussion of computational concepts and methods, rather than case studies or application examples. As such, this contribution aims to provide an overview of the current methodological spectrum of computational drug discovery for a broad audience.

  4. On parameters identification of computational models of vibrations during quiet standing of humans

    NASA Astrophysics Data System (ADS)

    Barauskas, R.; Krušinskienė, R.

    2007-12-01

    Vibration of the center of pressure (COP) of the human body on the base of support during quiet standing is a widely used clinical measurement, which provides useful information about the physical and health condition of an individual. In this work, vibrations of the COP of a human body in the forward-backward direction during quiet standing are generated using a controlled inverted pendulum (CIP) model with a single degree of freedom (dof), supplied with a proportional, integral and differential (PID) controller, which represents the behavior of the central nervous system of a human body, and excited by a cumulative disturbance vibration generated within the body due to breathing or any other physical condition. The identification of the model and disturbance parameters is an important stage in creating a close-to-reality computational model able to evaluate features of the disturbance. The aim of this study is to present a CIP model parameter identification approach based on the information captured by the time series of the COP signal. The identification procedure is based on an error function minimization. The error function is formulated in terms of the time laws of computed and experimentally measured COP vibrations. As an alternative, the error function is formulated in terms of the stabilogram diffusion function (SDF). The minimization of the error functions is carried out by employing methods based on sensitivity functions of the error with respect to model and excitation parameters. The sensitivity functions are obtained by using variational techniques. The inverse dynamic problem approach has been employed in order to establish the properties of the disturbance time laws ensuring satisfactory coincidence of measured and computed COP vibration laws. The main difficulty of the investigated problem is encountered during the model validation stage. Generally, neither the PID controller parameter set nor the disturbance time law are known in advance. In this work, an error function
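    The identification loop (simulate the PID-controlled CIP, compare with a measured COP record, evaluate an error function over candidate parameters) can be sketched as follows. The linearized dynamics, all constants, and the sinusoidal "breathing" disturbance are illustrative, not the paper's values.

```python
import math

def simulate_cop(kp, ki, kd, steps=2000, dt=0.005):
    """Linearized single-dof inverted pendulum under PID control.

    theta'' = (g/l)*theta - u/(m*l**2) + disturbance; the sway excursion
    theta*l serves as a stand-in for the COP signal. Constants are toy.
    """
    g, l, m = 9.81, 1.0, 70.0
    theta, omega, integ = 0.01, 0.0, 0.0
    cop = []
    for t in range(steps):
        disturbance = 0.05 * math.sin(2 * math.pi * 0.3 * t * dt)
        u = kp * theta + ki * integ + kd * omega   # PID control torque
        alpha = (g / l) * theta - u / (m * l ** 2) + disturbance
        omega += alpha * dt                        # explicit Euler step
        theta += omega * dt
        integ += theta * dt
        cop.append(theta * l)
    return cop

def error_function(params, measured):
    # Mean squared mismatch between model and "measured" COP time laws.
    model = simulate_cop(*params)
    return sum((a - b) ** 2 for a, b in zip(model, measured)) / len(measured)

true_params = (2500.0, 200.0, 600.0)
measured = simulate_cop(*true_params)   # synthetic stand-in for experiment
```

    Minimizing `error_function` over `(kp, ki, kd)` (the paper does this with sensitivity functions rather than brute force) recovers the controller parameters: the error vanishes at the true parameters and grows away from them.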

  5. Computer ethics and tertiary level education in Hong Kong

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, E.Y.W.; Davison, R.M.; Wade, P.W.

    1994-12-31

    This paper seeks to highlight some ethical issues relating to the increasing proliferation of Information Technology into our everyday lives. The authors explain their understanding of computer ethics, and give some reasons why the study of computer ethics is becoming increasingly pertinent. The paper looks at some of the problems that arise in attempting to develop appropriate ethical concepts in a constantly changing environment, and explores some of the ethical dilemmas arising from the increasing use of computers. Some initial research undertaken to explore the ideas and understanding of tertiary level students in Hong Kong on a number of ethical issues of interest is described, and our findings discussed. We hope that presenting this paper and eliciting subsequent discussion will enable us to draw up more comprehensive guidelines for the teaching of computer related ethics to tertiary level students, as well as reveal some directions for future research.

  6. Identification of pumping influences in long-term water level fluctuations.

    PubMed

    Harp, Dylan R; Vesselinov, Velimir V

    2011-01-01

    Identification of the pumping influences at monitoring wells caused by spatially and temporally variable water supply pumping can be a challenging, yet important, hydrogeological task. The information that can be obtained can be critical for conceptualization of the hydrogeological conditions and for indications of the zone of influence of the individual pumping wells. However, the pumping influences are often intermittent and small in magnitude, with variable production rates from multiple pumping wells. While these difficulties may support an inclination to abandon the existing dataset and conduct a dedicated cross-hole pumping test, that option can be challenging and expensive to coordinate and execute. This paper presents a method that applies a simple analytical model to a long-term water level record within an inverse modeling framework. The methodology allows the identification of the pumping wells influencing the water level fluctuations. Thus, the analysis provides an efficient and cost-effective alternative to designed and coordinated cross-hole pumping tests. We apply this method to a dataset from the Los Alamos National Laboratory site. Our analysis also provides (1) an evaluation of the information content of the transient water level data; (2) indications of potential structures of the aquifer heterogeneity inhibiting or promoting pressure propagation; and (3) guidance for the development of more complicated models requiring detailed specification of the aquifer heterogeneity. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
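    A minimal sketch of the analytical-model inversion: synthetic drawdowns generated from the Theis solution are fit by grid search over transmissivity. The well function is evaluated with its convergent series (valid for small u), and all parameter values and units (m, days) are illustrative; the paper's inverse analysis is of course more elaborate than a 1-D grid search.

```python
import math

def well_function(u, terms=30):
    # Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^{n+1} u^n/(n*n!)
    total = -0.5772156649 - math.log(u)
    num = 1.0
    for n in range(1, terms + 1):
        num *= u / n                         # u^n / n!
        total += ((-1) ** (n + 1)) * num / n
    return total

def drawdown(Q, T, S, r, t):
    """Theis drawdown at distance r and time t for pumping rate Q."""
    u = (r * r * S) / (4.0 * T * t)
    return (Q / (4.0 * math.pi * T)) * well_function(u)

# Synthetic "long-term record" from one pumping well with known T;
# recover T by minimizing a least-squares misfit over a grid.
T_true, S, r, Q = 500.0, 1e-4, 200.0, 1000.0
times = list(range(1, 50))
observed = [drawdown(Q, T_true, S, r, t) for t in times]

def misfit(T):
    return sum((drawdown(Q, T, S, r, t) - o) ** 2
               for t, o in zip(times, observed))

best_T = min((misfit(T), T) for T in range(100, 1001, 50))[1]
```

    With noise-free synthetic data the grid search recovers the true transmissivity exactly; with a real record, the same misfit would be minimized jointly over the candidate pumping wells and their rates.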

  7. Ontology-Based High-Level Context Inference for Human Behavior Identification

    PubMed Central

    Villalonga, Claudia; Razzaq, Muhammad Asif; Khan, Wajahat Ali; Pomares, Hector; Rojas, Ignacio; Lee, Sungyoung; Banos, Oresti

    2016-01-01

    Recent years have witnessed a huge progress in the automatic identification of individual primitives of human behavior, such as activities or locations. However, the complex nature of human behavior demands more abstract contextual information for its analysis. This work presents an ontology-based method that combines low-level primitives of behavior, namely activity, locations and emotions, unprecedented to date, to intelligently derive more meaningful high-level context information. The paper contributes with a new open ontology describing both low-level and high-level context information, as well as their relationships. Furthermore, a framework building on the developed ontology and reasoning models is presented and evaluated. The proposed method proves to be robust while identifying high-level contexts even in the event of erroneously-detected low-level contexts. Despite reasonable inference times being obtained for a relevant set of users and instances, additional work is required to scale to long-term scenarios with a large number of users. PMID:27690050
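    The inference step can be caricatured as rules that map low-level primitives (activity, location, emotion) to high-level contexts, with a fallback that ignores one channel when no rule fires, mimicking robustness to an erroneously detected primitive. The vocabulary and rules below are invented; the paper encodes this knowledge in an OWL ontology and uses a reasoner instead of a dictionary lookup.

```python
# Invented rule base: (activity, location, emotion) -> high-level context.
RULES = {
    ("running", "park", "happy"): "exercising",
    ("sitting", "office", "neutral"): "working",
    ("sitting", "home", "sad"): "resting",
}

def infer_context(activity, location, emotion):
    ctx = RULES.get((activity, location, emotion))
    if ctx is None:
        # Fall back to ignoring the (noisier) emotion channel, so an
        # erroneously detected emotion does not block the inference.
        for (a, l, _e), c in RULES.items():
            if (a, l) == (activity, location):
                return c
    return ctx
```

    For example, a misdetected emotion ("angry" at the office) still resolves to "working" through the fallback, which is the kind of robustness the evaluation measures.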

  8. Development of a computer-assisted forensic radiographic identification method using the lateral cervical and lumbar spine.

    PubMed

    Derrick, Sharon M; Raxter, Michelle H; Hipp, John A; Goel, Priya; Chan, Elaine F; Love, Jennifer C; Wiersema, Jason M; Akella, N Shastry

    2015-01-01

    Medical examiners and coroners (ME/C) in the United States hold statutory responsibility to identify deceased individuals who fall under their jurisdiction. The computer-assisted decedent identification (CADI) project was designed to modify software used in diagnosis and treatment of spinal injuries into a mathematically validated tool for ME/C identification of fleshed decedents. CADI software analyzes the shapes of targeted vertebral bodies imaged in an array of standard radiographs and quantifies the likelihood that any two of the radiographs contain matching vertebral bodies. Six validation tests measured the repeatability, reliability, and sensitivity of the method, and the effects of age, sex, and number of radiographs in array composition. CADI returned a 92-100% success rate in identifying the true matching pair of vertebrae within arrays of five to 30 radiographs. Further development of CADI is expected to produce a novel identification method for use in ME/C offices that is reliable, timely, and cost-effective. © 2014 American Academy of Forensic Sciences.

  9. How automated image analysis techniques help scientists in species identification and classification?

    PubMed

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images; incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in identification of species include processing of specimen images, extraction of identifying features, followed by classification into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the number of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques in building such systems for biodiversity studies.

  10. Proficiency Level--A Fuzzy Variable in Computer Learner Corpora

    ERIC Educational Resources Information Center

    Carlsen, Cecilie

    2012-01-01

    This article focuses on the proficiency level of texts in Computer Learner Corpora (CLCs). A claim is made that proficiency levels are often poorly defined in CLC design, and that the methods used for level assignment of corpus texts are not always adequate. Proficiency level can therefore, best be described as a fuzzy variable in CLCs,…

  11. Mental Mechanisms for Topics Identification

    PubMed Central

    2014-01-01

    Topics identification (TI) is the process that consists in determining the main themes present in natural language documents. The current TI modeling paradigm aims at acquiring semantic information from statistic properties of large text datasets. We investigate the mental mechanisms responsible for the identification of topics in a single document given existing knowledge. Our main hypothesis is that topics are the result of accumulated neural activation of loosely organized information stored in long-term memory (LTM). We experimentally tested our hypothesis with a computational model that simulates LTM activation. The model assumes activation decay as an unavoidable phenomenon originating from the bioelectric nature of neural systems. Since decay should negatively affect the quality of topics, the model predicts the presence of short-term memory (STM) to keep the focus of attention on a few words, with the expected outcome of restoring quality to a baseline level. Our experiments measured topics quality of over 300 documents with various decay rates and STM capacity. Our results showed that accumulated activation of loosely organized information was an effective mental computational commodity to identify topics. It was furthermore confirmed that rapid decay is detrimental to topics quality but that limited capacity STM restores quality to a baseline level, even exceeding it slightly. PMID:24744775
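    The core mechanism (accumulated activation of loosely organized LTM nodes, exponential decay, and a small-capacity STM that keeps recent words re-activating their topics) can be sketched directly. The word-topic associations and parameter values below are invented for illustration.

```python
# Invented LTM associations from words to topic nodes.
ASSOC = {
    "neuron": {"neuroscience": 1.0},
    "synapse": {"neuroscience": 1.0},
    "memory": {"neuroscience": 0.6, "computing": 0.4},
    "disk": {"computing": 1.0},
}

def topic_activations(words, decay=0.8, stm_capacity=3):
    activation = {}
    stm = []
    for w in words:
        # Exponential decay of all LTM topic nodes at each step.
        for topic in activation:
            activation[topic] *= decay
        # STM holds only the most recent words (focus of attention) ...
        stm = (stm + [w])[-stm_capacity:]
        # ... and every word still in STM re-activates its topics.
        for w_active in stm:
            for topic, strength in ASSOC.get(w_active, {}).items():
                activation[topic] = activation.get(topic, 0.0) + strength
    return activation

acts = topic_activations(["neuron", "synapse", "memory", "disk"])
top_topic = max(acts, key=acts.get)
```

    Despite decay and the off-topic final word, the accumulated activation still ranks "neuroscience" first, illustrating how limited-capacity STM restores topic quality against rapid decay.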

  12. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    PubMed

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis-computational, algorithmic, and implementation-have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  13. An accurate and computationally efficient algorithm for ground peak identification in large footprint waveform LiDAR data

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei; Mountrakis, Giorgos

    2014-09-01

    Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploring full usage of the waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called the Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over Central NY. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested in five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition, the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage is a result of the adopted multi-scale signal processing procedures that concentrate on local portions of the signal as opposed to the Gaussian decomposition that uses a curve-fitting strategy applied in the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
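    The multi-scale filtering idea can be sketched on a synthetic 1-D waveform: a broad canopy return plus a sharper ground return near the end of the record. Smoothing at several scales, keeping only local maxima (where the discrete second derivative is negative) that persist across scales, and taking the last surviving peak as ground is a simplified stand-in for FICA's filter bank and k-means step; all shapes, scales, and thresholds below are invented.

```python
import math

# Synthetic waveform: canopy return (broad), ground return (sharp, later),
# and low-amplitude ringing noise.
n = 200
waveform = [2.0 * math.exp(-((i - 60) ** 2) / (2 * 12 ** 2))
            + 3.0 * math.exp(-((i - 150) ** 2) / (2 * 4 ** 2))
            + 0.02 * math.sin(7.0 * i)
            for i in range(n)]

def smooth(sig, radius):
    # Moving-average filter at one scale.
    return [sum(sig[max(0, i - radius):i + radius + 1])
            / (min(len(sig), i + radius + 1) - max(0, i - radius))
            for i in range(len(sig))]

def candidate_peaks(sig, thresh=0.2):
    # Local maxima above a noise threshold with negative second derivative.
    return [i for i in range(1, len(sig) - 1)
            if sig[i] > thresh
            and sig[i] > sig[i - 1] and sig[i] >= sig[i + 1]
            and sig[i + 1] - 2 * sig[i] + sig[i - 1] < 0]

# Keep only peaks that persist across scales, then take the last surviving
# peak in time as the ground return.
scales = [2, 4, 8]
surviving = set(candidate_peaks(smooth(waveform, scales[0])))
for r in scales[1:]:
    peaks = candidate_peaks(smooth(waveform, r))
    surviving = {p for p in surviving
                 if any(abs(p - q) <= r for q in peaks)}
ground_bin = max(surviving)
```

    The multi-scale survival test is what suppresses noise-induced false peaks: ringing that produces spurious maxima at one scale has no counterpart at coarser scales and is discarded.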

  14. Logic as Marr's Computational Level: Four Case Studies.

    PubMed

    Baggio, Giosuè; van Lambalgen, Michiel; Hagoort, Peter

    2015-04-01

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition. Copyright © 2014 Cognitive Science Society, Inc.

  15. Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.

    PubMed

    Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie

    2017-01-01

    Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly medium consumption, which can be used not only for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR-based yeast identification were comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential for automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs. We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.

  16. Identification of High-Risk Plaques Destined to Cause Acute Coronary Syndrome Using Coronary Computed Tomographic Angiography and Computational Fluid Dynamics.

    PubMed

    Lee, Joo Myung; Choi, Gilwoo; Koo, Bon-Kwon; Hwang, Doyeon; Park, Jonghanne; Zhang, Jinlong; Kim, Kyung-Jin; Tong, Yaliang; Kim, Hyun Jin; Grady, Leo; Doh, Joon-Hyung; Nam, Chang-Wook; Shin, Eun-Seok; Cho, Young-Seok; Choi, Su-Yeon; Chun, Eun Ju; Choi, Jin-Ho; Nørgaard, Bjarne L; Christiansen, Evald H; Niemen, Koen; Otake, Hiromasa; Penicka, Martin; de Bruyne, Bernard; Kubo, Takashi; Akasaka, Takashi; Narula, Jagat; Douglas, Pamela S; Taylor, Charles A; Kim, Hyo-Soo

    2018-03-14

    We investigated the utility of noninvasive hemodynamic assessment in the identification of high-risk plaques that caused subsequent acute coronary syndrome (ACS). ACS is a critical event that impacts the prognosis of patients with coronary artery disease. However, the role of hemodynamic factors in the development of ACS is not well-known. Seventy-two patients with clearly documented ACS and available coronary computed tomographic angiography (CTA) acquired between 1 month and 2 years before the development of ACS were included. In 66 culprit and 150 nonculprit lesions as a case-control design, the presence of adverse plaque characteristics (APC) was assessed and hemodynamic parameters (fractional flow reserve derived by coronary CTA [FFRCT], change in FFRCT across the lesion [ΔFFRCT], wall shear stress [WSS], and axial plaque stress) were analyzed using computational fluid dynamics. The best cut-off values for FFRCT, ΔFFRCT, WSS, and axial plaque stress were used to define the presence of adverse hemodynamic characteristics (AHC). The incremental discriminant and reclassification abilities for ACS prediction were compared among 3 models (model 1: percent diameter stenosis [%DS] and lesion length; model 2: model 1 + APC; model 3: model 2 + AHC). The culprit lesions showed higher %DS (55.5 ± 15.4% vs. 43.1 ± 15.0%; p < 0.001) and a higher prevalence of APC (80.3% vs. 42.0%; p < 0.001) than nonculprit lesions. Regarding hemodynamic parameters, culprit lesions showed lower FFRCT and higher ΔFFRCT, WSS, and axial plaque stress than nonculprit lesions (all p values <0.01). Among the 3 models, model 3, which included hemodynamic parameters, showed the highest c-index and better discrimination (concordance statistic [c-index] 0.789 vs. 0.747; p = 0.014) and reclassification abilities (category-free net reclassification index 0.287; p = 0.047; relative integrated discrimination improvement 0.368; p < 0.001) than
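    The c-index used to compare the models is the probability that a randomly chosen culprit lesion receives a higher risk score than a randomly chosen nonculprit lesion. A minimal sketch with invented lesion scores, where a "model 3"-like score (adding hemodynamics) improves discrimination over a "model 1"-like score:

```python
from itertools import product

def c_index(scores, outcomes):
    """Concordance statistic over all culprit/nonculprit pairs;
    ties count as half a win."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p, q in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Toy risk scores for 7 lesions (1 = later culprit). Values are invented:
# "model 1" mimics stenosis-only scoring, "model 3" adds hemodynamics.
outcomes = [1, 1, 1, 0, 0, 0, 0]
model1_scores = [0.60, 0.40, 0.55, 0.50, 0.45, 0.30, 0.20]
model3_scores = [0.80, 0.55, 0.70, 0.50, 0.45, 0.30, 0.20]
```

    Here model 1 misranks one culprit lesion below two nonculprit ones (c-index 10/12), while model 3 ranks every culprit above every nonculprit (c-index 1.0), the same qualitative pattern as the 0.747 vs. 0.789 comparison in the study.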

  17. Identification of Nonlinear Micron-Level Mechanics for a Precision Deployable Joint

    NASA Technical Reports Server (NTRS)

    Bullock, S. J.; Peterson, L. D.

    1994-01-01

    The experimental identification of micron-level nonlinear joint mechanics and dynamics for a pin-clevis joint used in a precision, adaptive, deployable space structure is investigated. The force-state mapping method is used to identify the behavior of the joint under a preload. The results of applying a single tension-compression cycle to the joint under a tensile preload are presented. The observed micron-level behavior is highly nonlinear and involves all six rigid body motion degrees-of-freedom of the joint. It also suggests that at micron levels of motion, modelling of the joint mechanics and dynamics must include the interactions between all internal components, such as the pin, bushings, and the joint node.
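    Force-state mapping regresses the measured restoring force on functions of the measured state (displacement and velocity), so a nonlinear joint model can be identified by least squares. The sketch below fits linear stiffness, viscous damping, and a cubic stiffening term to synthetic data in nondimensional units; the coefficients and the cubic basis are invented stand-ins for the pin-clevis joint's actual micron-level behavior.

```python
import math

def least_squares(rows, y):
    # Solve the normal equations (A^T A) beta = A^T y by Gaussian
    # elimination with partial pivoting (small dense system).
    m = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)]
           for i in range(m)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda k: abs(ata[k][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (aty[r] - sum(ata[r][c] * beta[c]
                                for c in range(r + 1, m))) / ata[r][r]
    return beta

# Synthetic force-state data: F = k1*x + c1*v + k3*x^3, with the cubic
# term standing in for micron-level stiffening. Coefficients are invented.
k1, c1, k3 = 4.0, 0.5, 1.5
states = [(math.sin(0.1 * i), math.cos(0.1 * i)) for i in range(200)]
forces = [k1 * x + c1 * v + k3 * x ** 3 for x, v in states]
rows = [(x, v, x ** 3) for x, v in states]
k1_est, c1_est, k3_est = least_squares(rows, forces)
```

    With noise-free data the regression recovers all three coefficients; with experimental data, the residual of a purely linear fit is what reveals the nonlinearity that force-state mapping is designed to expose.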

  18. Enhanced fault-tolerant quantum computing in d-level systems.

    PubMed

    Campbell, Earl T

    2014-12-05

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  19. Computer game as a tool for training the identification of phonemic length.

    PubMed

    Pennala, Riitta; Richardson, Ulla; Ylinen, Sari; Lyytinen, Heikki; Martin, Maisa

    2014-12-01

    Computer-assisted training of Finnish phonemic length was conducted with 7-year-old Russian-speaking second-language learners of Finnish. Phonemic length plays a different role in these two languages. The training included game activities with two- and three-syllable word and pseudo-word minimal pairs with prototypical vowel durations. The lowest accuracy scores were recorded for two-syllable words. Accuracy scores were higher for the minimal pairs with larger rather than smaller differences in duration. Accuracy scores were lower for long duration than for short duration. The ability to identify quantity degree was generalized to stimuli used in the identification test in two of the children. Ideas for improving the game are introduced.

  20. The photon identification loophole in EPRB experiments: computer models with single-wing selection

    NASA Astrophysics Data System (ADS)

    De Raedt, Hans; Michielsen, Kristel; Hess, Karl

    2017-11-01

    Recent Einstein-Podolsky-Rosen-Bohm experiments [M. Giustina et al. Phys. Rev. Lett. 115, 250401 (2015); L. K. Shalm et al. Phys. Rev. Lett. 115, 250402 (2015)] that claim to be loophole free are scrutinized. The combination of a digital computer and discrete-event simulation is used to construct a minimal but faithful model of the most perfected realization of these laboratory experiments. In contrast to prior simulations, all photon selections are strictly made, as they are in the actual experiments, at the local station and no other "post-selection" is involved. The simulation results demonstrate that a manifestly non-quantum model that identifies photons in the same local manner as in these experiments can produce correlations that are in excellent agreement with those of the quantum theoretical description of the corresponding thought experiment, in conflict with Bell's theorem which states that this is impossible. The failure of Bell's theorem is possible because of our recognition of the photon identification loophole. Such identification measurement-procedures are necessarily included in all actual experiments but are not included in the theory of Bell and his followers.

  1. A comparative approach to closed-loop computation.

    PubMed

    Roth, E; Sponberg, S; Cowan, N J

    2014-04-01

    Neural computation is inescapably closed-loop: the nervous system processes sensory signals to shape motor output, and motor output consequently shapes sensory input. Technological advances have enabled neuroscientists to close, open, and alter feedback loops in a wide range of experimental preparations. The experimental capability of manipulating the topology-that is, how information can flow between subsystems-provides new opportunities to understand the mechanisms and computations underlying behavior. These experiments encompass a spectrum of approaches from fully open-loop, restrained preparations to the fully closed-loop character of free behavior. Control theory and system identification provide a clear computational framework for relating these experimental approaches. We describe recent progress and new directions for translating experiments at one level in this spectrum to predictions at another level. Operating across this spectrum can reveal new understanding of how low-level neural mechanisms relate to high-level function during closed-loop behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. The identification of van Hiele level students on the topic of space analytic geometry

    NASA Astrophysics Data System (ADS)

    Yudianto, E.; Sunardi; Sugiarti, T.; Susanto; Suharto; Trapsilasiwi, D.

    2018-03-01

    Geometry topics are still considered difficult by most students. Therefore, this study focused on identifying students' van Hiele levels. The tasks used were developed from questions on the analytic geometry of space. The results for the 78 students who worked on these questions were: 11.54% (nine students) classified at the visual level; 5.13% (four students) at the analysis level; 1.28% (one student) at the informal deduction level; 2.56% (two students) at the deduction level; 2.56% (two students) at the rigor level; and 76.93% (sixty students) classified at the pre-visualization level.

  3. Study on Urban Heat Island Intensity Level Identification Based on an Improved Restricted Boltzmann Machine.

    PubMed

    Zhang, Yang; Jiang, Ping; Zhang, Hongyan; Cheng, Peng

    2018-01-23

    Thermal infrared remote sensing has become one of the main technical methods used for urban heat island research. When applying urban land surface temperature inversion of the thermal infrared band, problems with intensity level division arise because the method is subjective. However, this method is one of the few that performs heat island intensity level identification. This paper builds an intensity level identifier for an urban heat island, using the idea of weakly supervised learning in an improved restricted Boltzmann machine (RBM) model. The identifier automatically initializes the annotation and optimizes the model parameters sequentially until the target identifier is completed. The algorithm needs very little information about the weak labeling of the target training sample and generates an urban heat island intensity spatial distribution map. This study can provide reliable decision-making support for urban ecological planning and effective protection of urban ecological security. The experimental results showed the following: (1) The heat island effect in Wuhan is present and intense. Heat island areas are widely distributed. The largest heat island area is in Wuhan, followed by the sub-green island. The total area encompassed by heat island and strong island levels accounts for 54.16% of the land in Wuhan. (2) The method, based partly on improved RBM identification, meets the research demands of determining the spatial distribution characteristics of the internal heat island effect; its identification accuracy is superior to that of comparable methods.
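    As background for the RBM-based identifier described above, the following is a minimal sketch of a binary restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). It is a toy, pure-Python illustration of the generic RBM update, not the paper's improved model; all sizes, seeds, and data are assumptions.

```python
# Toy binary RBM with a CD-1 weight/bias update (illustrative only).
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.gauss(0, 0.1) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.b_v = [0.0] * n_visible   # visible biases
        self.b_h = [0.0] * n_hidden    # hidden biases

    def hidden_probs(self, v):
        return [sigmoid(self.b_h[j] + sum(v[i] * self.w[i][j]
                for i in range(len(v)))) for j in range(len(self.b_h))]

    def visible_probs(self, h):
        return [sigmoid(self.b_v[i] + sum(h[j] * self.w[i][j]
                for j in range(len(h)))) for i in range(len(self.b_v))]

    def cd1_update(self, v0, lr=0.1):
        h0 = self.hidden_probs(v0)   # positive phase
        v1 = self.visible_probs(h0)  # reconstruction
        h1 = self.hidden_probs(v1)   # negative phase
        for i in range(len(v0)):
            for j in range(len(h0)):
                self.w[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
        for i in range(len(v0)):
            self.b_v[i] += lr * (v0[i] - v1[i])
        for j in range(len(h0)):
            self.b_h[j] += lr * (h0[j] - h1[j])
```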

  4. Study on Urban Heat Island Intensity Level Identification Based on an Improved Restricted Boltzmann Machine

    PubMed Central

    Jiang, Ping; Zhang, Hongyan; Cheng, Peng

    2018-01-01

    Thermal infrared remote sensing has become one of the main technical methods used for urban heat island research. When applying urban land surface temperature inversion of the thermal infrared band, problems with intensity level division arise because the method is subjective. However, this method is one of the few that performs heat island intensity level identification. This paper builds an intensity level identifier for an urban heat island, using the idea of weakly supervised learning in an improved restricted Boltzmann machine (RBM) model. The identifier automatically initializes the annotation and optimizes the model parameters sequentially until the target identifier is completed. The algorithm needs very little information about the weak labeling of the target training sample and generates an urban heat island intensity spatial distribution map. This study can provide reliable decision-making support for urban ecological planning and effective protection of urban ecological security. The experimental results showed the following: (1) The heat island effect in Wuhan is present and intense. Heat island areas are widely distributed. The largest heat island area is in Wuhan, followed by the sub-green island. The total area encompassed by heat island and strong island levels accounts for 54.16% of the land in Wuhan. (2) The method, based partly on improved RBM identification, meets the research demands of determining the spatial distribution characteristics of the internal heat island effect; its identification accuracy is superior to that of comparable methods. PMID:29360786

  5. A College-Level, Computer-Assisted Course in Nutrition.

    ERIC Educational Resources Information Center

    Carew, Lyndon B.; And Others

    1984-01-01

    Describes a computer-assisted instructional (CAI) program to accompany a 15-week, college-level, introductory lecture course on the scientific principles of mammalian nutrition. The nature of the program is discussed, and examples of how it operates are provided. Comments on the evaluation of the program are also provided. (JN)

  6. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications through consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. Security issues and challenges in the implementation of cloud computing are then analyzed. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  7. Deaf individuals who work with computers present a high level of visual attention.

    PubMed

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration while constructing different ways of communicating. The aim of this study was to evaluate the level of attention in individuals deaf from birth who worked with computers. A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals who worked with computers, 42 were deaf individuals who did not work and neither knew how to use nor used computers (Control 1), 39 were individuals with normal hearing who did not work and neither knew how to use nor used computers (Control 2), and 40 were individuals with normal hearing who worked with computers (Control 3). The group of subjects deaf from birth who worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity and resistance to interference compared to the control groups. This study highlights the relevance of sensory processing to cognitive processing.

  8. Deaf individuals who work with computers present a high level of visual attention

    PubMed Central

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration while constructing different ways of communicating. Objective The aim of this study was to evaluate the level of attention in individuals deaf from birth who worked with computers. Methods A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals who worked with computers, 42 were deaf individuals who did not work and neither knew how to use nor used computers (Control 1), 39 were individuals with normal hearing who did not work and neither knew how to use nor used computers (Control 2), and 40 were individuals with normal hearing who worked with computers (Control 3). Results The group of subjects deaf from birth who worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity and resistance to interference compared to the control groups. Conclusion This study highlights the relevance of sensory processing to cognitive processing. PMID:29213734

  9. Electro-Optic Identification Research Program

    DTIC Science & Technology

    2002-04-01

    Electro-optic identification (EOID) sensors provide photographic quality images that can be used to identify mine-like contacts provided by long...tasks such as validating existing electro-optic models, development of performance metrics, and development of computer aided identification and

  10. 24 CFR 990.170 - Computation of utilities expense level (UEL): Overview.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... level (UEL): Overview. 990.170 Section 990.170 Housing and Urban Development Regulations Relating to... Expenses § 990.170 Computation of utilities expense level (UEL): Overview. (a) General. The UEL for each... by the payable consumption level multiplied by the inflation factor. The UEL is expressed in terms of...
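    The overview above states that the UEL for each utility is the payable consumption level multiplied by the inflation factor (and, per the full regulation, the utility rate). A minimal sketch of that arithmetic follows; the function name and the sample figures are illustrative assumptions, not taken from the regulation.

```python
# Hypothetical sketch of the UEL computation described in 24 CFR 990.170:
# UEL = utility rate x payable consumption level x inflation factor.
def utilities_expense_level(utility_rate, payable_consumption, inflation_factor):
    """Return the UEL in dollars for one utility."""
    return utility_rate * payable_consumption * inflation_factor

# Example: 1,000,000 kWh payable consumption at $0.12/kWh, 3% inflation factor.
uel = utilities_expense_level(0.12, 1_000_000, 1.03)
```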

  11. Computational methods for the identification of spatially varying stiffness and damping in beams

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1986-01-01

    A numerical approximation scheme for the estimation of functional parameters in Euler-Bernoulli models for the transverse vibration of flexible beams with tip bodies is developed. The method permits the identification of spatially varying flexural stiffness and Voigt-Kelvin viscoelastic damping coefficients which appear in the hybrid system of ordinary and partial differential equations and boundary conditions describing the dynamics of such structures. An inverse problem is formulated as a least squares fit to data subject to constraints in the form of a vector system of abstract first order evolution equations. Spline-based finite element approximations are used to finite dimensionalize the problem. Theoretical convergence results are given and numerical studies carried out on both conventional (serial) and vector computers are discussed.

  12. Developing Computer-Interactive Tape Exercises for Intermediate-Level Business French.

    ERIC Educational Resources Information Center

    Garnett, Mary Anne

    One college language teacher developed computer-interactive audiotape exercises for an intermediate-level class in business French. The project was undertaken because of a need for appropriate materials at that level. The use of authoring software permitted development of a variety of activity types, including multiple-choice, fill-in-the-blank,…

  13. Computational techniques in tribology and material science at the atomic level

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical, and their limitations, are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to the calculation of adhesion and friction is presented.

  14. Bladed-shrouded-disc aeroelastic analyses: Computer program updates in NASTRAN level 17.7

    NASA Technical Reports Server (NTRS)

    Gallo, A. M.; Elchuri, V.; Skalski, S. C.

    1981-01-01

    In October 1979, a computer program based on state-of-the-art compressor and structural technologies applied to bladed-shrouded-discs was developed. The program was made operational in NASTRAN Level 16. The bladed disc computer program was subsequently updated for operation in NASTRAN Level 17.7. The supersonic cascade unsteady aerodynamics routine UCAS, delivered as part of the NASTRAN Level 16 program, was recoded to improve its execution time. These improvements are presented.

  15. Level of Identification as a Predictor of Attitude Change.

    ERIC Educational Resources Information Center

    Williams, Robert H.; Williams, Sharon Ann

    1987-01-01

    Discussion of conditions under which simulation games promote changes in attitudes focuses on identification theory as a predictor of attitude change. Incentive theory and cognitive dissonance theory are discussed, and a study of community college students is described that tested the role of identification in changing attitudes. (LRW)

  16. K-nearest neighbors based methods for identification of different gear crack levels under different motor speeds and loads: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong

    2016-03-01

    Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal processing based methods mainly require expertise to interpret gear fault signatures, which is usually not easy for ordinary users to achieve. To identify different gear crack levels automatically, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments demonstrates that the developed method provides higher prediction accuracies than existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical
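    The pipeline above (statistical features from vibration signals, then K-nearest neighbors classification of crack level) can be sketched as follows. This is a toy illustration: the paper's db44 wavelet-packet features and dimensionality reduction are replaced by a few simple time-domain statistics, and the signals and labels are synthetic assumptions.

```python
# Toy feature-extraction + KNN pipeline for "crack level" classification.
import math
import random
from collections import Counter

def features(signal):
    # Simple time-domain statistics standing in for wavelet-packet features.
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    return [mean, math.sqrt(var), rms, peak]

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); majority vote among k nearest.
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Synthetic signals: a higher "crack level" simply means larger amplitude.
rng = random.Random(0)
train = [(features([lvl * rng.gauss(0, 1) for _ in range(256)]), lvl)
         for lvl in (1, 2, 3) for _ in range(10)]
query = features([3 * rng.gauss(0, 1) for _ in range(256)])
predicted_level = knn_predict(train, query, k=3)
```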

  17. New Fe i Level Energies and Line Identifications from Stellar Spectra. II. Initial Results from New Ultraviolet Spectra of Metal-poor Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Ruth C.; Kurucz, Robert L.; Ayres, Thomas R., E-mail: peterson@ucolick.org

    2017-04-01

    The Fe i spectrum is critical to many areas of astrophysics, yet many of the high-lying levels remain uncharacterized. To remedy this deficiency, Peterson and Kurucz identified Fe i lines in archival ultraviolet and optical spectra of metal-poor stars, whose warm temperatures favor moderate Fe i excitation. Sixty-five new levels were recovered, with 1500 detectable lines, including several bound levels in the ionization continuum of Fe i. Here, we extend the previous work by identifying 59 additional levels, with 1400 detectable lines, by incorporating new high-resolution UV spectra of warm metal-poor stars recently obtained by the Hubble Space Telescope Imaging Spectrograph. We provide gf values for these transitions, both computed as well as adjusted to fit the stellar spectra. We also expand our spectral calculations to the infrared, confirming three levels by matching high-quality spectra of the Sun and two cool stars in the H-band. The predicted gf values suggest that an additional 3700 Fe i lines should be detectable in existing solar infrared spectra. Extending the empirical line identification work to the infrared would help confirm additional Fe i levels, as would new high-resolution UV spectra of metal-poor turnoff stars below 1900 Å.

  18. New Fe I Level Energies and Line Identifications from Stellar Spectra. II. Initial Results from New Ultraviolet Spectra of Metal-poor Stars

    NASA Astrophysics Data System (ADS)

    Peterson, Ruth C.; Kurucz, Robert L.; Ayres, Thomas R.

    2017-04-01

    The Fe I spectrum is critical to many areas of astrophysics, yet many of the high-lying levels remain uncharacterized. To remedy this deficiency, Peterson & Kurucz identified Fe I lines in archival ultraviolet and optical spectra of metal-poor stars, whose warm temperatures favor moderate Fe I excitation. Sixty-five new levels were recovered, with 1500 detectable lines, including several bound levels in the ionization continuum of Fe I. Here, we extend the previous work by identifying 59 additional levels, with 1400 detectable lines, by incorporating new high-resolution UV spectra of warm metal-poor stars recently obtained by the Hubble Space Telescope Imaging Spectrograph. We provide gf values for these transitions, both computed as well as adjusted to fit the stellar spectra. We also expand our spectral calculations to the infrared, confirming three levels by matching high-quality spectra of the Sun and two cool stars in the H-band. The predicted gf values suggest that an additional 3700 Fe I lines should be detectable in existing solar infrared spectra. Extending the empirical line identification work to the infrared would help confirm additional Fe I levels, as would new high-resolution UV spectra of metal-poor turnoff stars below 1900 Å.

  19. Fostering Girls' Computer Literacy through Laptop Learning: Can Mobile Computers Help To Level Out the Gender Difference?

    ERIC Educational Resources Information Center

    Schaumburg, Heike

    The goal of this study was to find out if the difference between boys and girls in computer literacy can be leveled out in a laptop program where each student has his/her own mobile computer to work with at home and at school. Ninth grade students (n=113) from laptop and non-laptop classes in a German high school were tested for their computer…

  20. Using state-issued identification cards for obesity tracking.

    PubMed

    Morris, Daniel S; Schubert, Stacey S; Ngo, Duyen L; Rubado, Dan J; Main, Eric; Douglas, Jae P

    2015-01-01

    Obesity prevention has emerged as one of public health's top priorities. Public health agencies need reliable data on population health status to guide prevention efforts. Existing survey data sources provide county-level estimates; obtaining sub-county estimates from survey data can be prohibitively expensive. State-issued identification cards are an alternate data source for community-level obesity estimates. We computed body mass index for 3.2 million adult Oregonians who were issued a driver license or identification card between 2003 and 2010. Statewide estimates of obesity prevalence and average body mass index were compared to the Oregon Behavioral Risk Factor Surveillance System (BRFSS). After geocoding addresses we calculated average adult body mass index for every census tract and block group in the state. Sub-county estimates reveal striking patterns in the population's weight status. Annual obesity prevalence estimates from identification cards averaged 18% lower than the BRFSS for men and 31% lower for women. Body mass index estimates averaged 2% lower than the BRFSS for men and 5% lower for women. Identification card records are a promising data source to augment tracking of obesity. People do tend to misrepresent their weight, but the consistent bias does not obscure patterns and trends. Large numbers of records allow for stable estimates for small geographic areas. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. All rights reserved.
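    The core computation in the study above is body mass index from the height and weight on identification cards, with obesity defined by the standard BMI ≥ 30 threshold. A minimal sketch follows, converting from the pounds and inches typical of US cards; the function name and the sample records are invented for illustration.

```python
# BMI = weight (kg) / height (m)^2, from pounds and inches on an ID card.
def bmi_from_id_card(weight_lb, height_in):
    kg = weight_lb * 0.453592
    m = height_in * 0.0254
    return kg / (m * m)

records = [(150, 65), (210, 70), (240, 68)]  # (weight lb, height in)
bmis = [bmi_from_id_card(w, h) for w, h in records]
obesity_prevalence = sum(b >= 30 for b in bmis) / len(bmis)
```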

  1. Energy Use and Power Levels in New Monitors and Personal Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption.

  2. Progress and challenges in bioinformatics approaches for enhancer identification

    PubMed Central

    Kleftogiannis, Dimitrios; Kalnis, Panos

    2016-01-01

    Enhancers are cis-acting DNA elements that play critical roles in distal regulation of gene expression. Identifying enhancers is an important step for understanding distinct gene expression programs that may reflect normal and pathogenic cellular conditions. Experimental identification of enhancers is constrained by the set of conditions used in the experiment. This requires multiple experiments to identify enhancers, as they can be active under specific cellular conditions but not in different cell types/tissues or cellular states. This has opened prospects for computational prediction methods that can be used for high-throughput identification of putative enhancers to complement experimental approaches. Potential functions and properties of predicted enhancers have been catalogued and summarized in several enhancer-oriented databases. Because the current methods for the computational prediction of enhancers produce significantly different enhancer predictions, it will be beneficial for the research community to have an overview of the strategies and solutions developed in this field. In this review, we focus on the identification and analysis of enhancers by bioinformatics approaches. First, we describe a general framework for computational identification of enhancers, present relevant data types and discuss possible computational solutions. Next, we cover over 30 existing computational enhancer identification methods that were developed since 2000. Our review highlights advantages, limitations and potentials, while suggesting pragmatic guidelines for development of more efficient computational enhancer prediction methods. Finally, we discuss challenges and open problems of this topic, which require further consideration. PMID:26634919

  3. Relationship between Teacher Views on Levels of Organizational Support--Organizational Identification and Climate of Initiative

    ERIC Educational Resources Information Center

    Nartgün, Senay Sezgin; Taskin, Sevgi

    2017-01-01

    This study aimed to identify secondary school teachers' views on levels of organizational support, organizational identification and climate of initiative and to determine whether there were any significant differences between these views based on teachers' demographic characteristics and whether there were significant differences between…

  4. Preparing Students for Computer Aided Drafting (CAD). A Conceptual Approach.

    ERIC Educational Resources Information Center

    Putnam, A. R.; Duelm, Brian

    This presentation outlines guidelines for developing and implementing an introductory course in computer-aided drafting (CAD) that is geared toward secondary-level students. The first section of the paper, which deals with content identification and selection, includes lists of mechanical drawing and CAD competencies and a list of rationales for…

  5. The effect on cadaver blood DNA identification by the use of targeted and whole body post-mortem computed tomography angiography.

    PubMed

    Rutty, Guy N; Barber, Jade; Amoroso, Jasmin; Morgan, Bruno; Graham, Eleanor A M

    2013-12-01

    Post-mortem computed tomography angiography (PMCTA) involves the injection of contrast agents. This could have both a dilution effect on biological fluid samples and could affect subsequent post-contrast analytical laboratory processes. We undertook a small sample study of 10 targeted and 10 whole body PMCTA cases to consider whether or not these two methods of PMCTA could affect post-PMCTA cadaver blood based DNA identification. We used standard methodology to examine DNA from blood samples obtained before and after the PMCTA procedure. We illustrate that neither of these PMCTA methods had an effect on the alleles called following short tandem repeat based DNA profiling, and therefore the ability to undertake post-PMCTA blood based DNA identification.

  6. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data on the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn poses challenges to computer scientists to offer matching hardware and software infrastructure, while managing the varying degrees of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Integration of Cloud computing with parallel computing is also expanding its footprint in the life sciences community. Speed, efficiency and cost effectiveness have made cloud computing a 'good-to-have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be best placed to manage drug discovery and clinical development data generated using advanced HTS techniques, thus supporting the vision of personalized medicine.

  7. Impact of PECS tablet computer app on receptive identification of pictures given a verbal stimulus.

    PubMed

    Ganz, Jennifer B; Hong, Ee Rea; Goodwyn, Fara; Kite, Elizabeth; Gilliland, Whitney

    2015-04-01

    The purpose of this brief report was to determine the effect on receptive identification of photos of a tablet computer-based augmentative and alternative communication (AAC) system with voice output. A multiple baseline single-case experimental design across vocabulary words was implemented. One participant, a preschool-aged boy with autism and little intelligible verbal language, was included in the study. Although a functional relation between the intervention and the dependent variable was not established, the intervention did appear to result in mild improvement for two of the three vocabulary words selected. The authors recommend further investigations of the collateral impacts of AAC on skills other than expressive language.

  8. Computational Identification of Novel Genes: Current and Future Perspectives.

    PubMed

    Klasberg, Steffen; Bitard-Feildel, Tristan; Mallet, Ludovic

    2016-01-01

    While it has long been thought that all genomic novelties are derived from existing material, many genes lacking homology to known genes were found in recent genome projects. Some of these novel genes were proposed to have evolved de novo, i.e., out of noncoding sequences, whereas some have been shown to follow a duplication and divergence process. Their discovery called for an extension of the historical hypotheses about gene origination. Besides the theoretical breakthrough, increasing evidence has accumulated that novel genes play important roles in evolutionary processes, including adaptation and speciation events. Different techniques are available to identify genes and classify them as novel. Their classification as novel is usually based on their similarity to known genes, or lack thereof, as detected by comparative genomics or searches against databases. Computational approaches are further prime methods, whether based on existing models or leveraging biological evidence from experiments. Identification of novel genes remains, however, a challenging task. With constantly updated software and technologies, no gold standard, and no available benchmark, evaluation and characterization of genomic novelty is a vibrant field. In this review, the classical and state-of-the-art tools for gene prediction are introduced. The current methods for novel gene detection are presented; the methodological strategies and their limits are discussed along with perspective approaches for further studies.

  9. Effects of portable computing devices on posture, muscle activation levels and efficiency.

    PubMed

    Werth, Abigail; Babski-Reeves, Kari

    2014-11-01

    Very little research exists on ergonomic exposures when using portable computing devices. This study quantified muscle activity (forearm and neck), posture (wrist, forearm and neck), and performance (gross typing speed and error rates) differences across three portable computing devices (laptop, netbook, and slate computer) and two work settings (desk and sofa) during data entry tasks. Twelve participants completed test sessions on a single computer using a test-rest-test protocol (30 min of work at one work setting, 15 min of rest, 30 min of work at the other work setting). The slate computer resulted in significantly more non-neutral wrist, elbow and neck postures, particularly when working on the sofa. Performance on the slate computer was four times lower than on the other computers, though lower muscle activity levels were also found. The potential for injury or illness may be elevated when working on smaller, portable computers in non-traditional work settings. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  10. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.

  11. A Unique Automation Platform for Measuring Low Level Radioactivity in Metabolite Identification Studies

    PubMed Central

    Krauser, Joel; Walles, Markus; Wolf, Thierry; Graf, Daniel; Swart, Piet

    2012-01-01

    Generation and interpretation of biotransformation data on drugs, i.e. identification of physiologically relevant metabolites, defining metabolic pathways and elucidation of metabolite structures, have become increasingly important to the drug development process. Profiling using 14C or 3H radiolabel is defined as the chromatographic separation and quantification of drug-related material in a given biological sample derived from an in vitro, preclinical in vivo or clinical study. Metabolite profiling is a very time intensive activity, particularly for preclinical in vivo or clinical studies which have defined limitations on radiation burden and exposure levels. A clear gap exists for certain studies which do not require specialized high volume automation technologies, yet these studies would still clearly benefit from automation. Use of radiolabeled compounds in preclinical and clinical ADME studies, specifically for metabolite profiling and identification, is a very good example. The current lack of automation for measuring low level radioactivity in metabolite profiling requires substantial capacity, personal attention and resources from laboratory scientists. To help address these challenges and improve efficiency, we have innovated, developed and implemented a novel and flexible automation platform that integrates a robotic plate handling platform, HPLC or UPLC system, mass spectrometer and an automated fraction collector. PMID:22723932

  12. Computational Tools for Allosteric Drug Discovery: Site Identification and Focus Library Design.

    PubMed

    Huang, Wenkang; Nussinov, Ruth; Zhang, Jian

    2017-01-01

    Allostery is an intrinsic phenomenon of biological macromolecules involving regulation and/or signal transduction induced by a ligand binding to an allosteric site distinct from a molecule's active site. Allosteric drugs are currently receiving increased attention in drug discovery because drugs that target allosteric sites can provide important advantages over the corresponding orthosteric drugs including specific subtype selectivity within receptor families. Consequently, targeting allosteric sites, instead of orthosteric sites, can reduce drug-related side effects and toxicity. On the down side, allosteric drug discovery can be more challenging than traditional orthosteric drug discovery due to difficulties associated with determining the locations of allosteric sites and designing drugs based on these sites and the need for the allosteric effects to propagate through the structure, reach the ligand binding site and elicit a conformational change. In this study, we present computational tools ranging from the identification of potential allosteric sites to the design of "allosteric-like" modulator libraries. These tools may be particularly useful for allosteric drug discovery.

  13. The Reality of Computers at the Community College Level.

    ERIC Educational Resources Information Center

    Leone, Stephen J.

    Writing teachers at the community college level who teach using a computer have come to accept the fact that it is more than "just teaching" composition. Such teaching often requires instructors to be as knowledgeable as some of the technicians. Two-year college students and faculty are typically given little support in using computers…

  14. NLSCIDNT user's guide: maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
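    The damped least-squares behavior described above (steepest descent far from the minimum, Newton-like near it) is generic enough to sketch. The following is a minimal, self-contained illustration of the Levenberg-Marquardt scheme, not the NLSCIDNT program itself; the exponential model, data and starting values in the usage note are hypothetical:

```python
def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fct = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fct * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def levenberg_marquardt(f, p0, ts, ys, iters=50, lam=1e-2):
    """Minimize the sum of squared residuals y - f(t, p) over parameters p.
    A large damping factor lam behaves like steepest descent; a small one
    like Gauss-Newton (modified Newton-Raphson)."""
    p = list(p0)
    residuals = lambda q: [y - f(t, q) for t, y in zip(ts, ys)]
    sse = lambda r: sum(ri * ri for ri in r)
    n = len(p)
    r = residuals(p)
    for _ in range(iters):
        # Forward-difference Jacobian of the residuals w.r.t. the parameters
        J = []
        for t in ts:
            row = []
            for j in range(n):
                q = list(p)
                h = 1e-6 * (abs(q[j]) + 1e-6)
                q[j] += h
                row.append(-(f(t, q) - f(t, p)) / h)
            J.append(row)
        # Damped normal equations: (J^T J + lam * diag) dp = -J^T r
        A = [[sum(Jk[i] * Jk[j] for Jk in J) for j in range(n)] for i in range(n)]
        g = [-sum(J[k][i] * r[k] for k in range(len(ts))) for i in range(n)]
        for i in range(n):
            A[i][i] *= 1.0 + lam
        dp = solve(A, g)
        p_try = [pi + di for pi, di in zip(p, dp)]
        r_try = residuals(p_try)
        if sse(r_try) < sse(r):   # step helped: accept, trust the model more
            p, r, lam = p_try, r_try, lam * 0.5
        else:                     # step hurt: reject, damp harder
            lam *= 2.0
    return p
```

    When a step reduces the cost the damping factor is halved (moving toward Newton-like behavior); otherwise it is doubled (moving toward steepest descent), mirroring the transition the abstract describes.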

  15. Electro-Optic Identification (EOID) Research Program

    DTIC Science & Technology

    2002-09-30

    The goal of this research is to provide computer-assisted identification of underwater mines in electro-optic imagery. Identification algorithms will greatly reduce the time and risk to reacquire mine-like objects for positive classification and identification. The objectives are to collect electro-optic data under a wide range of operating and environmental conditions and develop precise algorithms that can provide accurate target recognition on this data for all possible conditions.

  16. Estimating groundwater levels using system identification models in Nzhelele and Luvuvhu areas, Limpopo Province, South Africa

    NASA Astrophysics Data System (ADS)

    Makungo, Rachel; Odiyo, John O.

    2017-08-01

    This study focused on testing the ability of a coupled linear and non-linear system identification model to estimate groundwater levels. System identification provides an alternative approach for estimating groundwater levels in areas that lack the data required by physically-based models. It also overcomes the limitations of physically-based models due to approximations, assumptions and simplifications. Daily groundwater levels for 4 boreholes, rainfall and evaporation data covering the period 2005-2014 were used in the study. Seventy and thirty percent of the data were used to calibrate and validate the model, respectively. Correlation coefficient (R), coefficient of determination (R2), root mean square error (RMSE), percent bias (PBIAS), Nash-Sutcliffe coefficient of efficiency (NSE) and graphical fits were used to evaluate model performance. Values for R, R2, RMSE, PBIAS and NSE ranged from 0.8 to 0.99, 0.63 to 0.99, 0.01 to 2.06 m, -7.18 to 1.16 and 0.68 to 0.99, respectively. Comparisons of observed and simulated groundwater levels for calibration and validation runs showed close agreement. Model performance mostly ranged from satisfactory to excellent. Thus, the model is able to estimate groundwater levels. The calibrated models reasonably capture the relationship between input and output variables and can thus be used to estimate long-term groundwater levels.
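    The statistics reported above are standard goodness-of-fit measures and can be restated concretely. A minimal pure-Python sketch (the inputs would be observed vs. simulated groundwater levels; note that the PBIAS sign convention varies between authors):

```python
import math

def gof_metrics(obs, sim):
    """Standard goodness-of-fit statistics for simulated vs. observed series."""
    n = len(obs)
    mean_obs = sum(obs) / n
    mean_sim = sum(sim) / n
    cov = sum((o - mean_obs) * (s - mean_sim) for o, s in zip(obs, sim))
    var_o = sum((o - mean_obs) ** 2 for o in obs)
    var_s = sum((s - mean_sim) ** 2 for s in sim)
    r = cov / math.sqrt(var_o * var_s)          # correlation coefficient R
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    rmse = math.sqrt(sq_err / n)                # root mean square error
    # Percent bias; with this sign convention positive means underestimation
    pbias = 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
    nse = 1.0 - sq_err / var_o                  # Nash-Sutcliffe efficiency
    return {"R": r, "R2": r * r, "RMSE": rmse, "PBIAS": pbias, "NSE": nse}
```

    For an identical observed and simulated series this returns R = NSE = 1 and RMSE = PBIAS = 0, the "excellent" end of the performance scale mentioned in the abstract.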

  17. All-memristive neuromorphic computing with level-tuned neurons

    NASA Astrophysics Data System (ADS)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-01

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.
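    The building blocks named in the abstract, spiking neurons and spike-timing-dependent plasticity (STDP), can be sketched in conventional software, although the paper realizes them in phase-change memristor hardware. A minimal illustration with invented constants, not the authors' device model:

```python
import math

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (in ms).
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

class LIFNeuron:
    """Leaky integrate-and-fire neuron: the membrane state decays each step
    and a spike is emitted (and the state reset) on crossing the threshold."""
    def __init__(self, threshold=1.0, leak=0.95):
        self.v = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, weighted_input):
        self.v = self.leak * self.v + weighted_input
        if self.v >= self.threshold:
            self.v = 0.0   # reset after firing
            return 1       # spike
        return 0
```

    Spikes are the only communication mechanism, and the synaptic weights updated by `stdp_dw` serve as both memory and computation, which is the property the memristive implementation exploits.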

  18. All-memristive neuromorphic computing with level-tuned neurons.

    PubMed

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-02

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.

  19. Thinking beyond the Common Candida Species: Need for Species-Level Identification of Candida Due to the Emergence of Multidrug-Resistant Candida auris.

    PubMed

    Lockhart, Shawn R; Jackson, Brendan R; Vallabhaneni, Snigdha; Ostrosky-Zeichner, Luis; Pappas, Peter G; Chiller, Tom

    2017-12-01

    Candida species are one of the leading causes of nosocomial infections. Because much of the treatment for Candida infections is empirical, some institutions do not identify Candida to species level. With the worldwide emergence of the multidrug-resistant species Candida auris, identification of Candida to species level has new clinical relevance. Species should be identified for invasive candidiasis isolates, and species-level identification can be considered for selected noninvasive isolates to improve detection of C. auris. Copyright © 2017 American Society for Microbiology.

  20. Wheeze sound analysis using computer-based techniques: a systematic review.

    PubMed

    Ghulam Nabi, Fizza; Sundaraj, Kenneth; Chee Kiang, Lam; Palaniappan, Rajkumar; Sundaraj, Sebastian

    2017-10-31

    Wheezes are high-pitched continuous respiratory acoustic sounds which are produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction and disease or pathology classification. While this area is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that 1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, 2) further research is required to achieve acceptable identification rates for the degree of airway obstruction during normal breathing, and 3) analysis using combinations of features and on subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathologies that stem from airway obstruction.

  1. geneGIS: Computational Tools for Spatial Analyses of DNA Profiles with Associated Photo-Identification and Telemetry Records of Marine Mammals

    DTIC Science & Technology

    2012-09-30

    computational tools provide the ability to display, browse, select, filter and summarize spatio-temporal relationships of these individual-based records... her research assistant at Esri, Shaun Walbridge, and members of the Marine Mammal Institute (MMI), including Tomas Follet and Debbie Steel. ...Genomics Laboratory, MMI, OSU. As part of the geneGIS initiative, these SPLASH photo-identification records and the geneSPLASH DNA profiles

  2. Identification of the Characteristics and Attributes Needed for Career Success in Entry-Level Management Positions in Selected Retailing Industry.

    ERIC Educational Resources Information Center

    Ahearn, Anne C. Erikson

    A study examined the characteristics and attributes needed by individuals for career success in entry-level management positions in the retailing industry. Included among the specific objectives of the study were the following: identification of the educational level and retailing experience needed by successful entry-level retail managers,…

  3. Chemistry by Computer.

    ERIC Educational Resources Information Center

    Garmon, Linda

    1981-01-01

    Describes the features of various computer chemistry programs. Utilization of computer graphics, color, digital imaging, and other innovations are discussed in programs including those which aid in the identification of unknowns, predict whether chemical reactions are feasible, and predict the biological activity of xenobiotic compounds. (CS)

  4. Accurate Identification of Cancerlectins through Hybrid Machine Learning Technology.

    PubMed

    Zhang, Jieru; Ju, Ying; Lu, Huijuan; Xuan, Ping; Zou, Quan

    2016-01-01

    Cancerlectins are cancer-related proteins that function as lectins. They have been identified through computational identification techniques, but these techniques have sometimes failed to identify proteins because of sequence diversity among the cancerlectins. Advanced machine learning identification methods, such as support vector machine and basic sequence features (n-gram), have also been used to identify cancerlectins. In this study, various protein fingerprint features and advanced classifiers, including ensemble learning techniques, were utilized to identify this group of proteins. We improved the prediction accuracy of the original feature extraction methods and classification algorithms by more than 10% on average. Our work provides a basis for the computational identification of cancerlectins and reveals the power of hybrid machine learning techniques in computational proteomics.

  5. Overhead longwave infrared hyperspectral material identification using radiometric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelinski, M. E.

    Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and Bayesian Information Criteria for model selection. The resulting product includes an optimal atmospheric profile and full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.

  6. Computer-aided detection of initial polyp candidates with level set-based adaptive convolution

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong

    2009-02-01

    In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods were used to compute the first and second order spatial derivatives of computed tomographic colonography images, which is the beginning of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, which is called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect the initial polyp candidates, and experiments showed that it benefits the CAD scheme in both the detection sensitivity and specificity as compared to our previous work.

  7. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 24, Housing and Urban Development, § 990.165: Computation of project expense level (PEL). Regulations Relating to Housing and Urban Development (Continued), Office of Assistant Secretary for Public and Indian Housing, Department of Housing and Urban Development.

  8. Software For Computer-Security Audits

    NASA Technical Reports Server (NTRS)

    Arndt, Kate; Lonsford, Emily

    1994-01-01

    Information relevant to potential breaches of security is gathered efficiently. The Automated Auditing Tools for VAX/VMS program includes the following automated software tools, each performing the noted task: Privileged ID Identification, which identifies users and the privileges that would let them circumvent existing computer security measures; Critical File Protection, which identifies critical files that are not properly protected; Inactive ID Identification, which finds user identifications no longer in use; Password Lifetime Review, which determines the maximum lifetimes of the passwords of all identifications; and Password Length Review, which determines the minimum allowed password length for all identifications. Written in the DEC VAX DCL language.

  9. Attendance fingerprint identification system using arduino and single board computer

    NASA Astrophysics Data System (ADS)

    Muchtar, M. A.; Seniman; Arisandi, D.; Hasanah, S.

    2018-03-01

    The fingerprint is one of the most unique parts of the human body, distinguishing one person from others, and is easily accessed. This uniqueness is exploited by a technology that can automatically identify or recognize a person, called a fingerprint sensor. However, an existing fingerprint sensor can only perform fingerprint identification on one machine. For this reason, a method is needed to recognize each user across different fingerprint sensors. The purpose of this research is to build a fingerprint sensor system in which fingerprint data management is centralized, so that identification can be done at each fingerprint sensor. The results of this research show that by using an Arduino and a Raspberry Pi, data processing can be centralized so that fingerprint identification can be done at each fingerprint sensor with a 98.5% success rate of centralized server recording.

  10. A computable phenotype for asthma case identification in adult and pediatric patients: External validation in the Chicago Area Patient-Outcomes Research Network (CAPriCORN).

    PubMed

    Afshar, Majid; Press, Valerie G; Robison, Rachel G; Kho, Abel N; Bandi, Sindhura; Biswas, Ashvini; Avila, Pedro C; Kumar, Harsha Vardhan Madan; Yu, Byung; Naureckas, Edward T; Nyenhuis, Sharmilee M; Codispoti, Christopher D

    2017-10-13

    Comprehensive, rapid, and accurate identification of patients with asthma for clinical care and engagement in research efforts is needed. The original development and validation of a computable phenotype for asthma case identification occurred at a single institution in Chicago and demonstrated excellent test characteristics. However, its application in a diverse payer mix, across different health systems and multiple electronic health record vendors, and in both children and adults was not examined. The objective of this study is to externally validate the computable phenotype across diverse Chicago institutions to accurately identify pediatric and adult patients with asthma. A cohort of 900 asthma and control patients was identified from the electronic health record between January 1, 2012 and November 30, 2014. Two physicians at each site independently reviewed the patient chart to annotate cases. The inter-observer reliability between the physician reviewers had a κ-coefficient of 0.95 (95% CI 0.93-0.97). The accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of the computable phenotype were all above 94% in the full cohort. The excellent positive and negative predictive values in this multi-center external validation study establish a useful tool to identify asthma cases in the electronic health record for research and care. This computable phenotype could be used in large-scale comparative-effectiveness trials.

  11. Current algorithmic solutions for peptide-based proteomics data generation and identification.

    PubMed

    Hoopmann, Michael R; Moritz, Robert L

    2013-02-01

    Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Detection and identification of substances using noisy THz signal

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Zakharova, Irina G.; Zagursky, Dmitry Yu.; Varentsova, Svetlana A.

    2017-05-01

    We discuss an effective method for the detection and identification of substances using a highly noisy THz signal. In order to model such a noisy signal, we add to the THz signal transmitted through a pure substance a noisy THz signal obtained in real conditions at a long distance (more than 3.5 m) from the receiver in air. The insufficiency of the standard THz-TDS method is demonstrated. The method discussed in the paper is based on time-dependent integral correlation criteria calculated using the spectral dynamics of the medium response. A new type of integral correlation criterion, which is less dependent on the spectral characteristics of the noisy signal under investigation, is used for substance identification. To demonstrate the possibilities of the integral correlation criteria in a real experiment, they are applied to the identification of the explosive HMX in reflection mode. To explain the physical mechanism behind the appearance of false absorption frequencies in the signal, we perform a computer simulation using the 1D Maxwell's equations and the density matrix formalism. We also propose a new method for substance identification based on THz pulse frequency up-conversion, and discuss an application of the cascade mechanism of excitation of high-energy molecular levels for substance identification.
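    The paper's criterion is an integral, time-dependent correlation over the spectral dynamics of the medium response; as a far simpler baseline, plain Pearson correlation of a measured spectrum against a library of reference spectra illustrates the matching idea (all spectra below are invented):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length spectra."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

def identify(spectrum, references):
    """Return the name of the reference substance whose spectrum
    correlates best with the measurement."""
    return max(references, key=lambda name: pearson(spectrum, references[name]))
```

    The paper's integral criterion goes further precisely because a single correlation like this degrades under heavy noise and false absorption features.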

  13. Software architecture of the III/FBI segment of the FBI's integrated automated identification system

    NASA Astrophysics Data System (ADS)

    Booker, Brian T.

    1997-02-01

    This paper will describe the software architecture of the Interstate Identification Index (III/FBI) Segment of the FBI's Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is currently under development, with deployment to begin in 1998. III/FBI will provide the repository of criminal history and photographs for criminal subjects, as well as identification data for military and civilian federal employees. Services provided by III/FBI include maintenance of the criminal and civil data, subject search of the criminal and civil data, and response generation services for IAFIS. III/FBI software will be comprised of both COTS and an estimated 250,000 lines of developed C code. This paper will describe the following: (1) the high-level requirements of the III/FBI software; (2) the decomposition of the III/FBI software into Computer Software Configuration Items (CSCIs); (3) the top-level design of the III/FBI CSCIs; and (4) the relationships among the developed CSCIs and the COTS products that will comprise the III/FBI software.

  14. Computational identification of obligatorily autocatalytic replicators embedded in metabolic networks

    PubMed Central

    Kun, Ádám; Papp, Balázs; Szathmáry, Eörs

    2008-01-01

    Background If chemical A is necessary for the synthesis of more chemical A, then A has the power of replication (such systems are known as autocatalytic systems). We provide the first systems-level analysis searching for small-molecular autocatalytic components in the metabolisms of diverse organisms, including an inferred minimal metabolism. Results We find that intermediary metabolism is invariably autocatalytic for ATP. Furthermore, we provide evidence for the existence of additional, organism-specific autocatalytic metabolites in the forms of coenzymes (NAD+, coenzyme A, tetrahydrofolate, quinones) and sugars. Although the enzymatic reactions of a number of autocatalytic cycles are present in most of the studied organisms, they display obligatorily autocatalytic behavior in a few networks only, hence demonstrating the need for a systems-level approach to identify metabolic replicators embedded in large networks. Conclusion Metabolic replicators are apparently common and potentially both universal and ancestral: without their presence, kick-starting metabolic networks is impossible, even if all enzymes and genes are present in the same cell. Identification of metabolic replicators is also important for attempts to create synthetic cells, as some of these autocatalytic molecules will presumably be needed to be added to the system as, by definition, the system cannot synthesize them without their initial presence. PMID:18331628
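    The notion of an obligatorily autocatalytic metabolite can be made concrete with a network-expansion style check, loosely in the spirit of the systems-level analysis described (the toy reactions in the usage note are invented, not the paper's metabolic reconstructions): a metabolite is an obligate replicator if it cannot be reached from the seed nutrients alone, yet some reaction regenerates it once a trace of it is present.

```python
def producible(reactions, seeds):
    """Expand the set of available metabolites: a reaction fires once all
    of its substrates are available, adding its products."""
    have = set(seeds)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if set(subs) <= have and not set(prods) <= have:
                have |= set(prods)
                changed = True
    return have

def is_obligate_replicator(reactions, seeds, m):
    """m is obligatorily autocatalytic if it is unreachable from the seeds
    alone, but is regenerated by some reaction once a trace of m exists."""
    without = producible(reactions, seeds)
    with_trace = producible(reactions, set(seeds) | {m})
    regenerated = any(m in prods and set(subs) <= with_trace
                      for subs, prods in reactions)
    return m not in without and regenerated
```

    In a toy glycolysis-like loop (glucose + ATP → G6P + ADP; G6P → pyruvate; pyruvate + ADP → ATP), ATP comes out as an obligate replicator: nothing fires without it, matching the kick-starting argument in the conclusion.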

  15. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high-level usage for the Apollo 15 Metric camera.
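Correspondence identification of this kind builds on standard feature-descriptor matching. The NumPy sketch below of nearest-neighbour matching with a Lowe-style ratio test illustrates the underlying idea only; it is not AutoCNet's actual API, and the descriptor values are made up:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test: accept a
    match only if the best candidate is clearly better than the second
    best, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dist)
        if len(order) > 1 and dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Two tiny descriptor sets with an obvious one-to-one correspondence.
desc_a = np.array([[0.0, 0.0], [10.0, 10.0]])
desc_b = np.array([[0.1, 0.0], [10.0, 10.1], [5.0, 5.0]])
matches = match_descriptors(desc_a, desc_b)
```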

  16. System-Level Virtualization for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian

    2008-01-01

System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, the majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  17. Decentralized System Identification Using Stochastic Subspace Identification for Wireless Sensor Networks

    PubMed Central

    Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han

    2015-01-01

    Wireless sensor networks (WSNs) facilitate a new paradigm to structural identification and monitoring for civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly for installation as well as maintenance. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitation associated with wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model. PMID:25856325
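The covariance-driven variant of SSI can be sketched in a few lines: build a Hankel matrix of output correlations, factor it by SVD into an observability matrix, and extract modal parameters from the recovered state matrix. The free-decay signal below is synthetic, and the sketch omits stabilization diagrams and the decentralized scheduling that SDSI adds on top:

```python
import numpy as np

def ssi_cov(y, order=2, blocks=5):
    """Covariance-driven stochastic subspace identification (single output).
    Returns the eigenvalues of the identified discrete-time state matrix."""
    n = len(y)
    # Output correlations R_1 .. R_{2*blocks-1}
    R = [np.dot(y[i:], y[:n - i]) / n for i in range(1, 2 * blocks)]
    # Block Hankel matrix of correlations
    H = np.array([[R[i + j] for j in range(blocks)] for i in range(blocks)])
    U, s, _ = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])   # observability matrix
    A = np.linalg.pinv(O[:-1]) @ O[1:]      # shift-invariance property
    return np.linalg.eigvals(A)

# Synthetic free decay of a 3 Hz mode with 2% damping.
dt, f_true, zeta_true = 0.01, 3.0, 0.02
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta_true**2)
t = np.arange(0.0, 20.0, dt)
y = np.exp(-zeta_true * wn * t) * np.cos(wd * t)

lam = ssi_cov(y)
pole = np.log(lam[np.argmax(lam.imag)]) / dt   # continuous-time pole
f_est = abs(pole) / (2 * np.pi)
zeta_est = -pole.real / abs(pole)
```

The identified discrete pole converts to a continuous-time pole whose magnitude gives the natural frequency and whose real part gives the damping ratio.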

  18. Use of sEMG in identification of low level muscle activities: features based on ICA and fractal dimension.

    PubMed

    Naik, Ganesh R; Kumar, Dinesh K; Arjunan, Sridhar

    2009-01-01

This paper experimentally verified and compared sEMG (surface electromyogram) features, namely ICA (independent component analysis) and fractal dimension (FD), for identification of low-level forearm muscle activities. The fractal dimension was used as a feature as reported in the literature. The normalized feature values were used as training and testing vectors for an artificial neural network (ANN), in order to reduce inter-experimental variations. The identification accuracy using the FD of four-channel sEMG was 58%, and increased to 96% when the signals were first separated into their independent components using ICA.
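The fractal-dimension feature referred to here is commonly computed with Higuchi's algorithm; the abstract does not fix the exact estimator, so the following sketch assumes Higuchi's method as one representative choice:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension estimate of a 1-D signal: the average
    curve length L(k) at scale k behaves like k**(-FD), so FD is the slope
    of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    L = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()
            # Normalize for the number of points actually used at this scale.
            lengths.append(d * (n - 1) / ((len(idx) - 1) * k) / k)
        L.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
    return slope

rng = np.random.default_rng(0)
fd_line = higuchi_fd(np.arange(1000.0))       # smooth ramp: FD near 1
fd_noise = higuchi_fd(rng.normal(size=1000))  # white noise: FD near 2
```

Low-level muscle activity buried in noise changes the signal's complexity, which is what this scalar feature captures per channel.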

  19. Real-time flutter identification

    NASA Technical Reports Server (NTRS)

    Roy, R.; Walker, R.

    1985-01-01

The techniques and a FORTRAN 77 MOdal Parameter IDentification (MOPID) computer program developed for identification of the frequencies and damping ratios of multiple flutter modes in real time are documented. Physically meaningful model parameterization was combined with state-of-the-art recursive identification techniques and applied to the problem of real-time flutter mode monitoring. The performance of the algorithm in terms of convergence speed and parameter estimation error is demonstrated for several simulated data cases, and the results of actual flight data analysis from two different vehicles are presented. It is indicated that the algorithm is capable of real-time monitoring of aircraft flutter characteristics with a high degree of reliability.
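Recursive identification of this kind can be illustrated with recursive least squares (RLS) fitted to an AR(2) model of a single mode, from whose discrete poles frequency and damping follow. This is a generic sketch of the technique, not the MOPID parameterization itself:

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting."""
    def __init__(self, n, forgetting=0.9999):
        self.w = np.zeros(n)
        self.P = np.eye(n) * 1e4
        self.lam = forgetting

    def update(self, phi, y):
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.w = self.w + gain * (y - phi @ self.w)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.w

# Simulate a noise-driven 5 Hz mode with 3% damping as an AR(2) process.
dt, f_true, zeta_true = 0.01, 5.0, 0.03
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta_true**2)
a1 = 2 * np.exp(-zeta_true * wn * dt) * np.cos(wd * dt)
a2 = -np.exp(-2 * zeta_true * wn * dt)

rng = np.random.default_rng(1)
y = np.zeros(8000)
for k in range(2, len(y)):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + 0.1 * rng.normal()

est = RLS(2)
for k in range(2, len(y)):
    est.update(np.array([y[k - 1], y[k - 2]]), y[k])

# Convert the identified AR poles to modal frequency and damping.
pole = np.log(np.roots([1.0, -est.w[0], -est.w[1]])[0]) / dt
f_est = abs(pole) / (2 * np.pi)
zeta_est = -pole.real / abs(pole)
```

The forgetting factor is what makes the estimator suitable for online monitoring: it discounts old data so slowly drifting flutter parameters can be tracked.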

  20. Nasal computed tomography.

    PubMed

    Kuehn, Ned F

    2006-05-01

    Chronic nasal disease is often a challenge to diagnose. Computed tomography greatly enhances the ability to diagnose chronic nasal disease in dogs and cats. Nasal computed tomography provides detailed information regarding the extent of disease, accurate discrimination of neoplastic versus nonneoplastic diseases, and identification of areas of the nose to examine rhinoscopically and suspicious regions to target for biopsy.

  1. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

This paper deals with two-level backbone computer networks with arbitrary topology. A method offered by the author for calculating the stationary availability factor of such networks is discussed; it is based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods of discrete mathematics. An algorithm offered by the author for analyzing network connectivity, taking into account different kinds of network equipment failures, is also described. Finally, the paper presents an example of calculating the stationary availability factor for a backbone computer network with a given topology.
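For a repairable element with failure rate λ and repair rate μ, the two-state Markov model gives stationary availability A = μ/(λ+μ); the network-level factor then sums the probabilities of all element-state combinations that leave the backbone connected. A brute-force sketch of this idea (exponential in the number of links, so only for tiny examples; the author's method is more specialized):

```python
import itertools

def element_availability(failure_rate, repair_rate):
    """Stationary availability of a two-state Markov repairable element."""
    return repair_rate / (failure_rate + repair_rate)

def is_connected(nodes, up_edges):
    """Simple reachability check from the first node."""
    adj = {v: set() for v in nodes}
    for u, v in up_edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def network_availability(nodes, links):
    """links: (u, v, failure_rate, repair_rate). Sums the probability of
    every up/down combination of links that keeps the network connected."""
    avail = [element_availability(lr, rr) for _, _, lr, rr in links]
    total = 0.0
    for state in itertools.product([0, 1], repeat=len(links)):
        p, up = 1.0, []
        for s, (u, v, _, _), a in zip(state, links, avail):
            p *= a if s else (1.0 - a)
            if s:
                up.append((u, v))
        if is_connected(nodes, up):
            total += p
    return total

# Two backbone nodes joined by two redundant links, each 90% available.
links = [("A", "B", 1.0, 9.0), ("A", "B", 1.0, 9.0)]
A_net = network_availability(["A", "B"], links)
```

For the two parallel 0.9-available links, the network is down only when both fail, so the availability factor is 1 - 0.1² = 0.99.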

  2. Identification of dynamic load for prosthetic structures.

    PubMed

    Zhang, Dequan; Han, Xu; Zhang, Zhongpu; Liu, Jie; Jiang, Chao; Yoda, Nobuhiro; Meng, Xianghua; Li, Qing

    2017-12-01

Dynamic load exists in numerous biomechanical systems, and its identification is a critical issue for characterizing dynamic behaviors and studying the biomechanical consequences of such systems. This study aims to identify dynamic load in dental prosthetic structures, namely, a 3-unit implant-supported fixed partial denture (I-FPD) and a teeth-supported fixed partial denture. The 3-dimensional finite element models were constructed from a specific patient's computerized tomography images. A forward algorithm and regularization technique were developed for identifying dynamic load. To verify the effectiveness of the proposed identification method, the I-FPD and teeth-supported fixed partial denture structures were investigated to determine the dynamic loads. For validating the results of inverse identification, an experimental force-measuring system was developed by using a 3-dimensional piezoelectric transducer to measure the dynamic load in the I-FPD structure in vivo. The computationally identified loads were presented with different noise levels to determine their influence on the identification accuracy. The errors between the measured load and its identified counterpart were calculated for evaluating the practical applicability of the proposed procedure in biomechanical engineering. This study is expected to serve a demonstrative role in identifying dynamic loading in biomedical systems, where a direct in vivo measurement may be rather demanding in some areas of interest clinically. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  4. Determining the Computer Literacy Levels of Vocational Teachers in Southern Nevada and Developing a Computer In-Service Program for Vocational Teachers.

    ERIC Educational Resources Information Center

    Pomeroy, James L.

    A study was conducted to achieve the following objectives: (1) to determine the computer skills level of the vocational teachers in Southern Nevada; (2) to design a computer literacy inservice program targeting the specific instructional needs of vocational teachers with deficient skills; (3) to develop a plan for evaluating the inservice training…

  5. MALDI-TOF mass spectrometry provides high accuracy in identification of Salmonella at species level but is limited to type or subtype Salmonella serovars.

    PubMed

    Kang, Lin; Li, Nan; Li, Ping; Zhou, Yang; Gao, Shan; Gao, Hongwei; Xin, Wenwen; Wang, Jinglin

    2017-04-01

Salmonella can cause foodborne illnesses in humans and many animals worldwide. The current diagnostic gold standard for detecting Salmonella infection is microbiological culture followed by serological confirmation tests. However, these methods are complicated and time-consuming. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis offers some advantages for rapid identification, for example, simple and fast sample preparation, fast and automated measurement, and robust and reliable identification to genus and species levels, possibly even to the strain level. In this study, we established a reference database for species identification using whole-cell MALDI-TOF MS; the database consisted of 12 main spectra obtained from Salmonella culture collection strains belonging to seven serotypes. Eighty-two clinical isolates of Salmonella were identified using the established database, with partial 16S rDNA gene sequencing and a serological method used as comparison. We found that MALDI-TOF mass spectrometry provided high accuracy in identification of Salmonella at the species level but was limited in typing or subtyping Salmonella serovars. We also tried, and failed, to find serovar-specific biomarkers. Our study demonstrated that (a) MALDI-TOF MS was suitable for identification of Salmonella at the species level with high accuracy; (b) the MALDI-TOF MS method presented in this study was not currently useful for serovar assignment of Salmonella, because of its low agreement with the serological method; and (c) the MALDI-TOF MS method presented in this study was not suitable for subtyping S. typhimurium because of its low discriminatory ability.

  6. Annotation: a computational solution for streamlining metabolomics analysis

    PubMed Central

    Domingo-Almenara, Xavier; Montenegro-Burke, J. Rafael; Benton, H. Paul; Siuzdak, Gary

    2017-01-01

Metabolite identification is still considered an imposing bottleneck in liquid chromatography mass spectrometry (LC/MS) untargeted metabolomics. The identification workflow usually begins with detecting relevant LC/MS peaks via peak-picking algorithms and retrieving putative identities based on accurate mass searching. However, accurate mass search alone provides poor evidence for metabolite identification. For this reason, computational annotation is used to reveal the underlying metabolites' monoisotopic masses, improving putative identification in addition to confirmation with tandem mass spectrometry. This review examines LC/MS data from a computational and analytical perspective, focusing on the occurrence of neutral losses and in-source fragments, to understand the challenges in computational annotation methodologies. Herein, we examine the state-of-the-art strategies for computational annotation including: (i) peak grouping or full scan (MS1) pseudo-spectra extraction, i.e., clustering all mass spectral signals stemming from each metabolite; (ii) annotation using ion adduction and mass distance among ion peaks; (iii) incorporation of biological knowledge such as biotransformations or pathways; (iv) tandem MS data; and (v) metabolite retention time calibration, usually achieved by prediction from molecular descriptors. Advantages and pitfalls of each of these strategies are discussed, as well as expected future trends in computational annotation. PMID:29039932
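Strategy (ii), annotation by ion adduction and mass distances, can be sketched by back-calculating a candidate neutral mass for each peak under each adduct hypothesis and grouping peaks whose candidates coincide within tolerance. The adduct shifts below are approximate monoisotopic values and the peak list is a made-up example:

```python
# Approximate positive-mode adduct mass shifts (Da).
ADDUCTS = {"[M+H]+": 1.00728, "[M+NH4]+": 18.03383, "[M+Na]+": 22.98922}

def annotate_adducts(peaks_mz, tol=0.005):
    """For each peak and adduct hypothesis, back-calculate a candidate
    neutral mass; peaks whose candidates agree within `tol` are grouped
    as adducts of the same metabolite."""
    candidates = sorted(
        (mz - shift, mz, name)
        for mz in peaks_mz
        for name, shift in ADDUCTS.items()
    )
    groups, current = [], [candidates[0]]
    for cand in candidates[1:]:
        if cand[0] - current[-1][0] <= tol:
            current.append(cand)
        else:
            if len(current) > 1:
                groups.append(current)
            current = [cand]
    if len(current) > 1:
        groups.append(current)
    return groups

# Glucose (monoisotopic mass ~180.0634): [M+H]+ and [M+Na]+ ions plus an
# unrelated peak at m/z 150.
groups = annotate_adducts([181.0707, 203.0526, 150.0000])
```

The two glucose ions collapse onto the same neutral-mass candidate near 180.063 Da, while the unrelated peak stays ungrouped; real annotation tools add isotope patterns, in-source fragments, and retention-time constraints on top of this idea.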

  7. [Automated identification, interpretation and classification of focal changes in the lungs on the images obtained at computed tomography for lung cancer screening].

    PubMed

    Barchuk, A A; Podolsky, M D; Tarakanov, S A; Kotsyuba, I Yu; Gaidukov, V S; Kuznetsov, V I; Merabishvili, V M; Barchuk, A S; Levchenko, E V; Filochkina, A V; Arseniev, A I

    2015-01-01

This review article analyzes the literature devoted to the description, interpretation and classification of focal (nodular) changes in the lungs detected by computed tomography of the chest cavity. Possible criteria are discussed for determining their most likely character: primary and metastatic tumor processes, inflammation, scarring, autoimmune changes, tuberculosis and others. Identification of the most characteristic, reliable and statistically significant indicators of the various pathological processes in the lungs, including the use of modern computer-aided detection and diagnosis systems, will optimize diagnostic measures and ensure processing of a large volume of medical data in a short time.

  8. Computational Psychiatry of ADHD: Neural Gain Impairments across Marrian Levels of Analysis

    PubMed Central

    Hauser, Tobias U.; Fiore, Vincenzo G.; Moutoussis, Michael; Dolan, Raymond J.

    2016-01-01

    Attention-deficit hyperactivity disorder (ADHD), one of the most common psychiatric disorders, is characterised by unstable response patterns across multiple cognitive domains. However, the neural mechanisms that explain these characteristic features remain unclear. Using a computational multilevel approach, we propose that ADHD is caused by impaired gain modulation in systems that generate this phenotypic increased behavioural variability. Using Marr's three levels of analysis as a heuristic framework, we focus on this variable behaviour, detail how it can be explained algorithmically, and how it might be implemented at a neural level through catecholamine influences on corticostriatal loops. This computational, multilevel, approach to ADHD provides a framework for bridging gaps between descriptions of neuronal activity and behaviour, and provides testable predictions about impaired mechanisms. PMID:26787097

  9. Computational Modeling and Treatment Identification in the Myelodysplastic Syndromes.

    PubMed

    Drusbosky, Leylah M; Cogle, Christopher R

    2017-10-01

    This review discusses the need for computational modeling in myelodysplastic syndromes (MDS) and early test results. As our evolving understanding of MDS reveals a molecularly complicated disease, the need for sophisticated computer analytics is required to keep track of the number and complex interplay among the molecular abnormalities. Computational modeling and digital drug simulations using whole exome sequencing data input have produced early results showing high accuracy in predicting treatment response to standard of care drugs. Furthermore, the computational MDS models serve as clinically relevant MDS cell lines for pre-clinical assays of investigational agents. MDS is an ideal disease for computational modeling and digital drug simulations. Current research is focused on establishing the prediction value of computational modeling. Future research will test the clinical advantage of computer-informed therapy in MDS.

  10. Crop identification and area estimation over large geographic areas using LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. LANDSAT MSS data was adequate to accurately identify wheat in Kansas; corn and soybean estimates in Indiana were less accurate. Computer-aided analysis techniques were effectively used to extract crop identification information from LANDSAT data. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels. Training statistics were successfully extended from one county to other counties having similar crops and soils if the training areas sampled the total variation of the area to be classified.

  11. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which cause great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self Organized Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, and Kriging was used to interpolate the predictions of the adjusted SOM onto finer grids of locations, so that a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations between 2000 and 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
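The SOM stage of such a hybrid can be written compactly: competitive learning pulls a grid of prototype vectors toward the data so that each input maps to a best-matching unit (BMU), clustering the temporal-spatial patterns. A minimal NumPy sketch on synthetic data follows (the NARX and Kriging stages are omitted):

```python
import numpy as np

def train_som(data, grid=(3, 3), iters=1000, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal self-organizing map: each step pulls the best-matching unit
    (BMU) and its grid neighbours toward a randomly chosen sample."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.normal(size=(gx * gy, data.shape[1]))
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    for step in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        lr = lr0 * np.exp(-step / iters)        # decaying learning rate
        sigma = sigma0 * np.exp(-step / iters)  # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma * sigma))
        weights += lr * h[:, None] * (x - weights)
    return weights

def bmu_index(weights, x):
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

# Two synthetic "pattern" clusters; the trained map separates them.
rng = np.random.default_rng(3)
data = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(200, 2)),
    rng.normal([5.0, 5.0], 0.3, size=(200, 2)),
])
weights = train_som(data)
```

In the hybrid scheme, each SOM unit would represent one recurring spatial pattern of groundwater levels, with the NARX network predicting the scaling to apply to the selected pattern each month.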

  12. Intelligent wear mode identification system for marine diesel engines based on multi-level belief rule base methodology

    NASA Astrophysics Data System (ADS)

    Yan, Xinping; Xu, Xiaojian; Sheng, Chenxing; Yuan, Chengqing; Li, Zhixiong

    2018-01-01

    Wear faults are among the chief causes of main-engine damage, significantly influencing the secure and economical operation of ships. It is difficult for engineers to utilize multi-source information to identify wear modes, so an intelligent wear mode identification model needs to be developed to assist engineers in diagnosing wear faults in diesel engines. For this purpose, a multi-level belief rule base (BBRB) system is proposed in this paper. The BBRB system consists of two-level belief rule bases, and the 2D and 3D characteristics of wear particles are used as antecedent attributes on each level. Quantitative and qualitative wear information with uncertainties can be processed simultaneously by the BBRB system. In order to enhance the efficiency of the BBRB, the silhouette value is adopted to determine referential points and the fuzzy c-means clustering algorithm is used to transform input wear information into belief degrees. In addition, the initial parameters of the BBRB system are constructed on the basis of expert-domain knowledge and then optimized by the genetic algorithm to ensure the robustness of the system. To verify the validity of the BBRB system, experimental data acquired from real-world diesel engines are analyzed. Five-fold cross-validation is conducted on the experimental data and the BBRB is compared with the other four models in the cross-validation. In addition, a verification dataset containing different wear particles is used to highlight the effectiveness of the BBRB system in wear mode identification. The verification results demonstrate that the proposed BBRB is effective and efficient for wear mode identification with better performance and stability than competing systems.

  13. Attitude identification for SCOLE using two infrared cameras

    NASA Technical Reports Server (NTRS)

    Shenhar, Joram

    1991-01-01

    An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
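Recovering a rigid-body pose from three tracked LED positions is a classic absolute-orientation problem. One standard solution, offered here as a generic sketch rather than the paper's actual algorithm, is the SVD-based Kabsch method:

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch/SVD absolute orientation: find rotation R and translation t
    with Q ≈ R @ P + t for paired 3-D point sets P, Q (rows are points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Three non-collinear reference LEDs and their observed positions after a
# known attitude change (30 deg about z) plus a translation.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
Q = P @ R_true.T + t_true

R_est, t_est = rigid_transform(P, Q)
```

Three non-collinear markers are the minimum needed for a unique six-degree-of-freedom attitude, which is consistent with the three-LED arrangement described above.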

  14. Multicenter accuracy and interobserver agreement of spot sign identification in acute intracerebral hemorrhage.

    PubMed

    Huynh, Thien J; Flaherty, Matthew L; Gladstone, David J; Broderick, Joseph P; Demchuk, Andrew M; Dowlatshahi, Dar; Meretoja, Atte; Davis, Stephen M; Mitchell, Peter J; Tomlinson, George A; Chenkin, Jordan; Chia, Tze L; Symons, Sean P; Aviv, Richard I

    2014-01-01

    Rapid, accurate, and reliable identification of the computed tomography angiography spot sign is required to identify patients with intracerebral hemorrhage for trials of acute hemostatic therapy. We sought to assess the accuracy and interobserver agreement for spot sign identification. A total of 131 neurology, emergency medicine, and neuroradiology staff and fellows underwent imaging certification for spot sign identification before enrolling patients in 3 trials targeting spot-positive intracerebral hemorrhage for hemostatic intervention (STOP-IT, SPOTLIGHT, STOP-AUST). Ten intracerebral hemorrhage cases (spot-positive/negative ratio, 1:1) were presented for evaluation of spot sign presence, number, and mimics. True spot positivity was determined by consensus of 2 experienced neuroradiologists. Diagnostic performance, agreement, and differences by training level were analyzed. Mean accuracy, sensitivity, and specificity for spot sign identification were 87%, 78%, and 96%, respectively. Overall sensitivity was lower than specificity (P<0.001) because of true spot signs incorrectly perceived as spot mimics. Interobserver agreement for spot sign presence was moderate (k=0.60). When true spots were correctly identified, 81% correctly identified the presence of single or multiple spots. Median time needed to evaluate the presence of a spot sign was 1.9 minutes (interquartile range, 1.2-3.1 minutes). Diagnostic performance, interobserver agreement, and time needed for spot sign evaluation were similar among staff physicians and fellows. Accuracy for spot identification is high with opportunity for improvement in spot interpretation sensitivity and interobserver agreement particularly through greater reliance on computed tomography angiography source data and awareness of limitations of multiplanar images. Further prospective study is needed.
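The agreement statistics reported here are standard: sensitivity and specificity are rates against the consensus read, and Cohen's kappa discounts observed agreement by the agreement expected from the raters' marginal rates. A sketch with hypothetical ratings (not the study's data):

```python
def sens_spec(truth, rated):
    """Sensitivity and specificity of binary ratings against consensus truth."""
    tp = sum(t == 1 and r == 1 for t, r in zip(truth, rated))
    tn = sum(t == 0 and r == 0 for t, r in zip(truth, rated))
    fn = sum(t == 1 and r == 0 for t, r in zip(truth, rated))
    fp = sum(t == 0 and r == 1 for t, r in zip(truth, rated))
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    p_exp = pa * pb + (1 - pa) * (1 - pb)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: 10 cases, one true spot sign missed by the reader.
consensus = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
reader    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
sens, spec = sens_spec(consensus, reader)
kappa = cohens_kappa(consensus, reader)
```

As in the study, a reader can show high specificity yet lower sensitivity when true spot signs are misread as mimics, and kappa summarizes the residual agreement beyond chance.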

  15. Assessing Pre-Service Teachers' Computer Phobia Levels in Terms of Gender and Experience, Turkish Sample

    ERIC Educational Resources Information Center

    Ursavas, Omer Faruk; Karal, Hasan

    2009-01-01

    In this study it is aimed to determine the level of pre-service teachers' computer phobia. Whether or not computer phobia meaningfully varies statistically according to gender and computer experience has been tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize/Turkey. Data in the study were…

  16. Computational Approaches to Chemical Hazard Assessment

    PubMed Central

    Luechtefeld, Thomas; Hartung, Thomas

    2018-01-01

    Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration Evaluation Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769

  17. Computational modelling of cellular level metabolism

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Heino, J.; Somersalo, E.

    2008-07-01

The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations and possibly the rates of change of some of the concentrations based on data which are often scarce, noisy and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges which have to be met when analyzing the steady and stationary states of a multicompartment model for cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented.
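For the linear-Gaussian case, the Bayesian compensation for scarce data described above has a closed form: with measurements y = A v + e, e ~ N(0, σ²I), and prior v ~ N(v₀, τ²I), the posterior is Gaussian with precision AᵀA/σ² + I/τ². A toy underdetermined flux example follows (an illustration, not the article's multicompartment model):

```python
import numpy as np

def gaussian_posterior(A, y, sigma2, mu0, tau2):
    """Posterior mean and covariance for y = A v + e with a Gaussian prior:
    the prior supplies the information that scarce data cannot."""
    n = A.shape[1]
    precision = A.T @ A / sigma2 + np.eye(n) / tau2
    cov = np.linalg.inv(precision)
    mean = cov @ (A.T @ y / sigma2 + mu0 / tau2)
    return mean, cov

# Toy steady-state flux problem: one mass balance v1 - v2 - v3 = 0 and one
# measured uptake v1 = 1.0 -- two equations, three unknown fluxes.
A = np.array([[1.0, -1.0, -1.0],
              [1.0,  0.0,  0.0]])
y = np.array([0.0, 1.0])
mean, cov = gaussian_posterior(A, y, sigma2=1e-4, mu0=np.zeros(3), tau2=1.0)
```

The data pin down the uptake and the balance; the zero-mean prior resolves the remaining degeneracy by splitting the outflow symmetrically, and the posterior variance quantifies what the data did not determine.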

  18. Computer vision applied to herbarium specimens of German trees: testing the future utility of the millions of herbarium specimen images for automated identification.

    PubMed

    Unger, Jakob; Merhof, Dorit; Renner, Susanne

    2016-11-16

Global Plants, a collaboration between JSTOR and some 300 herbaria, now contains about 2.48 million high-resolution images of plant specimens, a number that continues to grow, and collections that are digitizing their specimens at high resolution are allocating considerable resources to the maintenance of computer hardware (e.g., servers) and to acquiring digital storage space. We here apply machine learning, specifically the training of a support vector machine, to classify specimen images into categories, ideally at the species level, using the 26 most common tree species in Germany as a test case. We designed an analysis pipeline and classification system consisting of segmentation, normalization, feature extraction, and classification steps and evaluated the system on two test sets, one with 26 species, the other with 17, in each case using 10 images per species of plants collected between 1820 and 1995, which simulates the empirical situation that most named species are represented in herbaria and databases, such as JSTOR, by few specimens. We achieved 73.21% accuracy of species assignments in the larger test set, and 84.88% in the smaller test set. The results of this first application of a computer vision algorithm trained on images of herbarium specimens show that, despite the problem of overlapping leaves, leaf-architectural features can be used to categorize specimens to species with good accuracy. Computer vision is poised to play a significant role in future rapid identification, at least for frequently collected genera or species in the European flora.

  19. Attitudes of Entry-Level University Students towards Computers: A Comparative Study

    ERIC Educational Resources Information Center

    Smith, E.; Oosthuizen, H. J.

    2006-01-01

    In this paper, we present the findings of a study of attitude changes of entry-level University students towards computers conducted at two South African Universities. Analysis comprised "t" tests to discover differences between the perceptions/attitudes of male and female respondents, English/Afrikaans speakers and those speaking the…

  20. Data identification for improving gene network inference using computational algebra.

    PubMed

    Dimitrova, Elena; Stigler, Brandilyn

    2014-11-01

Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs of conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.
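The known-monomials case reduces to linear algebra: the data uniquely identify the coefficients exactly when the matrix of monomials evaluated at the data points has full column rank over the finite field. A sketch over F_p (an illustration of that criterion, not the authors' construction):

```python
from math import prod

def rank_mod_p(M, p):
    """Row-reduce an integer matrix over the field F_p and return its rank."""
    M = [[x % p for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        inv = pow(M[r][c], p - 2, p)   # inverse via Fermat's little theorem
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

def identifies(points, monomials, p):
    """True iff the data points uniquely determine the coefficients of a
    polynomial model with the given monomial support (exponent tuples)."""
    E = [[prod(x**e for x, e in zip(pt, mono)) % p for mono in monomials]
         for pt in points]
    return rank_mod_p(E, p) == len(monomials)

# Model c0 + c1*x + c2*x*y over F_5 in variables (x, y).
support = [(0, 0), (1, 0), (1, 1)]
good_data = [(0, 0), (1, 0), (1, 1)]   # identifies all three coefficients
bad_data = [(0, 0), (1, 0), (2, 0)]    # y = 0 everywhere: c2 unidentifiable
```

The second data set fails because every point has y = 0, so the xy column of the evaluation matrix vanishes; this is the sense in which badly chosen experiments waste resources without pinning down the model.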

  1. Dynamic Identification for Control of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. R.

    1985-01-01

    This is a compilation of reports by one author on a single subject. It consists of the following five journal articles: (1) A Parametric Study of the Ibrahim Time Domain Modal Identification Algorithm; (2) Large Modal Survey Testing Using the Ibrahim Time Domain Identification Technique; (3) Computation of Normal Modes from Identified Complex Modes; (4) Dynamic Modeling of Structures from Measured Complex Modes; and (5) Time Domain Quasi-Linear Identification of Nonlinear Dynamic Systems.

  2. 21 CFR 868.1730 - Oxygen uptake computer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...

  3. 21 CFR 868.1730 - Oxygen uptake computer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...

  4. 21 CFR 868.1730 - Oxygen uptake computer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...

  5. 21 CFR 868.1730 - Oxygen uptake computer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...

  6. 21 CFR 868.1730 - Oxygen uptake computer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...

  7. A distributed computational search strategy for the identification of diagnostics targets: application to finding aptamer targets for methicillin-resistant staphylococci.

    PubMed

    Flanagan, Keith; Cockell, Simon; Harwood, Colin; Hallinan, Jennifer; Nakjang, Sirintra; Lawry, Beth; Wipat, Anil

    2014-06-30

    The rapid and cost-effective identification of bacterial species is crucial, especially for clinical diagnosis and treatment. Peptide aptamers have been shown to be valuable for use as a component of novel, direct detection methods. These small peptides have a number of advantages over antibodies, including greater specificity and longer shelf life. These properties facilitate their use as the detector components of biosensor devices. However, the identification of suitable aptamer targets for particular groups of organisms is challenging. We present a semi-automated processing pipeline for the identification of candidate aptamer targets from whole bacterial genome sequences. The pipeline can be configured to search for protein sequence fragments that uniquely identify a set of strains of interest. The system is also capable of identifying additional organisms that may be of interest due to their possession of protein fragments in common with the initial set. Through the use of Cloud computing technology and distributed databases, our system is capable of scaling with the rapidly growing genome repositories, and consequently of keeping the resulting data sets up-to-date. The system described is also more generically applicable to the discovery of specific targets for other diagnostic approaches such as DNA probes, PCR primers and antibodies.
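    The core search idea, finding peptide fragments present in every target strain but absent from all background organisms, can be sketched with k-mer set operations. The sequences below are invented, and the real pipeline runs this kind of comparison over whole proteomes in a distributed cloud database rather than in-memory sets.

    ```python
    def candidate_targets(target_proteomes, background_proteomes, k=8):
        """Return length-k peptide fragments shared by all target strains
        and absent from every background strain."""
        def kmers(seqs):
            return {s[i:i+k] for s in seqs for i in range(len(s) - k + 1)}
        shared = set.intersection(*(kmers(p) for p in target_proteomes))
        off_target = set.union(*(kmers(p) for p in background_proteomes))
        return shared - off_target

    # Hypothetical toy proteomes: two target strains sharing a 16-residue
    # prefix, and one unrelated background organism.
    mrsa = [["MKTAYIAKQRQISFVKSHFSRQ"], ["MKTAYIAKQRQISFVKAAAAAA"]]
    other = [["MSSSSSSSSSSSSSSSSSSSSS"]]
    hits = candidate_targets(mrsa, other, k=8)
    print(sorted(hits))
    ```

    Every 8-mer inside the shared prefix survives the intersection and the background subtraction; at genome scale the same logic must be sharded across machines, which is what motivates the Cloud architecture described in the abstract.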

  8. A distributed computational search strategy for the identification of diagnostics targets: Application to finding aptamer targets for methicillin-resistant staphylococci.

    PubMed

    Flanagan, Keith; Cockell, Simon; Harwood, Colin; Hallinan, Jennifer; Nakjang, Sirintra; Lawry, Beth; Wipat, Anil

    2014-06-01

    The rapid and cost-effective identification of bacterial species is crucial, especially for clinical diagnosis and treatment. Peptide aptamers have been shown to be valuable for use as a component of novel, direct detection methods. These small peptides have a number of advantages over antibodies, including greater specificity and longer shelf life. These properties facilitate their use as the detector components of biosensor devices. However, the identification of suitable aptamer targets for particular groups of organisms is challenging. We present a semi-automated processing pipeline for the identification of candidate aptamer targets from whole bacterial genome sequences. The pipeline can be configured to search for protein sequence fragments that uniquely identify a set of strains of interest. The system is also capable of identifying additional organisms that may be of interest due to their possession of protein fragments in common with the initial set. Through the use of Cloud computing technology and distributed databases, our system is capable of scaling with the rapidly growing genome repositories, and consequently of keeping the resulting data sets up-to-date. The system described is also more generically applicable to the discovery of specific targets for other diagnostic approaches such as DNA probes, PCR primers and antibodies.

  9. Decentralized system identification using stochastic subspace identification on wireless smart sensor networks

    NASA Astrophysics Data System (ADS)

    Sim, Sung-Han; Spencer, Billie F., Jr.; Park, Jongwoong; Jung, Hyungjo

    2012-04-01

    Wireless Smart Sensor Networks (WSSNs) facilitate a new paradigm for structural identification and monitoring of civil infrastructure. Conventional monitoring systems based on wired sensors and centralized data acquisition and processing are challenging and costly to deploy because of cabling, expensive equipment, and maintenance costs. WSSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks, in which centralized data acquisition and processing is common practice, WSSNs require decentralized computing algorithms to reduce data transmission, owing to the limitations of wireless communication. Thus, several system identification methods have been implemented to process sensor data and extract essential information, including the Natural Excitation Technique with the Eigensystem Realization Algorithm, Frequency Domain Decomposition (FDD), and the Random Decrement Technique (RDT); however, Stochastic Subspace Identification (SSI) has not been fully utilized in WSSNs, although SSI has strong potential to enhance system identification. This study presents decentralized system identification using SSI in WSSNs. The approach is implemented on MEMSIC's Imote2 sensor platform and experimentally verified using a 5-story shear building model.
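    Covariance-driven SSI, one common variant of the method, can be sketched as follows. To keep the example deterministic, the output covariances are built analytically from a known single-mode system (R_i = C A^(i-1) G) instead of being estimated from sensor data, so the identified frequency is exact; the system parameters are illustrative, and this is not the authors' Imote2 implementation.

    ```python
    import numpy as np

    # True single-mode system: 1.0 Hz natural frequency, 2% damping, dt = 0.05 s.
    dt, f_true, zeta = 0.05, 1.0, 0.02
    w = 2 * np.pi * f_true
    lam = np.exp((-zeta * w + 1j * w * np.sqrt(1 - zeta**2)) * dt)
    A_true = np.array([[lam.real, -lam.imag], [lam.imag, lam.real]])
    C = np.array([[1.0, 0.0]])
    G = np.array([[0.7], [0.3]])   # arbitrary next-state/output covariance

    # Output covariances R_i = C A^{i-1} G (in practice estimated from data).
    k = 4
    R = [C @ np.linalg.matrix_power(A_true, i - 1) @ G
         for i in range(1, 2 * k + 1)]

    # Block-Toeplitz matrices: T1 collects lags k..2k-1, T2 is shifted by one.
    T1 = np.block([[R[k + i - j - 1] for j in range(k)] for i in range(k)])
    T2 = np.block([[R[k + i - j] for j in range(k)] for i in range(k)])

    # SVD-based realization: T1 = O @ Gamma, T2 = O @ A @ Gamma.
    U, s, Vt = np.linalg.svd(T1)
    n = 2                                  # model order (one mode)
    sqrt_s = np.diag(np.sqrt(s[:n]))
    O = U[:, :n] @ sqrt_s                  # observability matrix
    Gamma = sqrt_s @ Vt[:n, :]             # reversed controllability matrix
    A_id = np.linalg.pinv(O) @ T2 @ np.linalg.pinv(Gamma)

    eig = np.linalg.eigvals(A_id)
    f_id = np.abs(np.log(eig[0])) / (2 * np.pi * dt)   # natural frequency, Hz
    print(f_id)
    ```

    In a decentralized WSSN setting, each sensor cluster would estimate the covariances R_i locally from measured responses, which is the data-reduction step that makes SSI attractive for wireless deployment.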

  10. Contribution of the computed tomography of the anatomical aspects of the sphenoid sinuses to forensic identification.

    PubMed

    Auffret, Mathieu; Garetier, Marc; Diallo, Idris; Aho, Serge; Ben Salem, Douraied

    2016-12-01

    Body identification is the cornerstone of forensic investigation. It can be performed using radiographic techniques, if antemortem images are available. This study was designed to assess the value of visual comparison of the computed tomography (CT) anatomical aspects of the sphenoid sinuses, in forensic individual identification, especially if antemortem dental records, fingerprints or DNA samples are not available. This retrospective work took place in a French university hospital. The supervisor of this study randomly selected from the picture archiving and communication system (PACS), 58 patients who underwent one (16 patients) or two (42 patients) head CT in various neurological contexts. To avoid bias, those studies were prepared (anonymized, and all the head structures but the sphenoid sinuses were excluded), and used to constitute two working lists of 50 (42+8) CT studies of the sphenoid sinuses. An anatomical classification system of the sphenoid sinuses anatomical variations was created based on the anatomical and surgical literature. In these two working lists, three blinded readers had to identify, using the anatomical system and subjective visual comparison, 42 pairs of matched studies, and 16 unmatched studies. Readers were blinded from the exact numbers of matching studies. Each reader correctly identified the 42 pairs of CT with a concordance of 100% [97.5% confidence interval: 91-100%], and the 16 unmatched CT with a concordance of 100% [97.5% confidence interval: 79-100%]. Overall accuracy was 100%. Our study shows that establishing the anatomical concordance of the sphenoid sinuses by visual comparison could be used in personal identification. This easy method, based on a frequently and increasingly prescribed exam, still needs to be assessed on a postmortem cohort. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  11. Computational identification of mutually exclusive transcriptional drivers dysregulating metastatic microRNAs in prostate cancer

    PubMed Central

    Xue, Mengzhu; Liu, Haiyue; Zhang, Liwen; Chang, Hongyuan; Liu, Yuwei; Du, Shaowei; Yang, Yingqun; Wang, Peng

    2017-01-01

    Androgen-ablation therapies, which are the standard treatment for metastatic prostate cancer, invariably lead to acquired resistance. Hence, a systematic identification of additional drivers may provide useful insights into the development of effective therapies. Numerous microRNAs that are critical for metastasis are dysregulated in metastatic prostate cancer, but the underlying molecular mechanism is poorly understood. We perform an integrative analysis of transcription factor (TF) and microRNA expression profiles and computationally identify three master TFs, AR, HOXC6 and NKX2-2, which induce the aberrant metastatic microRNA expression in a mutually exclusive fashion. Experimental validations confirm that the three TFs co-dysregulate a large number of metastasis-associated microRNAs. Moreover, their overexpression substantially enhances cell motility and is consistently associated with a poor clinical outcome. Finally, the mutually exclusive overexpression between AR, HOXC6 and NKX2-2 is preserved across various tissues and cancers, suggesting that mutual exclusivity may represent an intrinsic characteristic of driver TFs during tumorigenesis. PMID:28397780

  12. A computational method for the identification of new candidate carcinogenic and non-carcinogenic chemicals.

    PubMed

    Chen, Lei; Chu, Chen; Lu, Jing; Kong, Xiangyin; Huang, Tao; Cai, Yu-Dong

    2015-09-01

    Cancer is one of the leading causes of human death. Based on current knowledge, one of the causes of cancer is exposure to toxic chemical compounds, including radioactive compounds, dioxin, and arsenic. The identification of new carcinogenic chemicals may warn us of potential danger and help to identify new ways to prevent cancer. In this study, a computational method was proposed to identify potential carcinogenic chemicals, as well as non-carcinogenic chemicals. According to the current validated carcinogenic and non-carcinogenic chemicals from the CPDB (Carcinogenic Potency Database), the candidate chemicals were searched in a weighted chemical network constructed according to chemical-chemical interactions. Then, the obtained candidate chemicals were further selected by a randomization test and information on chemical interactions and structures. The analyses identified several candidate carcinogenic chemicals, while those candidates identified as non-carcinogenic were supported by a literature search. In addition, several candidate carcinogenic/non-carcinogenic chemicals exhibit structural dissimilarity with validated carcinogenic/non-carcinogenic chemicals.

  13. Basic concepts and development of an all-purpose computer interface for ROC/FROC observer study.

    PubMed

    Shiraishi, Junji; Fukuoka, Daisuke; Hara, Takeshi; Abe, Hiroyuki

    2013-01-01

    In this study, we initially investigated various aspects of requirements for a computer interface employed in receiver operating characteristic (ROC) and free-response ROC (FROC) observer studies which involve digital images and ratings obtained by observers (radiologists). Secondly, by taking into account these aspects, an all-purpose computer interface utilized for these observer performance studies was developed. Basically, the observer studies can be classified into three paradigms, such as one rating for one case without an identification of a signal location, one rating for one case with an identification of a signal location, and multiple ratings for one case with identification of signal locations. For these paradigms, display modes on the computer interface can be used for single/multiple views of a static image, continuous viewing with cascade images (i.e., CT, MRI), and dynamic viewing of movies (i.e., DSA, ultrasound). Various functions on these display modes, which include windowing (contrast/level), magnifications, and annotations, are needed to be selected by an experimenter corresponding to the purpose of the research. In addition, the rules of judgment for distinguishing between true positives and false positives are an important factor for estimating diagnostic accuracy in an observer study. We developed a computer interface which runs on a Windows operating system by taking into account all aspects required for various observer studies. This computer interface requires experimenters to have sufficient knowledge about ROC/FROC observer studies, but allows its use for any purpose of the observer studies. This computer interface will be distributed publicly in the near future.

  14. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
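    The Grid Convergence Index evaluation mentioned above follows a standard three-mesh Richardson-extrapolation procedure: estimate the observed order of accuracy from three systematically refined solutions, then bound the discretization uncertainty on the fine mesh. A generic textbook-form sketch (the solution values are synthetic, not the study's data):

    ```python
    import math

    def gci(f_fine, f_med, f_coarse, r, Fs=1.25):
        """Observed order of accuracy p and Grid Convergence Index (%) for
        three solutions on meshes refined by a constant ratio r, with
        safety factor Fs."""
        # Richardson extrapolation for the observed order of accuracy.
        p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
        e_fine = abs((f_fine - f_med) / f_fine)   # relative error, fine pair
        return p, 100 * Fs * e_fine / (r**p - 1)

    # Synthetic second-order behaviour: f(h) = 10 + 0.5*h**2 at h = 1, 2, 4.
    p, g = gci(10.5, 12.0, 18.0, r=2.0)
    print(p, g)   # recovers p = 2 and a ~6% GCI on the fine mesh
    ```

    The abstract's caveat applies directly here: the procedure assumes the quantity of interest is evaluated at a fixed location, which is why it breaks down for moving global extrema such as peak fuel surface temperature.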

  15. [Transfer characteristics and source identification of soil heavy metals in the water-level-fluctuating zone along the Xiangxi River, Three Gorges Reservoir area].

    PubMed

    Xu, Tao; Wang, Fei; Guo, Qiang; Nie, Xiao-Qian; Huang, Ying-Ping; Chen, Jun

    2014-04-01

    Transfer characteristics of heavy metals and their potential risk were studied by determining heavy metal concentrations in soils from the water-level-fluctuating zone (altitude: 145-175 m) and the bank (altitude: 175-185 m) along the Xiangxi River, Three Gorges Reservoir area. Factor analysis-multiple linear regression (FA-MLR) was employed for heavy metal source identification and source apportionment. The results show that, during the exposed season, soil heavy metal concentrations in the water-level-fluctuating zone and the bank varied: in the water-level-fluctuating zone, concentrations decreased in shallow soil but increased in deep soil, whereas at the bank they decreased in both shallow and deep soil over the same period. According to the geoaccumulation index, the pollution extent of the heavy metals followed the order Cd > Pb > Cu > Cr; Cd is the primary pollutant. FA and FA-MLR reveal that, in soils from the water-level-fluctuating zone, 75.60% of Pb originates from traffic, 62.03% of Cd from agriculture, and 64.71% of Cu and 75.36% of Cr from natural rock. In soils from the bank, 82.26% of Pb originates from traffic, 68.63% of Cd from agriculture, and 65.72% of Cu and 69.33% of Cr from natural rock. In conclusion, FA-MLR can successfully identify the sources of heavy metals and compute their source apportionment, while also revealing their transfer characteristics. This information can serve as a reference for heavy metal pollution control.
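    The MLR half of FA-MLR apportionment can be sketched as follows: factor scores, which in practice come from the preceding factor-analysis step, are regressed against the standardized total concentration, and the regression coefficients are converted to percentage source contributions. All data below are simulated, with source shares chosen in advance so the recovery can be checked.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    # Factor scores for two hypothetical sources (e.g. traffic, agriculture);
    # in FA-MLR these come from factor analysis of the full metal data set.
    scores = rng.normal(size=(n, 2))
    # Standardized total concentration driven 75% / 25% by the two sources.
    z_total = 0.75 * scores[:, 0] + 0.25 * scores[:, 1]

    # Multiple linear regression of standardized totals on the factor scores.
    X = np.column_stack([scores, np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, z_total, rcond=None)
    shares = 100 * beta[:2] / beta[:2].sum()
    print(shares)   # recovers the ~[75, 25] percent apportionment
    ```

    With real data the regression coefficients carry noise and the shares are interpreted as mean contributions rather than exact splits; the 75.60%/62.03% figures in the abstract are of exactly this kind.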

  16. 21 CFR 870.1425 - Programmable diagnostic computer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...

  17. 21 CFR 870.1425 - Programmable diagnostic computer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...

  18. 21 CFR 870.1425 - Programmable diagnostic computer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...

  19. 21 CFR 870.1425 - Programmable diagnostic computer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...

  20. 21 CFR 870.1425 - Programmable diagnostic computer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...

  1. High-level waste storage tank farms/242-A evaporator standards/requirements identification document (S/RID), Vol. 7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-04-01

    This Requirements Identification Document (RID) describes an Occupational Health and Safety Program as defined through the Relevant DOE Orders, regulations, industry codes/standards, industry guidance documents and, as appropriate, good industry practice. The definition of an Occupational Health and Safety Program as specified by this document is intended to address Defense Nuclear Facilities Safety Board Recommendations 90-2 and 91-1, which call for the strengthening of DOE complex activities through the identification and application of relevant standards which supplement or exceed requirements mandated by DOE Orders. This RID applies to the activities, personnel, structures, systems, components, and programs involved in maintaining the facility and executing the mission of the High-Level Waste Storage Tank Farms.

  2. Mirror neurons and imitation: a computationally guided review.

    PubMed

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael

    2006-04-01

    Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.

  3. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 7. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burt, D.L.

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 7) presents the standards and requirements for the following sections: Occupational Safety and Health, and Environmental Protection.

  4. Hydration level is an internal variable for computing motivation to obtain water rewards in monkeys.

    PubMed

    Minamimoto, Takafumi; Yamada, Hiroshi; Hori, Yukiko; Suhara, Tetsuya

    2012-05-01

    In the process of motivation to engage in a behavior, valuation of the expected outcome comprises not only external variables (i.e., incentives) but also internal variables (i.e., drive). However, the exact neural mechanism that integrates these variables to compute motivational value remains unclear, and the physiological-need signal that serves as the primary internal variable for this computation remains to be identified. Concerning fluid rewards, the osmolality level, one of the physiological indices of thirst, may be an internal variable for valuation, since an increase in the osmolality level induces drinking behavior. Here, to examine the relationship between osmolality and the motivational value of a water reward, we repeatedly measured the blood osmolality level while two monkeys continuously performed an instrumental task until they spontaneously stopped. We found that, as the total amount of water earned increased, the osmolality level progressively decreased (i.e., the hydration level increased) in an individual-dependent manner. There was a significant negative correlation between the error rate of the task (the proportion of trials with low motivation) and the osmolality level. We also found that the increase in the error rate with reward accumulation can be well explained by a formula describing the changes in the osmolality level. These results provide a biologically supported computational formula for the motivational value of a water reward that depends on the hydration level, enabling us to identify the neural mechanism that integrates internal and external variables.
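    The qualitative shape of such a formula can be illustrated with a purely hypothetical toy model (all parameter values invented, not the paper's fitted model): osmolality falls linearly with the accumulated water reward, and the error rate rises as osmolality falls, reproducing the reported negative correlation.

    ```python
    def hydration_model(water_ml, osm0=300.0, k=0.02):
        """Hypothetical blood osmolality (mOsm/kg) after earning water_ml
        of reward; osmolality decreases as water accumulates."""
        return osm0 - k * water_ml

    def error_rate(osm, c=3.0, d=0.01):
        """Hypothetical error rate, negatively correlated with osmolality
        (clamped to [0, 1])."""
        return min(1.0, max(0.0, c - d * osm))

    for water in (0, 500, 1000):
        print(water, round(error_rate(hydration_model(water)), 3))
    ```

    As the earned water accumulates, the modeled error rate climbs, mirroring the abstract's finding that motivation declines as the hydration level rises.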

  5. PROGRAM FOR THE IDENTIFICATION AND REPLACEMENT OF ENDOCRINE DISRUPTING CHEMICALS

    EPA Science Inventory

    A computer software program is being developed to aid in the identification and replacement of endocrine disrupting chemicals (EDC). This program will comprise two distinct areas of research: identification of potential EDC and suggestions for replacing those potential EDC. ...

  6. High level language for measurement complex control based on the computer E-100I

    NASA Technical Reports Server (NTRS)

    Zubkov, B. V.

    1980-01-01

    A high level language was designed to control the process of conducting an experiment using the computer "Elektronika-100I". Program examples are given for controlling the measuring and actuating devices. The procedure for including these programs in the suggested high level language is described.

  7. Computational identification of binding energy hot spots in protein-RNA complexes using an ensemble approach.

    PubMed

    Pan, Yuliang; Wang, Zixiang; Zhan, Weihua; Deng, Lei

    2018-05-01

    Identifying RNA-binding residues, especially energetically favored hot spots, can provide valuable clues for understanding the mechanisms and functional importance of protein-RNA interactions. Yet, limited availability of experimentally recognized energy hot spots in protein-RNA crystal structures leads to difficulties in developing empirical identification approaches. Computational prediction of RNA-binding hot spot residues is still in its infant stage. Here, we describe a computational method, PrabHot (Prediction of protein-RNA binding hot spots), that can effectively detect hot spot residues on protein-RNA binding interfaces using an ensemble of conceptually different machine learning classifiers. Residue interaction network features and new solvent exposure characteristics are combined together and selected for classification with the Boruta algorithm. In particular, two new reference datasets (benchmark and independent) have been generated containing 107 hot spots from 47 known protein-RNA complex structures. In 10-fold cross-validation on the training dataset, PrabHot achieves promising performances with an AUC score of 0.86 and a sensitivity of 0.78, which are significantly better than those of the pioneering RNA-binding hot spot prediction method HotSPRing. We also demonstrate the capability of our proposed method on the independent test dataset and gain a competitive advantage as a result. The PrabHot webserver is freely available at http://denglab.org/PrabHot/. leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
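    The ensemble step, combining conceptually different base classifiers, can be sketched as simple soft voting over per-residue hot-spot probabilities. The base-model outputs below are hypothetical, and PrabHot's actual combination rule may differ from this plain average.

    ```python
    def soft_vote(prob_lists, threshold=0.5):
        """Average per-residue hot-spot probabilities from several base
        classifiers and threshold the mean to get binary predictions."""
        n = len(prob_lists[0])
        means = [sum(p[i] for p in prob_lists) / len(prob_lists)
                 for i in range(n)]
        return [int(m >= threshold) for m in means], means

    # Hypothetical probabilities for three interface residues, from three
    # conceptually different base models (e.g. SVM, random forest, boosting).
    svm_p   = [0.9, 0.2, 0.6]
    rf_p    = [0.8, 0.1, 0.4]
    boost_p = [0.7, 0.3, 0.5]
    labels, means = soft_vote([svm_p, rf_p, boost_p])
    print(labels)
    ```

    Averaging probabilities rather than majority-voting hard labels lets a confident classifier outvote two lukewarm ones, which is one reason heterogeneous ensembles of this kind tend to be more robust than any single member.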

  8. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%. Thus, a DC error of less than 10% will change HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
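    Once a per-frame binary exertion signal has been extracted from video, the exertion-time and duty-cycle computations are simple ratios. A minimal sketch (the signal and frame rate are invented; the HAL lookup that follows DC in the published method is omitted):

    ```python
    def duty_cycle(exertion, fps):
        """Exertion time (s) and duty cycle (%) from a per-frame binary
        exertion signal, e.g. as produced by a video-tracking algorithm."""
        exertion_time = sum(exertion) / fps
        dc = 100.0 * sum(exertion) / len(exertion)
        return exertion_time, dc

    # 10 s of video at 30 fps: exerting during frames 0-89 and 150-239.
    signal = [1] * 90 + [0] * 60 + [1] * 90 + [0] * 60
    t, dc = duty_cycle(signal, fps=30)
    print(t, dc)   # 6.0 s exertion, 60.0% duty cycle
    ```

    The sensitivity result in the abstract concerns errors in exactly this DC value: as long as the vision algorithm misclassifies fewer than ~5% of frames, the downstream HAL estimate is essentially unchanged.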

  9. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
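    The final arithmetic in such a comparison, multiplying per-locus likelihood ratios and converting odds to a posterior probability, can be sketched as follows. The LR values are hypothetical; computing each locus LR from a pedigree (with mutation models and dropout) is the hard part that software like Bonaparte automates.

    ```python
    def posterior_probability(locus_lrs, prior_odds=1.0):
        """Combine per-locus likelihood ratios (assuming independent loci)
        into a posterior probability of the identification hypothesis."""
        odds = prior_odds
        for lr in locus_lrs:
            odds *= lr          # posterior odds = prior odds x product of LRs
        return odds / (1.0 + odds)

    # Two hypothetical loci, each favoring the identification 10:1.
    p = posterior_probability([10.0, 10.0], prior_odds=1.0)
    print(p)   # 100/101, about 0.990
    ```

    Validation test cases of the kind the article describes work by checking this end-to-end number (and the per-locus LRs) against independently derived algebraic expressions.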

  10. Using monomer vibrational wavefunctions to compute numerically exact (12D) rovibrational levels of water dimer

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Gang; Carrington, Tucker

    2018-02-01

    We compute numerically exact rovibrational levels of water dimer, with 12 vibrational coordinates, on the accurate CCpol-8sf ab initio flexible monomer potential energy surface [C. Leforestier et al., J. Chem. Phys. 137, 014305 (2012)]. It does not have a sum-of-products or multimode form and therefore quadrature in some form must be used. To do the calculation, it is necessary to use an efficient basis set and to develop computational tools, for evaluating the matrix-vector products required to calculate the spectrum, that obviate the need to store the potential on a 12D quadrature grid. The basis functions we use are products of monomer vibrational wavefunctions and standard rigid-monomer basis functions (which involve products of three Wigner functions). Potential matrix-vector products are evaluated using the F matrix idea previously used to compute rovibrational levels of 5-atom and 6-atom molecules. When the coupling between inter- and intra-monomer coordinates is weak, this crude adiabatic type basis is efficient (only a few monomer vibrational wavefunctions are necessary), and the calculation of matrix elements is straightforward. It is much easier to use than an adiabatic basis. The product structure of the basis is compatible with the product structure of the kinetic energy operator and this facilitates computation of matrix-vector products. Compared with the results obtained using a [6 + 6]D adiabatic approach, we find good agreement for the inter-molecular levels and larger differences for the intra-molecular water bend levels.

  11. Using monomer vibrational wavefunctions to compute numerically exact (12D) rovibrational levels of water dimer.

    PubMed

    Wang, Xiao-Gang; Carrington, Tucker

    2018-02-21

    We compute numerically exact rovibrational levels of water dimer, with 12 vibrational coordinates, on the accurate CCpol-8sf ab initio flexible monomer potential energy surface [C. Leforestier et al., J. Chem. Phys. 137, 014305 (2012)]. It does not have a sum-of-products or multimode form and therefore quadrature in some form must be used. To do the calculation, it is necessary to use an efficient basis set and to develop computational tools, for evaluating the matrix-vector products required to calculate the spectrum, that obviate the need to store the potential on a 12D quadrature grid. The basis functions we use are products of monomer vibrational wavefunctions and standard rigid-monomer basis functions (which involve products of three Wigner functions). Potential matrix-vector products are evaluated using the F matrix idea previously used to compute rovibrational levels of 5-atom and 6-atom molecules. When the coupling between inter- and intra-monomer coordinates is weak, this crude adiabatic type basis is efficient (only a few monomer vibrational wavefunctions are necessary), and the calculation of matrix elements is straightforward. It is much easier to use than an adiabatic basis. The product structure of the basis is compatible with the product structure of the kinetic energy operator and this facilitates computation of matrix-vector products. Compared with the results obtained using a [6 + 6]D adiabatic approach, we find good agreement for the inter-molecular levels and larger differences for the intra-molecular water bend levels.

  12. 21 CFR 870.1110 - Blood pressure computer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...

  13. 21 CFR 870.1110 - Blood pressure computer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...

  14. 21 CFR 870.1110 - Blood pressure computer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...

  15. 21 CFR 870.1110 - Blood pressure computer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...

  16. GC/IR computer-aided identification of anaerobic bacteria

    NASA Astrophysics Data System (ADS)

    Ye, Hunian; Zhang, Feng S.; Yang, Hua; Li, Zhu; Ye, Song

    1993-09-01

    A new method was developed to identify anaerobic bacteria by using pattern recognition. The method is based on GC/IR data. The system is intended for use as a precise, rapid and reproducible aid in the identification of unknown isolates. Key Words: Anaerobic bacteria, Pattern recognition, Computer-aided identification, GC/IR. 1. INTRODUCTION A major problem in the field of anaerobic bacteriology is the difficulty in accurately, precisely and rapidly identifying unknown isolates. In the proceedings of the Third International Symposium on Rapid Methods and Automation in Microbiology, C. M. Moss said: "Chromatographic analysis is a new future for clinical microbiology". Twelve years have passed, and so far it seems that this is an idea whose time has not yet come, but it is close. Two major advances that have brought the technology forward, in terms of making it appropriate for use in the clinical laboratory, can also be cited. One is the development and implementation of fused silica capillary columns. In contrast to packed columns and those of greater width, these columns allow reproducible recovery of hydroxy fatty acids with the same carbon chain length. The second advance is the efficient data processing afforded by modern microcomputer systems. On the other hand, the practical steps for sample preparation are also an advance in the clinical laboratory. Chromatographic analysis means mainly analysis of fatty acids. The most common
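    A rough illustration of the pattern-recognition step (not the paper's actual algorithm): an unknown isolate's fatty-acid profile from the chromatogram can be compared against reference profiles by a similarity measure. The species names and peak areas below are invented.

```python
import math

# Hypothetical reference library: each species is represented by the
# relative areas of five GC fatty-acid peaks (values are made up).
REFERENCES = {
    "B. fragilis":    [0.30, 0.25, 0.20, 0.15, 0.10],
    "C. perfringens": [0.10, 0.15, 0.20, 0.25, 0.30],
}

def cosine(u, v):
    """Cosine similarity between two peak-area profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def identify(profile):
    """Assign the unknown to the most similar reference profile."""
    return max(REFERENCES, key=lambda name: cosine(profile, REFERENCES[name]))

unknown = [0.28, 0.27, 0.19, 0.16, 0.10]   # GC peak areas of an unknown isolate
print(identify(unknown))                   # B. fragilis
```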

  17. Computational identification of conserved microRNAs and their targets from expression sequence tags of blueberry (Vaccinium corybosum)

    PubMed Central

    Li, Xuyan; Hou, Yanming; Zhang, Li; Zhang, Wenhao; Quan, Chen; Cui, Yuhai; Bian, Shaomin

    2014-01-01

    MicroRNAs (miRNAs) are a class of endogenous non-coding RNAs, approximately 21 nt in length, which mediate the expression of target genes primarily at the post-transcriptional level. miRNAs play critical roles in almost all plant cellular and metabolic processes. Although numerous miRNAs have been identified in the plant kingdom, the miRNAs in blueberry, which is an economically important small fruit crop, still remain unknown. In this study, we report a computational identification of miRNAs and their targets in blueberry. By conducting an EST-based comparative genomics approach, 9 potential vco-miRNAs were discovered from 22,402 blueberry ESTs according to a series of filtering criteria, designated as vco-miR156-5p, vco-miR156-3p, vco-miR1436, vco-miR1522, vco-miR4495, vco-miR5120, vco-miR5658, vco-miR5783, and vco-miR5986. Based on sequence complementarity between a miRNA and its target transcript, 34 target ESTs from blueberry and 70 targets from other species were identified for the vco-miRNAs. The targets were found to be involved in transcription, RNA splicing and binding, DNA duplication, signal transduction, transport and trafficking, stress response, as well as synthesis and metabolic processes. These findings will greatly contribute to future research regarding the functions and regulatory mechanisms of blueberry miRNAs. PMID:25763692
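    The complementarity-based target screening can be sketched as follows. This is a generic illustration, not the study's exact filtering criteria (which also constrain where mismatches may fall); the sequences are hypothetical.

```python
# Illustrative miRNA-target complementarity check: count mismatches between
# the miRNA and a pre-aligned candidate site on the target, treating G:U
# wobble pairs as half-mismatches (a common convention in such screens).
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
WOBBLE = {("G", "U"), ("U", "G")}

def site_score(mirna, site):
    """miRNA read 5'->3' vs. target site read 3'->5', equal length."""
    score = 0.0
    for m, t in zip(mirna, site):
        if (m, t) in PAIRS:
            continue
        elif (m, t) in WOBBLE:
            score += 0.5
        else:
            score += 1.0
    return score

mirna = "UGACAGAAGAGAGUGAGCAC"   # hypothetical 20-nt miRNA
site  = "ACUGUCUUCUCUCACUCGUG"   # perfectly complementary site (read 3'->5')
print(site_score(mirna, site))  # 0.0
```

    A candidate transcript would pass the filter when its best site scores below some mismatch threshold (e.g. 3.0-4.0 in many EST-based studies).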

  18. Computational identification of conserved microRNAs and their targets from expression sequence tags of blueberry (Vaccinium corybosum).

    PubMed

    Li, Xuyan; Hou, Yanming; Zhang, Li; Zhang, Wenhao; Quan, Chen; Cui, Yuhai; Bian, Shaomin

    2014-01-01

    MicroRNAs (miRNAs) are a class of endogenous non-coding RNAs, approximately 21 nt in length, which mediate the expression of target genes primarily at the post-transcriptional level. miRNAs play critical roles in almost all plant cellular and metabolic processes. Although numerous miRNAs have been identified in the plant kingdom, the miRNAs in blueberry, which is an economically important small fruit crop, still remain unknown. In this study, we report a computational identification of miRNAs and their targets in blueberry. By conducting an EST-based comparative genomics approach, 9 potential vco-miRNAs were discovered from 22,402 blueberry ESTs according to a series of filtering criteria, designated as vco-miR156-5p, vco-miR156-3p, vco-miR1436, vco-miR1522, vco-miR4495, vco-miR5120, vco-miR5658, vco-miR5783, and vco-miR5986. Based on sequence complementarity between a miRNA and its target transcript, 34 target ESTs from blueberry and 70 targets from other species were identified for the vco-miRNAs. The targets were found to be involved in transcription, RNA splicing and binding, DNA duplication, signal transduction, transport and trafficking, stress response, as well as synthesis and metabolic processes. These findings will greatly contribute to future research regarding the functions and regulatory mechanisms of blueberry miRNAs.

  19. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  20. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  1. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  2. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  3. Identification of Aerodynamic Coefficients Using Computational Neural Networks

    DTIC Science & Technology

    1992-01-09

    AIAA 92-0172: Identification of Aerodynamic Coefficients Using Computational Neural Networks. Copyright © American Institute of Aeronautics and Astronautics, Inc. All rights reserved. … excellent matches of aerodynamic coefficient … state and control space. While the partitions span the space, these global models are, in general, not continuous. Precise, smooth aerodynamic models are …

  4. A Comparative Study to Evaluate the Effectiveness of Computer Assisted Instruction (CAI) versus Class Room Lecture (CRL) for Computer Science at ICS Level

    ERIC Educational Resources Information Center

    Kausar, Tayyaba; Choudhry, Bushra Naoreen; Gujjar, Aijaz Ahmed

    2008-01-01

    This study was aimed at evaluating the effectiveness of CAI vs. classroom lecture for computer science at ICS level. The objectives were to compare the learning effects of two groups, one taught by classroom lecture and one by computer assisted instruction, studying the same curriculum, and the effects of CAI and CRL in terms of cognitive development. Hypothesis of…

  5. A Comparative Study to Evaluate the Effectiveness of Computer Assisted Instruction (CAI) versus Class Room Lecture (CRL) for Computer Science at ICS Level

    ERIC Educational Resources Information Center

    Kausar, Tayyaba; Choudhry, Bushra Naoreen; Gujjar, Aijaz Ahmed

    2008-01-01

    This study was aimed at evaluating the effectiveness of CAI vs. classroom lecture for computer science at ICS level. The objectives were to compare the learning effects of two groups, one taught by classroom lecture and one by computer assisted instruction, studying the same curriculum, and the effects of CAI and CRL in terms of cognitive development. Hypothesis of…

  6. Three dimensional identification card and applications

    NASA Astrophysics Data System (ADS)

    Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao

    2016-10-01

    The three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced version of the present two-dimensional identification card in the future [1]. A three-dimensional identification card means that three-dimensional optical techniques are used: the personal image on the ID card is displayed in three dimensions, so we can see a three-dimensional personal face. The ID card also stores the three-dimensional face information in its embedded electronic chip, which might be recorded using two-channel cameras, and it can be displayed on a computer as three-dimensional images for personal identification. The three-dimensional ID card might be one interesting direction for updating the present two-dimensional card. It might be widely used at airport customs, at the entrances of hotels, schools and universities, as a passport for online banking, for registration of online games, etc.

  7. Distributing an executable job load file to compute nodes in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gooding, Thomas M.

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
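    The participation-reporting step described above amounts to a reduction over the compute-node tree: a node belongs on the class route if it participates itself or any descendant does. A minimal sketch of that reduction (not the actual implementation) might look like:

```python
def class_route(tree, root, participating):
    """Return the set of nodes on the class route: every node that either
    participates in the job or has a participating descendant.
    tree maps node -> list of children; participating is a set of nodes."""
    route = set()

    def visit(node):
        in_route = node in participating
        for child in tree.get(node, []):
            in_route |= visit(child)   # each child reports up to its parent
        if in_route:
            route.add(node)
        return in_route

    visit(root)
    return route

# A 7-node binary tree; only nodes 3 and 4 run the job.
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(sorted(class_route(tree, 0, {3, 4})))   # [0, 1, 3, 4]
```

    Nodes 0 and 1 join the route only as forwarding hops; the load file would then be broadcast down exactly this subtree.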

  8. APRICOT: an integrated computational pipeline for the sequence-based identification and characterization of RNA-binding proteins.

    PubMed

    Sharan, Malvika; Förstner, Konrad U; Eulalio, Ana; Vogel, Jörg

    2017-06-20

    RNA-binding proteins (RBPs) have been established as core components of several post-transcriptional gene regulation mechanisms. Experimental techniques such as cross-linking and co-immunoprecipitation have enabled the large-scale identification of RBPs, RNA-binding domains (RBDs) and their regulatory roles in eukaryotic species such as human and yeast. In contrast, our knowledge of the number and potential diversity of RBPs in bacteria is poorer, due to the technical challenges associated with the existing global screening approaches. We introduce APRICOT, a computational pipeline for the sequence-based identification and characterization of proteins using RBDs known from experimental studies. The pipeline identifies functional motifs in protein sequences using position-specific scoring matrices and hidden Markov models of the functional domains and statistically scores them based on a series of sequence-based features. Subsequently, APRICOT identifies putative RBPs and characterizes them by several biological properties. Here we demonstrate the application and adaptability of the pipeline on large-scale protein sets, including the bacterial proteome of Escherichia coli. APRICOT showed better performance on various datasets compared to other existing tools for the sequence-based prediction of RBPs, achieving an average sensitivity and specificity of 0.90 and 0.91, respectively. The command-line tool and its documentation are available at https://pypi.python.org/pypi/bio-apricot. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
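    The position-specific scoring that such pipelines rely on can be sketched generically. The toy below builds a log-odds matrix from alignment counts and scans a sequence for the best-scoring window; APRICOT itself wraps established domain-analysis tools, so this is only a conceptual illustration (a DNA alphabet is used for brevity).

```python
import math

ALPHABET = "ACGT"

def pssm_from_counts(counts, background=0.25, pseudo=1.0):
    """Build a log-odds PSSM from per-position residue counts.
    counts: list of dicts residue -> count, one per motif position."""
    pssm = []
    for col in counts:
        total = sum(col.get(a, 0) for a in ALPHABET) + pseudo * len(ALPHABET)
        pssm.append({a: math.log2((col.get(a, 0) + pseudo) / total / background)
                     for a in ALPHABET})
    return pssm

def best_window(pssm, seq):
    """Score every window of the sequence; return (score, start) of the best."""
    w = len(pssm)
    return max((sum(pssm[i][seq[j + i]] for i in range(w)), j)
               for j in range(len(seq) - w + 1))

counts = [{"A": 8}, {"C": 8}, {"G": 8}]   # a strong 'ACG' motif seen in 8 sequences
pssm = pssm_from_counts(counts)
score, start = best_window(pssm, "TTTACGTT")
print(start)   # 3: the window 'ACG'
```

    A motif "hit" is then declared when the best window score exceeds a calibrated threshold; HMM-based matching generalizes this to insertions and deletions.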

  9. Computational drug discovery

    PubMed Central

    Ou-Yang, Si-sheng; Lu, Jun-yan; Kong, Xiang-qian; Liang, Zhong-jie; Luo, Cheng; Jiang, Hualiang

    2012-01-01

    Computational drug discovery is an effective strategy for accelerating and economizing drug discovery and development process. Because of the dramatic increase in the availability of biological macromolecule and small molecule information, the applicability of computational drug discovery has been extended and broadly applied to nearly every stage in the drug discovery and development workflow, including target identification and validation, lead discovery and optimization and preclinical tests. Over the past decades, computational drug discovery methods such as molecular docking, pharmacophore modeling and mapping, de novo design, molecular similarity calculation and sequence-based virtual screening have been greatly improved. In this review, we present an overview of these important computational methods, platforms and successful applications in this field. PMID:22922346

  10. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
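    The model's key assumption, units carrying the residual between the observed signal and an internal estimate reconstructed from known sources, can be sketched with a toy dictionary model (the templates and activations below are made up):

```python
import numpy as np

# Toy version of the error-signal idea: the scene is a superposition of
# source templates from a dictionary; the internal estimate is the best
# reconstruction, and 'error neurons' would carry the residual.
rng = np.random.default_rng(3)
D = rng.random((50, 8))                 # spectrogram templates of 8 known sources
a_true = np.zeros(8)
a_true[[1, 5]] = [1.0, 0.6]             # two concurrently active sources
x = D @ a_true                          # observed scene (noise-free here)

a_hat, *_ = np.linalg.lstsq(D, x, rcond=None)   # internal estimate of activations
error = x - D @ a_hat                   # the residual the error units encode
print(np.allclose(error, 0.0, atol=1e-8))       # True: scene fully explained
```

    With noise or an out-of-dictionary source the residual stays nonzero, which is the signature the reanalysis of cortical recordings looked for.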

  11. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop-grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and anticipatory data migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on a large-scale scientific platform easier to build, though a little overhead is unavoidable. Also, the anticipatory data migration mechanism can improve the efficiency of the platform when it needs to process big-data-based scientific applications. PMID:24574931
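    The block-based Gauss-Jordan algorithm mentioned above can be sketched in a few lines. This is a generic single-machine illustration of the block elimination structure (no pivoting; the diagonal blocks are assumed to stay invertible), not the paper's grid middleware, where each block operation would be dispatched as a task.

```python
import numpy as np

def block_gauss_jordan_inverse(M, b):
    """Invert M by Gauss-Jordan elimination on b-by-b blocks of the
    augmented matrix [M | I]."""
    n = M.shape[0]
    assert n % b == 0
    A = np.hstack([M.astype(float), np.eye(n)])
    for k in range(0, n, b):
        piv = np.linalg.inv(A[k:k+b, k:k+b])
        A[k:k+b, :] = piv @ A[k:k+b, :]          # normalise the pivot row-block
        for i in range(0, n, b):
            if i != k:                           # eliminate block column k elsewhere
                A[i:i+b, :] -= A[i:i+b, k:k+b] @ A[k:k+b, :]
    return A[:, n:]                              # right half is now M^{-1}

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # well-conditioned test matrix
Minv = block_gauss_jordan_inverse(M, b=2)
print(np.allclose(Minv @ M, np.eye(6)))          # True
```

    Each pivot normalisation and block update is an independent dense kernel, which is what makes the algorithm a natural benchmark for task-based grid programming interfaces.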

  12. Level-2 perspectives computed quickly and spontaneously: Evidence from eight- to 9.5-year-old children.

    PubMed

    Elekes, Fruzsina; Varga, Máté; Király, Ildikó

    2017-11-01

    It has been widely assumed that computing how a scene looks from another perspective (level-2 perspective taking, PT) is an effortful process, as opposed to the automatic capacity of tracking visual access to objects (level-1 PT). Recently, adults have been found to compute both forms of visual perspectives in a quick but context-sensitive way, indicating that the two functions share more features than previously assumed. However, the developmental literature still shows the dissociation between automatic level-1 and effortful level-2 PT. In the current paper, we report an experiment showing that in a minimally social situation, participating in a number verification task with an adult confederate, eight- to 9.5-year-old children demonstrate similar online level-2 PT capacities as adults. Future studies need to address whether online PT shows selectivity in children as well and develop paradigms that are adequate to test preschoolers' online level-2 PT abilities. Statement of Contribution What is already known on this subject? Adults can access how objects appear to others (level-2 perspective) spontaneously and online Online level-1, but not level-2 perspective taking (PT) has been documented in school-aged children What the present study adds? Eight- to 9.5-year-olds performed a number verification task with a confederate who had the same task Children showed similar perspective interference as adults, indicating spontaneous level-2 PT Not only agent-object relations but also object appearances are computed online by eight- to 9.5-year-olds. © 2017 The British Psychological Society.

  13. Metabolite identification through multiple kernel learning on fragmentation trees.

    PubMed

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-06-15

    Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. © The Author 2014. Published by Oxford University Press.
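    A minimal form of combining multiple kernels, the uniform-weight baseline against which learned MKL weights are usually compared, can be sketched as follows. The kernels and data here are illustrative; the paper's kernels are computed on fragmentation trees.

```python
import numpy as np

def combine_kernels(kernels, weights=None):
    """Weighted sum of kernel (Gram) matrices, each trace-normalised first.
    A non-negative combination of valid kernels is again a valid kernel;
    MKL methods learn the weights, the uniform choice is the baseline."""
    kernels = [K / np.trace(K) for K in kernels]
    if weights is None:
        weights = np.full(len(kernels), 1.0 / len(kernels))
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 3))                  # 5 items, 3 features
K_lin = X @ X.T                                  # linear kernel
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-0.5 * d2)                        # Gaussian kernel
K = combine_kernels([K_lin, K_rbf])
print(np.allclose(K, K.T))                       # True: still symmetric PSD
```

    The combined Gram matrix can then be fed to any kernel method (e.g. an SVM predicting each fingerprint bit), which is the role MKL plays in the paper's pipeline.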

  14. Tenth Grade Students' Time Using a Computer as a Predictor of the Highest Level of Education Attempted

    ERIC Educational Resources Information Center

    Gaffey, Adam John

    2014-01-01

    As computing technology continued to grow in the lives of secondary students from 2002 to 2006, researchers failed to identify the influence using computers would have on the highest level of education students attempted. During the early part of the century schools moved towards increasing the usage of computers. Numerous stakeholders were unsure…

  15. Self-Assessment and Student Improvement in an Introductory Computer Course at the Community College Level

    ERIC Educational Resources Information Center

    Spicer-Sutton, Jama; Lampley, James; Good, Donald W.

    2014-01-01

    The purpose of this study was to determine a student's computer knowledge upon course entry and if there was a difference in college students' improvement scores as measured by the difference in pretest and post-test scores of new or novice users, moderate users, and expert users at the end of a college level introductory computing class. This…

  16. Self-Assessment and Student Improvement in an Introductory Computer Course at the Community College-Level

    ERIC Educational Resources Information Center

    Spicer-Sutton, Jama

    2013-01-01

    The purpose of this study was to determine a student's computer knowledge upon course entry and if there was a difference in college students' improvement scores as measured by the difference in pretest and posttest scores of new or novice users, moderate users, and expert users at the end of a college-level introductory computing class. This…

  17. Multi-level slug tests in highly permeable formations: 2. Hydraulic conductivity identification, method verification, and field applications

    USGS Publications Warehouse

    Zlotnik, V.A.; McGuire, V.L.

    1998-01-01

    Using the developed theory and modified Springer-Gelhar (SG) model, an identification method is proposed for estimating hydraulic conductivity from multi-level slug tests. The computerized algorithm calculates hydraulic conductivity from both monotonic and oscillatory well responses obtained using a double-packer system. Field verification of the method was performed at a specially designed fully penetrating well of 0.1-m diameter with a 10-m screen in a sand and gravel alluvial aquifer (MSEA site, Shelton, Nebraska). During well installation, disturbed core samples were collected every 0.6 m using a split-spoon sampler. Vertical profiles of hydraulic conductivity were produced on the basis of grain-size analysis of the disturbed core samples. These results closely correlate with the vertical profile of horizontal hydraulic conductivity obtained by interpreting multi-level slug test responses using the modified SG model. The identification method was applied to interpret the response from 474 slug tests in 156 locations at the MSEA site. More than 60% of responses were oscillatory. The method produced a good match to experimental data for both oscillatory and monotonic responses using an automated curve matching procedure. The proposed method allowed us to drastically increase the efficiency of each well used for aquifer characterization and to process massive arrays of field data. Recommendations generalizing this experience to massive application of the proposed method are developed.

  18. An improved wavelet-Galerkin method for dynamic response reconstruction and parameter identification of shear-type frames

    NASA Astrophysics Data System (ADS)

    Bu, Haifeng; Wang, Dansheng; Zhou, Pin; Zhu, Hongping

    2018-04-01

    An improved wavelet-Galerkin (IWG) method based on the Daubechies wavelet is proposed for reconstructing the dynamic responses of shear structures. The proposed method flexibly manages the wavelet resolution level according to the excitation, thereby avoiding the weakness of the wavelet-Galerkin multiresolution analysis (WGMA) method in terms of resolution and its requirement on the external excitation. IWG is implemented in several case studies involving single- and n-degree-of-freedom frame structures subjected to a determined discrete excitation. Results demonstrate that IWG performs better than WGMA in terms of accuracy and computational efficiency. Furthermore, a new method for parameter identification based on IWG and an optimization algorithm is also developed for shear frame structures, and a simultaneous identification of structural parameters and excitation is implemented. Numerical results demonstrate that the proposed identification method is effective for shear frame structures.

  19. Probing Majorana modes in the tunneling spectra of a resonant level.

    PubMed

    Korytár, R; Schmitteckert, P

    2013-11-27

    Unambiguous identification of Majorana physics presents an outstanding problem whose solution could render topological quantum computing feasible. We develop a numerical approach to treat finite-size superconducting chains supporting Majorana modes, which is based on iterative application of a two-site Bogoliubov transformation. We demonstrate the applicability of the method by studying a resonant level attached to the superconductor subject to external perturbations. In the topological phase, we show that the spectrum of a single resonant level allows us to distinguish peaks coming from Majorana physics from the Kondo resonance.
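    The physics being probed can be illustrated by direct diagonalization of a finite Kitaev chain's Bogoliubov-de Gennes (BdG) matrix, a simpler route than the paper's iterative two-site Bogoliubov transformation: in the topological phase the two lowest modes, the Majorana pair, sit exponentially close to zero energy, while the trivial phase keeps a finite gap. Parameters below are illustrative.

```python
import numpy as np

def kitaev_bdg(n, mu, t, delta):
    """BdG matrix of an n-site Kitaev chain in the (c_1..c_n, c+_1..c+_n)
    basis: chemical potential mu, hopping t, p-wave pairing delta."""
    h = -mu * np.eye(n)
    d = np.zeros((n, n))
    for j in range(n - 1):
        h[j, j + 1] = h[j + 1, j] = -t
        d[j, j + 1] = delta
        d[j + 1, j] = -delta          # pairing block is antisymmetric
    return 0.5 * np.block([[h, d], [-d, -h]])

# Topological phase (|mu| < 2t): near-zero Majorana pair.
E_topo = np.linalg.eigvalsh(kitaev_bdg(40, mu=0.5, t=1.0, delta=1.0))
# Trivial phase (|mu| > 2t): a finite gap remains.
E_triv = np.linalg.eigvalsh(kitaev_bdg(40, mu=3.0, t=1.0, delta=1.0))
gap_topo = np.abs(E_topo).min()
gap_triv = np.abs(E_triv).min()
print(gap_topo < 1e-6, gap_triv > 0.1)   # True True
```

    The near-zero splitting shrinks exponentially with chain length, which is why finite-size methods like the one in the paper are needed to resolve Majorana signatures reliably.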

  20. SU-E-P-10: Establishment of Local Diagnostic Reference Levels of Routine Exam in Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, M; Wang, Y; Weng, H

    Introduction: National diagnostic reference levels (NDRLs) can be used as reference doses for radiological examinations and provide a basis for optimizing patient dose. Local diagnostic reference levels (LDRLs), through periodic review and checking of doses, are a more efficient way to improve how examinations are performed. Therefore, the important first step is establishing a diagnostic reference level. Radiation dose limits for computed tomography have already been established in Taiwan; in addition, many studies report that CT scans contribute most of the radiation dose in medical imaging. This study therefore aims to clarify the international status of DRLs and to establish diagnostic reference levels for computed tomography in our hospital. Methods and Materials: Two clinical CT scanners (a Toshiba Aquilion and a Siemens Sensation) were used in this study. For CT examinations the basic recommended dosimetric quantity is the Computed Tomography Dose Index (CTDI). For each examination of each body part, we collected at least 10 patients. The routine examinations were carried out, all exposure parameters were collected, and the corresponding CTDIv and DLP values were determined. Results: The majority of patients (75%) were between 60 and 70 kg in body weight. There are 25 examinations in this study. Table 1 shows the LDRL of each CT routine examination. Conclusions: This study clarifies the international status of DRLs, establishes local computed tomography reference levels for our hospital, and provides a radiation reference as a basis for optimizing patient dose.
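    Given per-patient dose records, a local reference level is conventionally set at the 75th percentile of the observed distribution. A sketch with invented values (the abstract does not state which percentile the authors adopted):

```python
import numpy as np

# Hypothetical CTDIvol values (mGy) for 10 patients in one routine exam.
ctdi_vol = np.array([12.1, 14.8, 13.5, 11.9, 15.2, 12.7, 13.9, 14.1, 12.4, 13.0])
# DLP (mGy*cm) = CTDIvol * scan length; a 16 cm scan length assumed here.
dlp = ctdi_vol * 16.0

ldrl_ctdi = np.percentile(ctdi_vol, 75)   # conventional DRL quantile
ldrl_dlp = np.percentile(dlp, 75)
print(round(ldrl_ctdi, 2), round(ldrl_dlp, 1))
```

    Exams whose typical dose sits above the LDRL are then flagged for protocol review, which is the optimization loop the abstract describes.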

  1. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting.

    PubMed

    Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny

    2017-09-01

    Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.
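    The agreement metrics quoted above (recall, precision, F1) follow from matching automated identifications to manual labels. A simplified sketch of such a matching, with invented coordinates and a hypothetical distance tolerance:

```python
import math

def match_metrics(manual, auto, tol=2.0):
    """Greedily match automated cone locations to manual labels within
    tol pixels, then report recall, precision and F1. A simplified
    stand-in for the matching used in such validation studies."""
    unmatched = list(auto)
    tp = 0
    for m in manual:
        best = min(unmatched, key=lambda a: math.dist(a, m), default=None)
        if best is not None and math.dist(best, m) <= tol:
            tp += 1
            unmatched.remove(best)   # each detection matches at most once
    recall = tp / len(manual)
    precision = tp / len(auto)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

manual = [(10, 10), (20, 15), (30, 30), (42, 8)]      # hand-labelled cones
auto = [(10.5, 10.2), (19, 15), (30, 31), (55, 55)]   # algorithm output
recall, precision, f1 = match_metrics(manual, auto)
print(recall, precision, f1)   # 0.75 0.75 0.75: one miss, one false positive
```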

  2. IDEN2 - A program for visual identification of spectral lines and energy levels in optical spectra of atoms and simple molecules

    NASA Astrophysics Data System (ADS)

    Azarov, V. I.; Kramida, A.; Vokhmentsev, M. Ya.

    2018-04-01

    The article describes a Java program that can be used in a user-friendly way to visually identify spectral lines observed in complex spectra with theoretically predicted transitions between atomic or molecular energy levels. The program arranges various information about spectral lines and energy levels in such a way that line identification and determination of the positions of experimentally observed energy levels become much easier tasks that can be solved quickly and efficiently.

  3. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams

    NASA Astrophysics Data System (ADS)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Plaza, Enric

    2017-10-01

    Concept blending - a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination - is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.

  4. Computational approaches to schizophrenia: A perspective on negative symptoms.

    PubMed

    Deserno, Lorenz; Heinz, Andreas; Schlagenhauf, Florian

    2017-08-01

    Schizophrenia is a heterogeneous spectrum disorder often associated with detrimental negative symptoms. In recent years, computational approaches to psychiatry have attracted growing attention. Negative symptoms have shown some overlap with general cognitive impairments and were also linked to impaired motivational processing in brain circuits implementing reward prediction. In this review, we outline how computational approaches may help to provide a better understanding of negative symptoms in terms of the potentially underlying behavioural and biological mechanisms. First, we describe the idea that negative symptoms could arise from a failure to represent reward expectations to enable flexible behavioural adaptation. It has been proposed that these impairments arise from a failure to use prediction errors to update expectations. Important previous studies focused on processing of so-called model-free prediction errors where learning is determined by past rewards only. However, learning and decision-making arise from multiple cognitive mechanisms functioning simultaneously, and dissecting them via well-designed tasks in conjunction with computational modelling is a promising avenue. Second, we move on to a proof-of-concept example on how generative models of functional imaging data from a cognitive task enable the identification of subgroups of patients mapping on different levels of negative symptoms. Combining the latter approach with behavioural studies regarding learning and decision-making may allow the identification of key behavioural and biological parameters distinctive for different dimensions of negative symptoms versus a general cognitive impairment. We conclude with an outlook on how this computational framework could, at some point, enrich future clinical studies. Copyright © 2016. Published by Elsevier B.V.
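The "model-free prediction error" learning discussed above is commonly formalized as a Rescorla-Wagner/temporal-difference update, in which a value estimate is nudged toward each received reward by the prediction error. A minimal sketch; the learning rate and the always-rewarded action are illustrative assumptions, not taken from the review:

```python
def rw_update(q, reward, alpha=0.1):
    """Move value estimate q toward the observed reward, scaled by
    learning rate alpha (Rescorla-Wagner / model-free TD update)."""
    prediction_error = reward - q
    return q + alpha * prediction_error

# the value of an action that always pays out converges toward 1
q = 0.0
for _ in range(100):
    q = rw_update(q, reward=1.0)
```

Computational-psychiatry studies of the kind reviewed here typically fit alpha (and related parameters) per subject and relate them to symptom dimensions.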

  5. Program CALIB [for computing noise levels for helicopter version of S-191 filter wheel spectrometer]

    NASA Technical Reports Server (NTRS)

    Mendlowitz, M. A.

    1973-01-01

    The program CALIB, which was written to compute noise levels and average signal levels of aperture radiance for the helicopter version of the S-191 filter wheel spectrometer, is described. The program functions and input description are included, along with a compiled program listing.

  6. Effect of Computer Simulations at the Particulate and Macroscopic Levels on Students' Understanding of the Particulate Nature of Matter

    ERIC Educational Resources Information Center

    Tang, Hui; Abraham, Michael R.

    2016-01-01

    Computer-based simulations can help students visualize chemical representations and understand chemistry concepts, but simulations at different levels of representation may vary in effectiveness on student learning. This study investigated the influence of computer activities that simulate chemical reactions at different levels of representation…

  7. Sensor network based vehicle classification and license plate identification system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frigo, Janette Rose; Brennan, Sean M; Rosten, Edward J

    Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data- and compute-intensive sensors such as video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternate, more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units, and sensor data acquisition systems.

  8. Robust uncertainty evaluation for system identification on distributed wireless platforms

    NASA Astrophysics Data System (ADS)

    Crinière, Antoine; Döhler, Michael; Le Cam, Vincent; Mevel, Laurent

    2016-04-01

    Health monitoring of civil structures by system identification procedures from automatic control is now accepted as a valid approach. These methods provide frequencies and modeshapes from the structure over time. For continuous monitoring the excitation of a structure is usually ambient, thus unknown and assumed to be noise. Hence, all estimates from the vibration measurements are realizations of random variables with inherent uncertainty due to (unknown) process and measurement noise and finite data length. The underlying algorithms usually run in Matlab under the assumption of a large memory pool and considerable computational power. Even under these premises, computational and memory usage are heavy and not realistic for embedding in on-site sensor platforms such as the PEGASE platform. Moreover, the current push for distributed wireless systems calls for algorithmic adaptation to lower data exchanges and maximize local processing. Finally, a recent breakthrough in system identification allows us to process both frequency information and its related uncertainty together from one and only one data sequence, at the expense of a computational and memory explosion that requires even more careful attention than before. The current approach focuses on a system identification procedure called multi-setup subspace identification that processes both frequencies and their related variances from a set of interconnected wireless systems, with all computation running locally within the limited memory pool of each system before being merged on a host supervisor. Careful attention is given to data exchanges and I/O satisfying OGC standards, as well as to minimizing memory footprints and maximizing computational efficiency. These systems are built for autonomous operation in the field and could later be included in a wide distributed architecture such as the Cloud2SM project. The usefulness of these strategies is illustrated on
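As a much-simplified illustration of the kind of output-only frequency identification described above, the sketch below applies an Eigensystem Realization-style subspace step (SVD of a Hankel matrix of impulse-response samples, then a shifted-matrix solve) to recover a modal frequency. The single-mode system, sample rate, and model order are assumptions for illustration, not the PEGASE implementation:

```python
import numpy as np

# --- simulate one damped mode sampled at dt (illustrative system) ---
dt, f_true, zeta = 0.01, 5.0, 0.02
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta**2)                 # damped natural frequency
r, th = np.exp(-zeta * wn * dt), wd * dt
A = r * np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])  # discrete-time state matrix
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
h = np.array([(C @ np.linalg.matrix_power(A, k) @ B).item()
              for k in range(60)])             # impulse-response samples

# --- subspace step: SVD of Hankel matrix, then the shifted realization ---
m, n = 20, 2                                   # Hankel size, model order
H0 = np.array([[h[i + j]     for j in range(m)] for i in range(m)])
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
U, S, Vt = np.linalg.svd(H0)
U, S, Vt = U[:, :n], S[:n], Vt[:n, :]
Sih = np.diag(1.0 / np.sqrt(S))
A_id = Sih @ U.T @ H1 @ Vt.T @ Sih             # identified state matrix

poles = np.log(np.linalg.eigvals(A_id)) / dt   # continuous-time poles
f_id = np.abs(poles.imag).max() / (2 * np.pi)  # identified frequency (Hz)
```

Ambient-vibration monitoring replaces the impulse response with output covariances and adds variance estimation for each mode, which is precisely where the memory and computational pressure discussed above arises.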

  9. Multiple Level Crowding: Crowding at the Object Parts Level and at the Object Configural Level.

    PubMed

    Kimchi, Ruth; Pirkner, Yossef

    2015-01-01

    In crowding, identification of a peripheral target in the presence of nearby flankers is worse than when the target appears alone. Prevailing theories hold that crowding occurs because of integration or "pooling" of low-level features at a single, relatively early stage of visual processing. Recent studies suggest that crowding can occur also between high-level object representations. The most relevant findings come from studies with faces and may be specific to faces. We examined whether crowding can occur at the object configural level in addition to part-level crowding, using nonface objects. Target (a disconnected square or diamond made of four elements) identification was measured at varying eccentricities. The flankers were similar either to the target parts or to the target configuration. The results showed crowding in both cases: Flankers interfered with target identification such that identification accuracy decreased with an increase in eccentricity, and no interference was observed at the fovea. Crowding by object parts, however, was weaker and had smaller spatial extent than crowding by object configurations; we related this finding to the relationship between crowding and perceptual organization. These results provide strong evidence that crowding occurs not only between object parts but also between configural representations of objects. © The Author(s) 2015.

  10. Two-Level Weld-Material Homogenization for Efficient Computational Analysis of Welded Structure Blast-Survivability

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Hariharan, A.; Pandurangan, B.

    2012-06-01

    The introduction of newer joining technologies such as friction-stir welding (FSW) into automotive engineering entails knowledge of the joint-material microstructure and properties. Since the development of vehicles (including military vehicles capable of surviving blast and ballistic impacts) nowadays involves extensive use of computational engineering analyses (CEA), robust high-fidelity material models are needed for FSW joints. A two-level material-homogenization procedure is proposed and utilized in this study to help manage the computational cost and computer storage requirements of such CEAs. The method utilizes experimental (microstructure, microhardness, tensile testing, and x-ray diffraction) data to construct: (a) the material model for each weld zone and (b) the material model for the entire weld. The procedure is validated by comparing its predictions with the predictions of more detailed but more costly computational analyses.
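A two-level homogenization of the kind outlined above can be sketched with a simple volume-fraction (rule-of-mixtures) average: sub-region properties are first combined into per-zone properties, which are then combined into a single weld-level property. The zone fractions and moduli below are invented for illustration, and rule-of-mixtures averaging is only one of several possible homogenization schemes, not necessarily the authors':

```python
def homogenize(constituents):
    """Volume-fraction-weighted (rule-of-mixtures) average of a property.
    constituents: list of (volume_fraction, property_value) pairs."""
    total = sum(f for f, _ in constituents)
    assert abs(total - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * v for f, v in constituents)

# level 1: homogenize each weld zone from its sub-regions (illustrative GPa values)
nugget = homogenize([(0.6, 68.0), (0.4, 70.0)])
haz    = homogenize([(0.5, 71.0), (0.5, 69.0)])
# level 2: homogenize the entire weld from its zones
weld = homogenize([(0.3, nugget), (0.7, haz)])
```

The payoff is that a blast-survivability simulation can then treat the weld as one macro element with the level-2 property instead of resolving every zone.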

  11. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.
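The identification strategy above rests on one property: a bilinear system driven by a constant input behaves as a linear system with a shifted state matrix. A small sketch verifying that property numerically; the matrices, step size, and forward-Euler integration are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# bilinear system x' = A x + N x u + B u (single input, illustrative matrices)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
N = np.array([[0.1, 0.0], [0.0, -0.2]])
B = np.array([1.0, 0.5])
u0 = 0.8                     # constant input

def simulate(f, x0, dt=1e-3, steps=2000):
    """Forward-Euler integration of x' = f(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

x_bilinear = simulate(lambda x: A @ x + (N @ x) * u0 + B * u0, [1.0, 0.0])
# with u fixed at u0, the same trajectory comes from the LINEAR system
# x' = (A + N*u0) x + B*u0 -- the property the two-step method exploits
x_linear = simulate(lambda x: (A + N * u0) @ x + B * u0, [1.0, 0.0])
```

Because the constant-input response is exactly linear, standard linear realization tools can recover the state and output matrices in step one, leaving the input and coupling matrices to step two.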

  12. Molecular-Level Computational Investigation of Mechanical Transverse Behavior of p-Phenylene Terephthalamide (PPTA) Fibers

    DTIC Science & Technology

    2013-01-01

    …fabricated today are based on polymer matrix composites containing Kevlar KM2 reinforcements, the present work will deal with generic PPTA fibers. In… "Multi-length scale enriched continuum-level material model for Kevlar-fiber reinforced polymer-matrix composites", Journal of Materials… mechanical transverse behavior of p-phenylene terephthalamide (PPTA) fibers. Purpose: A series of all-atom molecular-level computational analyses is

  13. Plant Aquaporins: Genome-Wide Identification, Transcriptomics, Proteomics, and Advanced Analytical Tools.

    PubMed

    Deshmukh, Rupesh K; Sonah, Humira; Bélanger, Richard R

    2016-01-01

    Aquaporins (AQPs) are channel-forming integral membrane proteins that facilitate the movement of water and many other small molecules. Compared to animals, plants contain a much higher number of AQPs in their genome. Homology-based identification of AQPs in sequenced species is feasible because of the high level of conservation of protein sequences across plant species. Genome-wide characterization of AQPs has highlighted several important aspects such as distribution, genetic organization, evolution and conserved features governing solute specificity. From a functional point of view, the understanding of AQP transport system has expanded rapidly with the help of transcriptomics and proteomics data. The efficient analysis of enormous amounts of data generated through omic scale studies has been facilitated through computational advancements. Prediction of protein tertiary structures, pore architecture, cavities, phosphorylation sites, heterodimerization, and co-expression networks has become more sophisticated and accurate with increasing computational tools and pipelines. However, the effectiveness of computational approaches is based on the understanding of physiological and biochemical properties, transport kinetics, solute specificity, molecular interactions, sequence variations, phylogeny and evolution of aquaporins. For this purpose, tools like Xenopus oocyte assays, yeast expression systems, artificial proteoliposomes, and lipid membranes have been efficiently exploited to study the many facets that influence solute transport by AQPs. In the present review, we discuss genome-wide identification of AQPs in plants in relation with recent advancements in analytical tools, and their availability and technological challenges as they apply to AQPs. An exhaustive review of omics resources available for AQP research is also provided in order to optimize their efficient utilization. Finally, a detailed catalog of computational tools and analytical pipelines is

  14. Plant Aquaporins: Genome-Wide Identification, Transcriptomics, Proteomics, and Advanced Analytical Tools

    PubMed Central

    Deshmukh, Rupesh K.; Sonah, Humira; Bélanger, Richard R.

    2016-01-01

    Aquaporins (AQPs) are channel-forming integral membrane proteins that facilitate the movement of water and many other small molecules. Compared to animals, plants contain a much higher number of AQPs in their genome. Homology-based identification of AQPs in sequenced species is feasible because of the high level of conservation of protein sequences across plant species. Genome-wide characterization of AQPs has highlighted several important aspects such as distribution, genetic organization, evolution and conserved features governing solute specificity. From a functional point of view, the understanding of AQP transport system has expanded rapidly with the help of transcriptomics and proteomics data. The efficient analysis of enormous amounts of data generated through omic scale studies has been facilitated through computational advancements. Prediction of protein tertiary structures, pore architecture, cavities, phosphorylation sites, heterodimerization, and co-expression networks has become more sophisticated and accurate with increasing computational tools and pipelines. However, the effectiveness of computational approaches is based on the understanding of physiological and biochemical properties, transport kinetics, solute specificity, molecular interactions, sequence variations, phylogeny and evolution of aquaporins. For this purpose, tools like Xenopus oocyte assays, yeast expression systems, artificial proteoliposomes, and lipid membranes have been efficiently exploited to study the many facets that influence solute transport by AQPs. In the present review, we discuss genome-wide identification of AQPs in plants in relation with recent advancements in analytical tools, and their availability and technological challenges as they apply to AQPs. An exhaustive review of omics resources available for AQP research is also provided in order to optimize their efficient utilization. Finally, a detailed catalog of computational tools and analytical pipelines is

  15. DNA Barcoding for Efficient Species- and Pathovar-Level Identification of the Quarantine Plant Pathogen Xanthomonas

    PubMed Central

    Tian, Qian; Zhao, Wenjun; Lu, Songyu; Zhu, Shuifang; Li, Shidong

    2016-01-01

    Genus Xanthomonas comprises many economically important plant pathogens that affect a wide range of hosts. Indeed, fourteen Xanthomonas species/pathovars have been regarded as official quarantine bacteria for imports in China. To date, however, a rapid and accurate method capable of identifying all of the quarantine species/pathovars has yet to be developed. In this study, we therefore evaluated the capacity of DNA barcoding as a digital identification method for discriminating quarantine species/pathovars of Xanthomonas. For these analyses, 327 isolates, representing 45 Xanthomonas species/pathovars, as well as five additional species/pathovars from GenBank (50 species/pathovars total), were utilized to test the efficacy of four DNA barcode candidate genes (16S rRNA gene, cpn60, gyrB, and avrBs2). Of these candidate genes, cpn60 displayed the highest rate of PCR amplification and sequencing success. The tree-building (Neighbor-joining), ‘best close match’, and barcode gap methods were subsequently employed to assess the species- and pathovar-level resolution of each gene. Notably, all isolates of each quarantine species/pathovars formed a monophyletic group in the neighbor-joining tree constructed using the cpn60 sequences. Moreover, cpn60 also demonstrated the most satisfactory results in both barcoding gap analysis and the ‘best close match’ test. Thus, compared with the other markers tested, cpn60 proved to be a powerful DNA barcode, providing a reliable and effective means for the species- and pathovar-level identification of the quarantine plant pathogen Xanthomonas. PMID:27861494
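The 'best close match' criterion used above assigns a query sequence to the label of its nearest reference barcode, provided the distance falls below a threshold. A toy sketch with uncorrected p-distances; the sequences and the 2% threshold are invented for illustration, and the study itself worked with cpn60 sequences and tree-based analyses:

```python
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

def best_close_match(query, references, threshold=0.02):
    """references: list of (label, sequence) pairs; return the label of the
    closest reference, or 'no match' if even it exceeds the threshold."""
    label, dist = min(((lab, p_distance(query, seq)) for lab, seq in references),
                      key=lambda t: t[1])
    return label if dist <= threshold else "no match"

# toy reference barcodes (hypothetical 20-bp fragments, not real cpn60 data)
refs = [("X. oryzae",     "ACGTACGTACGTACGTACGT"),
        ("X. campestris", "ACGTACGTTCGTACGTTCGT")]
```

A barcode gap analysis then checks that within-species distances stay below such a threshold while between-species distances stay above it.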

  16. Optimizing Learning of Scientific Category Knowledge in the Classroom: The Case of Plant Identification

    PubMed Central

    Kirchoff, Bruce K.; Delaney, Peter F.; Horton, Meg; Dellinger-Johnston, Rebecca

    2014-01-01

    Learning to identify organisms is extraordinarily difficult, yet trained field biologists can quickly and easily identify organisms at a glance. They do this without recourse to the use of traditional characters or identification devices. Achieving this type of recognition accuracy is a goal of many courses in plant systematics. Teaching plant identification is difficult because of variability in the plants’ appearance, the difficulty of bringing them into the classroom, and the difficulty of taking students into the field. To solve these problems, we developed and tested a cognitive psychology–based computer program to teach plant identification. The program incorporates presentation of plant images in a homework-based, active-learning format that was developed to stimulate expert-level visual recognition. A controlled experimental test using a within-subject design was performed against traditional study methods in the context of a college course in plant systematics. Use of the program resulted in an 8–25% statistically significant improvement in final exam scores, depending on the type of identification question used (living plants, photographs, written descriptions). The software demonstrates how the use of routines to train perceptual expertise, interleaved examples, spaced repetition, and retrieval practice can be used to train identification of complex and highly variable objects. PMID:25185226

  17. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting

    PubMed Central

    Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny

    2017-01-01

    Purpose Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Methods Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. Results There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). Conclusions MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics. PMID:28873173

  18. Incorporation of Radio Frequency Identification Tag in Dentures to Facilitate Recognition and Forensic Human Identification

    PubMed Central

    Nuzzolese, E; Marcario, V; Di Vella, G

    2010-01-01

    Forensic identification using odontology is based on the comparison of ante-mortem and post-mortem dental records. The insertion of a radio frequency identification (RFId) tag into dentures could be used as an aid to identify decomposed bodies, by storing personal identification data in a small transponder that can be radio-transmitted to a reader connected to a computer. A small passive, 12 × 2.1 mm, read-only RFId tag was incorporated into the manufacture of three trial complete upper dentures and tested for a signal. The aim of this article is to demonstrate the feasibility of manufacturing such a dental prosthesis, the technical protocols for its implantation in the denture resin, and its working principles. Future research and tests are required in order to verify human compatibility of the tagged denture and also to evaluate any potential deterioration in strength when subjected to high temperatures, or damage resulting from everyday wear and tear. It should also be able to withstand the extreme conditions resulting from major accidents or mass disasters and the procedures used to perform a forensic identification. PMID:20657641

  19. Automated colour identification in melanocytic lesions.

    PubMed

    Sabbaghi, S; Aldeen, M; Garnavi, R; Varigos, G; Doliantis, C; Nicolopoulos, J

    2015-08-01

    Colour information plays an important role in classifying skin lesions. However, colour identification by dermatologists can be very subjective, leading to cases of misdiagnosis. Therefore, a computer-assisted system for quantitative colour identification is highly desirable for dermatologists to use. Although numerous colour detection systems have been developed, few studies have focused on imitating the human visual perception of colours in melanoma applications. In this paper we propose a new methodology based on the QuadTree decomposition technique for automatic colour identification in dermoscopy images. Our approach mimics the human perception of lesion colours. The proposed method is trained on a set of 47 images from the NIH dataset and applied to a test set of 190 skin lesions obtained from the PH2 dataset. The results of our proposed method are compared with a recently reported colour identification method using the same dataset. The effectiveness of our method in detecting colours in dermoscopy images is demonstrated by an accuracy of approximately 93% when the CIELab colour space is used.
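QuadTree decomposition, the core of the method above, recursively splits an image into four quadrants until each block is sufficiently uniform. A minimal grayscale sketch; the standard-deviation uniformity test and the tolerance are illustrative assumptions, while the paper applies the idea to lesion colours in dermoscopy images:

```python
import numpy as np

def quadtree(img, x0, y0, x1, y1, tol, leaves):
    """Recursively split img[y0:y1, x0:x1] until each block's std <= tol.
    Appends (x0, y0, x1, y1, mean_value) for every homogeneous leaf block."""
    block = img[y0:y1, x0:x1]
    if x1 - x0 <= 1 or y1 - y0 <= 1 or block.std() <= tol:
        leaves.append((x0, y0, x1, y1, float(block.mean())))
        return
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
    for qx0, qy0, qx1, qy1 in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        quadtree(img, qx0, qy0, qx1, qy1, tol, leaves)

# an image whose left half is dark and right half bright splits into
# four uniform quadrant leaves
img = np.zeros((4, 4)); img[:, 2:] = 255.0
leaves = []
quadtree(img, 0, 0, 4, 4, tol=5.0, leaves=leaves)
```

In a colour-identification setting each leaf's mean colour would then be mapped to the nearest clinically named lesion colour.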

  20. A Study of Effectiveness of Computer Assisted Instruction (CAI) over Classroom Lecture (CRL) at ICS Level

    ERIC Educational Resources Information Center

    Kaousar, Tayyeba; Choudhry, Bushra Naoreen; Gujjar, Aijaz Ahmed

    2008-01-01

    This study was aimed to evaluate the effectiveness of CAI vs. classroom lecture for computer science at ICS level. The objectives were to compare the learning effects of two groups with classroom lecture and computer-assisted instruction studying the same curriculum and the effects of CAI and CRL in terms of cognitive development. Hypotheses of…

  1. Computer-assisted identification and volumetric quantification of dynamic contrast enhancement in brain MRI: an interactive system

    NASA Astrophysics Data System (ADS)

    Wu, Shandong; Avgeropoulos, Nicholas G.; Rippe, David J.

    2013-03-01

    We present a dedicated segmentation system for tumor identification and volumetric quantification in dynamic contrast brain magnetic resonance (MR) scans. Our goal is to offer clinicians a practically useful tool in order to boost volumetric tumor assessment. The system is designed to work in an interactive mode that maximizes the integration of computing capacity and clinical intelligence. We demonstrate the main functions of the system in terms of its functional flow and conduct preliminary validation using a representative pilot dataset. The system is inexpensive, user-friendly, easy to deploy and integrate with picture archiving and communication systems (PACS), and can potentially be made open-source, enabling it to serve as a useful assistant for radiologists and oncologists. It is anticipated that in the future the system can be integrated into the clinical workflow so that it becomes routinely available to help clinicians make more objective interpretations of treatment interventions and the natural history of disease to best advocate patient needs.

  2. Computational analyses of spectral trees from electrospray multi-stage mass spectrometry to aid metabolite identification.

    PubMed

    Cao, Mingshu; Fraser, Karl; Rasmussen, Susanne

    2013-10-31

    Mass spectrometry coupled with chromatography has become the major technical platform in metabolomics. Aided by peak detection algorithms, the detected signals are characterized by mass-over-charge ratio (m/z) and retention time. Chemical identities often remain elusive for the majority of the signals. Multi-stage mass spectrometry based on electrospray ionization (ESI) allows collision-induced dissociation (CID) fragmentation of selected precursor ions. These fragment ions can assist in structural inference for metabolites of low molecular weight. Computational investigations of fragmentation spectra have increasingly received attention in metabolomics, and various public databases house such data. We have developed an R package, "iontree", that can capture, store and analyze MS2 and MS3 mass spectral data from high-throughput metabolomics experiments. The package includes functions for ion tree construction, an algorithm (distMS2) for MS2 spectral comparison, and tools for building platform-independent ion tree (MS2/MS3) libraries. We have demonstrated the utilization of the package for the systematic analysis and annotation of fragmentation spectra collected on various metabolomics platforms, including direct infusion mass spectrometry, and liquid chromatography coupled with either low resolution or high resolution mass spectrometry. Assisted by the developed computational tools, we have demonstrated that spectral trees can provide informative evidence complementary to retention time and accurate mass to aid with annotating unknown peaks. These experimental spectral trees, once subjected to a quality control process, can be used for querying public MS2 databases or for de novo interpretation. The putatively annotated spectral trees can be readily incorporated into reference libraries for routine identification of metabolites.
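MS2 spectral comparison of the kind performed by functions like distMS2 typically bins fragment peaks by m/z and scores their overlap. A generic cosine-similarity sketch on binned spectra; the 1 Da bin width, the toy peak lists, and the scoring are illustrative assumptions, not the iontree algorithm itself:

```python
import math

def bin_spectrum(peaks, bin_width=1.0):
    """peaks: iterable of (mz, intensity); returns {bin_index: summed intensity}."""
    binned = {}
    for mz, intensity in peaks:
        idx = int(mz / bin_width)
        binned[idx] = binned.get(idx, 0.0) + intensity
    return binned

def cosine_similarity(sa, sb):
    """Cosine of the angle between two binned spectra (1.0 = identical)."""
    dot = sum(v * sb.get(k, 0.0) for k, v in sa.items())
    na = math.sqrt(sum(v * v for v in sa.values()))
    nb = math.sqrt(sum(v * v for v in sb.values()))
    return dot / (na * nb) if na and nb else 0.0

# two hypothetical MS2 spectra of the same precursor, slightly shifted m/z
ms2_a = bin_spectrum([(91.05, 100.0), (119.08, 40.0), (163.10, 75.0)])
ms2_b = bin_spectrum([(91.04,  90.0), (119.10, 45.0), (163.20, 80.0)])
```

Library search then ranks candidate metabolites by this score, with retention time and accurate precursor mass as the complementary evidence the abstract mentions.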

  3. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
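One standard way to combine several peptide-level p-values into a single organism-level significance, similar in spirit (though not identical) to the unified E-values described above, is Fisher's method. The closed-form chi-square tail below is valid because the degrees of freedom are even:

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the null hypothesis.
    Returns the combined p-value using the closed-form upper tail
    P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!  (even dof only)."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
```

With a single p-value the combination is the identity, which makes for a convenient sanity check; with several small peptide p-values the organism-level significance tightens accordingly.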

  4. [Computer simulation by passenger wound analysis of vehicle collision].

    PubMed

    Zou, Dong-Hua; Liu, Ning-Guo; Shen, Jie; Zhang, Xiao-Yun; Jin, Xian-Long; Chen, Yi-Jiu

    2006-08-15

    To reconstruct the course of a vehicle collision, so as to provide a reference for forensic identification and the disposal of traffic accidents. Through analyzing evidence left both on passengers and vehicles, the technique of momentum impulse combined with multi-body dynamics was applied to simulate the motion and injury of the passengers as well as the track of the vehicles. The computer simulation model faithfully reconstructed the phases of the traffic collision, which coincided with details found by forensic investigation. Computer simulation is helpful and feasible for forensic identification in traffic accidents.
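The momentum-impulse step mentioned above amounts to conservation of linear momentum across the impact. A minimal sketch for a perfectly plastic collision; the masses, speeds, and common post-impact velocity assumption are illustrative, while the paper couples such balances with multi-body dynamics of the occupants:

```python
def plastic_impact_velocity(m1, v1, m2, v2):
    """Common velocity after a perfectly plastic collision, from
    conservation of momentum: m1*v1 + m2*v2 = (m1 + m2) * v_common."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

def impulse_on_second_body(m1, v1, m2, v2):
    """Impulse transferred to body 2 equals its change in momentum."""
    v_common = plastic_impact_velocity(m1, v1, m2, v2)
    return m2 * (v_common - v2)

# hypothetical case: 1500 kg vehicle at 14 m/s striking a stationary 75 kg body
v_after = plastic_impact_velocity(1500.0, 14.0, 75.0, 0.0)
```

Working backwards, the same balance lets a reconstruction infer pre-impact speed from post-impact evidence such as rest positions and injury patterns.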

  5. Does methodology matter in eyewitness identification research? The effect of live versus video exposure on eyewitness identification accuracy.

    PubMed

    Pozzulo, Joanna D; Crescini, Charmagne; Panton, Tasha

    2008-01-01

    The present study examined the effect of mode of target exposure (live versus video) on eyewitness identification accuracy. Adult participants (N=104) were exposed to a staged crime that they witnessed either live or on videotape. Participants were then asked to rate their stress and arousal levels prior to being presented with either a target-present or -absent simultaneous lineup. Across target-present and -absent lineups, mode of target exposure did not have a significant effect on identification accuracy. However, mode of target exposure was found to have a significant effect on stress and arousal levels. Participants who witnessed the crime live had higher levels of stress and arousal than those who were exposed to the videotaped crime. A higher level of arousal was significantly related to poorer identification accuracy for those in the video condition. For participants in the live condition however, stress and arousal had no effect on eyewitness identification accuracy. Implications of these findings in regards to the generalizability of laboratory-based research on eyewitness testimony to real-life crime are discussed.

  6. Automated Microbiological Detection/Identification System

    PubMed Central

    Aldridge, C.; Jones, P. W.; Gibson, S.; Lanham, J.; Meyer, M.; Vannest, R.; Charles, R.

    1977-01-01

    An automated, computerized system, the AutoMicrobic System, has been developed for the detection, enumeration, and identification of bacteria and yeasts in clinical specimens. The biological basis for the system resides in lyophilized, highly selective and specific media enclosed in wells of a disposable plastic cuvette; introduction of a suitable specimen rehydrates and inoculates the media in the wells. An automated optical system monitors, and the computer interprets, changes in the media, with enumeration and identification results automatically obtained in 13 h. Sixteen different selective media were developed and tested with a variety of seeded (simulated) and clinical specimens. The AutoMicrobic System has been extensively tested with urine specimens, using a urine test kit (Identi-Pak) that contains selective media for Escherichia coli, Proteus species, Pseudomonas aeruginosa, Klebsiella-Enterobacter species, Serratia species, Citrobacter freundii, group D enterococci, Staphylococcus aureus, and yeasts (Candida species and Torulopsis glabrata). The system has been tested with 3,370 seeded urine specimens and 1,486 clinical urines. Agreement with simultaneous conventional (manual) cultures, at levels of 70,000 colony-forming units per ml (or more), was 92% or better for seeded specimens; clinical specimens yielded results of 93% or better for all organisms except P. aeruginosa, where agreement was 86%. System expansion in progress includes antibiotic susceptibility testing and compatibility with most types of clinical specimens. PMID:334798

  7. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.
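
    QSPIN itself is a Java API; as a language-neutral illustration of the QUBO formulation it targets, the hedged sketch below evaluates a QUBO energy E(x) = sum_ij Q[i][j]*x_i*x_j over binary variables and searches for a low-energy state with plain simulated annealing. It does not use QSPIN's actual classes, which the abstract does not show.

```python
import math
import random

def qubo_energy(Q, x):
    # E(x) = sum over i, j of Q[i][j] * x[i] * x[j], with x binary
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(Q, steps=2000, t0=2.0, seed=1):
    # Single-bit-flip annealer with a linearly decreasing temperature
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = qubo_energy(Q, x)
    best, best_e = x[:], e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9
        i = rng.randrange(n)
        x[i] ^= 1                       # propose flipping one bit
        e2 = qubo_energy(Q, x)
        if e2 <= e or rng.random() < math.exp((e - e2) / t):
            e = e2                      # accept (always downhill, sometimes uphill)
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                   # revert the flip
    return best, best_e

Q = [[-1.0, 2.0],
     [0.0, -1.0]]   # ground states (1,0) and (0,1), energy -1
state, energy = simulated_annealing(Q)
```

    A classical annealer like this is exactly the kind of baseline a hybrid quantum-classical workflow compares against when judging annealing hardware on the same QUBO.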

  8. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele

    2000-01-01

    The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work on conventional distributed executions as well as on those running on a computational grid.

  9. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
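
    As a hedged sketch only: the snippet below builds a Hankel matrix from measured data and runs a minimal Tabu Search over a discretized parameter grid for a first-order model y[k+1] = a*y[k] + b*u[k]. It is far simpler than the paper's NN-SI formulation (no nuclear norm, no extended observability matrix); the model order, grid, and names are illustrative assumptions.

```python
import math

def hankel(seq, rows, cols):
    # Hankel matrix H[i][j] = seq[i + j], the basic data structure of
    # subspace system identification
    return [[seq[i + j] for j in range(cols)] for i in range(rows)]

def simulate(a, b, u, y0=0.0):
    # First-order LTI model y[k+1] = a*y[k] + b*u[k]
    y = [y0]
    for uk in u[:-1]:
        y.append(a * y[-1] + b * uk)
    return y

def tabu_search_fit(u, y_meas, grid, iters=400, tabu_len=25):
    # Tabu Search over index pairs into `grid` for parameters (a, b)
    def cost(idx):
        y = simulate(grid[idx[0]], grid[idx[1]], u, y_meas[0])
        return sum((ym - ys) ** 2 for ym, ys in zip(y_meas, y))
    cur = (0, 0)
    best, best_c = cur, cost(cur)
    tabu = [cur]
    n = len(grid)
    for _ in range(iters):
        moves = [(cur[0] + da, cur[1] + db)
                 for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        moves = [m for m in moves
                 if 0 <= m[0] < n and 0 <= m[1] < n and m not in tabu]
        if not moves:
            break
        cur = min(moves, key=cost)          # best neighbour, even if worse
        tabu = (tabu + [cur])[-tabu_len:]   # short-term memory
        if cost(cur) < best_c:
            best, best_c = cur, cost(cur)
    return grid[best[0]], grid[best[1]], best_c

# Recover a = 0.5, b = 1.0 from noise-free simulated data
u = [math.sin(0.7 * k) for k in range(40)]
y = simulate(0.5, 1.0, u)
grid = [i / 10.0 for i in range(11)]
a_hat, b_hat, err = tabu_search_fit(u, y, grid)
```

    The tabu list is what keeps the search from cycling between the same two neighbours; accepting the best non-tabu move even when it is worse is the feature that distinguishes Tabu Search from plain hill climbing.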

  10. Can Synchronous Computer-Mediated Communication (CMC) Help Beginning-Level Foreign Language Learners Speak?

    ERIC Educational Resources Information Center

    Ko, Chao-Jung

    2012-01-01

    This study investigated the possibility that initial-level learners may acquire oral skills through synchronous computer-mediated communication (SCMC). Twelve Taiwanese French as a foreign language (FFL) students, divided into three groups, were required to conduct a variety of tasks in one of the three learning environments (video/audio, audio,…

  11. Optimizations for the EcoPod field identification tool

    PubMed Central

    Manoharan, Aswath; Stamberger, Jeannie; Yu, YuanYuan; Paepcke, Andreas

    2008-01-01

    Background We sketch our species identification tool for palm sized computers that helps knowledgeable observers with census activities. An algorithm turns an identification matrix into a minimal length series of questions that guide the operator towards identification. Historic observation data from the census geographic area helps minimize question volume. We explore how much historic data is required to boost performance, and whether the use of history negatively impacts identification of rare species. We also explore how characteristics of the matrix interact with the algorithm, and how best to predict the probability of observing a previously unseen species. Results Point counts of birds taken at Stanford University's Jasper Ridge Biological Preserve between 2000 and 2005 were used to examine the algorithm. A computer identified species by answering the algorithm's questions correctly and counting the questions required. We also explored how the character density of the key matrix and the theoretical minimum number of questions for each bird in the matrix influenced the algorithm. Our investigation of the required probability smoothing determined whether Laplace smoothing of observation probabilities was sufficient, or whether the more complex Good-Turing technique was required. Conclusion Historic data improved identification speed, but only impacted the top 25% most frequently observed birds. For rare birds the history based algorithms did not impose a noticeable penalty in the number of questions required for identification. For our dataset neither age of the historic data, nor the number of observation years impacted the algorithm. Density of characters for different taxa in the identification matrix did not impact the algorithms. Intrinsic differences in identifying different birds did affect the algorithm, but the differences affected the baseline method of not using historic data to exactly the same degree. We found that Laplace smoothing performed better for rare species
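
    The Laplace smoothing the study evaluates can be made concrete with a small sketch: add-alpha smoothing of historic observation counts gives species never seen in the census history a non-zero prior probability instead of ruling them out. Function and species names below are illustrative, not EcoPod's API or data.

```python
def laplace_smoothed_probs(counts, species_list, alpha=1.0):
    # Add-alpha (Laplace) smoothing: an unseen species receives
    # alpha / (N + alpha*V) instead of a hard zero.
    total = sum(counts.get(s, 0) for s in species_list)
    denom = total + alpha * len(species_list)
    return {s: (counts.get(s, 0) + alpha) / denom for s in species_list}

history = {"song sparrow": 120, "dark-eyed junco": 75, "wrentit": 5}
species = list(history) + ["lazuli bunting"]   # one species never observed
probs = laplace_smoothed_probs(history, species)
```

    Here the unseen species gets probability 1/204 rather than 0, so the question-ordering algorithm can still reach it; Good-Turing would instead redistribute mass based on how many species were seen exactly once.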

  12. Comparative phyloinformatics of virus genes at micro and macro levels in a distributed computing environment.

    PubMed

    Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo

    2008-01-01

    Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphical-oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade

  13. Damage Identification of Piles Based on Vibration Characteristics

    PubMed Central

    Zhang, Xiaozhong; Yao, Wenjuan; Chen, Bo; Liu, Dewen

    2014-01-01

    A method of damage identification for piles was established by using vibration characteristics. The approach focused on the application of element strain energy and sensitive modes. A damage identification equation for piles was deduced from the structural vibration equation. The equation contained three major factors: the change rate of element modal strain energy, the damage factor of the pile, and the sensitivity factor of modal damage. The sensitive modes for damage identification were first selected using the sensitivity factor of modal damage. Subsequently, indexes for early warning of pile damage were established by applying the change rate of strain energy. Wavelet-transform analysis was then applied to the damage identification of the pile. Identification of small damage in the pile was fully achieved, including both the location and the extent of the damage. In the process of identifying the extent of damage, the damage identification equation was applied repeatedly. Finally, a stadium project was used as an example to demonstrate the effectiveness of the proposed method of damage identification for piles. The correctness and practicability of the proposed method were verified by comparing the results of damage identification with those of the low strain test. The research provides a new way for damage identification of piles. PMID:25506062
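
    The change rate of element modal strain energy that drives the identification can be sketched as follows. The element stiffness matrices and mode-shape slices here are toy values, and the paper's indicator may be normalized differently; this is only a hedged illustration of the quantity.

```python
def element_modal_strain_energy(k_e, phi_e):
    # MSE_e = phi_e^T * K_e * phi_e for one element's DOFs in one mode
    n = len(phi_e)
    return sum(phi_e[i] * k_e[i][j] * phi_e[j]
               for i in range(n) for j in range(n))

def mse_change_rates(mse_intact, mse_damaged):
    # Change rate per element: (MSE_d - MSE_u) / MSE_u; a sharp rise
    # flags a candidate damage location.
    return [(d - u) / u for u, d in zip(mse_intact, mse_damaged)]

# Toy example: element 2's strain energy jumps after damage
intact  = [4.0, 3.5, 2.0, 1.2]
damaged = [4.1, 3.6, 3.2, 1.25]
rates = mse_change_rates(intact, damaged)
suspect = max(range(len(rates)), key=lambda i: rates[i])
```

    Local stiffness loss redistributes strain energy toward the damaged element in the sensitive modes, which is why the element with the largest change rate is the damage candidate.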

  14. Computer Aided Battery Engineering Consortium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesaran, Ahmad

    A multi-national lab collaborative team was assembled that includes experts from academia and industry to enhance recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design - both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were simultaneously coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.

  15. Internship training in computer science: Exploring student satisfaction levels.

    PubMed

    Jaradat, Ghaith M

    2017-08-01

    The requirement of employability in the job market prompted universities to conduct internship training as part of their study plans. There is a need to train students on important academic and professional skills related to the workplace with an IT component. This article describes a statistical study that measures satisfaction levels among students in the faculty of Information Technology and Computer Science in Jordan. The objective of this study is to explore factors that influence student satisfaction with regards to enrolling in an internship training program. The study was conducted to gather student perceptions, opinions, preferences and satisfaction levels related to the program. Data were collected via a mixed-method survey (surveys and interviews) from student-respondents. The survey collected demographic and background information from students, including their perception of faculty performance in the training poised to prepare them for the job market. Findings from this study show that students expect internship training to improve their professional and personal skills as well as to increase their workplace-related satisfaction. It is concluded that improving internship training is crucial for students, as it is expected to enrich their experiences, knowledge and skills in personal and professional life. It is also expected to increase their level of confidence when exploring future job opportunities in the Jordanian market. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Innovative architectures for dense multi-microprocessor computers

    NASA Technical Reports Server (NTRS)

    Donaldson, Thomas; Doty, Karl; Engle, Steven W.; Larson, Robert E.; O'Reilly, John G.

    1988-01-01

    The results of a Phase I Small Business Innovative Research (SBIR) project performed for the NASA Langley Computational Structural Mechanics Group are described. The project resulted in the identification of a family of chordal-ring interconnection architectures with excellent potential to serve as the basis for new multimicroprocessor (MMP) computers. The paper presents examples of how computational algorithms from structural mechanics can be efficiently implemented on the chordal-ring architecture.

  17. Personal Identification by Keystroke Dynamics in Japanese Free Text Typing

    NASA Astrophysics Data System (ADS)

    Samura, Toshiharu; Nishimura, Haruhiko

    Biometrics is classified into verification and identification. Much research on keystroke dynamics has treated the verification of a fixed short password used for user login. In this research, we pay attention to identification and investigate several characteristics of keystroke dynamics in Japanese free text typing. We developed Web-based typing software in order to collect keystroke data on a Local Area Network and performed experiments on a total of 112 subjects, from which three groups were constructed by typing level: beginner's level and above, normal level and above, and middle level and above. Based on identification methods using the weighted Euclidean distance and a neural network over feature indexes extracted from Japanese texts, we evaluated identification performance for the three groups. As a result, high accuracy of personal identification was confirmed with both methods, in proportion to the typing level of the group.
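
    A hedged sketch of the distance-based identification stage: each enrolled user has a stored profile vector of timing features (e.g., mean key-hold times and digraph latencies), and a new typing sample is attributed to the nearest profile under a weighted Euclidean distance. The feature values and weights below are invented for illustration and are not the paper's actual indexes.

```python
import math

def weighted_euclid(x, y, w):
    # sqrt(sum_i w_i * (x_i - y_i)^2)
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

def identify_typist(sample, profiles, weights):
    # Attribute the sample to the enrolled user with the closest profile
    return min(profiles,
               key=lambda user: weighted_euclid(sample, profiles[user], weights))

profiles = {
    "alice": [95.0, 210.0, 130.0],   # e.g. hold time, digraph latency (ms)
    "bob":   [140.0, 330.0, 190.0],
}
weights = [1.0, 0.5, 1.0]            # down-weight the noisier feature
who = identify_typist([100.0, 220.0, 128.0], profiles, weights)
```

    Unlike password verification, identification must compare the sample against every enrolled profile, which is why feature weighting matters as the user population grows.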

  18. Fault tolerance in computational grids: perspectives, challenges, and issues.

    PubMed

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware and software based resources with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing related, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination of fault tolerance and fault detection mechanisms is also performed. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  19. Triple redundant computer system/display and keyboard subsystem interface

    NASA Technical Reports Server (NTRS)

    Gulde, F. J.

    1973-01-01

    Interfacing of the redundant display and keyboard subsystem with the triple redundant computer system is defined according to space shuttle design. The study is performed in three phases: (1) TRCS configuration and characteristics identification; (2) display and keyboard subsystem configuration and characteristics identification, and (3) interface approach definition.

  20. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
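
    Of the motion-detection methods listed, simple frame differencing is the easiest to sketch. The toy below thresholds per-pixel intensity changes between consecutive grayscale frames (represented as nested lists) and reports motion when enough pixels change; real implementations operate on camera images and add the noise filtering the abstract says is necessary. Names and thresholds are illustrative.

```python
def frame_difference(prev, curr, threshold=25):
    # Binary motion mask: 1 where intensity changed by more than `threshold`
    return [[1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def motion_detected(mask, min_pixels=2):
    # Declare motion only if enough pixels changed (crude noise rejection)
    return sum(map(sum, mask)) >= min_pixels

frame_a = [[10, 10, 10],
           [10, 10, 10],
           [10, 10, 10]]
frame_b = [[10, 90, 90],
           [10, 90, 90],
           [10, 10, 10]]
mask = frame_difference(frame_a, frame_b)
moving = motion_detected(mask)
```

    The mask's connected region of changed pixels is what the later tracking and expert-system stages would receive as a candidate moving object.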

  1. Comparison of species-level identification and antifungal susceptibility results from diagnostic and reference laboratories for bloodstream Candida surveillance isolates, South Africa, 2009-2010.

    PubMed

    Naicker, Serisha D; Govender, Nevashan; Patel, Jaymati; Zietsman, Inge L; Wadula, Jeannette; Coovadia, Yacoob; Kularatne, Ranmini; Seetharam, Sharona; Govender, Nelesh P

    2016-11-01

    From February 2009 through August 2010, we compared species-level identification of bloodstream Candida isolates and susceptibility to fluconazole, voriconazole, and caspofungin between diagnostic and reference South African laboratories during national surveillance for candidemia. Diagnostic laboratories identified isolates to genus/species level and performed antifungal susceptibility testing, as indicated. At a reference laboratory, viable Candida isolates were identified to species-level using automated systems, biochemical tests, or DNA sequencing; broth dilution susceptibility testing was performed. Categorical agreement (CA) was calculated for susceptibility results of isolates with concordant species identification. Overall, 2172 incident cases were detected, 773 (36%) by surveillance audit. The Vitek 2 YST system (bioMérieux Inc, Marcy l'Etoile, France) was used for identification (360/863, 42%) and susceptibility testing (198/473, 42%) of a large proportion of isolates. For the five most common species (n = 1181), species-level identification was identical in the majority of cases (Candida albicans: 98% (507/517); Candida parapsilosis: 92% (450/488); Candida glabrata: 89% (89/100); Candida tropicalis: 91% (49/54), and Candida krusei: 86% (19/22)). However, diagnostic laboratories were significantly less likely to correctly identify Candida species other than C. albicans versus C. albicans (607/664, 91% vs. 507/517, 98%; P < .001). Susceptibility data were compared for isolates belonging to the five most common species and fluconazole, voriconazole, and caspofungin in 860, 580, and 99 cases, respectively. Diagnostic laboratories significantly under-reported fluconazole resistance in C. parapsilosis (225/393, 57% vs. 239/393, 61%; P < .001) but over-reported fluconazole non-susceptibility in C. albicans (36/362, 10% vs. 3/362, 0.8%; P < .001). Diagnostic laboratories were less likely to correctly identify Candida species other than C. albicans, under

  2. Comparison of traditional phenotypic identification methods with partial 5' 16S rRNA gene sequencing for species-level identification of nonfermenting Gram-negative bacilli.

    PubMed

    Cloud, Joann L; Harmsen, Dag; Iwen, Peter C; Dunn, James J; Hall, Gerri; Lasala, Paul Rocco; Hoggan, Karen; Wilson, Deborah; Woods, Gail L; Mellmann, Alexander

    2010-04-01

    Correct identification of nonfermenting Gram-negative bacilli (NFB) is crucial for patient management. We compared phenotypic identifications of 96 clinical NFB isolates with identifications obtained by 5' 16S rRNA gene sequencing. Sequencing identified 88 isolates (91.7%) with >99% similarity to a sequence from the assigned species; 61.5% of sequencing results were concordant with phenotypic results, indicating the usability of sequencing to identify NFB.

  3. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    ERIC Educational Resources Information Center

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  4. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
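
    A minimal sketch of a parallel hash-tree mode: leaves are hashed concurrently, then pairs are folded up to a single root. SHA3-256 stands in for the Keccak function the abstract mentions (Keccak is the basis of SHA-3), and the domain-separation prefixes are an illustrative convention, not the paper's prototype or a standardized tree mode.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def leaf_hash(chunk):
    # Domain-separated leaf hash (0x00 prefix)
    return hashlib.sha3_256(b"\x00" + chunk).digest()

def node_hash(left, right):
    # Domain-separated interior-node hash (0x01 prefix)
    return hashlib.sha3_256(b"\x01" + left + right).digest()

def merkle_root(data, chunk_size=1024):
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)] or [b""]
    # Leaves are independent, so they can be hashed in parallel
    with ThreadPoolExecutor() as pool:
        level = list(pool.map(leaf_hash, chunks))
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node on odd levels
            level.append(level[-1])
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root(b"x" * 5000)
```

    Because each leaf depends only on its own chunk, the tree removes the chaining bottleneck of a Merkle-Damgård iteration; the distinct leaf and node prefixes prevent an interior node from being passed off as a leaf.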

  5. [Measurement of intracranial hematoma volume by personal computer].

    PubMed

    Du, Wanping; Tan, Lihua; Zhai, Ning; Zhou, Shunke; Wang, Rui; Xue, Gongshi; Xiao, An

    2011-01-01

    To explore a method for intracranial hematoma volume measurement on a personal computer. Forty cases of various intracranial hematomas were measured by computed tomography with quantitative software and by a personal computer with Photoshop CS3 software, respectively. The data from the two methods were analyzed and compared. There was no difference between the data from computed tomography and the personal computer (P>0.05). A personal computer with Photoshop CS3 software can measure the volume of various intracranial hematomas precisely, rapidly and simply. It should be recommended for clinical medicolegal identification.
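
    The underlying computation is slice summation. As a hedged sketch, the function below estimates hematoma volume from per-slice segmented pixel counts (e.g., pixels selected in Photoshop), the in-plane pixel area, and the CT slice thickness; the parameter values are illustrative, not from the study.

```python
def hematoma_volume_ml(slice_pixel_counts, pixel_area_mm2, slice_thickness_mm):
    """Volume in millilitres from per-slice segmented pixel counts.

    Each slice contributes (pixel count * pixel area) * slice thickness,
    in mm^3; 1 mL = 1000 mm^3.
    """
    volume_mm3 = sum(n * pixel_area_mm2
                     for n in slice_pixel_counts) * slice_thickness_mm
    return volume_mm3 / 1000.0

# Five slices, 0.25 mm^2 pixels, 5 mm slice spacing
volume = hematoma_volume_ml([800, 1200, 1500, 1100, 400], 0.25, 5.0)
```

    This is the same area-times-thickness principle the dedicated CT quantitative software applies, which is why the two methods can agree closely.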

  6. Group-level self-definition and self-investment: a hierarchical (multicomponent) model of in-group identification.

    PubMed

    Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell

    2008-07-01

    Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.

  7. Computer Music

    NASA Astrophysics Data System (ADS)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  8. Computer Use and Computer Anxiety in Older Korean Americans.

    PubMed

    Yoon, Hyunwoo; Jang, Yuri; Xie, Bo

    2016-09-01

    Responding to the limited literature on computer use in ethnic minority older populations, the present study examined predictors of computer use and computer anxiety in older Korean Americans. Separate regression models were estimated for computer use and computer anxiety with the common sets of predictors: (a) demographic variables (age, gender, marital status, and education), (b) physical health indicators (chronic conditions, functional disability, and self-rated health), and (c) sociocultural factors (acculturation and attitudes toward aging). Approximately 60% of the participants were computer-users, and they had significantly lower levels of computer anxiety than non-users. A higher likelihood of computer use and lower levels of computer anxiety were commonly observed among individuals with younger age, male gender, advanced education, more positive ratings of health, and higher levels of acculturation. In addition, positive attitudes toward aging were found to reduce computer anxiety. Findings provide implications for developing computer training and education programs for the target population. © The Author(s) 2015.

  9. The combined rapid detection and species-level identification of yeasts in simulated blood culture using a colorimetric sensor array.

    PubMed

    Shrestha, Nabin K; Lim, Sung H; Wilson, Deborah A; SalasVargas, Ana Victoria; Churi, Yair S; Rhodes, Paul A; Mazzone, Peter J; Procop, Gary W

    2017-01-01

    A colorimetric sensor array (CSA) has been demonstrated to rapidly detect and identify bacteria growing in blood cultures by obtaining a species-specific "fingerprint" of the volatile organic compounds (VOCs) produced during growth. This capability has been demonstrated in prokaryotes, but has not been reported for eukaryotic cells growing in culture. The purpose of this study was to explore if a disposable CSA could differentially identify 7 species of pathogenic yeasts growing in blood culture. Culture trials of whole blood inoculated with a panel of clinically important pathogenic yeasts at four different microorganism loads were performed. Cultures were done in both standard BacT/Alert and CSA-embedded bottles, after adding 10 mL of spiked blood to each bottle. Color changes in the CSA were captured as images by an optical scanner at defined time intervals. The captured images were analyzed to identify the yeast species. Time to detection by the CSA was compared to that in the BacT/Alert system. One hundred sixty-two yeast culture trials were performed, including strains of several species of Candida (Ca. albicans, Ca. glabrata, Ca. parapsilosis, and Ca. tropicalis), Clavispora (synonym Candida) lusitaniae, Pichia kudriavzevii (synonym Candida krusei) and Cryptococcus neoformans, at loads of 8.2 × 10^5, 8.3 × 10^3, 8.5 × 10^1, and 1.7 CFU/mL. In addition, 8 negative trials (no yeast) were conducted. All negative trials were correctly identified as negative, and all positive trials were detected. Colorimetric responses were species-specific and did not vary by inoculum load over the 500,000-fold range of loads tested, allowing for accurate species-level identification. The mean sensitivity for species-level identification by CSA was 74% at detection, and increased with time, reaching almost 95% at 4 hours after detection. At an inoculum load of 1.7 CFU/mL, mean time to detection with the CSA was 6.8 hours (17%) less than with the BacT/Alert platform. The CSA

  10. Comparison of two matrix-assisted laser desorption ionization-time of flight mass spectrometry methods with conventional phenotypic identification for routine identification of bacteria to the species level.

    PubMed

    Cherkaoui, Abdessalam; Hibbs, Jonathan; Emonet, Stéphane; Tangomo, Manuela; Girard, Myriam; Francois, Patrice; Schrenzel, Jacques

    2010-04-01

    Bacterial identification relies primarily on culture-based methodologies requiring 24 h for isolation and an additional 24 to 48 h for species identification. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is an emerging technology newly applied to the problem of bacterial species identification. We evaluated two MALDI-TOF MS systems with 720 consecutively isolated bacterial colonies under routine clinical laboratory conditions. Isolates were analyzed in parallel on both devices, using the manufacturers' default recommendations. We compared MS with conventional biochemical test system identifications. Discordant results were resolved with "gold standard" 16S rRNA gene sequencing. The first MS system (Bruker) gave high-confidence identifications for 680 isolates, of which 674 (99.1%) were correct; the second MS system (Shimadzu) gave high-confidence identifications for 639 isolates, of which 635 (99.4%) were correct. Had MS been used for initial testing and biochemical identification used only in the absence of high-confidence MS identifications, the laboratory would have saved approximately US$5 per isolate in marginal costs and reduced average turnaround time by more than an 8-h shift, with no loss in accuracy. Our data suggest that implementation of MS as a first test strategy for one-step species identification would improve timeliness and reduce isolate identification costs in clinical bacteriology laboratories now.

  11. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    PubMed

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).
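
Diagnostic reference levels are conventionally set at the third quartile (75th percentile) of the surveyed dose distribution. A minimal sketch of that convention, with hypothetical CTDIvol values standing in for the survey data:

```python
import numpy as np

# Hypothetical CTDIvol values (mGy) for a head protocol, one value per
# surveyed scanner; a real survey pools many more facilities.
ctdi_per_scanner = [42.0, 55.0, 48.0, 60.0, 51.0]

# DRLs are conventionally taken as the 75th percentile (third quartile)
# of the facility dose distribution.
drl = float(np.percentile(ctdi_per_scanner, 75))
print(f"Head DRL: {drl:.1f} mGy")
```

The comparison reported in the abstract (agreement within ~6%, against a ±20% tolerance) would then be a relative difference between DRLs computed from two such surveys.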

  12. C-mii: a tool for plant miRNA and target identification.

    PubMed

    Numnark, Somrak; Mhuantong, Wuttichai; Ingsriswang, Supawadee; Wichadakul, Duangdao

    2012-01-01

    MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. To overcome these limitations and programming complexities, we propose C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install, with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and mi
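
Filtering candidates "according to proposed constraints in plant miRNA biogenesis" can be sketched as a simple sequence pre-filter. The length and GC-content bounds below are placeholders, not C-mii's actual criteria, which additionally involve secondary-structure folding of the precursor hairpin.

```python
# Illustrative pre-filter for miRNA precursor candidates (bounds are
# hypothetical, not C-mii's published thresholds).
def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def passes_prefilter(seq, min_len=60, max_len=300, gc_range=(0.2, 0.65)):
    return (min_len <= len(seq) <= max_len
            and gc_range[0] <= gc_content(seq) <= gc_range[1])

candidate = "AUGC" * 20  # 80 nt, 50% GC
print(passes_prefilter(candidate))  # True
```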

  13. C-mii: a tool for plant miRNA and target identification

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. Results To overcome these limitations and programming complexities, we propose C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install, with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. Conclusions C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of

  14. Field Level Computer Exploitation Package

    DTIC Science & Technology

    2007-03-01

    to take advantage of the data retrieved from the computer. Major Barge explained that if a tool could be designed that nearly anyone could use...the study of network forensics. This has become a necessity because of the constantly growing eCommerce industry and the stiff competition between...Security. One big advantage that Insert has is the fact that it is quite small compared to most bootable CDs. At only 60 megabytes it can be burned

  15. Contribution of serum FGF21 level to the identification of left ventricular systolic dysfunction and cardiac death.

    PubMed

    Shen, Yun; Zhang, Xueli; Pan, Xiaoping; Xu, Yiting; Xiong, Qin; Lu, Zhigang; Ma, Xiaojing; Bao, Yuqian; Jia, Weiping

    2017-08-18

    The relationship between fibroblast growth factor 21 (FGF21) and cardiovascular disease has been well established in recent studies. This study aimed to investigate the relationship between FGF21 and left ventricular systolic dysfunction and cardiac death. Two-dimensional echocardiography was used to measure the left ventricular ejection fraction (LVEF) to estimate left ventricular systolic function. The optimal cutoff of FGF21 for identifying left ventricular systolic dysfunction at baseline was analyzed via receiver operating characteristic (ROC) curves. The identification of different serum levels of FGF21 and their association with cardiac death was analyzed via Kaplan-Meier survival curves. Serum FGF21 level was measured by an enzyme-linked immunosorbent assay kit, and serum N-terminal pro-brain natriuretic peptide (NT-pro-BNP) level was determined by a chemiluminescent immunoassay. A total of 253 patients were recruited for this study at baseline. Patients were excluded if they lacked echocardiography or laboratory measurement data, and 218 patients were enrolled in the final analysis. The average age was 66.32 ± 10.10 years. The optimal cutoff values of FGF21 and NT-pro-BNP for identifying left ventricular systolic dysfunction at baseline were 321.5 pg/mL and 131.3 ng/L, respectively, determined separately via ROC analysis. The areas under the curves were non-significant among FGF21, NT-pro-BNP and FGF21 + NT-pro-BNP as determined by pairwise comparisons. Both a higher serum level of FGF21 and a higher serum level of NT-pro-BNP were independent risk factors for left ventricular systolic dysfunction at baseline (odds ratio (OR) 3.138 [1.037-9.500], P = 0.043; OR 9.207 [2.036-41.643], P = 0.004, respectively). Further Kaplan-Meier survival analysis indicated an association between both a higher serum level of FGF21 and a higher serum level of NT-pro-BNP with cardiac death in 5 years [RR 5.000 (1.326-18.861), P = 0.026; RR 9.643 (2

  16. Whale Identification

    NASA Technical Reports Server (NTRS)

    1991-01-01

    R:BASE for DOS, a computer program developed under NASA contract, has been adapted by the National Marine Mammal Laboratory and the College of the Atlantic to provide an advanced computerized photo-matching technique for identification of humpback whales. The program compares photos with stored digitized descriptions, enabling researchers to track and determine distribution and migration patterns. R:BASE is a spinoff of RIM (Relational Information Manager), which was used to store data for analyzing heat shielding tiles on the Space Shuttle Orbiter. It is now the world's second largest selling line of microcomputer database management software.

  17. Global identification, xenophobia and globalisation: A cross-national exploration.

    PubMed

    Ariely, Gal

    2017-12-01

    This paper explores the ways in which globalisation influences social identity. Combining a psychological social-identity framework with sociological considerations regarding the contextual impact of globalisation, it tests whether global identification-that is, people's identification as global citizens-constitutes an inclusive category, negatively linked to xenophobic attitudes towards immigrants across countries and whether the actual country level of globalisation moderates the relationship between global identification and xenophobia. Unlike most psychological studies of globalisation, it draws its data from 124 national samples across 86 countries, with 154,760 respondents overall, using three different cross-national surveys. Study 1 (International Social Survey Program National Identity Module III 2013; N = 39,426, countries = 32) evinces that while global identification is in fact negatively linked to xenophobia, the correlation is moderated by the country level of globalisation, countries marked by higher levels of globalisation exhibiting a stronger negative relation between global identification and xenophobia than those characterised by a lower level of globalisation. Study 2 (European Values Study 2008; N = 53,083, countries = 44) and Study 3 (World Values Survey 6; N = 65,251, countries = 48) replicated these results across other countries employing dissimilar scales for global identification and xenophobia. © 2016 International Union of Psychological Science.
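
The moderation test described above (does country-level globalisation strengthen the negative link between global identification and xenophobia?) amounts to fitting an interaction term. A sketch on synthetic data; variable names and effect sizes are illustrative, not the studies' estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
global_id = rng.normal(size=n)  # individual-level global identification
glob_ctx = rng.normal(size=n)   # country-level globalisation (z-scored)

# Simulated outcome: the negative link between global identification and
# xenophobia strengthens as contextual globalisation rises (interaction < 0).
xenophobia = (-0.3 * global_id + 0.1 * glob_ctx
              - 0.2 * global_id * glob_ctx
              + rng.normal(scale=1.0, size=n))

X = np.column_stack([np.ones(n), global_id, glob_ctx, global_id * glob_ctx])
beta, *_ = np.linalg.lstsq(X, xenophobia, rcond=None)
print(beta)  # [intercept, main effect, context effect, interaction]
```

A negative fitted interaction coefficient is the signature of the moderation pattern the paper reports; multilevel (random-slope) models would be the more faithful choice for clustered cross-national data.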

  18. A new modelling and identification scheme for time-delay systems with experimental investigation: a relay feedback approach

    NASA Astrophysics Data System (ADS)

    Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit

    2017-07-01

    In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time-delay. An ideal relay and the unknown system are connected through a negative feedback loop to induce sustained oscillations in the output around a non-zero setpoint. Thereafter, the obtained limit cycle information is substituted in the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped, critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included for the validation of the proposed identification scheme through computer simulations. Subsequently, the comparisons between estimated model and true system are drawn through integral absolute error criterion and frequency response plots. Finally, the obtained output responses through simulations are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 setup.
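
The step "limit cycle information is substituted in the derived mathematical equations" can be illustrated with the standard describing-function relations for a relay experiment (the classic Åström-Hägglund relations, not necessarily the exact equations derived in this paper): relay amplitude h, limit-cycle amplitude a, and period Pu give the plant's ultimate gain and frequency.

```python
import math

def ultimate_point(h, a, Pu):
    """Standard describing-function estimates from a relay test:
    ultimate gain Ku = 4h / (pi * a), ultimate frequency wu = 2*pi / Pu."""
    Ku = 4.0 * h / (math.pi * a)
    wu = 2.0 * math.pi / Pu
    return Ku, wu

# Hypothetical limit-cycle measurements from a relay experiment.
Ku, wu = ultimate_point(h=1.0, a=0.5, Pu=4.0)
print(Ku, wu)  # ~2.546, ~1.571
```

Model parameters (e.g., gain, time constant, and dead time of a first-order-plus-dead-time model) are then solved from such frequency-point data.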

  19. Integrand-level reduction of loop amplitudes by computational algebraic geometry methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yang

    2012-09-01

    We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts, via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package, BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, with the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2 ɛ dimensions, and we present some two- and three-loop examples of applications of this algorithm.
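
The basis-determination step rests on Gröbner bases: reducing a polynomial modulo a Gröbner basis of an ideal yields remainder zero exactly when the polynomial lies in the ideal, which is how redundant integrand terms are detected. A toy sympy example (the polynomials are invented and far simpler than multi-loop ideals):

```python
from sympy import symbols, groebner, expand

x, y, z = symbols("x y z")

# Toy ideal standing in for the polynomial constraints of an integrand basis.
f1 = x**2 + y**2 + z**2 - 1
f2 = x * y - z
G = groebner([f1, f2], x, y, z, order="lex")

# Any polynomial combination of the generators reduces to zero modulo G:
# that is the ideal-membership test underlying the reduction.
_, remainder = G.reduce(expand(x * f1 + y * f2))
print(remainder)  # 0
```

BasisDet plays the analogous role in Mathematica, with primary decomposition handling the classification of cut solutions.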

  20. Accuracy and consistency of grass pollen identification by human analysts using electron micrographs of surface ornamentation

    PubMed Central

    Mander, Luke; Baker, Sarah J.; Belcher, Claire M.; Haselhorst, Derek S.; Rodriguez, Jacklyn; Thorn, Jessica L.; Tiwari, Shivangi; Urrego, Dunia H.; Wesseln, Cassandra J.; Punyasena, Surangi W.

    2014-01-01

    • Premise of the study: Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification. • Methods: We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images. • Results: Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%. • Discussion: The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias. PMID:25202649
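
The accuracy and consistency measures described above can be sketched in a few lines: accuracy is the fraction of identifications matching the true species, and consistency is the fraction of duplicate image pairs given the same label. Image IDs and species names below are hypothetical.

```python
# Miniature version of the analyst scoring (hypothetical data).
truth  = {"img1": "Poa", "img2": "Zea", "img3": "Poa", "img4": "Zea"}
labels = {"img1": "Poa", "img2": "Poa", "img3": "Poa", "img4": "Zea"}
duplicate_pairs = [("img1", "img3"), ("img2", "img4")]  # same grain, two images

accuracy = sum(labels[i] == truth[i] for i in truth) / len(truth)
consistency = sum(labels[a] == labels[b] for a, b in duplicate_pairs) / len(duplicate_pairs)
print(accuracy, consistency)  # 0.75 0.5
```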

  1. Human identification based on cranial computed tomography scan — a case report

    PubMed Central

    Silva, RF; Botelho, TL; Prado, FB; Kawagushi, JT; Daruge Júnior, E; Bérzin, F

    2011-01-01

    Today, there is increasing use of CT scanning on a clinical basis, aiding in the diagnosis of diseases or injuries. This exam also provides important information that allows identification of individuals. This paper reports the use of a CT scan on the skull, taken when the victim was alive, for the positive identification of a victim of a traffic accident in which the fingerprint analysis was impossible. The authors emphasize that the CT scan is a tool primarily used in clinical diagnosis and may contribute significantly to forensic purpose, allowing the exploration of virtual corpses before the classic autopsy. The use of CT scans might increase the quantity and quality of information involved in the death of the person examined. PMID:21493883

  2. [Isolation and identification methods of enterobacteria group and its technological advancement].

    PubMed

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for enterobacteria groups have improved markedly with technological advancement. Clinical microbiology tests have changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original approach to identifying enterobacteria groups and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates and the indole, methyl red, citrate, and urease tests. Commercial identification kits and automated instruments with computer-based analysis are also discussed as current methods that provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.
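
The tube-test logic can be sketched as a lookup over classic IMViC results (Indole, Methyl red, Voges-Proskauer, Citrate). Only two textbook patterns are included here; real identification kits score many more reactions and report a probabilistic match.

```python
# Textbook IMViC patterns (Indole, Methyl red, Voges-Proskauer, Citrate).
IMVIC = {
    (True, True, False, False): "Escherichia coli",
    (False, False, True, True): "Enterobacter aerogenes",
}

def identify(indole, methyl_red, vp, citrate):
    """Return the species matching the IMViC pattern, if any."""
    return IMVIC.get((indole, methyl_red, vp, citrate), "not determined")

print(identify(True, True, False, False))  # Escherichia coli
```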

  3. CHIRAL--A Computer Aided Application of the Cahn-Ingold-Prelog Rules.

    ERIC Educational Resources Information Center

    Meyer, Edgar F., Jr.

    1978-01-01

    A computer program is described for identification of chiral centers in molecules. Essential input to the program includes both atomic and bonding information. The program does not require computer graphic input-output. (BB)

  4. A Feasibility Study of View-independent Gait Identification

    DTIC Science & Technology

    2012-03-01

    ice skates . For walking, the footprint records for single pixels form clusters that are well separated in space and time. (Any overlap of contact...Pattern Recognition 2007, 1-8. Cheng M-H, Ho M-F & Huang C-L (2008), "Gait Analysis for Human Identification Through Manifold Learning and HMM... Learning and Cybernetics 2005, 4516-4521 Moeslund T B & Granum E (2001), "A Survey of Computer Vision-Based Human Motion Capture", Computer Vision

  5. PathoScope 2.0: a complete computational framework for strain identification in environmental or clinical sequencing samples

    PubMed Central

    2014-01-01

    Background Recent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many recent tools for analyzing metagenomic-sequencing data have emerged, however, these approaches often suffer from issues of specificity, efficiency, and typically do not include a complete metagenomic analysis framework. Results We present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps; including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4. Conclusions The results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/. PMID:25225611

  6. A unified framework for evaluating the risk of re-identification of text de-identification tools.

    PubMed

    Scaiano, Martin; Middleton, Grant; Arbuckle, Luk; Kolhatkar, Varada; Peyton, Liam; Dowling, Moira; Gipson, Debbie S; El Emam, Khaled

    2016-10-01

    It has become regular practice to de-identify unstructured medical text for use in research using automatic methods, the goal of which is to remove patient identifying information to minimize re-identification risk. The metrics commonly used to determine if these systems are performing well do not accurately reflect the risk of a patient being re-identified. We therefore developed a framework for measuring the risk of re-identification associated with textual data releases. We apply the proposed evaluation framework to a data set from the University of Michigan Medical School. Our risk assessment results are then compared with those that would be obtained using a typical contemporary micro-average evaluation of recall in order to illustrate the difference between the proposed evaluation framework and the current baseline method. We demonstrate how this framework compares against common measures of the re-identification risk associated with an automated text de-identification process. For the probability of re-identification using our evaluation framework we obtained a mean value for direct identifiers of 0.0074 and a mean value for quasi-identifiers of 0.0022. The 95% confidence interval for these estimates were below the relevant thresholds. The threshold for direct identifier risk was based on previously used approaches in the literature. The threshold for quasi-identifiers was determined based on the context of the data release following commonly used de-identification criteria for structured data. Our framework attempts to correct for poorly distributed evaluation corpora, accounts for the data release context, and avoids the often optimistic assumptions that are made using the more traditional evaluation approach. It therefore provides a more realistic estimate of the true probability of re-identification. This framework should be used as a basis for computing re-identification risk in order to more realistically evaluate future text de-identification tools
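
The gap between micro-averaged recall and a risk-oriented view can be illustrated numerically (this is a simplified proxy, not the paper's actual framework): pooling all identifiers can report high recall even when a meaningful fraction of documents leak at least one identifier.

```python
# Hypothetical de-identification results: identifiers per document and
# how many the automated system missed.
docs = [
    {"total": 10, "missed": 0},
    {"total": 2,  "missed": 1},  # one short note leaks an identifier
    {"total": 8,  "missed": 0},
]

found = sum(d["total"] - d["missed"] for d in docs)
micro_recall = found / sum(d["total"] for d in docs)          # pooled view
leaky_fraction = sum(d["missed"] > 0 for d in docs) / len(docs)  # risk view
print(micro_recall, leaky_fraction)  # 0.95 vs ~0.33
```

A 95% micro-averaged recall here coexists with a third of documents being potentially re-identifiable, which is the kind of mismatch the proposed framework is designed to expose.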

  7. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGrail, B.P.; Mahoney, L.A.

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.

  8. Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.

    PubMed

    Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S

    2016-03-01

    To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
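
As a point of reference for the cost comparison above (DCP is said to only double the cost of local binary patterns), a basic 8-neighbour LBP can be sketched as follows; this is the textbook LBP, not the DCP descriptor itself.

```python
import numpy as np

def lbp_3x3(img):
    """Basic local binary pattern: each interior pixel gets an 8-bit code,
    one bit per neighbour that is >= the centre value."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [0, 0, 0]], dtype=np.uint8)
print(lbp_3x3(img))  # [[7]]: only the three top neighbours exceed the centre
```

DCP replaces the single-ring comparison with dual-cross sampling at two radii, which is where the (roughly doubled) extra cost comes from.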

  9. Computer Music

    NASA Astrophysics Data System (ADS)

    Cook, Perry

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.). Although most people would think that analog synthesizers and electronic music substantially predate the use of computers in music, many experiments and complete computer music systems were being constructed and used as early as the 1950s.

  10. Improved molecular level identification of organic compounds using comprehensive two-dimensional chromatography, dual ionization energies and high resolution mass spectrometry

    DOE PAGES

    Worton, David R.; Decker, Monika; Isaacman-VanWertz, Gabriel; ...

    2017-05-22

    A new analytical methodology combining comprehensive two-dimensional gas chromatography (GC×GC), dual ionization energies and high resolution time of flight mass spectrometry has been developed that improves molecular level identification of organic compounds in complex environmental samples. GC×GC maximizes compound separation providing cleaner mass spectra by minimizing erroneous fragments from interferences and co-eluting peaks. Traditional electron ionization (EI, 70 eV) provides MS fragmentation patterns that can be matched to published EI MS libraries while vacuum ultraviolet photoionization (VUV, 10.5 eV) yields MS with reduced fragmentation enhancing the abundance of the molecular ion providing molecular formulas when combined with high resolution mass spectrometry. We demonstrate this new approach by applying it to a sample of organic aerosol. In this sample, 238 peaks were matched to EI MS library data with FM ≥ 800 but a fifth (42 compounds) were determined to be incorrectly identified because the molecular formula was not confirmed by the VUV MS data. This highlights the importance of using a complementary technique to confirm compound identifications even for peaks with very good matching statistics. In total, 171 compounds were identified by EI MS matching to library spectra with confirmation of the molecular formula from the high resolution VUV MS data and were not dependent on the matching statistics being above a threshold value. A large number of unidentified peaks were still observed with FM < 800, which in routine analysis would typically be neglected. Where possible, these peaks were assigned molecular formulas from the VUV MS data (211 in total). In total, the combination of EI and VUV MS data provides more than twice as much molecular level peak information than traditional approaches and improves confidence in the identification of individual organic compounds. The molecular formula data from the VUV MS data was used, in

  11. Improved molecular level identification of organic compounds using comprehensive two-dimensional chromatography, dual ionization energies and high resolution mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worton, David R.; Decker, Monika; Isaacman-VanWertz, Gabriel

    A new analytical methodology combining comprehensive two-dimensional gas chromatography (GC×GC), dual ionization energies and high resolution time of flight mass spectrometry has been developed that improves molecular level identification of organic compounds in complex environmental samples. GC×GC maximizes compound separation providing cleaner mass spectra by minimizing erroneous fragments from interferences and co-eluting peaks. Traditional electron ionization (EI, 70 eV) provides MS fragmentation patterns that can be matched to published EI MS libraries while vacuum ultraviolet photoionization (VUV, 10.5 eV) yields MS with reduced fragmentation enhancing the abundance of the molecular ion providing molecular formulas when combined with high resolution mass spectrometry. We demonstrate this new approach by applying it to a sample of organic aerosol. In this sample, 238 peaks were matched to EI MS library data with FM ≥ 800 but a fifth (42 compounds) were determined to be incorrectly identified because the molecular formula was not confirmed by the VUV MS data. This highlights the importance of using a complementary technique to confirm compound identifications even for peaks with very good matching statistics. In total, 171 compounds were identified by EI MS matching to library spectra with confirmation of the molecular formula from the high resolution VUV MS data and were not dependent on the matching statistics being above a threshold value. A large number of unidentified peaks were still observed with FM < 800, which in routine analysis would typically be neglected. Where possible, these peaks were assigned molecular formulas from the VUV MS data (211 in total). In total, the combination of EI and VUV MS data provides more than twice as much molecular level peak information than traditional approaches and improves confidence in the identification of individual organic compounds. The molecular formula data from the VUV MS data was used, in

  12. Projective Identification in Common Couple Dances.

    ERIC Educational Resources Information Center

    Middelberg, Carol V.

    2001-01-01

Integrates the object relations concept of projective identification and the systemic concept of marital dances to develop a more powerful model for working with more difficult and distressed couples. Suggests how object relations techniques can be used to interrupt projective identifications and resolve conflict on the intrapsychic level so the…

  13. Versatile analog pulse height computer performs real-time arithmetic operations

    NASA Technical Reports Server (NTRS)

    Brenner, R.; Strauss, M. G.

    1967-01-01

    Multipurpose analog pulse height computer performs real-time arithmetic operations on relatively fast pulses. This computer can be used for identification of charged particles, pulse shape discrimination, division of signals from position sensitive detectors, and other on-line data reduction techniques.

  14. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  15. Instruction-Level Characterization of Scientific Computing Applications Using Hardware Performance Counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1998-11-24

Workload characterization has been proven an essential tool to architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction or IPC, instructions per cycle) etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints especially on large-scale scientific computing applications. This paper presents a new technique of characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate some average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight to the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
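As a sketch of the counter-derived parameters described above, the snippet below computes CPI/IPC and instruction-mix fractions from raw counter totals. The counter names and values are hypothetical, not tied to any particular processor or measurement tool.

```python
# Sketch: deriving abstract workload parameters from raw hardware
# counter values. Counter names and numbers are hypothetical.

def characterize(counters):
    """Compute simple instruction-level workload parameters."""
    cycles = counters["cycles"]
    instrs = counters["instructions"]
    cpi = cycles / instrs      # cycles per instruction
    ipc = instrs / cycles      # instructions per cycle
    # Fraction of the dynamic instruction mix per functional unit.
    mix = {k: v / instrs for k, v in counters.items()
           if k not in ("cycles", "instructions")}
    return {"cpi": cpi, "ipc": ipc, "mix": mix}

sample = {
    "cycles": 2_000_000,
    "instructions": 1_000_000,
    "loads": 300_000,
    "stores": 100_000,
    "flops": 250_000,
}
params = characterize(sample)
print(params["cpi"])            # 2.0
print(params["mix"]["flops"])   # 0.25
```

Comparing the derived CPI and mix fractions against a processor's pipeline widths and functional-unit counts is what lets the bottleneck be estimated without simulation.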

  16. Sound production due to large-scale coherent structures. [and identification of noise mechanisms in turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The sound due to the large-scale (wavelike) structure in an infinite free turbulent shear flow is examined. Specifically, a computational study of a plane shear layer is presented, which accounts, by way of triple decomposition of the flow field variables, for three distinct component scales of motion (mean, wave, turbulent), and from which the sound - due to the large-scale wavelike structure - in the acoustic field can be isolated by a simple phase average. The computational approach has allowed for the identification of a specific noise production mechanism, viz the wave-induced stress, and has indicated the effect of coherent structure amplitude and growth and decay characteristics on noise levels produced in the acoustic far field.
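The phase-average step of the triple decomposition can be illustrated on a synthetic signal: averaging over many periods recovers the mean plus the phase-coherent (wave) component, and the residual is the turbulent part. The signal, amplitude, and noise level below are invented for illustration.

```python
import numpy as np

# Triple decomposition f = mean + wave + turbulence via phase averaging
# on a synthetic periodic signal with additive noise.
rng = np.random.default_rng(0)
period, n_periods = 64, 200
t = np.arange(period * n_periods)
signal = 1.0 + 0.5 * np.sin(2 * np.pi * t / period) \
             + rng.normal(0, 0.2, t.size)

mean = signal.mean()
# Phase average: mean over periods at each phase -> mean + wave.
phase_avg = signal.reshape(n_periods, period).mean(axis=0)
wave = phase_avg - mean                  # coherent (wavelike) part
turb = signal - phase_avg[t % period]    # turbulent residual
print(round(float(wave.max()), 2))       # ~0.5, the coherent amplitude
```

The same idea, applied to the computed flow field, isolates the sound produced by the large-scale wavelike structure from the turbulent background.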

  17. Control Law Design in a Computational Aeroelasticity Environment

    NASA Technical Reports Server (NTRS)

    Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.

    2003-01-01

A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
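A minimal sketch of the Eigensystem Realization Algorithm mentioned above, restricted to the SISO case: stack the Markov parameters into Hankel matrices, take a truncated SVD, and read off a state-space realization. The Hankel dimensions and test system are illustrative; the paper's multivariable details are omitted.

```python
import numpy as np

def era(markov, n, rows=10, cols=10):
    """Eigensystem Realization Algorithm (SISO sketch).
    markov[k-1] holds the k-th Markov parameter h_k = C A^(k-1) B.
    Returns (A, B, C) of model order n."""
    H0 = np.array([[markov[i + j] for j in range(cols)]
                   for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)]
                   for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n], s[:n], Vt[:n, :]      # truncate to order n
    S_isqrt = np.diag(1.0 / np.sqrt(s))
    S_sqrt = np.diag(np.sqrt(s))
    A = S_isqrt @ U.T @ H1 @ Vt.T @ S_isqrt
    B = (S_sqrt @ Vt)[:, :1]                    # first input column
    C = (U @ S_sqrt)[:1, :]                     # first output row
    return A, B, C

# Check on a known first-order system h_k = 0.5**(k-1) (A=0.5, B=C=1).
markov = [0.5 ** k for k in range(25)]
A, B, C = era(markov, n=1)
print(round(float(A[0, 0]), 6))   # 0.5
```

With Markov parameters identified from the structural responses, the realized (A, B, C) model is what the LQG design then operates on.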

  18. High performance data acquisition, identification, and monitoring for active magnetic bearings

    NASA Technical Reports Server (NTRS)

    Herzog, Raoul; Siegwart, Roland

    1994-01-01

Future active magnetic bearing (AMB) systems must feature easier on-site tuning, higher stiffness and damping, better robustness with respect to undesirable vibrations in the housing and foundation, and enhanced monitoring and identification abilities. To get closer to these goals, we developed a fast parallel link from the digitally controlled AMB to Matlab, which is used on a host computer for data processing, identification, and controller layout. This enables the magnetic bearing to measure its own frequency responses without any additional measurement equipment. These measurements can be used for AMB identification.

  19. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). The system keeps a single microorganism in the middle of the visual field under a microscope by visual servoing of an automated stage. We propose a new energy function for the level set method that constrains the amount of light intensity inside the detected object contour in order to control the number of detected objects. The algorithm is implemented in the CPV system, and the computational time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be kept in track even if other paramecia appear in the visual field and make contact with the tracked paramecium.

  20. NASTRAN computer system level 12.1

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1971-01-01

Program uses finite element displacement method for solving linear response of large, three-dimensional structures subject to static, dynamic, thermal, and random loadings. Program adapts to computers of different manufacture, permits updating and extension, allows interchange of output and input information between users, and is extensively documented.

  1. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Route Sanitizer: Connected Vehicle Trajectory De-Identification Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Jason M; Ferber, Aaron E

Route Sanitizer is ORNL's connected vehicle moving object database de-identification tool and a graphical user interface to ORNL's connected vehicle de-identification algorithm. It uses the Google Chrome (soon to be Electron) platform so it will run on different computing platforms. The basic de-identification strategy is record redaction: portions of a vehicle trajectory (e.g. sequences of precise temporal spatial records) are removed. It does not alter retained records. The algorithm uses custom techniques to find areas within trajectories that may be considered private, then it suppresses those in addition to enough of the trajectory surrounding those locations to protect against "inference attacks" in a mathematically sound way. Map data is integrated into the process to make this possible.
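A toy sketch of the record-redaction idea: suppress every trajectory point within a privacy radius of a sensitive location, plus a buffer of neighbouring points to resist inference attacks, while leaving retained records unaltered. The radius, buffer size, and coordinates below are invented and do not reflect ORNL's actual algorithm.

```python
import math

def redact(points, sensitive, radius=100.0, buffer=2):
    """Record redaction sketch: points and sensitive are lists of
    (x, y) coordinates; flagged records are dropped, never altered."""
    flagged = set()
    for i, (x, y) in enumerate(points):
        for sx, sy in sensitive:
            if math.hypot(x - sx, y - sy) <= radius:
                # Also suppress neighbouring records around the hit.
                flagged.update(range(max(0, i - buffer),
                                     min(len(points), i + buffer + 1)))
    return [p for i, p in enumerate(points) if i not in flagged]

track = [(i * 50.0, 0.0) for i in range(10)]   # a point every 50 m
kept = redact(track, sensitive=[(250.0, 0.0)])
print(len(kept))   # 1: only the first point survives the redaction
```

Note how the buffer makes suppressed spans overlap and merge, so an observer cannot pinpoint the private location from the gap's edges alone.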

  3. Introjective identification: the analytic work of evocation.

    PubMed

    Eekhoff, Judy K

    2016-12-01

    This paper focuses on a particular counter-transference process-introjective identification and the evocation it enables. Introjective identification enables evocation because it engages the analyst's radical openness to the experience of the patient at the most primordial level. The accumulated wisdom of Ferenczi and those who followed him is used to discuss the role of introjective identification in the treatment of patients with non-neurotic structures.

  4. Acceleration of Cherenkov angle reconstruction with the new Intel Xeon/FPGA compute platform for the particle identification in the LHCb Upgrade

    NASA Astrophysics Data System (ADS)

    Faerber, Christian

    2017-10-01

The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a ‘triggerless’ readout scheme, in which all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the Event Filter farm to 40 TBit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of such a computing farm, which can process this amount of data as efficiently as possible, is a challenging task, and several compute accelerator technologies are being considered for use inside the new Event Filter farm. In the high performance computing sector, more and more FPGA compute accelerators are used to improve compute performance and reduce power consumption (e.g. in the Microsoft Catapult project and the Bing search engine). For the LHCb upgrade, the use of an experimental FPGA-accelerated computing platform in the Event Building or in the Event Filter farm is therefore being considered and tested. This platform from Intel hosts a general-purpose CPU and a high performance FPGA connected by a high speed link, which for this platform is a QPI link. An accelerator is implemented on the FPGA. The system used is a two-socket platform from Intel with a Xeon CPU and an FPGA. The FPGA has cache-coherent memory access to the main memory of the server and can collaborate with the CPU. As a first step, a computationally intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported in Verilog to the Intel Xeon/FPGA platform and accelerated by a factor of 35. The same algorithm was then ported to the Intel Xeon/FPGA platform with OpenCL; the implementation work and the performance are compared. Another FPGA accelerator, the Nallatech 385 PCIe accelerator with the same Stratix V FPGA, was also tested for performance. The results show that the Intel

  5. Positrons vs electrons channeling in silicon crystal: energy levels, wave functions and quantum chaos manifestations

    NASA Astrophysics Data System (ADS)

    Shul'ga, N. F.; Syshchenko, V. V.; Tarnovsky, A. I.; Solovyev, I. I.; Isupov, A. Yu.

    2018-01-01

The motion of fast electrons through a crystal during axial channeling can be regular or chaotic. Dynamical chaos in quantum systems manifests itself both in the statistical properties of energy spectra and in the morphology of the wave functions of individual stationary states. In this report, we investigate the axial channeling of high and low energy electrons and positrons near the [100] direction of a silicon crystal. This case is particularly interesting because the chaotic motion domain occupies only a small part of the phase space for channeling electrons, whereas the motion of channeling positrons is substantially chaotic for almost all initial conditions. The energy levels of transverse motion, as well as the wave functions of the stationary states, have been computed numerically. Group theory methods were used for classification of the computed eigenfunctions and identification of the non-degenerate and doubly degenerate energy levels. The channeling radiation spectrum for the low energy electrons has also been computed.

  6. The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.

    PubMed

    Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J

    2017-11-01

Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered to be the gold standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in the de-identification regular track of a shared task challenge organized by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques such as disambiguation rules and term ambiguity measurement, and used a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). With these encouraging results, we have gained the confidence to improve the tool and use it for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
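A minimal illustration of the rule-based and dictionary-matching ingredients mentioned above. The regex patterns and name dictionary are toy examples, far short of full HIPAA identifier coverage, and are not the authors' system.

```python
import re

NAME_DICT = {"smith", "johnson"}   # toy surname dictionary

# Regex rules for a few PHI categories (illustrative only).
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
]

def deidentify(text):
    """Apply regex rules, then dictionary matching on tokens."""
    for pattern, tag in RULES:
        text = pattern.sub(tag, text)
    tokens = ["[NAME]" if t.strip(".,").lower() in NAME_DICT else t
              for t in text.split()]
    return " ".join(tokens)

note = "Mr. Smith seen on 3/14/2016, SSN 123-45-6789, call 205-555-0100."
print(deidentify(note))
# Mr. [NAME] seen on [DATE], SSN [SSN], call [PHONE].
```

Real systems layer many such passes (a "multi-pass sieve"), with disambiguation rules to decide, e.g., whether "May" is a month or a surname.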

  7. Cloud computing in pharmaceutical R&D: business risks and mitigations.

    PubMed

    Geiger, Karl

    2010-05-01

    Cloud computing provides information processing power and business services, delivering these services over the Internet from centrally hosted locations. Major technology corporations aim to supply these services to every sector of the economy. Deploying business processes 'in the cloud' requires special attention to the regulatory and business risks assumed when running on both hardware and software that are outside the direct control of a company. The identification of risks at the correct service level allows a good mitigation strategy to be selected. The pharmaceutical industry can take advantage of existing risk management strategies that have already been tested in the finance and electronic commerce sectors. In this review, the business risks associated with the use of cloud computing are discussed, and mitigations achieved through knowledge from securing services for electronic commerce and from good IT practice are highlighted.

  8. Literary Identification as Transformative Feminist Pedagogy

    ERIC Educational Resources Information Center

    Jespersen, T. Christine

    2014-01-01

    The project of social justice in education, or maybe any education, involves ideological shifts. These shifts include profound changes to the self. One way that ideological beliefs change is through a process of identification, which occurs on conscious and unconscious levels. As we build our identities through identification, the incorporation of…

  9. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
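To illustrate the second-kind integral-equation formulation, the sketch below solves a Fredholm equation of the second kind numerically. It uses a simple Nyström/trapezoidal discretization rather than the paper's spline collocation scheme; the kernel and right-hand side are a made-up test problem with known solution.

```python
import numpy as np

def solve_fredholm2(kernel, f, a=0.0, b=1.0, n=201):
    """Solve u(x) - int_a^b K(x,t) u(t) dt = f(x) on [a, b]
    by Nystrom discretization with the trapezoidal rule."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))          # trapezoid weights
    w[0] = w[-1] = 0.5 * (b - a) / (n - 1)
    K = kernel(x[:, None], x[None, :])
    A = np.eye(n) - K * w[None, :]             # (I - KW) u = f
    return x, np.linalg.solve(A, f(x))

# Test problem with exact solution u(x) = x:
# u(x) - int_0^1 (x t / 2) u(t) dt = 5x/6.
x, u = solve_fredholm2(lambda x, t: 0.5 * x * t, lambda x: 5 * x / 6)
print(np.max(np.abs(u - x)))   # small discretization error
```

A collocation scheme replaces the quadrature step with a spline expansion of u, but the resulting linear system has the same (I - K)u = f structure.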

  10. Multivariable frequency domain identification via 2-norm minimization

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1992-01-01

    The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
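The SK initialization step can be sketched for a SISO transfer function: at each pass, a linear least-squares problem is solved with residuals reweighted by the previous denominator estimate. The magnitude weighting and fixed model orders below are simplifying assumptions, not the paper's multivariable matrix-fraction formulation.

```python
import numpy as np

def sk_fit(w, H, nb=1, na=1, iters=10):
    """Fit H(jw) ~ b(jw)/a(jw), a monic (a0 = 1), by the
    Sanathanan-Koerner reweighted least-squares iteration."""
    s = 1j * w
    Vb = np.vander(s, nb + 1, increasing=True)         # 1, s, ...
    Va = np.vander(s, na + 1, increasing=True)[:, 1:]  # s, s^2, ...
    weight = np.ones_like(s)
    for _ in range(iters):
        # Linearized residual: b(s) - H * (a(s) - 1) = H.
        M = np.hstack([Vb, -H[:, None] * Va]) / weight[:, None]
        rhs = H / weight
        Mri = np.vstack([M.real, M.imag])     # stack real/imag parts
        rri = np.concatenate([rhs.real, rhs.imag])
        theta, *_ = np.linalg.lstsq(Mri, rri, rcond=None)
        a = np.concatenate([[1.0], theta[nb + 1:]])
        weight = np.abs(np.polyval(a[::-1], s)) + 1e-12  # |a(jw)|
    return theta[:nb + 1], a

w = np.linspace(0.1, 10, 50)
H_true = 2.0 / (1 + 0.5 * 1j * w)          # true plant 2/(1+0.5s)
b, a = sk_fit(w, H_true)
print(np.round(b, 3), np.round(a, 3))      # b ~ [2, 0], a ~ [1, 0.5]
```

In the paper, the SK solution then seeds a Gauss-Newton iteration on the true (nonlinear) 2-norm error; for exact data, as here, SK alone already recovers the plant.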

  11. Diagnostic reference levels of paediatric computed tomography examinations performed at a dedicated Australian paediatric hospital.

    PubMed

    Bibbo, Giovanni; Brown, Scott; Linke, Rebecca

    2016-08-01

Diagnostic reference levels (DRL) of procedures involving ionizing radiation are important tools for optimizing the radiation doses delivered to patients and for identifying cases where dose levels are unusually high. This is particularly important for paediatric patients undergoing computed tomography (CT) examinations, as these examinations are associated with relatively high doses. Paediatric CT studies performed at our institution from January 2010 to March 2014 were retrospectively analysed to determine the 75th and 95th percentiles of both the volume computed tomography dose index (CTDIvol) and the dose-length product (DLP) for the most commonly performed studies, in order to: establish local diagnostic reference levels for paediatric computed tomography examinations performed at our institution, benchmark our DRL against national and international published paediatric values, and determine the compliance of CT radiographers with established protocols. The derived local 75th percentile DRL were found to be acceptable when compared with those published by the Australian National Radiation Dose Register and two national children's hospitals, and at the international level with the National Reference Doses for the UK. The 95th percentiles of CTDIvol for the various CT examinations were found to be acceptable values for the CT scanner Dose-Check Notification. Benchmarking CT radiographers shows that they follow the set protocols for the various examinations without significant variations in the machine setting factors. The derivation of DRL has given us a tool to evaluate and improve the performance of our CT service through improved compliance and a reduction in radiation dose to our paediatric patients. We have also been able to benchmark our performance against similar national and international institutions. © 2016 The Royal Australian and New Zealand College of Radiologists.
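The percentile computation behind a local DRL is straightforward: the 75th percentile of the recorded dose indices sets the reference level, and the 95th percentile can serve as a scanner notification threshold. The dose values below are fabricated for illustration only.

```python
import numpy as np

# Illustrative CTDIvol values (mGy) for one examination type.
ctdi_vol = np.array([1.8, 2.1, 2.4, 2.6, 3.0, 3.2, 3.5, 4.1, 5.0, 7.2])

drl_75 = np.percentile(ctdi_vol, 75)     # local DRL
notify_95 = np.percentile(ctdi_vol, 95)  # Dose-Check notification level
print(f"local DRL (75th pct): {drl_75:.2f} mGy")
print(f"notification level (95th pct): {notify_95:.2f} mGy")
```

Individual studies above the notification level are the "unusually high" cases flagged for review; re-deriving the percentiles periodically tracks whether optimization is working.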

  12. Identification of Trihalomethanes (THMs) Levels in Water Supply: A Case Study in Perlis, Malaysia

    NASA Astrophysics Data System (ADS)

    Jalil, Mohd Faizal Ab; Hamidin, Nasrul; Anas Nagoor Gunny, Ahmad; Nihla Kamarudzaman, Ain

    2018-03-01

In Malaysia, chlorination is used for drinking water disinfection at water treatment plants due to its cost-effectiveness and efficiency. However, the use of chlorine poses potential health risks due to the formation of disinfection by-products such as trihalomethanes (THMs). THMs are formed by the reaction between chlorine and natural organic matter. The objective of the study is to analyze the level of THMs in the water supply in Perlis, Malaysia. The water samples were collected from end-user tap water near the water treatment plants (WTP) located in Perlis, including Timah Tasoh WTP, Kampung Sungai Baru WTP, and Arau Phase I, II, III, and IV WTPs. The THMs were analyzed using Gas Chromatography-Mass Spectrometry (GC/MS). The results showed that the water supply from Timah Tasoh WTP generated the most THMs, whereas Kuala Sungai Baru showed the lowest amount of total THMs. In conclusion, the presence of THMs in tap water is of great concern since these compounds can cause cancer in humans. Therefore, the identification of THM formation is crucial in order to make sure that tap water quality remains at acceptable safety levels.

  13. Integration of low level and ontology derived features for automatic weapon recognition and identification

    NASA Astrophysics Data System (ADS)

    Sirakov, Nikolay M.; Suh, Sang; Attardo, Salvatore

    2011-06-01

This paper presents a further step in research toward the development of a quick and accurate weapon identification methodology and system. A basic stage of this methodology is the automatic acquisition and updating of a weapons ontology as a source for deriving high level weapons information. The present paper outlines the main ideas used to approach this goal. In the next stage, a clustering approach is suggested on the basis of a hierarchy of concepts. An inherent slot of every node of the proposed ontology is a low level features vector (LLFV), which facilitates the search through the ontology. Part of the LLFV is information about the object's parts. To partition an object, a new approach is presented that is capable of defining the object's concavities, used to mark the end points of weapon parts, which are considered as convexities. Further, an existing matching approach is optimized to determine whether an ontological object matches the objects in an input image. Objects from derived ontological clusters are considered for the matching process. Image resizing is studied and applied to decrease the runtime of the matching approach and to investigate its rotational and scaling invariance. A set of experiments is performed to validate the theoretical concepts.

  14. Identification of protein-ligand binding sites by the level-set variational implicit-solvent approach.

    PubMed

    Guo, Zuojun; Li, Bo; Cheng, Li-Tien; Zhou, Shenggao; McCammon, J Andrew; Che, Jianwei

    2015-02-10

Protein–ligand binding is a key biological process at the molecular level. The identification and characterization of small-molecule binding sites on therapeutically relevant proteins have tremendous implications for target evaluation and rational drug design. In this work, we used the recently developed level-set variational implicit-solvent model (VISM) with the Coulomb field approximation (CFA) to locate and characterize potential protein–small-molecule binding sites. We applied our method to a data set of 515 protein–ligand complexes and found that 96.9% of the cocrystallized ligands bind to the VISM-CFA-identified pockets and that 71.8% of the identified pockets are occupied by cocrystallized ligands. For 228 tight-binding protein–ligand complexes (i.e., complexes with experimental pKd values larger than 6), 99.1% of the cocrystallized ligands are in the VISM-CFA-identified pockets. In addition, it was found that the ligand binding orientations are consistent with the hydrophilic and hydrophobic descriptions provided by VISM. Quantitative characterization of binding pockets with topological and physicochemical parameters was used to assess the “ligandability” of the pockets. The results illustrate the key interactions between ligands and receptors and can be very informative for rational drug design.

  15. Research in applied mathematics, numerical analysis, and computer science

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  16. Leaf epidermis images for robust identification of plants

    PubMed Central

    da Silva, Núbia Rosa; Oliveira, Marcos William da Silva; Filho, Humberto Antunes de Almeida; Pinheiro, Luiz Felipe Souza; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2016-01-01

This paper proposes a methodology for plant analysis and identification based on extracting texture features from microscopic images of leaf epidermis. All the experiments were carried out using 32 plant species with 309 epidermal samples captured by an optical microscope coupled to a digital camera. The results of the computational methods using texture features were compared to the conventional approach, in which quantitative measurements of stomatal traits (density, length and width) were manually obtained. Epidermis image classification using texture achieved a success rate of over 96%, while the success rate was around 60% for quantitative measurements taken manually. Furthermore, we verified the robustness of our method with respect to the natural phenotypic plasticity of stomata by analysing samples from the same species grown in different environments. Texture methods were robust even when considering phenotypic plasticity of stomatal traits, with a decrease of only 20% in the success rate, whereas the quantitative measurements proved to be highly sensitive, with a decrease of 77%. The comparison between the computational approach and the conventional quantitative measurements shows how computational systems are advantageous and promising for solving problems in botany, such as species identification. PMID:27217018

  17. Computational Psychiatry

    PubMed Central

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically-realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  18. Variation in Microbial Identification System Accuracy for Yeast Identification Depending on Commercial Source of Sabouraud Dextrose Agar

    PubMed Central

    Kellogg, James A.; Bankert, David A.; Chaturvedi, Vishnu

    1999-01-01

    The accuracy of the Microbial Identification System (MIS; MIDI, Inc.) for identification of yeasts to the species level was compared by using 438 isolates grown on prepoured BBL Sabouraud dextrose agar (SDA) and prepoured Remel SDA. Correct identification was observed for 326 (74%) of the yeasts cultured on BBL SDA versus only 214 (49%) of yeasts grown on Remel SDA (P < 0.001). The commercial source of the SDA used in the MIS procedure significantly influences the system’s accuracy. PMID:10325387

  19. Characterisation of recycled acrylonitrile-butadiene-styrene and high-impact polystyrene from waste computer equipment in Brazil.

    PubMed

    Hirayama, Denise; Saron, Clodoaldo

    2015-06-01

    Polymeric materials constitute a considerable fraction of waste computer equipment, and acrylonitrile-butadiene-styrene and high-impact polystyrene are the main thermoplastic polymeric components found in such equipment. Identification, separation and characterisation of the additives present in acrylonitrile-butadiene-styrene and high-impact polystyrene are fundamental procedures for the mechanical recycling of these polymers. The aim of this study was to evaluate methods for identification of acrylonitrile-butadiene-styrene and high-impact polystyrene from waste computer equipment in Brazil, as well as their potential for mechanical recycling. The imprecise use of symbols for identification of the polymers and the presence of additives containing toxic elements in certain computer devices are some of the difficulties encountered in recycling acrylonitrile-butadiene-styrene and high-impact polystyrene from waste computer equipment. However, the good mechanical performance of the recycled acrylonitrile-butadiene-styrene and high-impact polystyrene compared with the virgin materials confirms the potential for mechanical recycling of these polymers. © The Author(s) 2015.

  20. How enhanced molecular ions in Cold EI improve compound identification by the NIST library.

    PubMed

    Alon, Tal; Amirav, Aviv

    2015-12-15

    Library-based compound identification with electron ionization (EI) mass spectrometry (MS) is a well-established identification method which provides the names and structures of sample compounds up to the isomer level. The library (such as NIST) search algorithm compares the EI mass spectra in the library's database with the measured EI mass spectrum, assigning each of them a similarity score called 'Match' and an overall identification probability. Cold EI, electron ionization of vibrationally cold molecules in supersonic molecular beams, provides mass spectra with all the standard EI fragment ions combined with enhanced molecular ions and high-mass fragments. As a result, Cold EI mass spectra differ from those provided by standard EI and tend to yield lower matching scores. However, in most cases, library identification actually improves with Cold EI, as the identification probabilities for the correct library mass spectra increase despite the lower matching factors. This research examined the way that enhanced molecular ion abundances affect library identification probability and the way that Cold EI mass spectra, which include enhanced molecular ions and high-mass fragment ions, typically improve library identification results. It involved several computer simulations, which incrementally modified the relative abundances of the various ions and analyzed the resulting mass spectra. The simulation results support previous measurements, showing that while enhanced molecular ion and high-mass fragment ions lower the matching factor of the correct library compound, the matching factors of the incorrect library candidates are lowered even more, resulting in a rise in the identification probability for the correct compound. This behavior, which was previously observed by analyzing Cold EI mass spectra, can be explained by the fact that high-mass ions, and especially the molecular ion, characterize a compound more than low-mass ions and therefore carry more identification information.

  1. The combined rapid detection and species-level identification of yeasts in simulated blood culture using a colorimetric sensor array

    PubMed Central

    Lim, Sung H.; Wilson, Deborah A.; SalasVargas, Ana Victoria; Churi, Yair S.; Rhodes, Paul A.; Mazzone, Peter J.; Procop, Gary W.

    2017-01-01

    Background A colorimetric sensor array (CSA) has been demonstrated to rapidly detect and identify bacteria growing in blood cultures by obtaining a species-specific “fingerprint” of the volatile organic compounds (VOCs) produced during growth. This capability has been demonstrated in prokaryotes, but has not been reported for eukaryotic cells growing in culture. The purpose of this study was to explore whether a disposable CSA could differentially identify 7 species of pathogenic yeasts growing in blood culture. Methods Culture trials of whole blood inoculated with a panel of clinically important pathogenic yeasts at four different microorganism loads were performed. Cultures were done in both standard BacT/Alert and CSA-embedded bottles, after adding 10 mL of spiked blood to each bottle. Color changes in the CSA were captured as images by an optical scanner at defined time intervals. The captured images were analyzed to identify the yeast species. Time to detection by the CSA was compared to that in the BacT/Alert system. Results One hundred sixty-two yeast culture trials were performed, including strains of several species of Candida (Ca. albicans, Ca. glabrata, Ca. parapsilosis, and Ca. tropicalis), Clavispora (synonym Candida) lusitaniae, Pichia kudriavzevii (synonym Candida krusei) and Cryptococcus neoformans, at loads of 8.2 × 10^5, 8.3 × 10^3, 8.5 × 10^1, and 1.7 CFU/mL. In addition, 8 negative trials (no yeast) were conducted. All negative trials were correctly identified as negative, and all positive trials were detected. Colorimetric responses were species-specific and did not vary with inoculum load over the 500,000-fold range of loads tested, allowing for accurate species-level identification. The mean sensitivity for species-level identification by CSA was 74% at detection, and increased with time, reaching almost 95% at 4 hours after detection. At an inoculum load of 1.7 CFU/mL, mean time to detection with the CSA was 6.8 hours (17%) less than with the BacT/Alert system.

  2. In-Flight System Identification

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A method is proposed and studied whereby the system identification cycle consisting of experiment design and data analysis can be repeatedly implemented aboard a test aircraft in real time. This adaptive in-flight system identification scheme has many advantages, including increased flight test efficiency, adaptability to dynamic characteristics that are imperfectly known a priori, in-flight improvement of data quality through iterative input design, and immediate feedback of the quality of flight test results. The technique uses equation error in the frequency domain with a recursive Fourier transform for the real time data analysis, and simple design methods employing square wave input forms to design the test inputs in flight. Simulation examples are used to demonstrate that the technique produces increasingly accurate model parameter estimates resulting from sequentially designed and implemented flight test maneuvers. The method has reasonable computational requirements, and could be implemented aboard an aircraft in real time.
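The recursive Fourier transform at the core of this scheme can be sketched in a few lines: each new sample updates the running transform at a handful of analysis frequencies with one complex multiply-add per frequency, so estimates can evolve in real time. A minimal NumPy sketch (illustrative only, not Morelli's implementation; the signal and analysis frequencies are invented):

```python
import numpy as np

def recursive_dft(x, freqs, dt):
    """Running DFT: fold each sample into the transform at a few
    analysis frequencies, one complex multiply-add per frequency."""
    X = np.zeros(len(freqs), dtype=complex)
    for i, xi in enumerate(x):
        X += xi * np.exp(-2j * np.pi * freqs * i * dt)
    return X * dt  # scale the sum to approximate the finite Fourier integral

# check the recursion against a direct transform at the same frequencies
dt = 0.01
t = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * 1.5 * t)          # 1.5 Hz test signal
freqs = np.array([0.5, 1.5, 3.0])        # analysis frequencies (invented)
Xr = recursive_dft(x, freqs, dt)
Xd = dt * np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in freqs])
```

Because only the running sums are stored, memory and per-sample cost stay constant regardless of record length, which is what makes the in-flight, real-time use plausible.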

  3. An Infrastructure for Multi-Level Secure Service-Oriented Architecture (MLS-SOA) Using the Multiple Single-Level Approach

    DTIC Science & Technology

    2009-12-17

    IEEE TDKE, 1996. 8(1). 14. Garvey, T.D., The Inference Problem for Computer Security. 1992, SRI International. 15. Chaum, D., Blind Signatures for...Pervasive Computing Environments. IEEE Transactions on Vehicular Technology, 2006. 55(4). 17. Chaum, D., Security without Identification: Transaction...Systems to make Big Brother Obsolete. Communications of the ACM, 1985. 28(10). 18. Chaum, D., Untraceable Electronic Mail, Return Addresses, and Digital

  4. Fractal feature of sEMG from Flexor digitorum superficialis muscle correlated with levels of contraction during low-level finger flexions.

    PubMed

    Arjunan, Sridhar P; Kumar, Dinesh K; Naik, Ganesh R

    2010-01-01

    This research paper reports an experimental study on identification of changes in the fractal properties of the surface electromyogram (sEMG) with changes in force levels during low-level finger flexions. In a previous study, the authors identified a novel fractal feature, maximum fractal length (MFL), as a measure of the strength of low-level contractions and used this feature to identify various wrist and finger movements. This study tested the relationship between MFL and force of contraction. The results suggest that changes in MFL are correlated with changes in contraction level (20%, 50% and 80% of maximum voluntary contraction (MVC)) during low-level muscle activation such as finger flexions. From the statistical analysis and from visualisation using box plots, it is observed that MFL (p ≈ 0.001) is more strongly correlated with force of contraction than RMS (p ≈ 0.05), even when the muscle contraction is less than 50% MVC during low-level finger flexions. This work has established that this fractal feature can provide information about changes in force levels during low-level finger movements for prosthetic control or human-computer interfaces.
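Maximum fractal length is commonly described as the logarithm of the Higuchi curve length at the smallest scale; the sketch below assumes that formulation and uses invented stand-in signals, so it illustrates the feature rather than reproducing the authors' exact pipeline:

```python
import numpy as np

def higuchi_length(x, k):
    """Mean normalized curve length L(k) at scale k (Higuchi's method)."""
    n = len(x)
    lengths = []
    for m in range(k):
        idx = np.arange(m, n, k)
        d = np.abs(np.diff(x[idx]))
        norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi length normalization
        lengths.append(d.sum() * norm / k)
    return float(np.mean(lengths))

def maximum_fractal_length(x):
    # MFL as the log of the curve length at the smallest scale (k = 1);
    # this formulation is an assumption based on published descriptions
    return np.log10(higuchi_length(np.asarray(x, dtype=float), 1))

# stand-ins for a weak vs. a stronger low-level contraction (invented signals)
t = np.linspace(0, 50, 2000)
emg_weak = 0.1 * np.sin(t)
emg_strong = 0.4 * np.sin(t)
```

On these stand-ins the stronger "contraction" yields the larger MFL, matching the monotonic relation to contraction level reported above.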

  5. Adventitious sounds identification and extraction using temporal-spectral dominance-based features.

    PubMed

    Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook

    2011-11-01

    Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on the analysis of the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant, high-definition TF representation of RS signals compared to conventional linear TF analysis methods, while preserving low computational complexity relative to quadratic TF analysis methods. The phase information discarded in the conventional spectrogram is adopted for the estimation of IF and group delay, and a temporal-spectral dominance spectrogram is subsequently constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features is also proposed to quantify the shapes of the obtained TF contours, and thereby strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method.

  6. HAZARDOUS WASTE IDENTIFICATION

    EPA Science Inventory

    This research is in direct support of the regulatory reform efforts under the Hazardous Waste Identification (HWIR) and is related to the development of national "exit levels" based on sound scientific data and models. Research focuses on developing a systems approach to modelin...

  7. Evaluation of the Biolog automated microbial identification system

    NASA Technical Reports Server (NTRS)

    Klingler, J. M.; Stowe, R. P.; Obenhuber, D. C.; Groves, T. O.; Mishra, S. K.; Pierson, D. L.

    1992-01-01

    Biolog's identification system was used to identify 39 American Type Culture Collection reference taxa and 45 gram-negative isolates from water samples. Of the reference strains, 98% were identified to genus level and 76% to species level within 4 to 24 h. Identification of some authentic strains of Enterobacter, Klebsiella, and Serratia was unreliable. A total of 93% of the water isolates were identified.

  8. Crowding in Cellular Environments at an Atomistic Level from Computer Simulations

    PubMed Central

    2017-01-01

    The effects of crowding in biological environments on biomolecular structure, dynamics, and function remain not well understood. Computer simulations of atomistic models of concentrated peptide and protein systems at different levels of complexity are beginning to provide new insights. Crowding, weak interactions with other macromolecules and metabolites, and altered solvent properties within cellular environments appear to remodel the energy landscape of peptides and proteins in significant ways including the possibility of native state destabilization. Crowding is also seen to affect dynamic properties, both conformational dynamics and diffusional properties of macromolecules. Recent simulations that address these questions are reviewed here and discussed in the context of relevant experiments. PMID:28666087

  9. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  10. Survey of computed tomography scanners in Taiwan: Dose descriptors, dose guidance levels, and effective doses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, H. Y.; Tung, C. J.; Yu, C. C.

    2007-04-15

    The IAEA and the ICRP recommended dose guidance levels for the most frequent computed tomography (CT) examinations to promote strategies for the optimization of radiation dose to CT patients. A national survey, including on-site measurements and questionnaires, was conducted in Taiwan in order to establish dose guidance levels and evaluate effective doses for CT. The beam quality and output and the phantom doses were measured for nine representative CT scanners. Questionnaire forms were completed by respondents from facilities of 146 CT scanners out of 285 total scanners. Information on patient, procedure, scanner, and technique for the head and body examinations was provided. The weighted computed tomography dose index (CTDIw), the dose length product (DLP), organ doses and effective dose were calculated using measured data, questionnaire information and Monte Carlo simulation results. A cost-effective analysis was applied to derive the dose guidance levels on CTDIw and DLP for several CT examinations. The mean effective dose ± standard deviation ranges from 1.6 ± 0.9 mSv for the routine head examination to 13 ± 11 mSv for the examination of liver, spleen, and pancreas. The surveyed results and the dose guidance levels were provided to the national authorities to develop quality control standards and protocols for CT examinations.

  11. Blind quantum computation with identity authentication

    NASA Astrophysics Data System (ADS)

    Li, Qin; Li, Zhulin; Chan, Wai Hong; Zhang, Shengyu; Liu, Chengdong

    2018-04-01

    Blind quantum computation (BQC) allows a client with relatively few quantum resources or poor quantum technologies to delegate his computational problem to a quantum server such that the client's input, output, and algorithm are kept private. However, all existing BQC protocols focus on correctness verification of quantum computation but neglect authentication of participants' identity, which may lead to man-in-the-middle or denial-of-service attacks. In this work, we use quantum identification to counter these two kinds of attacks in BQC, yielding what we call QI-BQC. We propose two QI-BQC protocols based on a typical single-server BQC protocol and a double-server BQC protocol. The two protocols can ensure both data integrity and mutual identification between participants with the help of a trusted third party (TTP). In addition, an unjammable public channel between client and server, which is indispensable in previous BQC protocols, is unnecessary, although one is required between the TTP and each participant at some instant. Furthermore, the method used to achieve identity verification in the presented protocols is general and can be applied to other similar BQC protocols.

  12. Computational methods and challenges in hydrogen/deuterium exchange mass spectrometry.

    PubMed

    Claesen, Jürgen; Burzykowski, Tomasz

    2017-09-01

    Hydrogen/Deuterium exchange (HDX) has been applied, since the 1930s, as an analytical tool to study the structure and dynamics of (small) biomolecules. The popularity of using HDX to study proteins increased drastically in the last two decades due to the successful combination with mass spectrometry (MS). Together with this growth in popularity, several technological advances have been made, such as improved quenching and fragmentation. As a consequence of these experimental improvements and the increased use of protein-HDXMS, large amounts of complex data are generated, which require appropriate analysis. Computational analysis of HDXMS requires several steps. A typical workflow for proteins consists of identification of (non-)deuterated peptides or fragments of the protein under study (local analysis), or identification of the deuterated protein as a whole (global analysis); determination of the deuteration level; estimation of the protection extent or exchange rates of the labile backbone amide hydrogen atoms; and a statistically sound interpretation of the estimated protection extent or exchange rates. Several algorithms, specifically designed for HDX analysis, have been proposed. They range from procedures that focus on one specific step in the analysis of HDX data to complete HDX workflow analysis tools. In this review, we provide an overview of the computational methods and discuss outstanding challenges. © 2016 Wiley Periodicals, Inc. Mass Spec Rev 36:649-667, 2017. © 2016 Wiley Periodicals, Inc.
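The deuteration-level step described above reduces to a centroid calculation: deuterium uptake is the shift in intensity-weighted mean mass between the undeuterated and deuterated isotope distributions. A minimal sketch with invented peak lists, not tied to any specific HDX software:

```python
def centroid_mass(peaks):
    """Intensity-weighted mean m/z of an isotope distribution,
    given as a list of (m/z, intensity) pairs."""
    total = sum(intensity for _, intensity in peaks)
    return sum(mz * intensity for mz, intensity in peaks) / total

def deuteration_level(undeut, deut, max_exchangeable):
    """Relative deuterium uptake (%) from the centroid shift."""
    uptake = centroid_mass(deut) - centroid_mass(undeut)
    return 100.0 * uptake / max_exchangeable

# invented isotope distributions for a singly charged peptide
undeut = [(800.40, 100), (801.40, 55), (802.40, 18)]
deut = [(803.40, 40), (804.40, 100), (805.40, 70), (806.40, 25)]
level = deuteration_level(undeut, deut, max_exchangeable=8)
```

In a real workflow back-exchange correction and charge-state deconvolution would precede this step; the sketch only shows the core arithmetic.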

  13. Laser-aided material identification for the waste sorting process

    NASA Astrophysics Data System (ADS)

    Haferkamp, Heinz; Burmester, Ingo; Engel, Kai

    1994-03-01

    The LZH has carried out investigations in the field of rapid laser-supported material-identification systems for automatic material-sorting systems. The aim of this research is the fast identification of different sorts of plastics coming from recycled rubbish or electronic waste. Within a few milliseconds, a spot on the sample to be identified is heated with a CO2 laser. The different and specific chemical and physical material properties of the examined sample cause a different temperature distribution on the surface, which is measured with an IR thermographic system. This `thermal impulse response' is then analyzed by means of a computer system. The results of previous investigations have shown that material identification of different sorts of plastics can be performed at a frequency of 30 Hz. For economic efficiency, such a high-speed identification process is necessary to sort large waste streams.

  14. MALDI-TOF MS versus VITEK 2 ANC card for identification of anaerobic bacteria.

    PubMed

    Li, Yang; Gu, Bing; Liu, Genyan; Xia, Wenying; Fan, Kun; Mei, Yaning; Huang, Peijun; Pan, Shiyang

    2014-05-01

    Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is an accurate, rapid and inexpensive technique that has initiated a revolution in the clinical microbiology laboratory for identification of pathogens. The Vitek 2 anaerobe and Corynebacterium (ANC) identification card is a newly developed method for identification of corynebacteria and anaerobic species. The aim of this study was to evaluate the effectiveness of the ANC card and MALDI-TOF MS for identification of clinical anaerobic isolates. Five reference strains and a total of 50 anaerobic clinical isolates comprising ten genera and 14 species were identified and analyzed with the ANC card on the Vitek 2 identification system and with the Vitek MS (version 2.0 database), respectively. 16S rRNA gene sequencing was used as the reference method for accuracy of identification. The Vitek 2 ANC card and Vitek MS provided comparable results at the species level for the five reference strains. Of the 50 clinical strains, the Vitek MS identified 46 strains (92%) to the species level and 47 (94%) to the genus level, with one (2%) low discrimination, two (4%) no identification and one (2%) misidentification. The Vitek 2 ANC card correctly identified 43 strains (86%) to the species level and 47 (94%) to the genus level, with three (6%) low discrimination, three (6%) no identification and one (2%) misidentification. Both Vitek MS and the Vitek 2 ANC card can be used for accurate routine clinical anaerobe identification. Compared to the Vitek 2 ANC card, Vitek MS is easier, faster and more economical per test. The databases currently available for both systems should be updated and further developed to enhance performance.

  15. Summary of research in applied mathematics, numerical analysis, and computer sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  16. Computational analysis of Ebolavirus data: prospects, promises and challenges.

    PubMed

    Michaelis, Martin; Rossman, Jeremy S; Wass, Mark N

    2016-08-15

    The ongoing Ebola virus (also known as Zaire ebolavirus, a member of the Ebolavirus family) outbreak in West Africa has so far resulted in >28,000 confirmed cases, compared with previous Ebolavirus outbreaks that affected a maximum of a few hundred individuals. Hence, Ebolaviruses pose a much greater threat than we may have expected (or hoped). An improved understanding of the virus biology is essential to develop therapeutic and preventive measures and to be better prepared for future outbreaks by members of the Ebolavirus family. Computational investigations can complement wet laboratory research for biosafety level 4 pathogens such as Ebolaviruses, for which wet experimental capacities are limited by the small number of appropriate containment laboratories. During the current West Africa outbreak, sequence data from many Ebola virus genomes became available, providing a rich resource for computational analysis. Here, we consider the studies that have already reported on the computational analysis of these data. A range of properties have been investigated, including Ebolavirus evolution and pathogenicity, prediction of microRNAs and identification of Ebolavirus-specific signatures. However, the accuracy of the results remains to be confirmed by wet laboratory experiments. Therefore, communication and exchange between computational and wet laboratory researchers is necessary to make maximum use of computational analyses and to iteratively improve these approaches. © 2016 The Author(s), published by Portland Press Limited on behalf of the Biochemical Society.

  17. Face identification with frequency domain matched filtering in mobile environments

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Su; Woo, Yong-Hyun; Yeom, Seokwon; Kim, Shin-Hwan

    2012-06-01

    Face identification at a distance is very challenging since captured images are often degraded by blur and noise. Furthermore, computational resources and memory are often limited in mobile environments, which makes a real-time face identification system difficult to build on a mobile device. This paper discusses face identification based on frequency-domain matched filtering in mobile environments. Face identification is performed by a linear or phase-only matched filter followed by sequential verification stages. The candidate window regions are decided by the major peaks of the linear or phase-only matched filtering outputs. The sequential stages comprise a skin-color test and an edge mask filtering test, which verify the color and shape information of the candidate regions in order to remove false alarms. All algorithms are built on the mobile device using the Android platform. The preliminary results show that face identification of East Asian people can be performed successfully in mobile environments.
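The frequency-domain matched filtering step can be sketched as a phase-only correlation: the cross-power spectrum of scene and template is normalized to unit magnitude, so the inverse FFT concentrates into a sharp peak at the template's location. This is a generic sketch under synthetic data, not the authors' mobile implementation:

```python
import numpy as np

def phase_only_correlation(scene, template):
    """Phase-only matched filter: correlate using only spectral phase,
    which sharpens the correlation peak relative to a linear filter."""
    F = np.fft.fft2(scene)
    G = np.fft.fft2(template, s=scene.shape)   # zero-pad template to scene size
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12             # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak

# synthetic check: locate a copy of the template embedded in an empty scene
rng = np.random.default_rng(0)
template = rng.random((16, 16))
scene = np.zeros((64, 64))
scene[10:26, 20:36] = template                 # template placed at (10, 20)
corr, peak = phase_only_correlation(scene, template)
```

The candidate windows mentioned in the abstract would correspond to the strongest peaks of `corr`, each then passed to the skin-color and edge-mask verification stages.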

  18. Spectral quality requirements for effluent identification

    NASA Astrophysics Data System (ADS)

    Czerwinski, R. N.; Seeley, J. A.; Wack, E. C.

    2005-11-01

    We consider the problem of remotely identifying gaseous materials using passive sensing of long-wave infrared (LWIR) spectral features at hyperspectral resolution. Gaseous materials are distinguishable in the LWIR because of their unique spectral fingerprints. A sensor degraded in capability by noise or limited spectral resolution, however, may be unable to positively identify contaminants, especially if they are present in low concentrations or if the spectral library used for comparisons includes materials with similar spectral signatures. This paper will quantify the relative importance of these parameters and express the relationships between them in a functional form which can be used as a rule of thumb in sensor design or in assessing sensor capability for a specific task. This paper describes the simulation of remote sensing data containing a gas cloud. In each simulation, the spectra are degraded in spectral resolution and through the addition of noise to simulate spectra collected by sensors of varying design and capability. We form a trade space by systematically varying the number of sensor spectral channels and the signal-to-noise ratio over a range of values. For each scenario, we evaluate the capability of the sensor for gas identification by computing the ratio of the F-statistic for the truth gas to the same statistic computed over the rest of the library. The effect of the scope of the library is investigated as well, by computing statistics on the variability of the identification capability as the library composition is varied randomly.

  19. PepArML: A Meta-Search Peptide Identification Platform

    PubMed Central

    Edwards, Nathan J.

    2014-01-01

    The PepArML meta-search peptide identification platform provides a unified search interface to seven search engines; a robust cluster, grid, and cloud computing scheduler for large-scale searches; and an unsupervised, model-free, machine-learning-based result combiner, which selects the best peptide identification for each spectrum, estimates false-discovery rates, and outputs pepXML format identifications. The meta-search platform supports Mascot; Tandem with native, k-score, and s-score scoring; OMSSA; MyriMatch; and InsPecT with MS-GF spectral probability scores — reformatting spectral data and constructing search configurations for each search engine on the fly. The combiner selects the best peptide identification for each spectrum based on search engine results and features that model enzymatic digestion, retention time, precursor isotope clusters, mass accuracy, and proteotypic peptide properties, requiring no prior knowledge of feature utility or weighting. The PepArML meta-search peptide identification platform often identifies 2–3 times more spectra than individual search engines at 10% FDR. PMID:25663956
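PepArML's combiner reports false-discovery rates for the combined identifications. The standard target-decoy estimate sketched below is an assumption for illustration (the platform's combiner is machine-learning based; this only shows the FDR bookkeeping such tools rely on):

```python
def fdr_threshold(scores, labels, fdr=0.10):
    """Return the lowest score threshold at which the decoy/target ratio
    (a standard target-decoy FDR estimate) stays at or below `fdr`.
    `labels`: True for target PSMs, False for decoy PSMs."""
    order = sorted(zip(scores, labels), reverse=True)
    targets = decoys = 0
    best = None
    for score, is_target in order:
        if is_target:
            targets += 1
        else:
            decoys += 1
        if targets and decoys / targets <= fdr:
            best = score   # threshold still satisfies the FDR bound here
    return best

# invented peptide-spectrum match scores and target/decoy labels
scores = [9.1, 8.7, 8.5, 8.0, 7.7, 7.2, 6.9, 6.5, 6.1, 5.8, 5.2, 4.9]
labels = [True, True, True, True, True, False, True, True, True, False, False, True]
thr = fdr_threshold(scores, labels)
```

Accepting all PSMs scoring at or above `thr` keeps the estimated FDR within the requested 10%, analogous to the "10% FDR" operating point quoted in the abstract.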

  20. Rapid identification of pearl powder from Hyriopsis cumingii by Tri-step infrared spectroscopy combined with computer vision technology

    NASA Astrophysics Data System (ADS)

    Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua

    2018-01-01

    Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach based on Tri-step infrared spectroscopy with enhancing resolution, combined with chemometrics, for qualitative identification of pearl powder originating from three different quality grades of pearls and quantitative prediction of the proportion of shell powder adulterated in pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the visual color categories of the pearl quality trait. Though the different grades of pearl powder and adulterated pearl powder have almost identical IR spectra, the SD-IR peak intensity at about 861 cm-1 (v2 band) exhibited regular enhancement with increasing quality grade of pearls, while the 1082 cm-1 (v1 band), 712 cm-1 and 699 cm-1 (v4 band) peaks showed the reverse trend. By contrast, only the peak intensity at 862 cm-1 was enhanced regularly with increasing concentration of shell powder. Thus, the bands in the ranges of (1550-1350 cm-1, 730-680 cm-1) and (830-880 cm-1, 690-725 cm-1) could serve as exclusive ranges to discriminate the three distinct pearl powders and to identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on IR spectra were established successfully by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner.
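The PCA step of such a classification model can be sketched with plain NumPy: spectra are mean-centered and projected onto the leading principal components, where the grades separate. The spectra below are synthetic stand-ins loosely mimicking the v2-band intensity trend described above, not the paper's data:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered spectra onto the leading principal components.
    SVD of the centered data matrix yields the principal axes directly."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# synthetic "spectra": two grades differing in one band's intensity
rng = np.random.default_rng(1)
wavenumbers = np.linspace(600, 1200, 200)
band = np.exp(-((wavenumbers - 861) / 8.0) ** 2)          # stand-in v2 band
grade_a = 1.0 * band + 0.02 * rng.standard_normal((10, 200))
grade_b = 2.0 * band + 0.02 * rng.standard_normal((10, 200))
scores = pca_scores(np.vstack([grade_a, grade_b]), n_components=1)
```

With the band-intensity difference dominating the noise, the first principal component alone separates the two groups; a PLS regression on adulterant concentration would follow the same centering-and-projection pattern.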

  1. Incidence of antibiotic resistance in coliforms from drinking water and their identification using the Biolog and the API identification systems.

    PubMed

    Tokajian, S; Hashwa, F

    2004-02-01

    Antibiotic-resistant bacteria were common in samples collected from an intermittent water distribution system in Lebanon. Multiply-resistant isolates were also present, most commonly to amoxycillin, cephalexin and sulfamethoxazole/trimethoprim. The aminoglycosides (amikacin, gentamicin and kanamycin) were the most effective, with almost all tested strains showing susceptibility to these antimicrobial agents. Both the Biolog GN MicroPlates and the API 20E strips can be used for the identification of coliform bacteria isolated from potable water, but the outcome of the identification should be viewed with caution: only 51% of isolates were assigned similar identities by both the Biolog MicroPlates and the API 20E strips. The similarity at the species level was lower (33%) than at the genus level (67%). The identification of Escherichia coli strains, which represented 30% of all tested organisms, showed 95% similarity in the assigned genus and species using both identification schemes.

  2. Emotional System for Military Target Identification

    DTIC Science & Technology

    2009-10-01

    algorithm [23], and used it to solve a facial recognition problem. In other works [24,25], we explored the potential of using emotional neural...other application areas, such as security (facial recognition) and medical (blood cell identification), can be also efficiently used in military...Application of an emotional neural network to facial recognition. Neural Computing and Applications, 18(4), 309-320. [25] Khashman, A. (2009). Blood cell

  3. Exploration of available feature detection and identification systems and their performance on radiographs

    NASA Astrophysics Data System (ADS)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Although object detection, recognition, and identification are very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we explore possible reasons for the algorithms' lessened ability to detect and identify features in X-ray radiographs.
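As a concrete example of the kind of grayscale feature detection these toolboxes perform, here is a minimal Harris corner detector in plain NumPy. This is an illustrative sketch, not the MATLAB/VLFeat/OpenCV implementations themselves; the window size, smoothing, and threshold are arbitrary choices:

```python
import numpy as np

def harris_corners(img, k=0.05, thresh=0.1):
    """Minimal Harris corner response on a grayscale image."""
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))

    # Structure-tensor entries, smoothed with a crude 3x3 box filter.
    def box(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    # Harris response: det(M) - k * trace(M)^2; corners score high,
    # straight edges score negative, flat regions score zero.
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    return R > thresh * R.max()

# A white square on a black background: only the corners should respond.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
mask = harris_corners(img)
```

The same detector runs identically on a radiograph slice, since nothing here depends on color channels.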

  4. Computer animations stimulate contagious yawning in chimpanzees

    PubMed Central

    Campbell, Matthew W.; Carter, J. Devyn; Proctor, Darby; Eisenberg, Michelle L.; de Waal, Frans B. M.

    2009-01-01

    People empathize with fictional displays of behaviour, including those of cartoons and computer animations, even though the stimuli are obviously artificial. However, the extent to which other animals also may respond empathetically to animations has yet to be determined. Animations provide a potentially useful tool for exploring non-human behaviour, cognition and empathy because computer-generated stimuli offer complete control over variables and the ability to program stimuli that could not be captured on video. Establishing computer animations as a viable tool requires that non-human subjects identify with and respond to animations in a way similar to the way they do to images of actual conspecifics. Contagious yawning has been linked to empathy and poses a good test of involuntary identification and motor mimicry. We presented 24 chimpanzees with three-dimensional computer-animated chimpanzees yawning or displaying control mouth movements. The apes yawned significantly more in response to the yawn animations than to the controls, implying identification with the animations. These results support the phenomenon of contagious yawning in chimpanzees and suggest an empathic response to animations. Understanding how chimpanzees connect with animations, to both empathize and imitate, may help us to understand how humans do the same. PMID:19740888

  5. A systematic review of re-identification attacks on health data.

    PubMed

    El Emam, Khaled; Jonker, Elizabeth; Arbuckle, Luk; Malin, Bradley

    2011-01-01

    Privacy legislation in most jurisdictions allows the disclosure of health data for secondary purposes without patient consent if it is de-identified. Some recent articles in the medical, legal, and computer science literature have argued that de-identification methods do not provide sufficient protection because they are easy to reverse. Should this be the case, it would have significant implications for how health information is disclosed, including: (a) potentially limiting its availability for secondary purposes such as research, and (b) resulting in more identifiable health information being disclosed. Our objectives in this systematic review were to: (a) characterize known re-identification attacks on health data and contrast them with re-identification attacks on other kinds of data, (b) compute the overall proportion of records that have been correctly re-identified in these attacks, and (c) assess whether these attacks demonstrate weaknesses in current de-identification methods. Searches were conducted in IEEE Xplore, ACM Digital Library, and PubMed. After screening, fourteen eligible articles representing distinct attacks were identified. On average, approximately a quarter of the records were re-identified across all studies (0.26, 95% CI 0.046-0.478), and 0.34 across attacks specifically on health data (95% CI 0-0.744). There was considerable uncertainty around the proportions, as evidenced by the wide confidence intervals, and the mean proportion of records re-identified was sensitive to unpublished studies. Two of the fourteen attacks were performed with data that had been de-identified using existing standards. Only one of these attacks was on health data, and it had a success rate of 0.00013. The current evidence shows a high re-identification rate but is dominated by small-scale studies on data that was not de-identified according to existing standards. This evidence is insufficient to draw conclusions about the efficacy of de-identification methods.

  6. Nonlinear system identification technique validation

    NASA Astrophysics Data System (ADS)

    Rudko, M.; Bussgang, J. J.

    1982-01-01

    This final technical report describes the results obtained by SIGNATRON, Inc. of Lexington, MA on Air Force Contract F30602-80-C-0104 for Rome Air Development Center. The objective of this effort was to develop a technique for identifying the system response of nonlinear circuits from measurements of output response to known inputs. The report describes results of a study of a system identification technique based on the pencil-of-function method previously explored by Jain (1974) and Ewen (1979). The procedure identifies the poles of the linear response and is intended as a first step in nonlinear circuit identification. There are serious implementation problems associated with the original approach, such as loss of accuracy due to repeated integrations, lack of good measures of accuracy, and computational iteration to identify the number of poles.
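The pole-identification step can be illustrated with the closely related matrix-pencil formulation: for a noise-free sampled response, the discrete poles appear as the dominant eigenvalues of a pencil of time-shifted Hankel matrices. This is a modern stand-in for the pencil-of-function method, not the report's own procedure; the signal, pole values, and pencil parameter below are invented:

```python
import numpy as np

# Synthetic impulse response: two damped sinusoids (four complex poles).
dt = 0.01
n = np.arange(300)
s_true = np.array([-1.0 + 2j * np.pi * 5, -3.0 + 2j * np.pi * 12])  # s-plane poles
z_true = np.exp(s_true * dt)                                        # z-plane poles
y = (0.7 * z_true[0] ** n + 1.3 * z_true[1] ** n).real

# Hankel data matrices shifted by one sample: Y1 is Y0 advanced in time.
L = 60
Y = np.array([y[i:i + L] for i in range(len(y) - L)])
Y0, Y1 = Y[:-1], Y[1:]

# The non-trivial eigenvalues of pinv(Y0) @ Y1 estimate the discrete poles;
# the rest are numerically zero, so keep the four largest in magnitude.
vals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
est = sorted(vals, key=abs, reverse=True)[:4]
s_est = np.log(np.array(est)) / dt          # back to continuous-time poles
```

With noisy data the accuracy issues noted in the report reappear as sensitivity of the small singular values, which is why choosing the number of poles is the hard part.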

  7. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337

  8. Reshaping Computer Literacy Teaching in Higher Education: Identification of Critical Success Factors

    ERIC Educational Resources Information Center

    Taylor, Estelle; Goede, Roelien; Steyn, Tjaart

    2011-01-01

    Purpose: Acquiring computer skills is more important today than ever before, especially in a developing country. Teaching of computer skills, however, has to adapt to new technology. This paper aims to model factors influencing the success of the learning of computer literacy by means of an e-learning environment. The research question for this…

  9. Identification of Cognitive Processes of Effective and Ineffective Students during Computer Programming

    ERIC Educational Resources Information Center

    Renumol, V. G.; Janakiram, Dharanipragada; Jayaprakash, S.

    2010-01-01

    Identifying the set of cognitive processes (CPs) a student can go through during computer programming is an interesting research problem. It can provide a better understanding of the human aspects in computer programming process and can also contribute to the computer programming education in general. The study identified the presence of a set of…

  10. Practical Recommendations for Trait-Level Estimation in the Navy Computer Adaptive Personality Scales (NCAPS)

    DTIC Science & Technology

    2010-11-30

    www.nprst.navy.mil NPRST-TN-11-1 November 2010 Practical Recommendations for Trait-Level Estimation in NCAPS Frederick L. Oswald, Ph.D. Rice University...Navy Computer Adaptive Personality Scales (NCAPS) Frederick L. Oswald, Ph.D. Rice University Reviewed, Approved, and Released by David M...Personality Scales (NCAPS) Frederick P. Oswald, Ph.D. Rice University 6100 Main St., MS25 Houston, TX 77005 Navy Personnel Research, Studies, and

  11. Component Position and Metal Ion Levels in Computer-Navigated Hip Resurfacing Arthroplasty.

    PubMed

    Mann, Stephen M; Kunz, Manuela; Ellis, Randy E; Rudan, John F

    2017-01-01

    Metal ion levels are used as a surrogate marker for wear in hip resurfacing arthroplasties. Improper component position, particularly on the acetabular side, plays an important role in problems with the bearing surfaces, such as edge loading, impingement on the acetabular component rim, lack of fluid-film lubrication, and acetabular component deformation. There are limited data regarding femoral component position and its possible implications for wear and failure rates. The purpose of this investigation was to determine both femoral and acetabular component positions in our cohort of mechanically stable hip resurfacing arthroplasties and to determine whether these were related to metal ion levels. One hundred fourteen patients who had undergone a computer-assisted metal-on-metal hip resurfacing were prospectively followed. Cobalt and chromium levels, Harris Hip and UCLA activity scores, and measures of the acetabular and femoral component positions and angles of the femur and acetabulum were recorded. Significant changes included increases in the position of the acetabular component compared to the native acetabulum; an increase in femoral vertical offset; and decreases in global offset, gluteus medius activation angle, and abductor arm angle (P < .05). Multiple regression analysis found no significant predictors of cobalt and chromium metal ion levels. Femoral and acetabular component positions within the acceptable range failed to predict increased metal ion levels, and increased levels did not adversely impact patient function or satisfaction. Further research is necessary to clarify factors contributing to prosthesis wear. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Identifying the Computer Competency Levels of Recreation Department Undergraduates

    ERIC Educational Resources Information Center

    Zorba, Erdal

    2011-01-01

    Computer-based and web-based applications serve as major instructional tools to increase undergraduates' motivation at school. In the recreation field, the usage of computer- and internet-based recreational applications has become more prevalent in order to present visual and interactive entertainment activities. Recreation department undergraduates…

  13. An Intelligent Tutoring System for Antibody Identification

    PubMed Central

    Smith, Philip J.; Miller, Thomas E.; Fraser, Jane M.

    1990-01-01

    Empirical studies of medical technology students indicate that there is considerable need for additional skill development in performing tasks such as antibody identification. While this need is currently met by on-the-job training after employment, computer-based tutoring systems offer an alternative or supplemental problem-based learning environment that could be more cost effective. We have developed a prototype for such a tutoring system as part of a project to develop educational tools for the field of transfusion medicine. This system provides a microworld in which students can explore and solve cases, receiving assistance and tutoring from the computer as needed.

  14. Neural Network Design on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2006-12-01

    fingerprint identification. In this field, automatic identification methods are used to save time, especially for the purpose of fingerprint matching in...grid widths and lengths and therefore was useful in producing an accurate canvas with which to create sample training images. The added benefit of...tools available free of charge and readily accessible on the computer, it was simple to design bitmap data files visually on a canvas and then

  15. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there has been an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written about algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end-users, it should be effective, accurate and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that the text has remarkable contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system is implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.
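The region-variance idea can be sketched directly: burned-in text has high local intensity variance, while smooth anatomy and background do not. The synthetic "scan", window size, and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def local_variance(img, win=5):
    """Variance of intensities in a win x win neighbourhood of each pixel."""
    pad = win // 2
    h, w = img.shape
    p = np.pad(img.astype(float), pad, mode='edge')
    p2 = np.pad(img.astype(float) ** 2, pad, mode='edge')
    mean = sum(p[i:i + h, j:j + w] for i in range(win) for j in range(win)) / win ** 2
    mean2 = sum(p2[i:i + h, j:j + w] for i in range(win) for j in range(win)) / win ** 2
    return mean2 - mean ** 2   # E[x^2] - E[x]^2

# Synthetic image: smooth "anatomy" gradient on the left, flat background,
# and a high-contrast checkerboard patch standing in for burned-in text.
img = np.full((40, 60), 0.4)
img[:, :30] = np.linspace(0.2, 0.6, 30)
img[5:12, 35:55] = np.indices((7, 20)).sum(0) % 2

var = local_variance(img)
mask = var > 0.05              # threshold keeps only the "text" region
```

Geometric post-processing (dropping thin lines and large anatomical blobs) would then prune this mask before the level-set extraction step.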

  16. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    PubMed

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework was developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system were incorporated in the analysis along with road geometric information: the processed data, capped at the speed limit, and the unprocessed data, retaining the original speeds. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data were superior. Both the multi-level models and the random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Application of a fast skyline computation algorithm for serendipitous searching problems

    NASA Astrophysics Data System (ADS)

    Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary

    2018-02-01

    Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information on non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels to accelerate tree construction and traversal. In this study, we present the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
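For a static dataset, the skyline itself is easy to state: keep every entry that no other entry dominates. A minimal quadratic-time sketch, assuming "larger is better" in every attribute (the sample points are invented); the hard part JR-tree addresses is maintaining this set as entries appear and disappear:

```python
def skyline(points):
    """Return the Pareto-optimal (skyline) entries of a point set.

    A point is dominated if some other point is at least as good in
    every attribute and strictly better in at least one.
    """
    def dominates(p, q):
        return (all(a >= b for a, b in zip(p, q)) and
                any(a > b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(9, 1), (7, 6), (2, 8), (5, 5), (7, 3)]
sky = skyline(pts)   # (5, 5) and (7, 3) are dominated by (7, 6)
# -> [(9, 1), (7, 6), (2, 8)]
```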

  18. Computer Needs and Computer Problems in Developing Countries.

    ERIC Educational Resources Information Center

    Huskey, Harry D.

    A survey of the computer environment in a developing country is provided. Levels of development are considered and the educational requirements of countries at various levels are discussed. Computer activities in India, Burma, Pakistan, Brazil and a United Nations sponsored educational center in Hungary are all described. (SK/Author)

  19. A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data.

    PubMed

    Baur, Brittany; Bozdag, Serdar

    2016-01-01

    DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding its mechanism of action is important for downstream analysis. In the Illumina Infinium HumanMethylation 450K array, there are tens of probes associated with each gene. Given the methylation intensities of all these probes, it is necessary to compute which of them are most representative of the gene-centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilized different classification methods to compute gene-centric DNA methylation from probe-level DNA methylation data. We compared our algorithm to other feature selection algorithms such as support vector machines with recursive feature elimination, genetic algorithms, and ReliefF. We evaluated all methods based on the predictive power of the selected probes for mRNA expression levels and found that K-Nearest Neighbors classification with the sequential forward selection algorithm performed better than the other algorithms on all metrics. We also observed that the transcriptional activities of certain genes were more sensitive to DNA methylation changes than those of other genes. Our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes.
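A minimal version of this selection scheme, sequential forward selection scored by leave-one-out 1-nearest-neighbour accuracy, fits in a few lines. The tiny "probe" matrix is fabricated so that only one column tracks the class label; real 450K data would have tens of probes per gene and a continuous expression target:

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the held-out sample
        correct += y[np.argmin(d)] == y[i]
    return correct / len(y)

def forward_select(X, y):
    """Greedily add the probe (column) that most improves LOO accuracy;
    stop as soon as no candidate improves the score."""
    selected, best = [], 0.0
    while True:
        scores = {j: loo_1nn_accuracy(X[:, selected + [j]], y)
                  for j in range(X.shape[1]) if j not in selected}
        j, acc = max(scores.items(), key=lambda kv: kv[1])
        if acc <= best:
            return selected, best
        selected.append(j)
        best = acc

# Toy probe-level matrix: only probe 2 tracks the class label.
y = np.array([0] * 5 + [1] * 5)
X = np.column_stack([
    np.tile([0.2, 0.8], 5),        # probe 0: irrelevant, alternating
    np.full(10, 0.5),              # probe 1: constant, uninformative
    np.where(y == 0, 0.1, 0.9),    # probe 2: informative
])
sel, acc = forward_select(X, y)
```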

  20. Interactomes to Biological Phase Space: a call to begin thinking at a new level in computational biology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, George S.; Brown, William Michael

    2007-09-01

    Techniques for high-throughput determination of interactomes, together with high-resolution protein colocalization maps within organelles and through membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high-dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions, and the computational models they enable, will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.

  1. System IDentification Programs for AirCraft (SIDPAC)

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2002-01-01

    A collection of computer programs for aircraft system identification is described and demonstrated. The programs, collectively called System IDentification Programs for AirCraft, or SIDPAC, were developed in MATLAB as m-file functions. SIDPAC has been used successfully at NASA Langley Research Center with data from many different flight test programs and wind tunnel experiments. SIDPAC includes routines for experiment design, data conditioning, data compatibility analysis, model structure determination, equation-error and output-error parameter estimation in both the time and frequency domains, real-time and recursive parameter estimation, low order equivalent system identification, estimated parameter error calculation, linear and nonlinear simulation, plotting, and 3-D visualization. An overview of SIDPAC capabilities is provided, along with a demonstration of the use of SIDPAC with real flight test data from the NASA Glenn Twin Otter aircraft. The SIDPAC software is available without charge to U.S. citizens by request to the author, contingent on the requestor completing a NASA software usage agreement.
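Equation-error parameter estimation, one of the SIDPAC capabilities listed above, reduces to linear least squares once the state derivative is formed. The toy one-state pitch-rate model and its derivative values below are invented for illustration and are not taken from SIDPAC:

```python
import numpy as np

# Hypothetical pitch-damping and control-effectiveness derivatives for a
# toy one-state model:  qdot = Mq*q + Mde*de
Mq, Mde = -2.0, 4.0
dt, N = 0.01, 500
t = np.arange(N) * dt
de = 0.1 * np.sin(2 * t) + 0.05 * np.sin(7 * t)   # two-frequency input
q = np.zeros(N)
for k in range(N - 1):                            # Euler simulation
    q[k + 1] = q[k] + dt * (Mq * q[k] + Mde * de[k])

# Equation-error formulation: regress the differenced state on [q, de].
qdot = (q[1:] - q[:-1]) / dt
A = np.column_stack([q[:-1], de[:-1]])
theta, *_ = np.linalg.lstsq(A, qdot, rcond=None)  # theta ~ [Mq, Mde]
```

With measurement noise the same regression becomes biased, which is why SIDPAC also provides output-error and frequency-domain estimators.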

  2. The Relationship between Internet and Computer Game Addiction Level and Shyness among High School Students

    ERIC Educational Resources Information Center

    Ayas, Tuncay

    2012-01-01

    This study is conducted to determine the relationship between the internet and computer games addiction level and the shyness among high school students. The participants of the study consist of 365 students attending high schools in Giresun city centre during 2009-2010 academic year. As a result of the study a positive, meaningful, and high…

  3. The influence of the level formants on the perception of synthetic vowel sounds

    NASA Astrophysics Data System (ADS)

    Kubzdela, Henryk; Owsianny, Mariuz

    A computer model of a generator of periodic complex sounds simulating consonants was developed. The system makes possible independent regulation of the level of each of the formants and instant generation of the sound. A trapezoid approximates the curve of the spectrum within the range of each formant. Using this model, each person in a group of six listeners experimentally selected synthesis parameters for six sounds that to him seemed optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six persons and several additional listeners as being best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of second- and third-formant levels, and these were presented to seven listeners for identification. The results of the identifications are presented in table form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.

  4. Thermoelectric pump performance analysis computer code

    NASA Technical Reports Server (NTRS)

    Johnson, J. L.

    1973-01-01

    A computer program is presented that was used to analyze and design dual-throat electromagnetic dc conduction pumps for the 5-kwe ZrH reactor thermoelectric system. In addition to a listing of the code and corresponding identification of symbols, the bases for this analytical model are provided.

  5. Predicting Regional Self-identification from Spatial Network Models

    PubMed Central

    Almquist, Zack W.; Butts, Carter T.

    2014-01-01

    Social scientists characterize social life as a hierarchy of environments, from the micro level of an individual’s knowledge and perceptions to the macro level of large-scale social networks. In accordance with this typology, individuals are typically thought to reside in micro- and macro-level structures, composed of multifaceted relations (e.g., acquaintanceship, friendship, and kinship). This article analyzes the effects of social structure on micro outcomes through the case of regional identification. Self-identification occurs in many different domains, one of which is regional: the identification of oneself with a locationally-associated group (e.g., a “New Yorker” or “Parisian”). Here, regional self-identification is posited to result from an influence process based on the location of an individual’s alters (e.g., friends, kin or coworkers), such that one tends to identify with regions in which many of his or her alters reside. The structure of this paper is as follows: we begin with a discussion of the relevant social science literature on both social networks and identification. This is followed by a discussion of competing mechanisms for regional identification, motivated first by the social network literature and second by the social psychological and cognitive literature on decision making and heuristics. Next, the paper covers the data and methods employed to test the proposed mechanisms. Finally, the paper concludes with a discussion of its findings and further implications for the larger social science literature. PMID:25684791

  6. Improved Targeting Through Collaborative Decision-Making and Brain Computer Interfaces

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Barrero, David F.; McDonald-Maier, Klaus

    2013-01-01

    This paper reports a first step toward a brain-computer interface (BCI) for collaborative targeting. Specifically, we explore, from a broad perspective, how the collaboration of a group of people can increase the performance on a simple target identification task. To this end, we asked a group of people to identify the location and color of a sequence of targets appearing on the screen and measured the time and accuracy of the response. The individual results are compared to a collective identification result determined by simple majority voting, with a random choice in the case of a tie. The results are promising, as the identification becomes significantly more reliable even with this simple voting and a small number of people (whether odd or even) involved in the decision. In addition, the paper briefly analyzes the role of brain-computer interfaces in collaborative targeting, extending the targeting task by using a BCI instead of a mechanical response.
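The benefit of simple majority voting can also be computed analytically rather than measured: if each observer identifies the target correctly and independently with probability p, the group accuracy follows a binomial tally, with ties broken at random as in the experiment. A small sketch, with p and the group sizes chosen arbitrarily:

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a simple-majority vote of n independent observers,
    each correct with probability p, gives the correct answer.
    Ties (possible when n is even) are broken by a fair coin."""
    maj = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:                     # even group: add half the tie mass
        maj += comb(n, n // 2) * (p * (1 - p)) ** (n // 2) / 2
    return maj
```

For example, majority_accuracy(0.8, 5) ≈ 0.942, already well above the individual 0.8, which matches the observation that even small groups improve reliability.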

  7. Numerical studies of identification in nonlinear distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Lo, C. K.; Reich, Simeon; Rosen, I. G.

    1989-01-01

    An abstract approximation framework and convergence theory for the identification of first and second order nonlinear distributed parameter systems developed previously by the authors and reported on in detail elsewhere are summarized and discussed. The theory is based upon results for systems whose dynamics can be described by monotone operators in Hilbert space and an abstract approximation theorem for the resulting nonlinear evolution system. The application of the theory together with numerical evidence demonstrating the feasibility of the general approach are discussed in the context of the identification of a first order quasi-linear parabolic model for one dimensional heat conduction/mass transport and the identification of a nonlinear dissipation mechanism (i.e., damping) in a second order one dimensional wave equation. Computational and implementational considerations, in particular, with regard to supercomputing, are addressed.

  8. Learning Density in Vanuatu High School with Computer Simulation: Influence of Different Levels of Guidance

    ERIC Educational Resources Information Center

    Moli, Lemuel; Delserieys, Alice Pedregosa; Impedovo, Maria Antonietta; Castera, Jeremy

    2017-01-01

    This paper presents a study on discovery learning of scientific concepts with the support of computer simulation. In particular, the paper will focus on the effect of the levels of guidance on students with a low degree of experience in informatics and educational technology. The first stage of this study was to identify the common misconceptions…

  9. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, and in response to a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures provide new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
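The core of such a pipeline, k-mer extraction followed by set operations between target and non-target databases, can be sketched on toy sequences. The sequences and k below are invented, and this single-machine sketch stands in for the distributed Hadoop/MapReduce counting described above:

```python
def kmers(seq, k):
    """All k-length substrings present in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def signatures(targets, nontargets, k):
    """Unique signatures: k-mers present in every target genome
    and absent from every non-target genome."""
    common = set.intersection(*(kmers(s, k) for s in targets))
    background = set.union(*(kmers(s, k) for s in nontargets))
    return common - background

targets = ["ACGTACGGACT", "TTACGTACGGA"]
nontargets = ["ACGTTTTGCAC", "GGGACTACGTA"]
sig = signatures(targets, nontargets, 5)
# ACGTA is shared by the targets but also occurs in a non-target,
# so it is excluded from the signature set.
```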

  11. Comparison of 16S rRNA sequencing with biochemical testing for species-level identification of clinical isolates of Neisseria spp.

    PubMed

    Mechergui, Arij; Achour, Wafa; Ben Hassen, Assia

    2014-08-01

    We aimed to compare the accuracy of genus- and species-level identification of Neisseria spp. by biochemical testing and by 16S rRNA sequence analysis. These methods were evaluated on 85 Neisseria spp. clinical isolates initially identified to the genus level by conventional biochemical tests and the API NH system (Bio-Mérieux(®)). In 34 % (29/85) of isolates, 16S rRNA sequence analysis gave more than one possible identification. In 6 % (5/85), one of the possibilities offered by 16S rRNA gene sequencing agreed with the result given by biochemical testing. In 4 % (3/85), the same species was given by both methods. 16S rRNA gene sequencing results did not correlate well with biochemical tests.

  12. 28 CFR 17.25 - Identification and markings.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classified at a level equivalent to that level of classification assigned by the originating foreign government. (c) Information assigned a level of classification under predecessor Executive Orders shall be... ACCESS TO CLASSIFIED INFORMATION Classified Information § 17.25 Identification and markings. (a...

  13. Elastic Multi-scale Mechanisms: Computation and Biological Evolution.

    PubMed

    Diaz Ochoa, Juan G

    2018-01-01

    Explanations based on low-level interacting elements are valuable and powerful since they contribute to identifying the key mechanisms of biological functions. However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information about initial states generate future states that cannot be predicted, implying an increase of complexity and open-ended evolution. Such systems are like Turing machines that overlap with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for a constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predator and predated cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. We finally explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with a small elasticity modulus.

  14. Bytes and Bugs: Integrating Computer Programming with Bacteria Identification.

    ERIC Educational Resources Information Center

    Danciger, Michael

    1986-01-01

    By using a computer program to identify bacteria, students sharpen their analytical skills and gain familiarity with procedures used in laboratories outside the university. Although it is ideal for identifying a bacterium, the program can be adapted to many other disciplines. (Author)

  15. A framework for different levels of integration of computational models into web-based virtual patients.

    PubMed

    Kononowicz, Andrzej A; Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil

    2014-01-23

    Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients' interactivity by enriching them with computational models of physiological and pathological processes. The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of changing framework variables on altering perceptions of integration. The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element included three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; advanced, including dynamic solution of the model. The third element is the

  16. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    PubMed

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than the sensor pattern noise (SPN) based strategy proposed for general public camera devices.
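    The classification step can be illustrated with a hedged sketch: synthetic feature vectors stand in for the paper's actual OSPN- and reconstruction-footprint features, and scikit-learn's `SVC` plays the role of the SVM classifier.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical noise-feature vectors: each scanner model is assumed to leave
    # a characteristic statistical footprint (the real features are derived from
    # sensor pattern noise and reconstruction-algorithm residues).
    n_per_class, n_feat = 40, 6
    X, y = [], []
    for scanner_id in range(3):                     # three invented scanner models
        centre = rng.normal(scanner_id, 0.1, n_feat)
        X.append(centre + 0.05 * rng.standard_normal((n_per_class, n_feat)))
        y += [scanner_id] * n_per_class
    X, y = np.vstack(X), np.array(y)

    clf = SVC(kernel="rbf").fit(X[::2], y[::2])     # train on every other image
    accuracy = clf.score(X[1::2], y[1::2])          # evaluate on the rest
    ```

    With features this well separated the toy classifier is near-perfect; the paper's 94% figure reflects the much harder real-image setting.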

  17. Sonoma County Office of Education Computer Education Plan. County Level Plans.

    ERIC Educational Resources Information Center

    Malone, Greg

    1986-01-01

    This plan describes the educational computing and computer literacy program to be implemented by the schools in Sonoma County, California. Topics covered include the roles, responsibilities, and procedures of the county-wide computer committee; the goals of computer education in the county schools; the results of a needs assessment study; a 3-year…

  18. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and intelligent optimization methods can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with analytic results for single-source amount and position identification, with relative errors no greater than 5 %. For cases with multiple point sources and multiple variables, some errors appear in the computed results because many possible combinations of the pollution sources exist. However, with the help of prior experience to narrow the search scope, the relative errors of the identification results remain below 5 %, which proves that the established source identification model can be used to direct emergency responses.
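    The approach can be sketched as a minimal genetic algorithm fitting the analytic solution of a 1-D instantaneous point source to "observed" concentrations. The advection velocity, dispersion coefficient, and GA operators below are invented for illustration; only the population size of 10 follows the abstract.

    ```python
    import numpy as np

    def concentration(x, t, mass, x0, u=1.0, D=0.5):
        """Analytic 1-D unsteady solution for an instantaneous point source."""
        return mass / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

    # "Observed" data from a hidden source (mass = 10 released at x0 = 2)
    xs, t = np.linspace(0.0, 20.0, 40), 5.0
    obs = concentration(xs, t, 10.0, 2.0)

    def fitness(ind):
        mass, x0 = ind
        return -np.sum((concentration(xs, t, mass, x0) - obs) ** 2)  # negative SSE

    rng = np.random.default_rng(1)
    pop = rng.uniform([0.0, 0.0], [20.0, 10.0], size=(10, 2))  # population size 10
    for _ in range(200):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)][-5:]                  # truncation selection
        children = parents[rng.integers(0, 5, (5, 2)), [0, 1]]  # uniform crossover
        children += 0.1 * rng.standard_normal(children.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(p) for p in pop])]            # ~ (10, 2)
    ```

    Because the elite half of the population is carried over unchanged, the best candidate never degrades, and the source amount and position are recovered to within a few percent.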

  19. Universal computer test stand (recommended computer test requirements). [for space shuttle computer evaluation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Techniques are considered which would be used to characterize areospace computers with the space shuttle application as end usage. The system level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system level digital problems within laboratory environments. Defined is a baseline hardware, software system which is required as a laboratory tool to test aerospace computers. Hardware and software baselines and additions necessary to interface the UTE to aerospace computers for test purposes are outlined.

  20. Rapid identification of pearl powder from Hyriopsis cumingii by Tri-step infrared spectroscopy combined with computer vision technology.

    PubMed

    Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua

    2018-01-15

    Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach, based on Tri-step infrared spectroscopy with enhanced resolution combined with chemometrics, for qualitative identification of pearl powder originating from three different quality grades of pearls and quantitative prediction of the proportion of shell powder adulterated into pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the pearl quality trait of visual color categories. Though the different grades of pearl powder and adulterated pearl powder have almost identical IR spectra, the SD-IR peak intensity at about 861 cm-1 (v2 band) exhibited regular enhancement with increasing quality grade of pearls, while the 1082 cm-1 (v1 band), 712 cm-1 and 699 cm-1 (v4 band) showed the reverse trend. By contrast, only the peak intensity at 862 cm-1 was enhanced regularly with increasing concentration of shell powder. Thus, the bands in the ranges of (1550-1350 cm-1, 730-680 cm-1) and (830-880 cm-1, 690-725 cm-1) could serve as exclusive ranges to discriminate the three distinct pearl powders and to identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on IR spectra were established by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner. Copyright © 2017. Published by Elsevier B.V.
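    The quantitative PLS step can be sketched with synthetic spectra. The band positions follow the abstract, but the peak shapes, mixture model, and noise level are invented, and scikit-learn's `PLSRegression` stands in for the chemometrics software.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavenumbers = np.linspace(650, 1600, 200)

    def spectrum(shell_fraction):
        """Toy IR spectrum: the ~862 cm-1 band grows with shell-powder content,
        while the ~1082 cm-1 pearl band shrinks (peak shapes are invented)."""
        pearl = np.exp(-((wavenumbers - 1082) / 15) ** 2)
        shell = np.exp(-((wavenumbers - 862) / 10) ** 2)
        return (1 - shell_fraction) * pearl + shell_fraction * shell

    # Calibration set: 60 samples with known adulteration levels up to 50%
    fractions = rng.uniform(0.0, 0.5, 60)
    X = np.array([spectrum(f) + 0.01 * rng.standard_normal(200) for f in fractions])

    pls = PLSRegression(n_components=2).fit(X, fractions)
    predicted = pls.predict(X).ravel()
    rmse = float(np.sqrt(np.mean((predicted - fractions) ** 2)))
    ```

    On real spectra, the calibration would of course be validated on held-out samples rather than the training set.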

  1. Validation of Aircraft Noise Models at Lower Levels of Exposure

    NASA Technical Reports Server (NTRS)

    Page, Juliet A.; Plotkin, Kenneth J.; Carey, Jeffrey N.; Bradley, Kevin A.

    1996-01-01

    Noise levels around airports and airbases in the United States are computed via the FAA's Integrated Noise Model (INM) or the Air Force's NOISEMAP (NMAP) program. These models were originally developed for use in the vicinity of airports, at distances which encompass a day-night average sound level in decibels (Ldn) of 65 dB or higher. There is increasing interest in aircraft noise at larger distances from the airport, including en-route noise. To evaluate the applicability of INM and NMAP at larger distances, a measurement program was conducted at a major air carrier airport with monitoring sites located in areas exposed to an Ldn of 55 dB and higher. Automated Radar Terminal System (ARTS) radar tracking data were obtained to provide actual flight parameters and positive identification of aircraft. Flight operations were grouped according to aircraft type, stage length, straight versus curved flight tracks, and arrival versus departure. Sound exposure levels (SEL) were computed at monitoring locations using the INM and compared with measured values. While individual overflight SEL data were characterized by a high variance, analysis performed on an energy-averaging basis indicates that INM and similar models can be applied to regions exposed to an Ldn of 55 dB with no loss of reliability.
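    The energy-averaging referred to above operates on per-event sound exposure levels. A minimal sketch of the standard Ldn computation (10 dB night-time penalty, 86,400-second averaging period) is:

    ```python
    import math

    def day_night_level(events):
        """Day-night average sound level (Ldn) from per-event SELs.
        events: list of (sel_dB, is_night); night events get a 10 dB penalty."""
        total_energy = sum(10 ** ((sel + (10 if night else 0)) / 10)
                           for sel, night in events)
        return 10 * math.log10(total_energy / 86400)  # seconds in a day

    # Invented traffic mix: 100 identical daytime overflights at SEL 95 dB
    ldn = day_night_level([(95.0, False)] * 100)      # about 65.6 dB
    ```

    Because the events are summed on an energy basis, the high variance of individual SEL measurements averages out, which is why the energy-averaged comparison in the study is more stable than per-overflight comparisons.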

  2. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  3. Utility of 16S rDNA Sequencing for Identification of Rare Pathogenic Bacteria.

    PubMed

    Loong, Shih Keng; Khor, Chee Sieng; Jafar, Faizatul Lela; AbuBakar, Sazaly

    2016-11-01

    Phenotypic identification systems are established methods for laboratory identification of bacteria causing human infections. Here, the utility of phenotypic identification systems was compared against the 16S rDNA identification method on clinical isolates obtained during a 5-year study period, with special emphasis on isolates that gave unsatisfactory identification. One hundred and eighty-seven clinical bacterial isolates were tested with commercial phenotypic identification systems and 16S rDNA sequencing. Isolate identities determined using phenotypic identification systems and 16S rDNA sequencing were compared for similarity at the genus and species level, with 16S rDNA sequencing as the reference method. Phenotypic identification systems identified ~46% (86/187) of the isolates with identity similar to that identified using 16S rDNA sequencing. Approximately 39% (73/187) of the isolates showed different genus identity, and ~15% (28/187) could not be identified using the phenotypic identification systems. Both methods succeeded in determining the species identities of 55 isolates; however, only ~69% (38/55) of the isolates matched at the species level. 16S rDNA sequencing could not determine the species of ~20% (37/187) of the isolates. 16S rDNA sequencing is thus a useful method beyond phenotypic identification systems for the identification of rare and difficult-to-identify bacterial species. The 16S rDNA sequencing method, however, does have limitations for species-level identification of some bacteria, highlighting the need for better bacterial pathogen identification tools. © 2016 Wiley Periodicals, Inc.

  4. Conceptualizing the Dynamics between Bicultural Identification and Personal Social Networks

    PubMed Central

    Repke, Lydia; Benet-Martínez, Verónica

    2017-01-01

    An adequate understanding of the acculturation processes affecting immigrants and their descendants involves ascertaining the dynamic interplay between the way these individuals manage their multiple (and sometimes conflictual) cultural value systems and identifications and possible changes in their social networks. To fill this gap, the present research examines how key acculturation variables (e.g., strength of ethnic/host cultural identifications, bicultural identity integration or BII) relate to the composition and structure of bicultural individuals’ personal social networks. In Study 1, we relied on a generationally and culturally diverse community sample of 123 Latinos residing in the US. Participants nominated eight individuals (i.e., alters) from their habitual social networks and across two relational domains: friendships and colleagues. Results indicated that the interconnection of same ethnicity alters across different relationship domains is linked to cultural identifications, while the amount of coethnic and host individuals in the network is not. In particular, higher interconnection between Latino friends and colleagues was linked to lower levels of U.S. identification. Conversely, the interconnection of non-Latino friends and colleagues was associated with lower levels of Latino identification. This pattern of results suggests that the relational context for each type of cultural identification works in a subtractive and inverse manner. Further, time spent in the US was linked to both Latino and U.S. cultural identifications, but this relationship was moderated by the level of BII. Specifically, the association between time in the US and strength of both cultural identities was stronger for individuals reporting low levels of BII. Taking the findings from Study 1 as departure point, Study 2 used an agent-based model data simulation approach to explore the dynamic ways in which the content and the structure of an immigrant’s social network might

  5. Identification of histamine receptors and reduction of squalene levels by an antihistamine in sebocytes.

    PubMed

    Pelle, Edward; McCarthy, James; Seltmann, Holger; Huang, Xi; Mammone, Thomas; Zouboulis, Christos C; Maes, Daniel

    2008-05-01

    Overproduction of sebum, especially during adolescence, is causally related to acne and inflammation. As a way to reduce sebum and its interference with the process of follicular keratinization in the pilosebaceous unit leading to inflammatory acne lesions, antihistamines were investigated for their effect on sebocytes, the major cell of the sebaceous gland responsible for producing sebum. Reverse transcriptase-PCR analysis and immunofluorescence of an immortalized sebocyte cell line (SZ95) revealed the presence of histamine-1 receptor (H-1 receptor), and thus indicated that histamines and, conversely, antihistamines could potentially modulate sebocyte function directly. When sebocytes were incubated with an H-1 receptor antagonist, diphenhydramine (DPH), at non-cytotoxic doses, a significant decrease in squalene levels, a biomarker for sebum, was observed. As determined by high-performance liquid chromatography, untreated sebocytes contained 6.27 (+/-0.73) nmol squalene per 10(6) cells, whereas for DPH-treated cells, the levels were 2.37 (+/-0.24) and 2.03 (+/-0.97) nmol squalene per 10(6) cells at 50 and 100 microM, respectively. These data were further substantiated by the identification of histamine receptors in human sebaceous glands. In conclusion, our data show the presence of histamine receptors on sebocytes, demonstrate how an antagonist to these receptors modulated cellular function, and may indicate a new paradigm for acne therapy involving an H-1 receptor-mediated pathway.

  6. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo

    2018-06-01

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  8. Assessment of Social Vulnerability Identification at Local Level around Merapi Volcano - A Self Organizing Map Approach

    NASA Astrophysics Data System (ADS)

    Lee, S.; Maharani, Y. N.; Ki, S. J.

    2015-12-01

    The application of the Self-Organizing Map (SOM) to analyze social vulnerability and to recognize the resilience within sites is a challenging task. The aim of this study is to propose a computational method to group sites according to their similarity and to determine the most relevant variables characterizing social vulnerability in each cluster. For this purpose, SOM is considered an effective platform for the analysis of high-dimensional data. By considering the cluster structure, the characteristics of social vulnerability of the identified sites can be fully understood. In this study, the social vulnerability variable set is constructed from 17 variables, i.e., 12 independent variables representing socio-economic concepts and 5 dependent variables representing the damage and losses due to the Merapi eruption in 2010. These variables collectively represent the local situation of the study area, based on fieldwork conducted in September 2013. By using both independent and dependent variables, we can identify whether social vulnerability is reflected in the actual situation, in this case, the Merapi eruption of 2010. However, social vulnerability analysis in local communities involves a number of variables representing socio-economic conditions, and some of the variables employed in this study might be more or less redundant. Therefore, SOM is used to reduce the redundant variables by selecting representative variables using the component planes and the correlation coefficients between variables in order to find the effective sample size. The selected dataset was then clustered effectively according to similarity. Finally, this approach can produce reliable clustering estimates, recognize the most significant variables, and be useful for social vulnerability assessment, especially for stakeholders as decision makers. This research was supported by a grant 'Development of Advanced Volcanic Disaster Response System considering
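    A minimal SOM can be sketched in a few lines of NumPy. The grid size, learning-rate schedule, and the synthetic 17-variable "site" data below are all invented for illustration; the sketch only shows the clustering mechanism, with similar sites mapping to nearby map units.

    ```python
    import numpy as np

    def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
        """Minimal rectangular-grid SOM (a stand-in for the study's actual tool)."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.standard_normal((h * w, data.shape[1]))
        coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
        for e in range(epochs):
            lr = lr0 * (1 - e / epochs)                   # decaying learning rate
            sigma = sigma0 * (1 - e / epochs) + 0.5       # shrinking neighbourhood
            for x in data[rng.permutation(len(data))]:
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                nbh = np.exp(-d2 / (2 * sigma ** 2))               # neighbourhood kernel
                weights += lr * nbh[:, None] * (x - weights)
        return weights

    # Two synthetic "site" clusters in a 17-variable space (as in the study)
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 0.1, (30, 17)), rng.normal(1, 0.1, (30, 17))])
    weights = train_som(data)
    bmus = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in data])
    overlap = set(bmus[:30]) & set(bmus[30:])   # units claimed by both clusters
    ```

    After training, the two groups of sites occupy disjoint regions of the map; in the study, the component planes of such a map are what reveal redundant and representative variables.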

  9. Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

    PubMed

    Kolossa, Antonio; Kopp, Bruno

    2016-01-01

    The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
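    The core finding above, that measurement error can make a simpler model beat the data-generating model, can be reproduced in a hedged sketch using BIC in place of the paper's exceedance-probability machinery (the models, noise levels, and data sizes here are invented):

    ```python
    import numpy as np

    def bic(y, yhat, k):
        """Bayesian Information Criterion for a Gaussian residual model."""
        n = len(y)
        rss = np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + k * np.log(n)

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 30)
    truth = 1.0 + 0.5 * x + 0.8 * x ** 2     # data-generating (quadratic) model

    def winner(noise_sd):
        """Simulate one experiment and return the degree of the BIC-selected model."""
        y = truth + noise_sd * rng.standard_normal(x.size)
        fits = {deg: np.polyval(np.polyfit(x, y, deg), x) for deg in (1, 2)}
        scores = {deg: bic(y, fits[deg], deg + 1) for deg in (1, 2)}
        return min(scores, key=scores.get)   # lower BIC wins

    low_noise = [winner(0.05) for _ in range(50)]   # mostly picks degree 2
    high_noise = [winner(2.0) for _ in range(50)]   # often picks degree 1
    ```

    At low noise the data-generating quadratic model is recovered almost every time; at high noise the complexity penalty dominates and the simpler linear model frequently wins, mirroring the invalid identification the study reports for low data quality.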

  10. Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.

    PubMed

    Pauling, Josch; Klipp, Edda

    2016-12-22

    Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner and with considerable structural detail. However, doing so may produce thousands of mass spectra in a single experiment, which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics, but there are many (combinatorial) challenges when it comes to the structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside the analytical, biochemical, biomedical and biology communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.

  11. Rapid computational identification of the targets of protein kinase inhibitors.

    PubMed

    Rockey, William M; Elcock, Adrian H

    2005-06-16

    We describe a method for rapidly computing the relative affinities of an inhibitor for all individual members of a family of homologous receptors. The approach, implemented in a new program, SCR, models inhibitor-receptor interactions in full atomic detail with an empirical energy function and includes an explicit account of flexibility in homology-modeled receptors through sampling of libraries of side chain rotamers. SCR's general utility was demonstrated by application to seven different protein kinase inhibitors: for each inhibitor, relative binding affinities with panels of approximately 20 protein kinases were computed and compared with experimental data. For five of the inhibitors (SB203580, purvalanol B, imatinib, H89, and hymenialdisine), SCR provided excellent reproduction of the experimental trends and, importantly, was capable of identifying the targets of inhibitors even when they belonged to different kinase families. The method's performance in a predictive setting was demonstrated by performing separate training and testing applications, and its key assumptions were tested by comparison with a number of alternative approaches employing the ligand-docking program AutoDock (Morris et al. J. Comput. Chem. 1998, 19, 1639-1662). These comparison tests included using AutoDock in nondocking and docking modes and performing energy minimizations of inhibitor-kinase complexes with the molecular mechanics code GROMACS (Berendsen et al. Comput. Phys. Commun. 1995, 91, 43-56). It was found that a surprisingly important aspect of SCR's approach is its assumption that the inhibitor be modeled in the same orientation for each kinase: although this assumption is in some respects unrealistic, calculations that used apparently more realistic approaches produced clearly inferior results. Finally, as a large-scale application of the method, SB203580, purvalanol B, and imatinib were screened against an almost full complement of 493 human protein kinases using SCR in

  12. Kalman and particle filtering methods for full vehicle and tyre identification

    NASA Astrophysics Data System (ADS)

    Bogdanski, Karol; Best, Matthew C.

    2018-05-01

    This paper considers identification of all significant vehicle handling dynamics of a test vehicle, including identification of a combined-slip tyre model, using only those sensors currently available on most vehicle controller area network buses. Using an appropriately simple but efficient model structure, all of the independent parameters are found from test vehicle data, with the resulting model accuracy demonstrated on independent validation data. The paper extends previous work on augmented Kalman Filter state estimators to concentrate wholly on parameter identification. It also serves as a review of three alternative filtering methods; identifying forms of the unscented Kalman filter, extended Kalman filter and particle filter are proposed and compared for effectiveness, complexity and computational efficiency. All three filters are suited to applications of system identification and the Kalman Filters can also operate in real-time in on-line model predictive controllers or estimators.
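The augmented-filter idea the abstract describes can be illustrated with a minimal sketch (not the authors' vehicle or tyre model): a linear Kalman filter whose state is the unknown parameter vector of a first-order discrete model, updated one measurement at a time. The model, noise levels, and parameter values below are invented for illustration.

```python
import numpy as np

# Hypothetical example: identify the parameters theta = [a, b] of a
# first-order model  y_k = a*y_{k-1} + b*u_{k-1} + noise  by treating
# theta as the (constant) state of a Kalman filter.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
n = 500
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

theta = np.zeros(2)          # parameter estimate [a_hat, b_hat]
P = np.eye(2) * 100.0        # estimate covariance (large = uninformative prior)
R = 0.01 ** 2                # measurement-noise variance
for k in range(1, n):
    phi = np.array([y[k - 1], u[k - 1]])      # regressor
    S = phi @ P @ phi + R                     # innovation variance
    K = P @ phi / S                           # Kalman gain
    theta = theta + K * (y[k] - phi @ theta)  # measurement update
    P = P - np.outer(K, phi @ P)              # covariance update

print(theta)  # should approach [0.8, 0.5]
```

With the parameter treated as a random-walk state (here with zero process noise), this reduces to recursive least squares; real vehicle identification adds nonlinear tyre terms, which is where the unscented/extended variants come in.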

  13. MIMO system identification using frequency response data

    NASA Technical Reports Server (NTRS)

    Medina, Enrique A.; Irwin, R. D.; Mitchell, Jerrel R.; Bukley, Angelia P.

    1992-01-01

    A solution to the problem of obtaining a multi-input, multi-output state-space model of a system from its individual input/output frequency responses is presented. The Residue Identification Algorithm (RID) identifies the system poles from a transfer function model of the determinant of the frequency response data matrix. Next, the residue matrices of the modes are computed, guaranteeing that each input/output frequency response is fitted in the least squares sense. Finally, a realization of the system is computed. Results of the application of RID to experimental frequency responses of a large space structure ground test facility are presented and compared to those obtained via the Eigensystem Realization Algorithm.
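The residue-fitting step lends itself to a short sketch: once the poles are known, the frequency response is linear in the residues, so a linear least-squares solve recovers them. This is a toy single-input/single-output illustration with assumed poles and residues, not the RID implementation.

```python
import numpy as np

# Sketch of residue fitting: with poles p_i already identified, the FRF is
# H(jw) = sum_i r_i / (jw - p_i), which is linear in the residues r_i.
poles = np.array([-0.5 + 5j, -0.5 - 5j])       # assumed identified poles
res_true = np.array([1.0 - 0.1j, 1.0 + 0.1j])  # conjugate residue pair
w = np.linspace(0.1, 10.0, 200)                # frequency grid (rad/s)
s = 1j * w
H_meas = (res_true[:, None] / (s - poles[:, None])).sum(axis=0)

# Design matrix: column i is 1/(s - p_i); solve A @ r = H in the LS sense.
A = 1.0 / (s[:, None] - poles[None, :])
r_fit, *_ = np.linalg.lstsq(A, H_meas, rcond=None)

print(np.round(r_fit, 6))  # recovers res_true on this noise-free data
```

With noisy measured FRFs the same solve gives the least-squares fit the abstract refers to, applied per input/output channel.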

  14. Gradient augmented level set method for phase change simulations

    NASA Astrophysics Data System (ADS)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  15. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    PubMed

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
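The 1-NN assignment rule described above can be sketched in a few lines: the query takes the taxon of its most similar reference sequence. The toy reference set and Hamming distance below stand in for a real barcode database and BLAST similarity.

```python
# Minimal sketch of the "1-nearest-neighbor" (1-NN) rule: assign the taxon
# of the closest reference sequence to the query. Toy data; real barcoding
# compares queries against large reference databases via BLAST.
reference = {
    "ACGTACGTAC": "Genus_A species_1",
    "ACGTTCGTAC": "Genus_A species_2",
    "TTGCAGGCAT": "Genus_B species_3",
}

def hamming(a: str, b: str) -> int:
    """Mismatch count (sequences assumed aligned and of equal length)."""
    return sum(x != y for x, y in zip(a, b))

def assign_1nn(query: str) -> str:
    """Return the taxon of the reference sequence nearest to the query."""
    best = min(reference, key=lambda ref: hamming(query, ref))
    return reference[best]

print(assign_1nn("ACGTACGTAA"))  # -> Genus_A species_1
```

The failure mode the benchmark exposes is visible even here: if the query's true species is absent from `reference`, the rule still confidently returns some taxon.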

  17. 14 CFR 1203.301 - Identification of information requiring protection.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.301 Identification of information requiring protection. Classifiers shall identify the level of classification of each classified...

  18. 14 CFR 1203.301 - Identification of information requiring protection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.301 Identification of information requiring protection. Classifiers shall identify the level of classification of each classified...

  19. Advanced imaging in acute stroke management-Part I: Computed tomographic.

    PubMed

    Saini, Monica; Butcher, Ken

    2009-01-01

    Neuroimaging is fundamental to stroke diagnosis and management. Non-contrast computed tomography (NCCT) has been the primary imaging modality utilized for this purpose for almost four decades. Although NCCT does permit identification of intracranial hemorrhage and parenchymal ischemic changes, insights into blood vessel patency and cerebral perfusion are limited. Advances in reperfusion strategies have made identification of potentially salvageable brain tissue a more practical concern. Advances in CT technology now permit identification of acute and chronic arterial lesions, as well as cerebral blood flow deficits. This review outlines principles of advanced CT image acquisition and its utility in acute stroke management.

  20. Comparing levels of school performance to science teachers' reports on knowledge/skills, instructional use and student use of computers

    NASA Astrophysics Data System (ADS)

    Kerr, Rebecca

    The purpose of this descriptive quantitative and basic qualitative study was to examine fifth and eighth grade science teachers' responses, perceptions of the role of technology in the classroom, and how they felt that computer applications, tools, and the Internet influence student understanding. The purposeful sample included survey and interview responses from fifth grade and eighth grade general and physical science teachers. Even though they may not be generalizable to other teachers or classrooms due to a low response rate, findings from this study indicated teachers with fewer years of teaching science had a higher level of computer use but less computer access, especially for students, in the classroom. Furthermore, teachers' choice of professional development moderated the relationship between the level of school performance and teachers' knowledge/skills, with the most positive relationship being with workshops that occurred outside of the school. Eighteen interviews revealed that teachers perceived the role of technology in classroom instruction mainly as teacher-centered and supplemental, rather than student-centered activities.

  1. Computational Identification and Comparative Analysis of Secreted and Transmembrane Proteins in Six Burkholderia Species.

    PubMed

    Nguyen, Thao Thi; Lee, Hyun-Hee; Park, Jungwook; Park, Inmyoung; Seo, Young-Su

    2017-04-01

    As a step towards discovering novel pathogenesis-related proteins, we performed a genome-scale computational identification and characterization of secreted and transmembrane (TM) proteins, which are mainly responsible for bacteria-host interactions and interactions with other bacteria, in the genomes of six representative Burkholderia species. The species comprised plant pathogens (B. glumae BGR1, B. gladioli BSR3), human pathogens (B. pseudomallei K96243, B. cepacia LO6), and plant-growth promoting endophytes (Burkholderia sp. KJ006, B. phytofirmans PsJN). The proportions of putative classically secreted proteins (CSPs) and TM proteins among the species were relatively high, up to approximately 20%. Lower proportions of putative type 3 non-classically secreted proteins (T3NCSPs) (~10%) and unclassified non-classically secreted proteins (NCSPs) (~5%) were observed. The numbers of TM proteins among the three clusters (plant pathogens, human pathogens, and endophytes) were different, while the distribution of these proteins according to the number of TM domains was conserved in which TM proteins possessing 1, 2, 4, or 12 TM domains were the dominant groups in all species. In addition, we observed conservation in the protein size distribution of the secreted protein groups among the species. There were species-specific differences in the functional characteristics of these proteins in the various groups of CSPs, T3NCSPs, and unclassified NCSPs. Furthermore, we assigned the complete sets of the conserved and unique NCSP candidates of the collected Burkholderia species using sequence similarity searching. This study could provide new insights into the relationship among plant-pathogenic, human-pathogenic, and endophytic bacteria.

  2. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664

  3. Computational Embryology and Predictive Toxicology of Cleft Palate

    EPA Science Inventory

    Capacity to model and simulate key events in developmental toxicity using computational systems biology and biological knowledge steps closer to hazard identification across the vast landscape of untested environmental chemicals. In this context, we chose cleft palate as a model ...

  4. Simulation using computer-piloted point excitations of vibrations induced on a structure by an acoustic environment

    NASA Astrophysics Data System (ADS)

    Monteil, P.

    1981-11-01

    Computation of the overall levels and spectral densities of the responses measured on a launcher skin (the fairing, for instance) immersed in a random acoustic environment during take-off was studied. The analysis of the transmission of these vibrations to the payload required the simulation of these responses by a shaker control system using a small number of distributed shakers. Results show that this closed-loop computerized digital system allows the acquisition of auto and cross spectral densities equal to those of the responses previously computed. However, wider application is sought, e.g., for road and runway profiles. The problems of multiple input-output system identification, multiple true random signal generation, and real-time programming are discussed. The system should allow for the control of four shakers.

  5. Optimizing learning of scientific category knowledge in the classroom: the case of plant identification.

    PubMed

    Kirchoff, Bruce K; Delaney, Peter F; Horton, Meg; Dellinger-Johnston, Rebecca

    2014-01-01

    Learning to identify organisms is extraordinarily difficult, yet trained field biologists can quickly and easily identify organisms at a glance. They do this without recourse to the use of traditional characters or identification devices. Achieving this type of recognition accuracy is a goal of many courses in plant systematics. Teaching plant identification is difficult because of variability in the plants' appearance, the difficulty of bringing them into the classroom, and the difficulty of taking students into the field. To solve these problems, we developed and tested a cognitive psychology-based computer program to teach plant identification. The program incorporates presentation of plant images in a homework-based, active-learning format that was developed to stimulate expert-level visual recognition. A controlled experimental test using a within-subject design was performed against traditional study methods in the context of a college course in plant systematics. Use of the program resulted in an 8-25% statistically significant improvement in final exam scores, depending on the type of identification question used (living plants, photographs, written descriptions). The software demonstrates how the use of routines to train perceptual expertise, interleaved examples, spaced repetition, and retrieval practice can be used to train identification of complex and highly variable objects.

  6. Use of prior odds for missing persons identifications.

    PubMed

    Budowle, Bruce; Ge, Jianye; Chakraborty, Ranajit; Gill-King, Harrell

    2011-06-27

    Identification of missing persons from mass disasters is based on evaluation of a number of variables and observations regarding the combination of features derived from these variables. DNA typing now is playing a more prominent role in the identification of human remains, and particularly so for highly decomposed and fragmented remains. The strength of genetic associations, by either direct or kinship analyses, is often quantified by calculating a likelihood ratio. The likelihood ratio can be multiplied by prior odds based on nongenetic evidence to calculate the posterior odds, that is, by applying Bayes' Theorem, to arrive at a probability of identity. For the identification of human remains, the path creating the set and intersection of variables that contribute to the prior odds needs to be appreciated and well defined. Other than considering the total number of missing persons, the forensic DNA community has been silent on specifying the elements of prior odds computations. The variables include the number of missing individuals, eyewitness accounts, anthropological features, demographics and other identifying characteristics. The assumptions, supporting data and reasoning that are used to establish a prior probability that will be combined with the genetic data need to be considered and justified. Otherwise, data may be unintentionally or intentionally manipulated to achieve a probability of identity that cannot be supported and can thus misrepresent the uncertainty with associations. The forensic DNA community needs to develop guidelines for objectively computing prior odds.
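The Bayes' Theorem step described above reduces to a one-line product. The numbers below are purely illustrative and say nothing about how a prior should actually be justified, which is the point of the abstract.

```python
# Worked example of combining a DNA likelihood ratio with prior odds:
#   posterior odds = likelihood ratio x prior odds
# All values are hypothetical.
likelihood_ratio = 100_000.0        # strength of the genetic association
n_missing = 250                     # assumed pool of missing persons
prior_odds = 1.0 / (n_missing - 1)  # naive "1 in N-1" prior

posterior_odds = likelihood_ratio * prior_odds
prob_identity = posterior_odds / (1.0 + posterior_odds)

print(round(prob_identity, 5))  # -> 0.99752
```

Halving the assumed pool size doubles the prior odds and shifts the probability of identity accordingly, which is why the abstract argues the elements of the prior need explicit justification.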

  7. Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1998-01-01

    This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification by using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop mode identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification, with measured vibration feedback and global-model identification with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
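As a rough illustration of the simplest of the five methods, here is a generic LMS identification loop (not the report's HHC transfer-matrix formulation): a weight vector is nudged along the instantaneous error gradient once per measurement cycle. Model and values are invented.

```python
import numpy as np

# Generic least-mean-squares (LMS) identification sketch: adapt w so that
# w . x tracks a measured response d, one measurement per update cycle.
rng = np.random.default_rng(1)
w_true = np.array([0.4, -0.2, 0.1])   # unknown system to identify
w = np.zeros(3)                       # adaptive weight estimate
mu = 0.05                             # step size (must be small for stability)
for _ in range(2000):
    x = rng.standard_normal(3)                      # input vector
    d = w_true @ x + 0.001 * rng.standard_normal()  # measured response
    e = d - w @ x                                   # prediction error
    w = w + mu * e * x                              # LMS gradient update

print(np.round(w, 3))
```

The Kalman variants in the report trade this fixed step size for a measurement-weighted gain, which is what improves convergence as the signal-to-noise ratio drops in closed loop.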

  8. Mexican Identification. Project Mexico.

    ERIC Educational Resources Information Center

    Castellano, Rita

    This document presents an outline and teacher's guide for a community college-level teaching module in Mexican identification, designed for students in introductory courses in the social sciences. Although intended specifically for cultural anthropology, urban anthropology, comparative social organization and sex roles in cross-cultural…

  9. Effects of Using Simultaneous Prompting and Computer-Assisted Instruction during Small Group Instruction

    ERIC Educational Resources Information Center

    Ozen, Arzu; Ergenekon, Yasemin; Ulke-Kurkcuoglu, Burcu

    2017-01-01

    The current study investigated the relation between simultaneous prompting (SP), computer-assisted instruction (CAI), and the receptive identification of target pictures (presented on laptop computer) for four preschool students with developmental disabilities. The students' acquisition of nontarget information through observational learning also…

  10. Organizational Identification and Social Motivation: A Field Descriptive Study in Two Organizations.

    ERIC Educational Resources Information Center

    Barge, J. Kevin

    A study examined the relationships between leadership conversation and its impact upon organizational members' levels of organizational identification and behavior. It was hypothesized (1) that effective leader conversation would be associated with higher levels of role, means, goal and overall organizational identification, and (2) that…

  11. HOM identification by bead pulling in the Brookhaven ERL cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn H.; Calaga, R.; Jain, P.

    2012-06-25

    Several past measurements of the Brookhaven ERL at superconducting temperature produced a long list of higher order modes (HOMs). The Niobium 5-cell cavity is terminated with HOM ferrite dampers that successfully reduce the Q-factors to tolerable levels. However, a number of undamped resonances with Q ≥ 10^6 were found at 4 K and their mode identification remained as a goal for this paper. The approach taken here consists of taking different S21 measurements on a copper cavity replica of the ERL which can be compared with the actual data and also with Microwave Studio computer simulations. Several different S21 transmission measurements are used, including those taken from the fundamental input coupler to the pick-up probe across the cavity, between probes in a single cell, and between beam-position monitor probes in the beam tubes. Mode identification is supported by bead pulling with a metallic needle or a dielectric sphere that are calibrated in the fundamental mode. This paper presents results for HOMs in the first two dipole bands with the prototypical 958 MHz trapped mode, the lowest beam tube resonances, and high-Q modes in the first quadrupole band and beyond.

  12. SU-E-I-74: Image-Matching Technique of Computed Tomography Images for Personal Identification: A Preliminary Study Using Anthropomorphic Chest Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsunobu, Y; Shiotsuki, K; Morishita, J

    Purpose: Fingerprints, dental impressions, and DNA are used to identify unidentified bodies in forensic medicine. Cranial computed tomography (CT) images and/or dental radiographs are also used for identification. Radiological identification is important, particularly in the absence of comparative fingerprints, dental impressions, and DNA samples. The development of an automated radiological identification system for unidentified bodies is desirable. We investigated the potential usefulness of bone structure for matching chest CT images. Methods: CT images of three anthropomorphic chest phantoms were obtained on different days in various settings. One of the phantoms was assumed to be an unidentified body. The bone image and the bone image with soft tissue (BST image) were extracted from the CT images. To examine the usefulness of the bone image and/or the BST image, the similarities between the two-dimensional (2D) or three-dimensional (3D) images of the same and different phantoms were evaluated in terms of the normalized cross-correlation value (NCC). Results: For the 2D and 3D BST images, the NCCs obtained from the same phantom assumed to be an unidentified body (2D, 0.99; 3D, 0.93) were higher than those for the different phantoms (2D, 0.95 and 0.91; 3D, 0.89 and 0.80). The NCCs for the same phantom (2D, 0.95; 3D, 0.88) were greater compared to those of the different phantoms (2D, 0.61 and 0.25; 3D, 0.23 and 0.10) for the bone image. The difference in the NCCs between the same and different phantoms tended to be larger for the bone images than for the BST images. These findings suggest that the image-matching technique is more useful when utilizing the bone image than when utilizing the BST image to identify different people. Conclusion: This preliminary study indicated that evaluating the similarity of bone structure in 2D and 3D images is potentially useful for identifying an unidentified body.
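The normalized cross-correlation used as the similarity score above can be sketched directly (a generic definition on toy arrays, not the authors' exact pipeline):

```python
import numpy as np

# Normalized cross-correlation (NCC) of two same-size images: correlation
# of the mean-subtracted intensities, equal to 1.0 for identical images.
def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(2)
img = rng.random((64, 64))                          # stand-in "CT slice"
noisy = img + 0.05 * rng.standard_normal((64, 64))  # same image, perturbed

print(round(ncc(img, img), 3))    # 1.0 for a perfect match
print(round(ncc(img, noisy), 3))  # slightly below 1.0
```

The study's comparison amounts to checking whether NCC values for same-subject image pairs stay above those for different-subject pairs, which the bone-only images achieved with a wider margin.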

  13. A Systematic Review of Re-Identification Attacks on Health Data

    PubMed Central

    El Emam, Khaled; Jonker, Elizabeth; Arbuckle, Luk; Malin, Bradley

    2011-01-01

    Background Privacy legislation in most jurisdictions allows the disclosure of health data for secondary purposes without patient consent if it is de-identified. Some recent articles in the medical, legal, and computer science literature have argued that de-identification methods do not provide sufficient protection because they are easy to reverse. Should this be the case, it would have significant and important implications on how health information is disclosed, including: (a) potentially limiting its availability for secondary purposes such as research, and (b) resulting in more identifiable health information being disclosed. Our objectives in this systematic review were to: (a) characterize known re-identification attacks on health data and contrast that to re-identification attacks on other kinds of data, (b) compute the overall proportion of records that have been correctly re-identified in these attacks, and (c) assess whether these demonstrate weaknesses in current de-identification methods. Methods and Findings Searches were conducted in IEEE Xplore, ACM Digital Library, and PubMed. After screening, fourteen eligible articles representing distinct attacks were identified. On average, approximately a quarter of the records were re-identified across all studies (0.26 with 95% CI 0.046–0.478) and 0.34 for attacks on health data (95% CI 0–0.744). There was considerable uncertainty around the proportions as evidenced by the wide confidence intervals, and the mean proportion of records re-identified was sensitive to unpublished studies. Two of fourteen attacks were performed with data that was de-identified using existing standards. Only one of these attacks was on health data, which resulted in a success rate of 0.00013. Conclusions The current evidence shows a high re-identification rate but is dominated by small-scale studies on data that was not de-identified according to existing standards. This evidence is insufficient to draw conclusions about the

  14. Image analysis of pubic bone for age estimation in a computed tomography sample.

    PubMed

    López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella

    2015-03-01

    Radiology has demonstrated great utility for age estimation, but most of the studies are based on metrical and morphological methods in order to perform an identification profile. A simple image analysis-based method is presented, aimed to correlate the bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them with age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R² of 0.533 for females and 0.726 for males, with a high intra- and inter-observer agreement. The method suggested is considered not only useful for performing an identification profile during virtopsy, but also for application in further studies in order to attach a quantitative correlation for tissue ultrastructure characteristics, without complex and expensive methods beyond image analysis.

  15. An on-line equivalent system identification scheme for adaptive control. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1984-01-01

    A prime obstacle to the widespread use of adaptive control is the degradation of performance and possible instability resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state space realizations from the inexact, multivariate transfer functions which result from the identification process. A number of potential adaptive control applications for this approach are illustrated using computer simulations. Results indicated that when speed of adaptation and plant stability are not critical, the proposed schemes converge to enhance system performance.

  16. Going deeper in the automated identification of Herbarium specimens.

    PubMed

    Carranza-Rojas, Jose; Goeau, Herve; Bonnet, Pierre; Mata-Montero, Erick; Joly, Alexis

    2017-08-11

    Hundreds of herbarium collections have accumulated a valuable heritage and knowledge of plants over several centuries. Recent initiatives started ambitious preservation plans to digitize this information and make it available to botanists and the general public through web portals. However, thousands of sheets are still unidentified at the species level, while numerous sheets should be reviewed and updated following more recent taxonomic knowledge. These annotations and revisions require an unrealistic amount of work for botanists to carry out in a reasonable time. Computer vision and machine learning approaches applied to herbarium sheets are promising but are still not well studied compared to automated species identification from leaf scans or pictures of plants in the field. In this work, we propose to study and evaluate the accuracy with which herbarium images can be potentially exploited for species identification with deep learning technology. In addition, we propose to study whether the combination of herbarium sheets with photos of plants in the field is relevant in terms of accuracy, and finally, we explore whether herbarium images from one region with one specific flora can be used for transfer learning to another region with other species, for example, a region under-represented in terms of collected data. This is, to our knowledge, the first study that uses deep learning to analyze a large dataset with thousands of species from herbaria. Results show the potential of deep learning for herbarium species identification, particularly by training and testing across different datasets from different herbaria. This could potentially lead to the creation of a semi- or even fully automated system to help taxonomists and experts with their annotation, classification, and revision work.

  17. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local operation of mutation is introduced in addition to regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to the time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and of the condensed system, taking the force effects into account. The numerical and experimental verification of the proposed strategy demonstrates its considerably high computational performance, in terms of both computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates, which results in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  18. Structure identification methods for atomistic simulations of crystalline materials

    DOE PAGES

    Stukowski, Alexander

    2012-05-28

    Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
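    The analysis techniques compared in this abstract all begin by finding each atom's neighbors. The hedged sketch below shows only that shared first step, a cutoff-based coordination count; real common neighbor analysis (CNA) goes further and classifies the bond topology among the shared neighbors of each atom pair. The function name is an illustrative assumption.

```python
from itertools import combinations
from math import dist

def coordination_numbers(points, cutoff):
    """Count neighbors within `cutoff` of each atom position.

    `points` is a list of (x, y, z) tuples. This is a pure-Python
    O(N^2) sketch for illustration; production analysis codes use
    cell lists or neighbor lists to scale to large systems.
    """
    counts = [0] * len(points)
    for i, j in combinations(range(len(points)), 2):
        if dist(points[i], points[j]) <= cutoff:
            counts[i] += 1
            counts[j] += 1
    return counts
```

    A perfect FCC crystal, for instance, would give every interior atom a count of 12 at a first-shell cutoff; deviations flag defects or other phases.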

  19. Structure Identification Using High Resolution Mass ...

    EPA Pesticide Factsheets

    The iCSS CompTox Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoCast). The available data and searches provide a valuable path to structure identification using mass spectrometry as the source data. With an underlying database of over 720,000 chemicals, the dashboard has already been used to assist in identifying chemicals present in house dust. This poster reviews the benefits of the EPA's platform and underlying algorithms used for the purpose of compound identification using high-resolution mass spectrometry data. Standard approaches for both mass and formula lookup are available, but the dashboard delivers a novel approach for hit ranking based on functional use of the chemicals. The focus on high-quality data, novel ranking approaches and integration with other resources of value to mass spectrometrists makes the CompTox Dashboard a valuable resource for the identification of environmental chemicals. This abstract does not reflect U.S. EPA policy. Poster presented at the Eastern Analytical Symposium (EAS) held in Somerset, NJ.

  20. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.
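    To make the feedforward propagation step concrete, the sketch below implements a zero-order variant of the transformation method of fuzzy arithmetic: each fuzzy parameter is represented by interval alpha-cuts, the model is evaluated at every corner of each alpha-cut box, and the output interval is taken as the min/max. This is exact only for monotone models; Hanss' full transformation method also samples interior points. The function name and data layout are illustrative assumptions, not the paper's implementation.

```python
from itertools import product

def transform_method(model, alpha_cuts):
    """Zero-order transformation-method sketch for fuzzy parameters.

    `alpha_cuts` maps an alpha level to a list of (lo, hi) intervals,
    one per model parameter. Returns the output interval per alpha
    level, obtained by evaluating `model` at all corner combinations.
    Exact only when `model` is monotone in each parameter.
    """
    out = {}
    for alpha, intervals in alpha_cuts.items():
        values = [model(*corner) for corner in product(*intervals)]
        out[alpha] = (min(values), max(values))
    return out
```

    The identification procedure in the abstract runs in the opposite direction: it adjusts the input alpha-cuts until the propagated output covers the reference measurement.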

  1. Application of RNAMlet to surface defect identification of steels

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Xu, Yang; Zhou, Peng; Wang, Lei

    2018-06-01

    As the three main steel production lines, continuous casting slabs, hot rolled steel plates and cold rolled steel strips have different surface appearances and are produced at different line speeds. Therefore, the algorithms for surface defect identification of the three steel products have different real-time and anti-interference requirements, and existing algorithms cannot be adaptively applied to all three. A new method of adaptive multi-scale geometric analysis named RNAMlet is proposed. The idea of RNAMlet comes from the non-symmetry anti-packing pattern representation model (NAM): the image is decomposed asymmetrically into a set of rectangular blocks according to grey-value changes of image pixels, and a two-dimensional Haar wavelet transform is then applied to all blocks. If the image background is complex, the number of blocks is large and more details of the image are utilized; if the image background is simple, the number of blocks is small and less computation time is needed. RNAMlet was tested with image samples of the three steel products and compared with three classical multi-scale geometric analysis (MGA) methods: Contourlet, Shearlet and Tetrolet. For image samples with complicated backgrounds, such as continuous casting slabs and hot rolled steel plates, the defect identification rate obtained by RNAMlet was 1% higher than that of the other three methods. For image samples with simple backgrounds, such as cold rolled steel strips, the computation time of RNAMlet was one-tenth that of the other three MGA methods, while its defect identification rates remained higher.
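    The per-block transform RNAMlet applies is a standard one-level 2-D Haar decomposition. A minimal sketch for an N x N block (N even) is given below; normalization conventions vary between implementations, and the averaging/differencing form used here is one common choice, not necessarily the paper's.

```python
def haar2d_block(block):
    """One level of the 2-D Haar wavelet transform on an N x N block
    (N even), returning the (LL, LH, HL, HH) sub-bands.
    """
    n = len(block)
    # Rows: average and difference of adjacent pixel pairs.
    rows = [[(r[2*j] + r[2*j+1]) / 2 for j in range(n // 2)] +
            [(r[2*j] - r[2*j+1]) / 2 for j in range(n // 2)] for r in block]
    # Columns: the same averaging/differencing down each column.
    cols = list(zip(*rows))
    out = [[(c[2*i] + c[2*i+1]) / 2 for i in range(n // 2)] +
           [(c[2*i] - c[2*i+1]) / 2 for i in range(n // 2)] for c in cols]
    full = [list(r) for r in zip(*out)]
    half = n // 2
    LL = [row[:half] for row in full[:half]]  # coarse approximation
    LH = [row[half:] for row in full[:half]]  # horizontal detail
    HL = [row[:half] for row in full[half:]]  # vertical detail
    HH = [row[half:] for row in full[half:]]  # diagonal detail
    return LL, LH, HL, HH
```

    A constant block yields pure LL energy and zero detail coefficients, which is why smooth backgrounds need few blocks while defect edges concentrate energy in the detail sub-bands.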

  2. Do C-reactive protein level, white blood cell count, and pain location guide the selection of patients for computed tomography imaging in non-traumatic acute abdomen?

    PubMed

    Ozan, E; Atac, G K; Evrin, T; Alisar, K; Sonmez, L O; Alhan, A

    2017-02-01

    The value of abdominal computed tomography in non-traumatic abdominal pain has been well established. On the other hand, computed tomography appropriateness has become more of an issue as a result of the concomitant increase in patient radiation exposure with increased computed tomography use. The purpose of this study was to investigate whether C-reactive protein, white blood cell count, and pain location may guide the selection of patients for computed tomography in non-traumatic acute abdomen. Patients presenting with acute abdomen to the emergency department over a 12-month period and who subsequently underwent computed tomography were retrospectively reviewed. Those with serum C-reactive protein and white blood cell count measured on admission or within 24 h of the computed tomography were selected. Computed tomography examinations were retrospectively reviewed, and final diagnoses were designated either positive or negative for pathology relating to presentation with acute abdomen. White blood cell counts, C-reactive protein levels, and pain locations were analyzed to determine whether they increased or decreased the likelihood of producing a diagnostic computed tomography. The likelihood ratio for computed tomography positivity with a C-reactive protein level above 5 mg/L was 1.71, while this increased to 7.71 in patients with combined elevated C-reactive protein level and white blood cell count and right lower quadrant pain. Combined elevated C-reactive protein level and white blood cell count in patients with right lower quadrant pain may represent a potential factor that could guide the decision to perform computed tomography in non-traumatic acute abdomen.
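    The positive likelihood ratios quoted above follow the standard definition LR+ = sensitivity / (1 - specificity). A minimal sketch computing it from 2x2 test-vs-outcome counts is shown below; the counts in the usage example are illustrative, not the study's data.

```python
def positive_likelihood_ratio(tp, fn, fp, tn):
    """LR+ from 2x2 counts: true/false positives and negatives of a
    candidate criterion (e.g. elevated CRP) against the reference
    outcome (diagnostic CT)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1 - specificity)
```

    For example, with 90 true positives, 10 false negatives, 30 false positives and 70 true negatives, sensitivity is 0.9 and specificity 0.7, giving LR+ = 3.0: a positive criterion makes a diagnostic CT three times as likely.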

  3. Authentication of Radio Frequency Identification Devices Using Electronic Characteristics

    ERIC Educational Resources Information Center

    Chinnappa Gounder Periaswamy, Senthilkumar

    2010-01-01

    Radio frequency identification (RFID) tags are low-cost devices that are used to uniquely identify the objects to which they are attached. Due to the low cost and size that is driving the technology, a tag has limited computational capabilities and resources. This limitation makes the implementation of conventional security protocols to prevent…

  4. Fast and sensitive alignment of microbial whole genome sequencing reads to large sequence datasets on a desktop PC: application to metagenomic datasets and pathogen identification.

    PubMed

    Pongor, Lőrinc S; Vera, Roberto; Ligeti, Balázs

    2014-01-01

    Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance between speed and sensitivity, and as a result, species or strain level identification is often inaccurate and low abundance pathogens can sometimes be missed. We have developed Taxoner, an open source, taxon assignment pipeline that includes a fast aligner (e.g. Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than, BLAST, but requires two orders of magnitude less running time, meaning that it can be run on desktop or laptop computers. Taxoner is slower than the approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner.
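    The core of any such pipeline is assigning each read to a taxon from its alignment hits. The toy sketch below shows the simplest possible rule, best-hit assignment with ambiguous ties left unresolved; it illustrates the idea only and is not Taxoner's actual algorithm, which works against a full taxonomy.

```python
def assign_taxa(alignments):
    """Best-hit taxon assignment for sequencing reads.

    `alignments` maps a read id to a list of (taxon, alignment_score)
    hits. Each read is assigned to its single highest-scoring taxon,
    or to None when the top score is tied across taxa (ambiguous).
    """
    result = {}
    for read, hits in alignments.items():
        best = max(score for _, score in hits)
        top = {taxon for taxon, score in hits if score == best}
        result[read] = top.pop() if len(top) == 1 else None
    return result
```

    Real pipelines typically resolve ties by climbing to the lowest common ancestor in the taxonomy rather than discarding the read.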

  5. A Framework for Different Levels of Integration of Computational Models Into Web-Based Virtual Patients

    PubMed Central

    Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil

    2014-01-01

    Background Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients’ interactivity by enriching them with computational models of physiological and pathological processes. Objective The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of changing framework variables on altering perceptions of integration. Methods The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. Results The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element included three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; advanced, including dynamic solution of the

  6. POOLMS: A computer program for fitting and model selection for two level factorial replication-free experiments

    NASA Technical Reports Server (NTRS)

    Amling, G. E.; Holms, A. G.

    1973-01-01

    A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance that is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.

  7. Goal-Directed Behavior and Instrumental Devaluation: A Neural System-Level Computational Model

    PubMed Central

    Mannella, Francesco; Mirolli, Marco; Baldassarre, Gianluca

    2016-01-01

    Devaluation is the key experimental paradigm used to demonstrate the presence of instrumental behaviors guided by goals in mammals. We propose a neural system-level computational model to address the question of which brain mechanisms allow the current value of rewards to control instrumental actions. The model pivots on and shows the computational soundness of the hypothesis for which the internal representation of instrumental manipulanda (e.g., levers) activate the representation of rewards (or “action-outcomes”, e.g., foods) while attributing to them a value which depends on the current internal state of the animal (e.g., satiation for some but not all foods). The model also proposes an initial hypothesis of the integrated system of key brain components supporting this process and allowing the recalled outcomes to bias action selection: (a) the sub-system formed by the basolateral amygdala and insular cortex acquiring the manipulanda-outcomes associations and attributing the current value to the outcomes; (b) three basal ganglia-cortical loops selecting respectively goals, associative sensory representations, and actions; (c) the cortico-cortical and striato-nigro-striatal neural pathways supporting the selection, and selection learning, of actions based on habits and goals. The model reproduces and explains the results of several devaluation experiments carried out with control rats and rats with pre- and post-training lesions of the basolateral amygdala, the nucleus accumbens core, the prelimbic cortex, and the dorso-medial striatum. The results support the soundness of the hypotheses of the model and show its capacity to integrate, at the system-level, the operations of the key brain structures underlying devaluation. Based on its hypotheses and predictions, the model also represents an operational framework to support the design and analysis of new experiments on the motivational aspects of goal-directed behavior. PMID:27803652

  8. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction

    PubMed Central

    van Breen, Jolien A.; Spears, Russell; Kuppens, Toon; de Lemus, Soledad

    2017-01-01

    Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2–4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3–4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity “types.” A woman can be (1) strongly identified with neither women nor feminists (“low identifier”), (2) strongly identified with women but less so with feminists (

  9. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction.

    PubMed

    van Breen, Jolien A; Spears, Russell; Kuppens, Toon; de Lemus, Soledad

    2017-01-01

    Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2-4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3-4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity "types." 
A woman can be (1) strongly identified with neither women nor feminists ("low identifier"), (2) strongly identified with women but less so with feminists ("traditional identifier"), (3

  10. The Effects of Linear Microphone Array Changes on Computed Sound Exposure Level Footprints

    NASA Technical Reports Server (NTRS)

    Mueller, Arnold W.; Wilson, Mark R.

    1997-01-01

    Airport land planning commissions often are faced with determining how much area around an airport is affected by the sound exposure levels (SELs) associated with helicopter operations. This paper presents a study of the effects that changing the size and composition of a microphone array has on the computed SEL contour (ground footprint) areas used by such commissions. Descent flight acoustic data measured by a fifteen-microphone array were reprocessed for five different combinations of microphones within this array. This resulted in data for six different arrays for which SEL contours were computed. The fifteen-microphone array was defined as the 'baseline' array since it contained the greatest amount of data. The computations used a newly developed technique, the Acoustic Re-propagation Technique (ART), which uses parts of the NASA noise prediction program ROTONET. After the areas of the SEL contours were calculated, the differences between the areas were determined. The area differences for the six arrays show that a five- and a three-microphone array (with spacing typical of that required by the FAA FAR Part 36 noise certification procedure) compare well with the fifteen-microphone array. All data were obtained from a database resulting from a joint project conducted by NASA and U.S. Army researchers at Langley and Ames Research Centers. A brief description of the joint project test design, microphone array set-up, and data reduction methodology associated with the database is given.

  11. Learning Disability Identification Consistency: The Impact of Methodology and Student Evaluation Data

    ERIC Educational Resources Information Center

    Maki, Kathrin E.; Burns, Matthew K.; Sullivan, Amanda

    2017-01-01

    Learning disability (LD) identification has long been controversial and has undergone substantive reform. This study examined the consistency of school psychologists' LD identification decisions across three identification methods and across student evaluation data conclusiveness levels. Data were collected from 376 practicing school psychologists…

  12. Large-scale bi-level strain design approaches and mixed-integer programming solution techniques.

    PubMed

    Kim, Joonhoon; Reed, Jennifer L; Maravelias, Christos T

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution

  13. Identification of aerobic Gram-positive bacilli by use of Vitek MS.

    PubMed

    Navas, Maria; Pincus, David H; Wilkey, Kathy; Sercia, Linda; LaSalvia, Margaret; Wilson, Deborah; Procop, Gary W; Richter, Sandra S

    2014-04-01

    The accuracy of Vitek MS mass spectrometric identifications was assessed for 206 clinically significant isolates of aerobic Gram-positive bacilli representing 20 genera and 38 species. The Vitek MS identifications were correct for 85% of the isolates (56.3% to the species level, 28.6% limited to the genus level), with misidentifications occurring for 7.3% of the isolates.

  14. DNA barcoding for molecular identification of Demodex based on mitochondrial genes.

    PubMed

    Hu, Li; Yang, YuanJun; Zhao, YaE; Niu, DongLing; Yang, Rui; Wang, RuiLing; Lu, Zhaohui; Li, XiaoQi

    2017-12-01

    There has been no widely accepted DNA barcode for species identification of Demodex. In this study, we attempted to solve this issue. First, mitochondrial cox1-5' and 12S gene fragments of Demodex folliculorum, D. brevis, D. canis, and D. caprae were amplified, cloned, and sequenced for the first time; intra/interspecific divergences were computed and phylogenetic trees were reconstructed. Then, divergence frequency distribution plots of those two gene fragments were drawn together with the mtDNA cox1-middle region and 16S obtained in previous studies. Finally, their identification efficiency was evaluated by comparing barcoding gaps. Results indicated that 12S had the highest identification efficiency. Specifically, for the cox1-5' region of the four Demodex species, intraspecific divergences were less than 2.0%, and interspecific divergences were 21.1-31.0%; for 12S, intraspecific divergences were less than 1.4%, and interspecific divergences were 20.8-26.9%. The phylogenetic trees demonstrated that the four Demodex species clustered separately, and the divergence frequency distribution plot showed that the largest intraspecific divergence of 12S (1.4%) was less than that of the cox1-5' region (2.0%), cox1-middle region (3.1%), and 16S (2.8%). The barcoding gap of 12S was 19.4%, larger than that of the cox1-5' region (19.1%), cox1-middle region (11.3%), and 16S (13.0%); the interspecific divergence span of 12S was 6.2%, smaller than that of the cox1-5' region (10.0%), cox1-middle region (14.1%), and 16S (11.4%). Moreover, 12S has a moderate length (517 bp) for sequencing in a single run. Therefore, we propose that mtDNA 12S is more suitable than cox1 and 16S as a DNA barcode for classification and identification of Demodex at lower category levels.
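    The divergences and barcoding gaps compared in this record are typically uncorrected pairwise distances. The short sketch below computes a p-distance between aligned sequences and a barcoding gap from lists of such distances; it is a minimal illustration of the definitions, assuming equal-length aligned sequences.

```python
def p_distance(a, b):
    """Uncorrected pairwise divergence (p-distance) between two
    aligned, equal-length sequences: fraction of differing sites."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def barcoding_gap(intra, inter):
    """Gap between the smallest interspecific and the largest
    intraspecific divergence. `intra` and `inter` are lists of
    pairwise p-distances; a clearly positive gap indicates that the
    marker separates species cleanly."""
    return min(inter) - max(intra)
```

    With the 12S figures reported above (maximum intraspecific 1.4%, minimum interspecific 20.8%), the gap is 19.4%, matching the abstract.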

  15. Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2017-08-14

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid. This is due to the fact that grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using the so-called hierarchical basis functions, one can significantly reduce the memory required. In this paper, the intermediate vectors have only as many elements as the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.

  16. Flight-Time Identification of a UH-60A Helicopter and Slung Load

    NASA Technical Reports Server (NTRS)

    Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani

    1998-01-01

    This paper describes a flight test demonstration of a system for identification of the stability and handling qualities parameters of a helicopter-slung load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency domain analysis in the frequency range to 2 Hz, and focused on the longitudinal and lateral control axes since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling qualities parameters and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on control axis and load-sling combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.

  17. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving these problems are briefly reviewed. The results of the practical application of identification methods are demonstrated in estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  18. Applied Computational Electromagnetics Society Journal. Volume 7, Number 1, Summer 1992

    DTIC Science & Technology

    1992-01-01

    previously-solved computational problem in electrical engineering, physics, or related fields of study. The technical activities promoted by this...in solution technique or in data input/output; identification of new applica- tions for electromagnetics modeling codes and techniques; integration of...papers will represent the computational electromagnetics aspects of research in electrical engineering, physics, or related disciplines. However, papers

  19. Teaching Concept Mapping and University Level Study Strategies Using Computers.

    ERIC Educational Resources Information Center

    Mikulecky, Larry; And Others

    1989-01-01

    Assesses the utility and effectiveness of three interactive computer programs and associated print materials in instructing and modeling for undergraduates how to comprehend and reconceptualize scientific textbook material. Finds that "how to" reading strategies can be taught via computer and transferred to new material. (RS)

  20. Talent identification in youth soccer.

    PubMed

    Unnithan, Viswanath; White, Jordan; Georgiou, Andreas; Iga, John; Drust, Barry

    2012-01-01

    The purpose of this review article was firstly to evaluate the traditional approach to talent identification in youth soccer and secondly to present pilot data on a more holistic method for talent identification. Research evidence exists to suggest that talent identification mechanisms that are predicated upon the physical (anthropometric) attributes of the early maturing individual only serve to identify current performance levels. Greater body mass and stature have both been related to faster ball shooting speed and vertical jump capacity, respectively, in elite youth soccer players. This approach, however, may prematurely exclude those late maturing individuals. Multiple physiological measures have also been used in an effort to determine key predictors of performance, with agility and sprint times being identified as variables that could discriminate between elite and sub-elite groups of adolescent soccer players. Successful soccer performance is the product of multiple systems interacting with one another. Consequently, a more holistic approach to talent identification should be considered. Recent work, with elite youth soccer players, has considered whether multiple small-sided games could act as a talent identification tool in this population. The results demonstrated that there was a moderate agreement between the more technically gifted soccer player and success during multiple small-sided games.

  1. Reduction in Hospital-Wide Clinical Laboratory Specimen Identification Errors following Process Interventions: A 10-Year Retrospective Observational Study

    PubMed Central

    Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan

    2016-01-01

    Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010; and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) had patient identification errors, compared with 58 errors (0.0015%) among the 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) before the interventions, and 3 (0.0007%), 52 (0.0045%), and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92%, and 98%, respectively. Conclusions Accurate patient identification is a challenge for patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020
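    The headline figures can be reproduced directly from the reported counts (a quick consistency check, not part of the study's methods):

```python
def pct(errors, total):
    """Error rate as a percentage of specimens collected."""
    return 100.0 * errors / total

rate_2005 = pct(1023, 2000345)   # before interventions
rate_2014 = pct(58, 3761238)     # after serial interventions
relative_reduction = 100.0 * (1.0 - rate_2014 / rate_2005)

print(round(rate_2005, 4))        # 0.0511
print(round(rate_2014, 4))        # 0.0015
print(round(relative_reduction))  # 97
```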

  2. PCR Followed by Electrospray Ionization Mass Spectrometry for Broad-Range Identification of Fungal Pathogens

    PubMed Central

    Massire, Christian; Buelow, Daelynn R.; Zhang, Sean X.; Lovari, Robert; Matthews, Heather E.; Toleno, Donna M.; Ranken, Raymond R.; Hall, Thomas A.; Metzgar, David; Sampath, Rangarajan; Blyn, Lawrence B.; Ecker, David J.; Gu, Zhengming; Walsh, Thomas J.

    2013-01-01

    Invasive fungal infections are a significant cause of morbidity and mortality among immunocompromised patients. Early and accurate identification of these pathogens is central to directing therapy and improving overall outcome. PCR coupled with electrospray ionization mass spectrometry (PCR/ESI-MS) was evaluated as a novel means for identification of fungal pathogens. Using a database grounded by 60 ATCC reference strains, a total of 394 clinical fungal isolates (264 molds and 130 yeasts) were analyzed by PCR/ESI-MS; results were compared to phenotypic identification, and discrepant results were sequence confirmed. PCR/ESI-MS identified 81.4% of molds to either the genus or species level, with concordance rates of 89.7% and 87.4%, respectively, to phenotypic identification. Likewise, PCR/ESI-MS was able to identify 98.4% of yeasts to either the genus or species level, agreeing with 100% of phenotypic results at both the genus and species level. PCR/ESI-MS performed best with Aspergillus and Candida isolates, generating species-level identification in 94.4% and 99.2% of isolates, respectively. PCR/ESI-MS is a promising new technology for broad-range detection and identification of medically important fungal pathogens that cause invasive mycoses. PMID:23303501

  3. Tracking at High Level Trigger in CMS

    NASA Astrophysics Data System (ADS)

    Tosi, M.

    2016-04-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage, and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, both for the reconstruction of physics objects and in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup vertices. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made toward the 2015 data taking.

  4. Computational mass spectrometry for small molecules

    PubMed Central

    2013-01-01

    The identification of small molecules from mass spectrometry (MS) data remains a major challenge in the interpretation of MS data. This review covers the computational aspects of identifying small molecules, from the identification of a compound by searching a reference spectral library, to the structural elucidation of unknowns. In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focusing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review briefly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline. PMID:23453222
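    Molecular formula identification from an accurate mass can be sketched as a constrained enumeration; the element set, atom limits, and tolerance below are illustrative assumptions rather than any particular tool's defaults.

```python
# Monoisotopic masses (u) for a small CHNO element set.
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def formula_candidates(target, tol=0.001, max_atoms=(20, 40, 5, 10)):
    """Return (C, H, N, O) counts whose monoisotopic mass is within tol of target."""
    cmax, hmax, nmax, omax = max_atoms
    hits = []
    for c in range(cmax + 1):
        for n in range(nmax + 1):
            for o in range(omax + 1):
                base = c * MASS["C"] + n * MASS["N"] + o * MASS["O"]
                # Only one hydrogen count can land within a sub-Dalton tolerance.
                h = round((target - base) / MASS["H"])
                if 0 <= h <= hmax and abs(base + h * MASS["H"] - target) <= tol:
                    hits.append((c, h, n, o))
    return hits

# Glucose, C6H12O6: monoisotopic mass 180.0634 u.
cands = formula_candidates(180.0634)
```

    In practice the candidate list is then ranked by isotope-pattern fit, which is the step the review emphasizes.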

  5. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts are given.
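    The idea of iteratively adjusting a trial transfer function until its locus matches measured frequency-response points can be sketched as follows; the first-order model, the synthetic data, and the coarse-to-fine grid search below are illustrative stand-ins for the paper's penalized objective and nonlinear minimizer.

```python
import numpy as np

# "Measured" frequency-response locus, generated from K = 2, tau = 0.3
# (a synthetic stand-in for boiler inlet-impedance data).
w = np.logspace(-1, 2, 60)            # rad/s
data = 2.0 / (1 + 1j * w * 0.3)

def cost(K, tau):
    """Squared distance between the trial and measured loci."""
    return np.sum(np.abs(K / (1 + 1j * w * tau) - data) ** 2)

# Coarse-to-fine grid search: repeatedly shrink the box around the best fit.
K_lo, K_hi, t_lo, t_hi = 0.1, 10.0, 0.01, 10.0
for _ in range(6):
    Ks = np.linspace(K_lo, K_hi, 25)
    ts = np.linspace(t_lo, t_hi, 25)
    K_best, t_best = min(((K, t) for K in Ks for t in ts),
                         key=lambda p: cost(*p))
    dK, dt = (K_hi - K_lo) / 8, (t_hi - t_lo) / 8
    K_lo, K_hi = K_best - dK, K_best + dK
    t_lo, t_hi = max(t_best - dt, 1e-3), t_best + dt
```

    Here the model family matches the data, so the search recovers K near 2 and tau near 0.3; with real measurements one would try several candidate model structures, as the paper describes.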

  6. Computer-Game Construction: A Gender-Neutral Attractor to Computing Science

    ERIC Educational Resources Information Center

    Carbonaro, Mike; Szafron, Duane; Cutumisu, Maria; Schaeffer, Jonathan

    2010-01-01

    Enrollment in Computing Science university programs is at a dangerously low level. A major reason for this is the general lack of interest in Computing Science by females. In this paper, we discuss our experience with using a computer game construction environment as a vehicle to encourage female participation in Computing Science. Experiments…

  7. New software for computer-assisted dental-data matching in Disaster Victim Identification and long-term missing persons investigations: "DAVID Web".

    PubMed

    Clement, J G; Winship, V; Ceddia, J; Al-Amad, S; Morales, A; Hill, A J

    2006-05-15

    In 1997 an internally supported but unfunded pilot project at the Victorian Institute of Forensic Medicine (VIFM) Australia led to the development of a computer system that closely mimicked Interpol paperwork for the storage, later retrieval, and tentative matching of the many ante-mortem (AM) and post-mortem (PM) dental records that are often needed for rapid Disaster Victim Identification (DVI). The program was called "DAVID" (Disaster And Victim IDentification). It combined the skills of the VIFM Information Technology systems manager (VW), an experienced odontologist (JGC), and an expert database designer (JC), all current authors on this paper. Monash University students did much of the software writing to prescription. The student group involved won an Australian Information Industry Award in recognition of the contribution the new software could have made to the DVI process. Unfortunately, the potential of the software was never realized because, paradoxically, the federal nature of Australia frequently thwarts uniformity of systems across the entire country. As a consequence, the final development of DAVID never took place. Given the recent problems encountered post-tsunami by the odontologists who were obliged to use the Plass Data system (Plass Data Software, Holbaek, Denmark), and with the impending risks imposed upon Victoria by the decision to host the Commonwealth Games in Melbourne during March 2006, funding was sought and obtained from the state government to update counter-disaster preparedness at the VIFM. Some of these funds have been made available to upgrade and complete the DAVID project. In the wake of discussions between leading expert odontologists from around the world, held in Geneva during July 2003 at the invitation of the International Committee of the Red Cross, significant alterations to the initial design parameters of DAVID were proposed. This was part of broader discussions directed towards developing instruments which could be used by the ICRC's "The Missing

  8. A Model Computer Literacy Course.

    ERIC Educational Resources Information Center

    Orndorff, Joseph

    Designed to address the varied computer skill levels of college students, this proposed computer literacy course would be modular in format, with modules tailored to address various levels of expertise and permit individualized instruction. An introductory module would present both the history and future of computers and computing, followed by an…

  9. Identification of Aerobic Gram-Positive Bacilli by Use of Vitek MS

    PubMed Central

    Navas, Maria; Pincus, David H.; Wilkey, Kathy; Sercia, Linda; LaSalvia, Margaret; Wilson, Deborah; Procop, Gary W.

    2014-01-01

    The accuracy of Vitek MS mass spectrometric identifications was assessed for 206 clinically significant isolates of aerobic Gram-positive bacilli representing 20 genera and 38 species. The Vitek MS identifications were correct for 85% of the isolates (56.3% to the species level, 28.6% limited to the genus level), with misidentifications occurring for 7.3% of the isolates. PMID:24501030

  10. Identification of bacteria isolated from veterinary clinical specimens using MALDI-TOF MS.

    PubMed

    Pavlovic, Melanie; Wudy, Corinna; Zeller-Peronnet, Veronique; Maggipinto, Marzena; Zimmermann, Pia; Straubinger, Alix; Iwobi, Azuka; Märtlbauer, Erwin; Busch, Ulrich; Huber, Ingrid

    2015-01-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has recently emerged as a rapid and accurate identification method for bacterial species. Although it has been successfully applied for the identification of human pathogens, it has so far not been well evaluated for routine identification of veterinary bacterial isolates. This study was performed to compare and evaluate the performance of MALDI-TOF MS based identification of veterinary bacterial isolates with commercially available conventional test systems. Discrepancies between the two methods were resolved by sequencing 16S rDNA and, if necessary, the infB gene for Actinobacillus isolates. A total of 375 consecutively isolated veterinary samples were collected. Among the 357 isolates (95.2%) correctly identified at the genus level by MALDI-TOF MS, 338 (90.1% of the total isolates) were also correctly identified at the species level. Conventional methods offered correct species identification for 319 isolates (85.1%). MALDI-TOF MS therefore offered more accurate identification of veterinary bacterial isolates. An update of the in-house mass spectra database with additional reference spectra clearly improved the identification results. In conclusion, the presented data suggest that MALDI-TOF MS is an appropriate platform for classification and identification of veterinary bacterial isolates.

  11. Student conceptions about the DNA structure within a hierarchical organizational level: Improvement by experiment- and computer-based outreach learning.

    PubMed

    Langheinrich, Jessica; Bogner, Franz X

    2015-01-01

    As non-scientific conceptions interfere with learning processes, teachers need both to know about them and to address them in their classrooms. For our study, based on 182 eleventh graders, we analyzed the level of conceptual understanding by implementing the "draw and write" technique during a computer-supported gene technology module. A specific feature of our study was that participants were given the hierarchical organizational level which they had to draw. We introduced two objective category systems for analyzing drawings and inscriptions. Our results indicated a long- as well as a short-term increase in the level of conceptual understanding and in the number of drawn elements and their grades concerning the DNA structure. Consequently, we regard the "draw and write" technique as a tool for a teacher to get to know students' alternative conceptions. Furthermore, our study points to the modification potential of hands-on and computer-supported learning modules. © 2015 The International Union of Biochemistry and Molecular Biology.

  12. Practical Problem-Based Learning in Computing Education

    ERIC Educational Resources Information Center

    O'Grady, Michael J.

    2012-01-01

    Computer Science (CS) is a relatively new discipline, and how best to introduce it to new students remains an open question. Likewise, the identification of appropriate instructional strategies for the diverse topics that constitute the average curriculum remains open to debate. One approach considered by a number of practitioners in CS education…

  13. Integration of Computer Technology Into an Introductory-Level Neuroscience Laboratory

    ERIC Educational Resources Information Center

    Evert, Denise L.; Goodwin, Gregory; Stavnezer, Amy Jo

    2005-01-01

    We describe 3 computer-based neuroscience laboratories. In the first 2 labs, we used commercially available interactive software to enhance the study of functional and comparative neuroanatomy and neurophysiology. In the remaining lab, we used customized software and hardware in 2 psychophysiological experiments. With the use of the computer-based…

  14. Reliability of landmark identification in cephalometric radiography acquired by a storage phosphor imaging system.

    PubMed

    Chen, Y-J; Chen, S-K; Huang, H-W; Yao, C-C; Chang, H-F

    2004-09-01

    To compare the cephalometric landmark identification on softcopy and hardcopy of direct digital cephalography acquired by a storage-phosphor (SP) imaging system. Ten digital cephalograms and their conventional counterparts, hardcopies on a transparent blue film, were obtained by a SP imaging system and a dye sublimation printer. Twelve orthodontic residents identified 19 cephalometric landmarks on monitor-displayed SP digital images with a computer-aided method and on their hardcopies with the conventional method. The x- and y-coordinates for each landmark, indicating the horizontal and vertical positions, were analysed to assess the reliability of landmark identification and evaluate the concordance of the landmark locations in softcopy and hardcopy of SP digital cephalometric radiography. For each of the 19 landmarks, the location differences as well as the horizontal and vertical components were statistically significant between SP digital cephalometric radiography and its hardcopy. Smaller interobserver errors on SP digital images than on their hardcopies were noted for all the landmarks, except point Go in the vertical direction. The scatter-plots demonstrate the characteristic distribution of the interobserver error in both horizontal and vertical directions. Generally, the dispersion of interobserver error on SP digital cephalometric radiography is less than that on its hardcopy with the conventional method. SP digital cephalometric radiography could yield a comparable or better level of performance in landmark identification than its hardcopy, except for point Go in the vertical direction.

  15. Use of the BioMerieux ID 32C yeast identification system for identification of aerobic actinomycetes of medical importance.

    PubMed Central

    Muir, D B; Pritchard, R C

    1997-01-01

    The BioMerieux ID 32C Yeast Identification System was examined to determine its usefulness as a rapid method for the identification of medically important aerobic actinomycetes. More than 290 strains were tested by this method and the results were compared to those obtained by conventional methods. It was found that aerobic actinomycetes could be differentiated to species level in 7 days by the ID 32C system. PMID:9399526

  16. Systematic toxicological analysis: computer-assisted identification of poisons in biological materials.

    PubMed

    Stimpfl, Th; Demuth, W; Varmuza, K; Vycudilik, W

    2003-06-05

    New software was developed to improve the chances of identifying a "general unknown" in complex biological materials. To achieve this goal, the total ion current chromatogram was simplified by filtering the acquired mass spectra via an automated subtraction procedure, which removed mass spectra originating from the sample matrix, as well as interfering substances from the extraction procedure. This tool was shown to emphasize mass spectra of exceptional compounds, and therefore provides the forensic toxicologist with further evidence, even in cases where mass spectral data of the unknown compound are not available in "standard" spectral libraries.

  17. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the high computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. The applications of the presented material model and proposed parameter identification method to the standard A 2017-T4 tensile test prove that the presented elastic-plastic damage model is adequate to describe the material's mechanical behaviour and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
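    The metamodel workflow (sample the expensive objective, fit a surrogate, optimize the surrogate) can be sketched as below. A Gaussian RBF interpolant stands in for Kriging, and a cheap analytic function stands in for the FE simulation, so all numbers here are illustrative assumptions.

```python
import numpy as np

def objective(theta):
    """Stand-in for 'run the FE model and compare its response with experiment'.
    Its global minimum is at theta = 0.6 (the 'true' material parameter)."""
    return (theta - 0.6) ** 2 * (1.2 + np.sin(8 * theta))

# 1) Design of experiments: a few evaluations of the expensive objective.
X = np.linspace(0.0, 1.0, 9)
y = objective(X)

# 2) Fit the surrogate: a Gaussian RBF interpolant through the samples.
eps = 5.0
Phi = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
w = np.linalg.solve(Phi + 1e-10 * np.eye(X.size), y)

def surrogate(t):
    return np.exp(-(eps * (t - X)) ** 2) @ w

# 3) Optimize the cheap surrogate instead of the expensive objective.
grid = np.linspace(0.0, 1.0, 2001)
vals = np.array([surrogate(t) for t in grid])
theta_hat = grid[np.argmin(vals)]
```

    The payoff is that step 3 costs thousands of surrogate calls but zero FE runs; a Kriging surrogate would additionally provide a prediction variance to guide where new samples are needed.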

  18. Seed storage proteins as a system for teaching protein identification by mass spectrometry in biochemistry laboratory.

    PubMed

    Wilson, Karl A; Tan-Wilson, Anna

    2013-01-01

    Mass spectrometry (MS) has become an important tool in studying biological systems. One application is the identification of proteins and peptides by the matching of peptide and peptide fragment masses to the sequences of proteins in protein sequence databases. Often prior protein separation of complex protein mixtures by 2D-PAGE is needed, requiring more time and expertise than instructors of large laboratory classes can devote. We have developed an experimental module for our Biochemistry Laboratory course that engages students in MS-based protein identification following protein separation by one-dimensional SDS-PAGE, a technique that is usually taught in this type of course. The module is based on soybean seed storage proteins, a relatively simple mixture of proteins present in high levels in the seed, allowing the identification of the main protein bands by MS/MS and in some cases, even by peptide mass fingerprinting. Students can identify their protein bands using software available on the Internet, and are challenged to deduce post-translational modifications that have occurred upon germination. A collection of mass spectral data and tutorials that can be used as a stand-alone computer-based laboratory module were also assembled. Copyright © 2013 International Union of Biochemistry and Molecular Biology, Inc.
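    The core of the peptide mass fingerprinting that students perform with web tools can be illustrated in miniature. The residue masses below are standard monoisotopic values, but the "database" sequences and observed masses are made up for the sketch.

```python
# Monoisotopic residue masses (u) for a small amino-acid subset.
RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "K": 128.09496,
       "R": 156.10111, "L": 113.08406, "E": 129.04259}
WATER = 18.01056

def tryptic_peptides(seq):
    """Cut after K or R (trypsin-like, ignoring proline rules)."""
    pep, out = "", []
    for aa in seq:
        pep += aa
        if aa in "KR":
            out.append(pep)
            pep = ""
    if pep:
        out.append(pep)
    return out

def mass(pep):
    return sum(RES[a] for a in pep) + WATER

# Toy protein "database".
DB = {"protA": "GASKLLERGAK", "protB": "AAAKEEER"}

def score(observed, tol=0.01):
    """Count how many observed peptide masses each protein explains."""
    scores = {}
    for name, seq in DB.items():
        masses = [mass(p) for p in tryptic_peptides(seq)]
        scores[name] = sum(any(abs(m - o) <= tol for m in masses)
                           for o in observed)
    return scores

# Observed masses from an (illustrative) digest of protA.
obs = [mass("GASK"), mass("LLER")]
s = score(obs)
```

    Real search engines (e.g. Mascot-style scoring) weight matches probabilistically and handle modifications, which is where the germination exercise in the course comes in.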

  19. A novel computer-aided detection system for pulmonary nodule identification in CT images

    NASA Astrophysics Data System (ADS)

    Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong

    2014-03-01

    Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ missed only 2 of the 207 nodules at agreement level 1, and INC detection took about 30 seconds per scan on average. Expert filtering reduced FPs more than 18-fold while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training separate SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results favored SVM classification over the entire set of INCs, with which the optimal operating point of our CADe system achieved a sensitivity of 89.4% at a specificity of 86.8%.
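    Vector quantization for intensity-based extraction can be illustrated with a one-dimensional k-means codebook on synthetic voxel intensities; the Hounsfield values and cluster count below are assumptions for the sketch, not the paper's two-stage volumetric scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic voxel intensities: air/lung around -800 HU, soft tissue around 40 HU.
x = np.concatenate([rng.normal(-800, 50, 3000), rng.normal(40, 30, 1000)])

def kmeans_1d(x, k=2, iters=20):
    """Design a k-codeword VQ codebook by Lloyd iterations (1-D k-means)."""
    codebook = np.linspace(x.min(), x.max(), k)   # initial codewords
    labels = np.zeros(x.size, dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = x[labels == j].mean()
    return codebook, labels

codebook, labels = kmeans_1d(x)
lung_mask = labels == np.argmin(codebook)   # darker cluster ~ lung/air
```

    The full method quantizes feature vectors per voxel and runs the scheme twice, first to isolate the lung, then to flag candidate nodules inside it.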

  20. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model based on DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are simultaneously tuned in an adaptive process; (2) the adaptive algorithm uses QPSO for goal-driven progress, faster operation, and flexibility with respect to data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the basic DNA computing algorithm. PMID:23935409
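    A minimal QPSO sketch is shown below, minimizing a simple sphere function; in the paper QPSO tunes the DNA computing parameters instead, and the swarm settings here (30 particles, beta = 0.75) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x ** 2, axis=-1)

n, d, iters, beta = 30, 5, 200, 0.75
x = rng.uniform(-10, 10, (n, d))
pbest = x.copy()                 # per-particle best positions
pbest_val = sphere(pbest)

for _ in range(iters):
    gbest = pbest[np.argmin(pbest_val)]
    mbest = pbest.mean(axis=0)                  # mean best position
    phi = rng.random((n, d))
    p = phi * pbest + (1 - phi) * gbest         # local attractor
    u = rng.random((n, d))
    sign = np.where(rng.random((n, d)) < 0.5, 1.0, -1.0)
    # QPSO position update: sample around the attractor with a spread
    # proportional to the distance from the mean best position.
    x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
    val = sphere(x)
    improved = val < pbest_val
    pbest[improved] = x[improved]
    pbest_val = np.minimum(val, pbest_val)

best = pbest_val.min()
```

    Unlike classical PSO, there is no velocity term; the quantum-behaved update needs only the attractor and a contraction-expansion coefficient (beta), which is what makes it attractive for tuning another algorithm's parameters.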

  1. Cloud identification using genetic algorithms and massively parallel computation

    NASA Technical Reports Server (NTRS)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

    As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, those for similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud classes were used although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). If one is willing to run the experiment several times (say, 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user
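    The GA machinery itself (selection, crossover, mutation) can be sketched on a toy problem; the cloud-identification fitness over AVHRR spectra is replaced here by a bit-counting ("OneMax") objective, and all parameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
pop_size, n_bits, gens, p_mut = 40, 32, 60, 0.02
pop = rng.integers(0, 2, (pop_size, n_bits))

def fitness(pop):
    """Toy objective: number of 1-bits per individual."""
    return pop.sum(axis=1)

for _ in range(gens):
    f = fitness(pop)
    # Binary tournament selection of parents.
    i, j = rng.integers(0, pop_size, (2, pop_size))
    parents = np.where((f[i] >= f[j])[:, None], pop[i], pop[j])
    # One-point crossover between consecutive parent pairs.
    cut = rng.integers(1, n_bits, pop_size // 2)
    children = parents.copy()
    for k, c in enumerate(cut):
        a, b = 2 * k, 2 * k + 1
        children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:]
    # Bit-flip mutation.
    flip = rng.random((pop_size, n_bits)) < p_mut
    pop = np.where(flip, 1 - children, children)

best = fitness(pop).max()
```

    In the massively parallel setting of the grant, each SIMD processor would hold a subpopulation and exchange migrants with shared breeding pools; the sequential loop above is the per-subpopulation kernel.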

  2. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
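    The exponentially weighted recursive estimation at the heart of the proposed scheme can be sketched for the simplest scalar case: plain exponentially weighted RLS tracking a drifting AR(1) coefficient. The kernelized TARMA machinery is omitted, and the model and noise levels below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 4000
# Slowly time-varying AR(1) coefficient to be tracked.
a_true = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(T) / T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true[t] * y[t - 1] + 0.1 * rng.standard_normal()

lam = 0.98                 # forgetting factor (exponential weighting)
P, a_hat = 1e3, 0.0
est = np.zeros(T)
for t in range(1, T):
    phi = y[t - 1]                        # regressor (previous output)
    k = P * phi / (lam + phi * P * phi)   # gain
    a_hat += k * (y[t] - phi * a_hat)     # update on prediction error
    P = (P - k * phi * P) / lam           # covariance with forgetting
    est[t] = a_hat

track_err = np.mean(np.abs(est[500:] - a_true[500:]))
```

    The forgetting factor trades tracking lag against estimation variance; the paper's kernel expansion plays the complementary role of keeping the per-step cost fixed while the model itself is time-varying.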

  3. How to think about your drink: Action-identification and the relation between mindfulness and dyscontrolled drinking.

    PubMed

    Schellhas, Laura; Ostafin, Brian D; Palfai, Tibor P; de Jong, Peter J

    2016-05-01

    Cross-sectional and intervention research have shown that mindfulness is inversely associated with difficulties in controlling alcohol use. However, little is known regarding the mechanisms through which mindfulness is related to increased control over drinking. One potential mechanism consists of the way individuals represent their drinking behaviour. Action identification theory proposes that self-control of behaviour is improved by shifting from high-level representations regarding the meaning of a behaviour to lower-level representations regarding "how-to" aspects of a behaviour. Because mindfulness involves present-moment awareness, it may help to facilitate such shifts. We hypothesized that an inverse relation between mindfulness and dyscontrolled drinking would be partially accounted for by the way individuals mentally represent their drinking behaviour, i.e., reduced levels of high-level action identification and increased levels of low-level action identification. One hundred and twenty-five undergraduate psychology students completed self-report measures of mindful awareness, action identification of alcohol use, and difficulty in controlling alcohol use. Results supported the hypothesis that high-level action identification partially mediates the relation between mindfulness and dyscontrolled drinking but did not support a mediating role for low-level action identification. These results suggest that mindfulness can improve self-control of alcohol by changing the way we think about our drinking behaviour. Copyright © 2016. Published by Elsevier Ltd.

  4. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    NASA Astrophysics Data System (ADS)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
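    The propagating-shell construction itself is specific to the paper, but the underlying notion of an optimal threshold computed from a histogram can be illustrated with the classic Otsu criterion (used here only as a stand-in; the authors' threshold criterion may differ):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's threshold: pick the cut that maximizes the
    between-class variance of the two resulting groups."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()                     # normalized histogram
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t
```

    The shell in the DT level set serves precisely to keep the object/background ratio in the histogram near one, where such a data-driven threshold agrees with the theoretical one.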

  5. Village-Level Identification of Nitrate Sources: Collaboration of Experts and Local Population in Benin, Africa

    NASA Astrophysics Data System (ADS)

    Crane, P.; Silliman, S. E.; Boukari, M.; Atoro, I.; Azonsi, F.

    2005-12-01

    Deteriorating groundwater quality, as represented by high nitrates, in the Colline province of Benin, West Africa, was identified by the Benin national water agency, Direction de l'Hydraulique. For unknown reasons, the Colline province had consistently higher nitrate levels than any other region of the country. In an effort to address this water quality issue, a collaborative team was created that incorporated professionals from the Universite d'Abomey-Calavi (Benin), the University of Notre Dame (USA), Direction de l'Hydraulique (a government water agency in Benin), Centre Afrika Obota (an educational NGO in Benin), and the local population of the village of Adourekoman. The goals of the project were to: (i) identify the source of nitrates, (ii) test field techniques for long-term, local monitoring, and (iii) identify possible solutions to the high levels of groundwater nitrates. In order to accomplish these goals, the following methods were utilized: regional sampling of groundwater quality, field methods that allowed the local population to regularly monitor village groundwater quality, isotopic analysis, and sociological methods of surveys, focus groups, and observations. It is through the combination of these multi-disciplinary methods that all three goals were successfully addressed, leading to preliminary identification of the sources of nitrates in the village of Adourekoman, confirmation of the utility of the field techniques, and an initial assessment of possible solutions to the contamination problem.

  6. Parallel Computing for the Computed-Tomography Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2008-01-01

    This software computes the tomographic reconstruction of spatial-spectral data from raw detector images of the Computed-Tomography Imaging Spectrometer (CTIS), which enables transient-level, multi-spectral imaging by capturing spatial and spectral information in a single snapshot.

  7. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
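    The paper's own screening algorithm is not reproduced here, but a standard inexpensive screen with a similar cost profile (roughly 10 model evaluations per parameter) is the Morris elementary-effects method; the sketch below uses illustrative names and settings:

```python
import numpy as np

def morris_screening(model, p, trajectories=10, delta=0.5, seed=0):
    """Morris elementary-effects screening.

    Returns the mean absolute elementary effect per parameter; large
    values flag informative parameters, near-zero values flag
    noninformative ones. Cost: trajectories * (p + 1) evaluations,
    i.e. roughly 10x the number of parameters for 10 trajectories.
    """
    rng = np.random.default_rng(seed)
    mu_star = np.zeros(p)
    for _ in range(trajectories):
        x = rng.uniform(0, 1 - delta, size=p)   # random base point
        y = model(x)
        for j in rng.permutation(p):            # perturb one dim at a time
            x2 = x.copy()
            x2[j] += delta
            y2 = model(x2)
            mu_star[j] += abs(y2 - y) / delta   # elementary effect
            x, y = x2, y2
    return mu_star / trajectories
```

    Parameters whose mean effect is negligible can then be fixed before calibration or Sobol' analysis, which is the source of the evaluation savings reported above.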

  8. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  9. Computing technology in the 1980's. [computers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  10. Identification of human microRNA targets from isolated argonaute protein complexes.

    PubMed

    Beitzinger, Michaela; Peters, Lasse; Zhu, Jia Yun; Kremmer, Elisabeth; Meister, Gunter

    2007-06-01

    MicroRNAs (miRNAs) constitute a class of small non-coding RNAs that regulate gene expression on the level of translation and/or mRNA stability. Mammalian miRNAs associate with members of the Argonaute (Ago) protein family and bind to partially complementary sequences in the 3' untranslated region (UTR) of specific target mRNAs. Computer algorithms based on factors such as free binding energy or sequence conservation have been used to predict miRNA target mRNAs. Based on such predictions, up to one third of all mammalian mRNAs seem to be under miRNA regulation. However, due to the low degree of complementarity between the miRNA and its target, such computer programs are often imprecise and therefore not very reliable. Here we report the first biochemical identification approach of miRNA targets from human cells. Using highly specific monoclonal antibodies against members of the Ago protein family, we co-immunoprecipitate Ago-bound mRNAs and identify them by cloning. Interestingly, most of the identified targets are also predicted by different computer programs. Moreover, we randomly analyzed six different target candidates and were able to experimentally validate five as miRNA targets. Our data clearly indicate that miRNA targets can be experimentally identified from Ago complexes and therefore provide a new tool to directly analyze miRNA function.

  11. A grass molecular identification system for forensic botany: a critical evaluation of the strengths and limitations.

    PubMed

    Ward, Jodie; Gilmore, Simon R; Robertson, James; Peakall, Rod

    2009-11-01

    Plant material is frequently encountered in criminal investigations but often overlooked as potential evidence. We designed a DNA-based molecular identification system for 100 Australian grasses that consisted of a series of polymerase chain reaction assays that enabled the progressive identification of grasses to different taxonomic levels. The identification system was based on DNA sequence variation at four chloroplast and two mitochondrial loci. Seventeen informative indels and 68 single-nucleotide polymorphisms were utilized as molecular markers for subfamily to species-level identification. To identify an unknown sample to subfamily level required a minimum of four markers or nine markers for species identification. The accuracy of the system was confirmed by blind tests. We have demonstrated "proof of concept" of a molecular identification system for trace botanical samples. Our evaluation suggests that the adoption of a system that combines this approach with DNA sequencing could assist the morphological identification of grasses found as forensic evidence.

  12. Confidence assignment for mass spectrometry based peptide identifications via the extreme value distribution.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2016-09-01

    There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
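    The essential extreme value computation can be sketched independently of the RAId implementation: fit a Gumbel (Type-I extreme value) distribution to maxima of null or decoy search scores by the method of moments, then convert a candidate score into a p-value. This is a generic illustration, not the authors' parameter-estimation algorithm:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel(null_max_scores):
    """Method-of-moments fit of a Gumbel distribution to a sample
    of maximum scores obtained from null (decoy) searches."""
    m = statistics.fmean(null_max_scores)
    s = statistics.stdev(null_max_scores)
    beta = s * math.sqrt(6) / math.pi   # scale from the std. deviation
    mu = m - EULER_GAMMA * beta         # location from the mean
    return mu, beta

def gumbel_pvalue(score, mu, beta):
    """P(max null score >= score) = 1 - exp(-exp(-(score - mu)/beta))."""
    return 1.0 - math.exp(-math.exp(-(score - mu) / beta))
```

    The appeal of such a parametric form is exactly what the abstract notes: once mu and beta are estimated accurately, significance can be assigned without searching large random databases.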

  13. Learning about Bird Species on the Primary Level

    ERIC Educational Resources Information Center

    Randler, Christoph

    2009-01-01

    Animal species identification is often emphasized as a basic prerequisite for an understanding of ecology, because ecological interactions are based on interactions between species, at least as it is taught at the school level. Therefore, training identification skills or using identification books seems a worthwhile task in biology education, and…

  14. Human-Computer Interaction with Medical Decisions Support Systems

    NASA Technical Reports Server (NTRS)

    Adolf, Jurine A.; Holden, Kritina L.

    1994-01-01

    Decision Support Systems (DSSs) have been available to medical diagnosticians for some time, yet their acceptance and use have not increased with advances in technology and availability of DSS tools. Medical DSSs will be necessary on future long duration space missions, because access to medical resources and personnel will be limited. Human-Computer Interaction (HCI) experts at NASA's Human Factors and Ergonomics Laboratory (HFEL) have been working toward understanding how humans use DSSs, with the goal of being able to identify and solve the problems associated with these systems. Work to date consists of identification of HCI research areas, development of a decision making model, and completion of two experiments dealing with 'anchoring'. Anchoring is a phenomenon in which the decision maker latches on to a starting point and does not make sufficient adjustments when new data are presented. HFEL personnel have replicated a well-known anchoring experiment and have investigated the effects of user level of knowledge. Future work includes further experimentation on level of knowledge, confidence in the source of information and sequential decision making.

  15. DEVELOPMENT OF COMPUTATIONAL TOOLS FOR OPTIMAL IDENTIFICATION OF BIOLOGICAL NETWORKS

    EPA Science Inventory

    Following the theoretical analysis and computer simulations, the next step for the development of SNIP will be a proof-of-principle laboratory application. Specifically, we have obtained a synthetic transcriptional cascade (harbored in Escherichia coli...

  16. Undergraduate computational physics projects on quantum computing

    NASA Astrophysics Data System (ADS)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
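    As an example of the kind of project described, Grover's search can be simulated directly on the state vector with a few lines of NumPy (a generic textbook sketch, not the article's own code):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Statevector simulation of Grover's search for one marked item.

    Returns the measurement probability of each basis state after the
    optimal number of iterations, about (pi/4) * sqrt(N).
    """
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))       # uniform superposition (H on all qubits)
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        psi[marked] *= -1                  # oracle: phase-flip the marked state
        psi = 2 * psi.mean() - psi         # diffusion: inversion about the mean
    return np.abs(psi) ** 2                # measurement probabilities
```

    Because the full state vector has 2^n amplitudes, such simulations run comfortably on a laptop only up to a couple of dozen qubits, which is consistent with the article's note about typical desktop hardware.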

  17. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of Earth radiation budget from satellite measurements requires the identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. These AVHRR GAC pixels are then attached to corresponding ERBE pixels and the results are sorted into scene identification probability matrices. These scene intercomparisons show that there generally is a higher tendency for underestimation of cloudiness over ocean at high cloud amounts, e.g., mostly cloudy instead of overcast, partly cloudy instead of mostly cloudy, for the ERBE relative to the AVHRR results. Reasons for this are explained. Preliminary estimates of the errors of exitances due to scene misidentification demonstrate the high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations have reached maximum values of more than 12% of the respective exitances.

  18. Framework for Computer Assisted Instruction Courseware: A Case Study.

    ERIC Educational Resources Information Center

    Betlach, Judith A.

    1987-01-01

    Systematically investigates, defines, and organizes variables related to production of internally designed and implemented computer assisted instruction (CAI) courseware: special needs of users; costs; identification and definition of realistic training needs; CAI definition and design methodology; hardware and software requirements; and general…

  19. Sigma: Strain-level inference of genomes from metagenomic analysis for biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Tae-Hyuk; Chai, Juanjuan; Pan, Chongle

    Motivation: Metagenomic sequencing of clinical samples provides a promising technique for direct pathogen detection and characterization in biosurveillance. Taxonomic analysis at the strain level can be used to resolve serotypes of a pathogen. Sigma was developed for strain-level identification and quantification of pathogens using their reference genomes based on metagenomic analysis. Results: Sigma provides not only accurate strain-level inferences, but also three unique capabilities: (i) Sigma quantifies the statistical uncertainty of its inferences, which includes hypothesis testing of identified genomes and confidence interval estimation of their relative abundances; (ii) Sigma enables strain variant calling by assigning metagenomic reads to their most likely reference genomes; and (iii) Sigma supports parallel computing for fast analysis of large datasets. The algorithm's performance was evaluated using simulated mock communities and fecal samples with spike-in pathogen strains. Availability and Implementation: Sigma was implemented in C++ with source codes and binaries freely available at http://sigma.omicsbio.org.
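    The read-assignment step described in capability (ii) can be sketched with a simple i.i.d. sequencing-error likelihood; the error rate and interface below are illustrative assumptions, not Sigma's actual model:

```python
import math

def assign_read(mismatches_per_genome, read_len, err=0.01):
    """Assign a read to its most likely reference genome.

    mismatches_per_genome: dict mapping genome name -> mismatch count
    for this read's best alignment to that genome.
    Likelihood model: each base is wrong with probability `err`, and a
    wrong base matches any one of the 3 other bases uniformly, so
    P(read | genome) = (err/3)^m * (1-err)^(L-m) for m mismatches.
    """
    best, best_ll = None, -math.inf
    for genome, m in mismatches_per_genome.items():
        ll = m * math.log(err / 3) + (read_len - m) * math.log(1 - err)
        if ll > best_ll:
            best, best_ll = genome, ll
    return best
```

    Under this kind of model, fewer mismatches always wins for a fixed read length, which is what makes maximum-likelihood assignment useful for resolving closely related strains.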

  20. Sigma: Strain-level inference of genomes from metagenomic analysis for biosurveillance

    DOE PAGES

    Ahn, Tae-Hyuk; Chai, Juanjuan; Pan, Chongle

    2014-09-29

    Motivation: Metagenomic sequencing of clinical samples provides a promising technique for direct pathogen detection and characterization in biosurveillance. Taxonomic analysis at the strain level can be used to resolve serotypes of a pathogen. Sigma was developed for strain-level identification and quantification of pathogens using their reference genomes based on metagenomic analysis. Results: Sigma provides not only accurate strain-level inferences, but also three unique capabilities: (i) Sigma quantifies the statistical uncertainty of its inferences, which includes hypothesis testing of identified genomes and confidence interval estimation of their relative abundances; (ii) Sigma enables strain variant calling by assigning metagenomic reads to their most likely reference genomes; and (iii) Sigma supports parallel computing for fast analysis of large datasets. The algorithm's performance was evaluated using simulated mock communities and fecal samples with spike-in pathogen strains. Availability and Implementation: Sigma was implemented in C++ with source codes and binaries freely available at http://sigma.omicsbio.org.