Science.gov

Sample records for level computational identification

  1. MODEL IDENTIFICATION AND COMPUTER ALGEBRA.

    PubMed

    Bollen, Kenneth A; Bauldry, Shawn

    2010-10-01

    Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice, analysts rely on empirical checks that assess the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of rank, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and is a useful complement to current methods. PMID:21769158
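
    The symbolic workflow the record describes can be illustrated with any general-purpose CAS. The following sketch (illustrative only, not the authors' code) uses Python's SymPy to check global identification of a hypothetical one-factor, three-indicator model by solving the implied covariance equations for the parameters in terms of observable moments:

      # Sketch: identification check for a toy one-factor model with three
      # indicators; the loading l1 is fixed to 1 to set the latent scale.
      # Implied moments: s_ii = l_i^2*phi + t_i and s_ij = l_i*l_j*phi.
      import sympy as sp

      l2, l3, phi, t1, t2, t3 = sp.symbols('l2 l3 phi t1 t2 t3', positive=True)
      s11, s22, s33, s12, s13, s23 = sp.symbols(
          's11 s22 s33 s12 s13 s23', positive=True)

      eqs = [
          sp.Eq(s11, phi + t1),              # l1 = 1
          sp.Eq(s22, l2**2 * phi + t2),
          sp.Eq(s33, l3**2 * phi + t3),
          sp.Eq(s12, l2 * phi),
          sp.Eq(s13, l3 * phi),
          sp.Eq(s23, l2 * l3 * phi),
      ]
      sol = sp.solve(eqs, [l2, l3, phi, t1, t2, t3], dict=True)
      print(sol)  # a single explicit solution => the model is identified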

  2. [Identification of the Pseudomonas genus bacteria by computer analysis].

    PubMed

    Kotsofliak, O I; Reva, O N; Kiprianova, E A; Smirnov, V V

    2003-01-01

    A computer program for the simplified phenotypic identification of Pseudomonas has been developed. Information on the 66 species included in the current Pseudomonas genus, characterized by 113 tests, was accumulated in a database. The identification key is available in interactive mode at http://www.imv.kiev.ua/PsmIK/default.htm. The program was used for the identification of 46 Pseudomonas strains isolated from rhizosphere. For 23 further strains that conventional techniques failed to identify, the level of similarity was 67-74%, suggesting that they may represent new Pseudomonas species. PMID:15077543

  3. A method for identification of vertebral level.

    PubMed

    Redfern, R M; Smith, E T

    1986-05-01

    A method of spinal level marking applicable particularly for use in thoracolumbar posterior spinal operations is described. The use of patent blue V dye in this procedure is discussed in a consecutive series of over 100 cases. No serious adverse effects were observed. The technique ensures accurate identification of the spinal level and helps to minimize anaesthetic time. PMID:3729267

  4. Computer-assisted identification of anaerobic bacteria.

    PubMed Central

    Kelley, R W; Kellogg, S T

    1978-01-01

    A computer program was developed to identify anaerobic bacteria by using simultaneous pattern recognition via a Bayesian probabilistic model. The system is intended for use as a rapid, precise, and reproducible aid in the identification of unknown isolates. The program operates on a database of 28 genera comprising 238 species of anaerobic bacteria that can be separated by the program. Input to the program consists of biochemical and gas chromatographic test results in binary format. The system is flexible and yields outputs of: (i) most probable species, (ii) significant test results conflicting with established data, and (iii) differential tests of significance for missing test results. PMID:345970
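
    The probabilistic model described here is, at its core, Bayes' rule applied to a matrix of per-species test-positivity rates. A minimal sketch under that assumption follows; the taxa and rates are hypothetical stand-ins for the program's 238-species database:

      # Sketch: Bayesian identification from binary biochemical test results,
      # reporting the most probable species and any conflicting test results.
      import numpy as np

      species = ['Taxon A', 'Taxon B', 'Taxon C']     # hypothetical
      p = np.array([[0.99, 0.10, 0.85],               # P(test positive | species)
                    [0.05, 0.95, 0.40],
                    [0.90, 0.02, 0.15]])

      def identify(results):
          r = np.asarray(results)
          # Tests are assumed conditionally independent given the species.
          like = np.prod(np.where(r == 1, p, 1.0 - p), axis=1)
          post = like / like.sum()                    # uniform prior
          best = int(post.argmax())
          conflicts = [t for t in range(p.shape[1])
                       if (p[best, t] if r[t] else 1 - p[best, t]) < 0.05]
          return species[best], round(float(post[best]), 3), conflicts

      print(identify([1, 0, 1]))  # -> most probable species, probability, conflicts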

  5. Personalized identification of abdominal wall hernia meshes on computed tomography.

    PubMed

    Pham, Tuan D; Le, Dinh T P; Xu, Jinwei; Nguyen, Duc T; Martindale, Robert G; Deveney, Clifford W

    2014-01-01

    An abdominal wall hernia is a protrusion of the intestine through an opening or area of weakness in the abdominal wall. Correct pre-operative identification of abdominal wall hernia meshes could help surgeons adjust the surgical plan to meet the expected difficulty and morbidity of operating through or removing the previous mesh. First, we present herein for the first time the application of image analysis for automated identification of hernia meshes. Second, we discuss the novel development of a new entropy-based image texture feature using geostatistics and indicator kriging. Third, we seek to enhance the hernia mesh identification by combining the new texture feature with the gray-level co-occurrence matrix feature of the image. The two features can characterize complementary information of anatomic details of the abdominal hernia wall and its mesh on computed tomography. Experimental results have demonstrated the effectiveness of the proposed approach. The new computational tool has potential for personalized mesh identification which can assist surgeons in the diagnosis and repair of complex abdominal wall hernias. PMID:24184112

  6. A new approach for fault identification in computer networks

    NASA Astrophysics Data System (ADS)

    Zhao, Dong; Wang, Tao

    2004-04-01

    Effective management of computer networks has become a more and more difficult job because of the rapid development of network systems. Fault identification is the task of finding where the problem in the network lies and what it is. Data mining generally refers to the process of extracting models from large stores of data, and data mining techniques can assist in the fault identification task. Existing approaches to fault identification are reviewed and a new approach is proposed. The new approach improves on the MSDD algorithm but requires more computation, so additional techniques are used to increase its efficiency.

  7. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer-aided procedure was developed which provides for the identification of boiler transfer functions using frequency response data. The method uses the frequency response data to obtain satisfactory transfer functions for both high and low vapor exit quality data.

  8. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1982-01-01

    Chip-level modeling techniques in the evaluation of fault tolerant systems were researched. A fault tolerant computer was modeled. An efficient approach to functional fault simulation was developed. Simulation software was also developed.

  9. Multi-level RF identification system

    DOEpatents

    Steele, Kerry D.; Anderson, Gordon A.; Gilbert, Ronald W.

    2004-07-20

    A radio frequency identification system having a radio frequency transceiver for generating a continuous wave RF interrogation signal that impinges upon an RF identification tag. An oscillation circuit in the RF identification tag modulates the interrogation signal with a subcarrier of a predetermined frequency and reflects the modulated signal back to the transmitting interrogator. The interrogator recovers and analyzes the subcarrier signal and determines its frequency. The interrogator generates an output indicative of the subcarrier frequency, thereby identifying the responding RFID tag as one of a "class" of RFID tags configured to respond with a subcarrier signal of a predetermined frequency.
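
    On the interrogator side, recovering the subcarrier and determining its frequency amounts to peak-picking a spectrum. A brief sketch of that step (the sample rate, class frequencies, and signal are all hypothetical, not taken from the patent):

      # Sketch: identify an RFID tag class by the dominant subcarrier frequency
      # in the demodulated backscatter signal.
      import numpy as np

      FS = 1_000_000                                   # samples per second
      CLASSES = {32_000: 'class A', 64_000: 'class B', 128_000: 'class C'}

      def classify(baseband):
          spectrum = np.abs(np.fft.rfft(baseband))
          freqs = np.fft.rfftfreq(len(baseband), d=1 / FS)
          peak = freqs[spectrum[1:].argmax() + 1]      # skip the DC bin
          return min(CLASSES, key=lambda f: abs(f - peak)), peak

      t = np.arange(4096) / FS                         # simulated 64 kHz return
      sig = np.sin(2 * np.pi * 64_000 * t) + 0.5 * np.random.randn(t.size)
      nearest, peak = classify(sig)
      print(CLASSES[nearest], f'(spectral peak at {peak:.0f} Hz)')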

  10. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using forced degradation studies. Such methods do not readily provide information on the intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  11. Computational Methods for Protein Identification from Mass Spectrometry Data

    PubMed Central

    McHugh, Leo; Arthur, Jonathan W

    2008-01-01

    Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology. PMID:18463710
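
    At their simplest, the database-search scorers surveyed in such reviews count observed fragment masses that match a candidate peptide's theoretical ions within a tolerance. A toy sketch of that idea (the residue masses are standard monoisotopic values; the peptide database and peak list are invented):

      # Sketch: naive peptide scoring - count observed fragment masses matching
      # theoretical b-ion masses within a tolerance (in daltons).
      MONO = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
              'V': 99.06841, 'L': 113.08406, 'K': 128.09496, 'R': 156.10111}
      PROTON = 1.00728

      def b_ions(peptide):
          masses, total = [], PROTON
          for aa in peptide[:-1]:          # b-ions omit the C-terminal residue
              total += MONO[aa]
              masses.append(total)
          return masses

      def score(peptide, observed, tol=0.5):
          theory = b_ions(peptide)
          return sum(any(abs(o - t) <= tol for t in theory) for o in observed)

      candidates = ['AGSLK', 'VPKGR', 'SLAGK']          # toy "database"
      peaks = [72.04, 129.07, 216.10, 329.18]           # toy spectrum
      print(max(candidates, key=lambda pep: score(pep, peaks)))  # -> AGSLK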

  12. Multi-level Programming Paradigm for Extreme Computing

    NASA Astrophysics Data System (ADS)

    Petiton, S.; Sato, M.; Emad, N.; Calvin, C.; Tsuji, M.; Dandouna, M.

    2014-06-01

    In order to propose a framework and programming paradigms for post-petascale computing, on the road to exascale computing and beyond, we introduced new languages, associated with a hierarchical multi-level programming paradigm, allowing scientific end-users and developers to program highly hierarchical architectures designed for extreme computing. In this paper, we explain the interest of such a hierarchical multi-level programming paradigm for extreme computing and its suitability for several large computational science applications, such as linear algebra solvers used for reactor core physics. We describe the YML language and framework, which allow graphs of parallel components to be described, developed using a PGAS-like language such as XMP, and then scheduled and computed on supercomputers. We then propose experiments on supercomputers (such as the "K" and "Hooper" machines) with the hybrid method MERAM (Multiple Explicitly Restarted Arnoldi Method) as a case study for iterative methods manipulating sparse matrices, and the block Gauss-Jordan method as a case study for direct methods manipulating dense matrices. We conclude by proposing evolutions of this programming paradigm.

  13. Computational phosphoproteomics: From identification to localization

    PubMed Central

    Lee, Dave C H; Jones, Andrew R; Hubbard, Simon J

    2015-01-01

    Analysis of the phosphoproteome by MS has become a key technology for the characterization of dynamic regulatory processes in the cell, since kinase and phosphatase action underlie many major biological functions. However, the addition of a phosphate group to a suitable side chain often confounds informatic analysis by generating product ion spectra that are more difficult to interpret (and consequently identify) relative to unmodified peptides. Collectively, these challenges have motivated bioinformaticians to create novel software tools and pipelines to assist in the identification of phosphopeptides in proteomic mixtures, and help pinpoint or “localize” the most likely site of modification in cases where there is ambiguity. Here we review the challenges to be met and the informatics solutions available to address them for phosphoproteomic analysis, as well as highlighting the difficulties associated with using them and the implications for data standards. PMID:25475148

  14. Crop identification and area estimation by computer-aided analysis of Landsat data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Hixson, M. M.; Davis, B. J.; Etheridge, J. B.

    1977-01-01

    This report describes the results of a study involving the use of computer-aided analysis techniques applied to Landsat MSS data for identification and area estimation of winter wheat in Kansas and corn and soybeans in Indiana. Key elements of the approach included use of aerial photography for classifier training, stratification of Landsat data and extension of training statistics to areas without training data, and classification of a systematic sample of pixels from each county. Major results and conclusions are: (1) Landsat data was adequate for accurate identification and area estimation of winter wheat in Kansas, but corn and soybean estimates for Indiana were less accurate; (2) computer-aided analysis techniques can be effectively used to extract crop identification information from Landsat MSS data; and (3) systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county as well as district and state levels.

  15. Human operator identification model and related computer programs

    NASA Technical Reports Server (NTRS)

    Kessler, K. M.; Mohr, J. N.

    1978-01-01

    Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state variable system representation to the frequency-domain transfer function representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.

  16. Computational identification of 69 retroposons in Arabidopsis.

    PubMed

    Zhang, Yujun; Wu, Yongrui; Liu, Yilei; Han, Bin

    2005-06-01

    Retroposition is a shotgun strategy of the genome to achieve evolutionary diversity by mixing and matching coding sequences with novel regulatory elements. We have identified 69 retroposons in the Arabidopsis (Arabidopsis thaliana) genome by a computational approach. Most of them were derivatives of mature mRNAs, and 20 genes contained relics of the reverse transcription process, such as truncations, deletions, and extra sequence additions. Of them, 22 are processed pseudogenes, and 52 genes are likely to be actively transcribed, especially in tissues from apical meristems (roots and flowers). Functional compositions of these retroposon parental genes imply that not the mRNA itself but its expression in gamete cells defines a suitable template for retroposition. The presence/absence patterns of retroposons can be used as cladistic markers for biogeographic research. Effects of humans and of the Mediterranean Pleistocene refugia on Arabidopsis biogeographic distributions were revealed on the basis of two recent retroposons (At1g61410 and At5g52090). An evolutionary rate of new gene creation by retroposition was calculated as 0.6 genes per million years. Retroposons can also be used as molecular fossils of parental gene expression in ancient times. Extensions of 3' untranslated regions for those expressed parental genes are revealed as a possible trend of plant transcriptome evolution. In addition, we report the first plant functional chimeric gene that adapts to intercompartmental transport by capturing two additional exons after retroposition. PMID:15923328

  17. Identification of Computational and Experimental Reduced-Order Models

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.

    2003-01-01

    The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
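
    The step from modal impulse responses to a state-space model is classically performed with the Eigensystem Realization Algorithm (ERA); a bare-bones single-input, single-output sketch on synthetic data (not the CFL3Dv6.0 pipeline itself) illustrates the transformation:

      # Sketch: minimal ERA - realize (A, B, C) from impulse-response samples.
      import numpy as np

      def era(h, order, rows=20, cols=20):
          # Hankel matrices assembled from the Markov parameters h[0], h[1], ...
          H0 = np.array([[h[i + j] for j in range(cols)] for i in range(rows)])
          H1 = np.array([[h[i + j + 1] for j in range(cols)] for i in range(rows)])
          U, s, Vt = np.linalg.svd(H0)
          Ur, Vr = U[:, :order], Vt[:order, :]
          Sr = np.diag(np.sqrt(s[:order]))
          Si = np.linalg.inv(Sr)
          A = Si @ Ur.T @ H1 @ Vr.T @ Si
          B = (Sr @ Vr)[:, :1]
          C = (Ur @ Sr)[:1, :]
          return A, B, C

      k = np.arange(60)                      # synthetic lightly damped mode
      h = (0.98 ** k) * np.sin(0.3 * k)      # stand-in for a computed response
      A, B, C = era(h[1:], order=2)          # h[0] is the direct term
      print(np.abs(np.linalg.eigvals(A)))    # recovered pole magnitudes (~0.98)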

  18. Computational Strategies for a System-Level Understanding of Metabolism

    PubMed Central

    Cazzaniga, Paolo; Damiani, Chiara; Besozzi, Daniela; Colombo, Riccardo; Nobile, Marco S.; Gaglio, Daniela; Pescini, Dario; Molinari, Sara; Mauri, Giancarlo; Alberghina, Lilia; Vanoni, Marco

    2014-01-01

    Cell metabolism is the biochemical machinery that provides energy and building blocks to sustain life. Understanding its fine regulation is of pivotal relevance in several fields, from metabolic engineering applications to the treatment of metabolic disorders and cancer. Sophisticated computational approaches are needed to unravel the complexity of metabolism. To this aim, a plethora of methods have been developed, yet it is generally hard to identify which computational strategy is most suited for the investigation of a specific aspect of metabolism. This review provides an up-to-date description of the computational methods available for the analysis of metabolic pathways, discussing their main advantages and drawbacks.  In particular, attention is devoted to the identification of the appropriate scale and level of accuracy in the reconstruction of metabolic networks, and to the inference of model structure and parameters, especially when dealing with a shortage of experimental measurements. The choice of the proper computational methods to derive in silico data is then addressed, including topological analyses, constraint-based modeling and simulation of the system dynamics. A description of some computational approaches to gain new biological knowledge or to formulate hypotheses is finally provided. PMID:25427076

  19. An Efficient Algorithm for Stiffness Identification of Truss Structures Through Distributed Local Computation

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Burgueño, R.; Elvin, N. G.

    2010-02-01

    This paper presents an efficient stiffness identification technique for truss structures based on distributed local computation. Sensor nodes on each element are assumed to collect strain data and communicate only with sensors on neighboring elements. This can significantly reduce the energy demand for data transmission and the complexity of transmission protocols, thus enabling a simplified wireless implementation. Element stiffness parameters are identified by simple low-order matrix inversion at a local level, which reduces the computational energy, allows for distributed computation and makes parallel data processing possible. The proposed method also permits addressing the problem of missing data or faulty sensors. Numerical examples, with and without missing data, are presented and the element stiffness parameters are accurately identified. The computation efficiency of the proposed method is n^2 times higher than that of previously proposed global damage identification methods.
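
    For a single truss bar, the low-order local inversion described here collapses to a one-parameter least-squares fit: the axial stiffness EA that best maps measured strain to member force. A sketch with synthetic measurements (all values hypothetical):

      # Sketch: local least-squares identification of one element's axial
      # stiffness EA from its strain history and the member force inferred
      # from neighboring sensors.
      import numpy as np

      rng = np.random.default_rng(0)
      EA_true = 2.1e8                                   # N, hypothetical bar
      strain = 1e-4 * rng.standard_normal(200)          # "measured" strains
      force = EA_true * strain + 50.0 * rng.standard_normal(200)  # noisy force

      EA_hat = (strain @ force) / (strain @ strain)     # least-squares estimate
      print(f'estimated EA = {EA_hat:.3e}  (true {EA_true:.3e})')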

  20. Biracial Identification, Familial Influence and Levels of Acculturation.

    ERIC Educational Resources Information Center

    Morales, Pamilla C.; Steward, Robbie

    The levels of acculturation of biracial and monoracial Hispanics were examined in college students to determine the level of family or community influence in defining racial identification for the biracial individual. The sample was composed of Hispanic undergraduate students currently enrolled at the University of Kansas. Survey packets were…

  1. An experimental modal testing/identification technique for personal computers

    NASA Technical Reports Server (NTRS)

    Roemer, Michael J.; Schlonski, Steven T.; Mook, D. Joseph

    1990-01-01

    A PC-based system for mode shape identification is evaluated. A time-domain modal identification procedure is utilized to identify the mode shapes of a beam apparatus from discrete time-domain measurements. The apparatus includes a cantilevered aluminum beam, four accelerometers, four low-pass filters, and the computer. The method combines an identification algorithm, the Eigensystem Realization Algorithm (ERA), with an estimation algorithm, the Minimum Model Error (MME) method. The identification ability of this combined algorithm is compared with that of ERA alone, a frequency-response-function technique, and an Euler-Bernoulli beam model. Detection of modal parameters and mode shapes by the PC-based time-domain system is shown to be accurate in an application with an aluminum beam, while mode shapes identified by the frequency-domain technique are not as accurate as predicted. The new method is shown to be significantly less sensitive to noise and poorly excited modes than other leading methods. The results support the use of time-domain identification systems for mode shape prediction.

  2. Computed tomographic identification of calcified optic nerve drusen

    SciTech Connect

    Ramirez, H.; Blatt, E.S.; Hibri, N.S.

    1983-07-01

    Four cases of optic disk drusen were accurately diagnosed with orbital computed tomography (CT). The radiologist should be aware of the characteristic CT finding of discrete calcification within an otherwise normal optic disk. This benign process is easily differentiated from lesions such as calcific neoplastic processes of the posterior globe. CT identification of optic disk drusen is essential in the evaluation of visual field defects, migraine-like headaches, and pseudopapilledema.

  3. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive...

  4. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive...

  5. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive...

  6. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive...

  7. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive...

  8. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent computer-vision-based localization and the strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluation of systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system alone, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485

  9. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations were performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for these problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  10. Phosphor-stimulated computed cephalometry: reliability of landmark identification.

    PubMed

    Lim, K F; Foong, K W

    1997-11-01

    The aim of this randomized, controlled, prospective study was to determine the reliability of computed lateral cephalometry (Fuji Medical Systems, Tokyo, Japan) in terms of landmark identification compared to conventional lateral cephalometry (CAWO, Schrobenhausen, Germany). To assess the reliability of landmark identification on lateral cephalographs, 20 computed images, taken at 30 per cent reduced radiation (70 kV, 15 mA, 0.35 s), were compared to 20 conventional images (70 kV, 15 mA, 0.5 s). The 40 lateral cephalographs were taken from 20 orthodontic patients at immediate post-treatment and 1 year after retention. The order and type of imaging was randomized. Five orthodontists identified eight skeletal, four dental and five soft tissue landmarks on each of the 40 films. The error of identification was analysed in the XY Cartesian co-ordinates following digitization. Skeletal landmarks exhibited characteristic dispersion with respect to the Cartesian co-ordinates. Root apices were more variable than crown tips. Soft tissue landmarks were more consistent in the X co-ordinate. Two-way ANOVA showed that there is no significant difference between the two imaging systems in either co-ordinate (P > 0.05). Moreover, the differences are generally small (< 0.5 mm) and are unlikely to be of clinical significance. Most of the variables attained statistical power of at least 0.8 in the X co-ordinate, while only the dental landmarks achieved statistical power of at least 0.78 in the Y co-ordinate. Based on the results of the study: (1) computed lateral cephalographs can be taken at 30 per cent radiation reduction compared to conventional lateral cephalographs; (2) each anatomical landmark exhibits its characteristic dispersion of error in both Cartesian co-ordinates; (3) there is no trend between the two imaging systems, with equivocal result, and none of the landmarks attained statistical significance when both raters and imaging systems are considered as factorial

  11. Postmortem computed tomography (PMCT) and disaster victim identification.

    PubMed

    Brough, A L; Morgan, B; Rutty, G N

    2015-09-01

    Radiography has been used for identification since 1927, and established a role in mass fatality investigations in 1949. More recently, postmortem computed tomography (PMCT) has been used for disaster victim identification (DVI). PMCT offers several advantages compared with fluoroscopy, plain film and dental X-rays, including speed and a reduction in the number of on-site personnel and imaging modalities required, making it potentially more efficient. However, there are limitations that inhibit the international adoption of PMCT into routine practice. One particular problem is that, because forensic radiology is a relatively new sub-speciality, there are no internationally established standards for image acquisition, image interpretation and archiving. This is reflected by the current INTERPOL DVI form, which does not contain a PMCT section. The DVI working group of the International Society of Forensic Radiology and Imaging supports the use of imaging in mass fatality response and has published positional statements in this area. This review will discuss forensic radiology, PMCT, and its role in disaster victim identification. PMID:26108152

  12. Computer Literacy for Teachers: Level 1.

    ERIC Educational Resources Information Center

    Oliver, Marvin E.

    This brief, non-technical narrative for teachers addresses the following questions: (1) What are computers? (2) How do they work? (3) What can they do? (4) Why should we care? (5) What do they have to do with reading instruction? and (6) What is the future of computers for teachers and students? Specific topics discussed include the development of…

  13. A Teaching Exercise for the Identification of Bacteria Using An Interactive Computer Program.

    ERIC Educational Resources Information Center

    Bryant, Trevor N.; Smith, John E.

    1979-01-01

    Describes an interactive Fortran computer program which provides an exercise in the identification of bacteria. Provides a way of enhancing a student's approach to systematic bacteriology and numerical identification procedures. (Author/MA)

  14. Identification of Cichlid Fishes from Lake Malawi Using Computer Vision

    PubMed Central

    Joo, Deokjin; Kwan, Ye-seul; Song, Jongwoo; Pinho, Catarina; Hey, Jody; Won, Yong-Jin

    2013-01-01

    Background: The explosively radiating evolution of cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at as many as 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in the shape of jaw and body. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automation of species identification of cichlids. Methodology/Principal Findings: Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated by the statistical classifiers Support Vector Machine and Random Forests. Both classifiers performed better when body shape information was added to the color and stripe features; body shape variables boosted the accuracy of classification by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, compared with a mere 42% success rate by human eyes. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. Conclusions: Computer vision showed notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids, although the information was not enough for errorless species identification. Our results indicate that there appears to be an unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species. PMID:24204918

  15. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1983-01-01

    Chip level modeling techniques, functional fault simulation, simulation software development, a more efficient, high level version of GSP, and a parallel architecture for functional simulation are discussed.

  16. Factors Influencing Exemplary Science Teachers' Levels of Computer Use

    ERIC Educational Resources Information Center

    Hakverdi, Meral; Dana, Thomas M.; Swain, Colleen

    2011-01-01

    The purpose of this study was to examine exemplary science teachers' use of technology in science instruction, factors influencing their level of computer use, their level of knowledge/skills in using specific computer applications for science instruction, their use of computer-related applications/tools during their instruction, and their…

  17. Sound source localization identification accuracy: Level and duration dependencies.

    PubMed

    Yost, William A

    2016-07-01

    Sound source localization accuracy for noises was measured for sources in the front azimuthal open field mainly as a function of overall noise level and duration. An identification procedure was used in which listeners identify which loudspeakers presented a sound. Noises were filtered and differed in bandwidth and center frequency. Sound source localization accuracy depended on the bandwidth of the stimuli, and for the narrow bandwidths, accuracy depended on the filter's center frequency. Sound source localization accuracy did not depend on overall level or duration. PMID:27475204

  18. Disaster victim identification: new applications for postmortem computed tomography.

    PubMed

    Blau, Soren; Robertson, Shelley; Johnstone, Marnie

    2008-07-01

    Mass fatalities can present the forensic anthropologist and forensic pathologist with a different set of challenges to those presented by a single fatality. To date, radiography has played an important role in the disaster victim identification (DVI) process. The aim of this paper is to highlight the benefits of applying computed tomography (CT) technology to the DVI process. The paper begins by reviewing the extent to which sophisticated imaging techniques, specifically CT, have been increasingly used to assist in the analysis of deceased individuals. A small-scale case study is then presented which describes aspects of the DVI process following a recent Australian aviation disaster involving two individuals. Having gridded the scene of the disaster, a total of 41 bags of heavily disrupted human remains were collected. A postmortem examination was subsequently undertaken. Analysis of the CT images of all body parts (n = 162) made it possible not only to identify and side differentially preserved skeletal elements which were anatomically unrecognizable in the heavily disrupted body masses, but also to observe and record useful identifying features such as surgical implants. In this case the role of the forensic anthropologist and CT technology were paramount in facilitating a quick identification, and subsequently, an effective and timely reconciliation, of body parts. Although this case study is small scale, it illustrates the enormous potential for CT imaging to complement the existing DVI process. PMID:18547358

  19. Computer program to predict aircraft noise levels

    NASA Technical Reports Server (NTRS)

    Clark, B. J.

    1981-01-01

    Methods developed at the NASA Lewis Research Center for predicting the noise contributions from various aircraft noise sources were programmed to predict aircraft noise levels either in flight or in ground tests. The noise sources include fan inlet and exhaust, jet, flap (for powered lift), core (combustor), turbine, and airframe. Noise propagation corrections are available for atmospheric attenuation, ground reflections, extra ground attenuation, and shielding. Outputs can include spectra, overall sound pressure level, perceived noise level, tone-weighted perceived noise level, and effective perceived noise level at locations specified by the user. Footprint contour coordinates and approximate footprint areas can also be calculated. Inputs and outputs can be in either System International or U.S. customary units. The subroutines for each noise source and propagation correction are described. A complete listing is given.

  20. Evaluation of new computer-enhanced identification program for microorganisms: adaptation of BioBASE for identification of members of the family Enterobacteriaceae.

    PubMed Central

    Miller, J M; Alachi, P

    1996-01-01

    We report the use of BioBASE, a computer-enhanced numerical identification software package, as a valuable aid for the rapid identification of unknown enteric bacilli when using conventional biochemicals. We compared BioBASE identification results with those of the Centers for Disease Control and Prevention's mainframe computer to determine the former's accuracy in identifying both common and rare unknown isolates of the family Enterobacteriaceae by using the same compiled data matrix. Of 293 enteric strains tested by BioBASE, 278 (94.9%) were correctly identified to the species level; 13 (4.4%) were assigned unacceptable or low discrimination profiles, but 8 of these (2.7%) were listed as the first choice; and 2 (0.7%) were not identified correctly because of their highly unusual biochemical profiles. The software is user friendly, rapid, and accurate and would be of value to any laboratory that uses conventional biochemicals. PMID:8748298
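
    The arithmetic behind such numerical identification packages is a normalized likelihood: the product of per-test probabilities for each taxon, rescaled so the scores sum to one. A sketch in that style (the mini-matrix and the 0.90 acceptability threshold are illustrative, not BioBASE's actual data or rules):

      # Sketch: normalized-likelihood identification from conventional
      # biochemical test results.
      matrix = {                        # P(test positive | taxon), hypothetical
          'Escherichia coli':      [0.99, 0.98, 0.01, 0.95],
          'Klebsiella pneumoniae': [0.97, 0.02, 0.90, 0.98],
          'Proteus mirabilis':     [0.02, 0.95, 0.90, 0.10],
      }

      def normalized_likelihoods(results):
          scores = {}
          for taxon, probs in matrix.items():
              like = 1.0
              for r, p in zip(results, probs):
                  like *= p if r else (1.0 - p)
              scores[taxon] = like
          total = sum(scores.values())
          return {t: s / total for t, s in scores.items()}

      ids = normalized_likelihoods([1, 0, 1, 1])
      best, score = max(ids.items(), key=lambda kv: kv[1])
      print(best, f'{score:.3f}',
            'acceptable' if score >= 0.90 else 'low discrimination')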

  1. Factors influencing exemplary science teachers' levels of computer use

    NASA Astrophysics Data System (ADS)

    Hakverdi, Meral

    This study examines exemplary science teachers' use of technology in science instruction, factors influencing their level of computer use, their level of knowledge/skills in using specific computer applications for science instruction, their use of computer-related applications/tools during their instruction, and their students' use of computer applications/tools in or for their science class. After a review of the relevant literature, certain variables were selected for analysis. These variables included personal self-efficacy in teaching with computers, outcome expectancy, pupil-control ideology, level of computer use, age, gender, teaching experience, personal computer use, professional computer use, and science teachers' level of knowledge/skills in using specific computer applications for science instruction. The sample for this study comprised middle and high school science teachers who received the Presidential Award for Excellence in Science Teaching (sponsored by the White House and the National Science Foundation) between the years 1997 and 2003 from all 50 states and U.S. territories. Award-winning science teachers were contacted about the survey via e-mail or letter with an enclosed return envelope. Of the 334 award-winning science teachers, usable responses were received from 92, a response rate of 27.5%. Analysis of the survey responses indicated that exemplary science teachers have a variety of knowledge/skills in using computer-related applications/tools. The most commonly used computer applications/tools are information retrieval via the Internet, presentation tools, online communication, digital cameras, and data collection probes. Results of the study revealed that students' use of technology in their science classroom is highly correlated with the frequency of their science teachers' use of computer applications/tools. The results of the multiple regression analysis revealed that personal self-efficacy related to

  2. 24 CFR 990.175 - Utilities expense level: Computation of the current consumption level.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...: Computation of the current consumption level. 990.175 Section 990.175 Housing and Urban Development... Calculating Formula Expenses § 990.175 Utilities expense level: Computation of the current consumption level. The current consumption level shall be the actual amount of each utility consumed during the...

  3. 24 CFR 990.175 - Utilities expense level: Computation of the current consumption level.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...: Computation of the current consumption level. 990.175 Section 990.175 Housing and Urban Development... Calculating Formula Expenses § 990.175 Utilities expense level: Computation of the current consumption level. The current consumption level shall be the actual amount of each utility consumed during the...

  4. 24 CFR 990.175 - Utilities expense level: Computation of the current consumption level.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...: Computation of the current consumption level. 990.175 Section 990.175 Housing and Urban Development... Calculating Formula Expenses § 990.175 Utilities expense level: Computation of the current consumption level. The current consumption level shall be the actual amount of each utility consumed during the...

  5. 24 CFR 990.180 - Utilities expense level: Computation of the rolling base consumption level.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...: Computation of the rolling base consumption level. 990.180 Section 990.180 Housing and Urban Development... Calculating Formula Expenses § 990.180 Utilities expense level: Computation of the rolling base consumption level. (a) General. (1) The rolling base consumption level (RBCL) shall be equal to the average...

  6. 24 CFR 990.180 - Utilities expense level: Computation of the rolling base consumption level.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...: Computation of the rolling base consumption level. 990.180 Section 990.180 Housing and Urban Development... Calculating Formula Expenses § 990.180 Utilities expense level: Computation of the rolling base consumption level. (a) General. (1) The rolling base consumption level (RBCL) shall be equal to the average...

  7. Computational Identification of Novel Genes: Current and Future Perspectives

    PubMed Central

    Klasberg, Steffen; Bitard-Feildel, Tristan; Mallet, Ludovic

    2016-01-01

    While it has long been thought that all genomic novelties are derived from existing material, many genes lacking homology to known genes were found in recent genome projects. Some of these novel genes were proposed to have evolved de novo, i.e., out of noncoding sequences, whereas some have been shown to follow a duplication and divergence process. Their discovery called for an extension of the historical hypotheses about gene origination. Besides the theoretical breakthrough, increasing evidence has accumulated that novel genes play important roles in evolutionary processes, including adaptation and speciation events. Different techniques are available to identify genes and classify them as novel. Their classification as novel is usually based on their similarity to known genes, or lack thereof, detected by comparative genomics or searches against databases. Computational approaches are further prime methods, which can be based on existing models or can leverage biological evidence from experiments. Identification of novel genes remains, however, a challenging task. With constant software and technology updates, no gold standard, and no available benchmark, the evaluation and characterization of genomic novelty is a vibrant field. In this review, the classical and state-of-the-art tools for gene prediction are introduced. The current methods for novel gene detection are presented; the methodological strategies and their limits are discussed along with perspective approaches for further studies. PMID:27493475

  8. Computational identification of obligatorily autocatalytic replicators embedded in metabolic networks

    PubMed Central

    Kun, Ádám; Papp, Balázs; Szathmáry, Eörs

    2008-01-01

    Background: If chemical A is necessary for the synthesis of more chemical A, then A has the power of replication (such systems are known as autocatalytic systems). We provide the first systems-level analysis searching for small-molecule autocatalytic components in the metabolisms of diverse organisms, including an inferred minimal metabolism. Results: We find that intermediary metabolism is invariably autocatalytic for ATP. Furthermore, we provide evidence for the existence of additional, organism-specific autocatalytic metabolites in the forms of coenzymes (NAD+, coenzyme A, tetrahydrofolate, quinones) and sugars. Although the enzymatic reactions of a number of autocatalytic cycles are present in most of the studied organisms, they display obligatorily autocatalytic behavior in only a few networks, hence demonstrating the need for a systems-level approach to identify metabolic replicators embedded in large networks. Conclusion: Metabolic replicators are apparently common and potentially both universal and ancestral: without their presence, kick-starting metabolic networks is impossible, even if all enzymes and genes are present in the same cell. Identification of metabolic replicators is also important for attempts to create synthetic cells, as some of these autocatalytic molecules will presumably need to be added to the system since, by definition, the system cannot synthesize them without their initial presence. PMID:18331628
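
    The systems-level test for an obligate metabolic replicator can be mimicked on a toy network with a network-expansion check: a metabolite such as ATP is flagged if no reaction producing it can ever fire unless some of it is already present. The reaction list below is a heavily lumped, hypothetical glycolysis, not the paper's reconstructions:

      # Sketch: detect an obligately autocatalytic metabolite by network expansion.
      reactions = [                                  # (substrates, products)
          ({'glucose', 'ATP'}, {'G6P', 'ADP'}),      # hexokinase: invests ATP
          ({'G6P'}, {'F6P'}),
          ({'F6P', 'ATP'}, {'FBP', 'ADP'}),          # PFK: invests ATP
          ({'FBP'}, {'PEP'}),                        # lumped lower glycolysis
          ({'PEP', 'ADP'}, {'pyruvate', 'ATP'}),     # pyruvate kinase: makes ATP
      ]

      def expand(seeds):
          """Fire every reaction whose substrates are available, to a fixed point."""
          avail, fired, changed = set(seeds), set(), True
          while changed:
              changed = False
              for i, (subs, prods) in enumerate(reactions):
                  if i not in fired and subs <= avail:
                      avail |= prods
                      fired.add(i)
                      changed = True
          return fired

      seeds = {'glucose', 'ADP'}                     # growth medium without ATP
      makers = {i for i, (_, prods) in enumerate(reactions) if 'ATP' in prods}
      if not makers & expand(seeds) and makers & expand(seeds | {'ATP'}):
          print('ATP is obligatorily autocatalytic in this toy network:')
          print('no ATP-producing reaction fires unless ATP is already present.')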

  9. 24 CFR 990.180 - Utilities expense level: Computation of the rolling base consumption level.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...: Computation of the rolling base consumption level. 990.180 Section 990.180 Housing and Urban Development... Calculating Formula Expenses § 990.180 Utilities expense level: Computation of the rolling base consumption... RBCL not to be comparable to the current year's consumption level. (c) Financial incentives. The...

  10. Identification of natural images and computer-generated graphics based on statistical and textural features.

    PubMed

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method performs excellently compared with some existing methods based only on statistical features or on other features. The method has great potential for practical identification of natural images and computer-generated graphics. PMID:25537575
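
    The classification stage (a 31-dimensional feature vector fed to LIBSVM) is standard supervised learning. A sketch of the same step using scikit-learn's LIBSVM-backed SVC, with random placeholder features standing in for the paper's statistical and textural measurements:

      # Sketch: SVM classification of natural vs computer-generated images from
      # precomputed feature vectors (placeholders, not the paper's features).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.standard_normal((400, 31))      # 400 images x 31 features
      y = np.repeat([0, 1], 200)              # 0 = natural, 1 = computer-generated
      X[y == 1] += 0.8                        # synthetic class separation

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
      clf.fit(X_tr, y_tr)
      print(f'held-out accuracy: {clf.score(X_te, y_te):.3f}')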

  11. Dysregulation in level of goal and action identification across psychological disorders.

    PubMed

    Watkins, Edward

    2011-03-01

    Goals, events, and actions can be mentally represented within a hierarchical framework that ranges from more abstract to more concrete levels of identification. A more abstract level of identification involves general, superordinate, and decontextualized mental representations that convey the meaning of goals, events, and actions, "why" an action is performed, and its purpose, ends, and consequences. A more concrete level of identification involves specific and subordinate mental representations that include contextual details of goals, events, and actions, and the specific "how" details of an action. This review considers three lines of evidence for considering that dysregulation of level of goal/action identification may be a transdiagnostic process. First, there is evidence that different levels of identification have distinct functional consequences and that in non-clinical samples level of goal/action identification appears to be regulated in a flexible and adaptive way to match the level of goal/action identification to circumstances. Second, there is evidence that level of goal/action identification causally influences symptoms and processes involved in psychological disorders, including emotional response, repetitive thought, impulsivity, problem solving and procrastination. Third, there is evidence that the level of goal/action identification is biased and/or dysregulated in certain psychological disorders, with a bias towards more abstract identification for negative events in depression, GAD, PTSD, and social anxiety. PMID:20579789

  12. A novel computational method for the identification of plant alternative splice sites.

    PubMed

    Cui, Ying; Han, Jiuqiang; Zhong, Dexing; Liu, Ruiling

    2013-02-01

    Alternative splicing (AS) increases protein diversity by generating multiple transcript isoforms from a single gene in higher eukaryotes. Up to 48% of plant genes exhibit alternative splicing, which has proven to be involved in some important plant functions such as the stress response. A hybrid feature extraction approach combining the position weight matrix (PWM) with the increment of diversity (ID) was proposed to represent the base conservative level (BCL) near splice sites and the similarity level of two datasets, respectively. Using the extracted features, a support vector machine (SVM) was applied to classify alternative and constitutive splice sites. With the proposed algorithm, 80.8% of donor sites and 85.4% of acceptor sites were correctly classified. It is anticipated that this novel computational method will prove promising for the identification of AS sites in plants. PMID:23313482
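
    The PWM half of the feature extraction scores how well a candidate site matches the base frequencies observed at each position around true splice sites. A toy scorer follows (the four-position count matrix is invented; real matrices are trained on aligned donor sites):

      # Sketch: position weight matrix (PWM) score for a candidate splice site,
      # as a sum of log-likelihood ratios against a uniform background.
      import math

      counts = {                         # per-position nucleotide counts (toy)
          'A': [60,  5,  2, 70], 'C': [15,  5,  2, 10],
          'G': [15, 85, 94, 10], 'T': [10,  5,  2, 10],
      }
      TOTAL = 100                        # sequences behind each column
      BG = 0.25                          # uniform background frequency

      def pwm_score(seq):
          return sum(math.log2((counts[base][i] / TOTAL) / BG)
                     for i, base in enumerate(seq))

      print(f"{pwm_score('AGGA'):+.2f}")   # consensus-like site scores high
      print(f"{pwm_score('TCAT'):+.2f}")   # non-site sequence scores low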

  13. The algorithmic level is the bridge between computation and brain.

    PubMed

    Love, Bradley C

    2015-04-01

    Every scientist chooses a preferred level of analysis, and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's (1982) three levels of analysis (implementation, algorithmic, and computational) and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top-down, in that it attempts to build a bridge from the computational to the algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computational level to provide a foundation for integration, and the fact that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is forwarded in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view. PMID:25823496

  15. Levels of Evaluation for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Reeves, Thomas C.; Lent, Richard M.

    The uses and methods of four levels of evaluation which can be conducted during the development and implementation phases of computer-based instruction (CBI) programs are discussed in this paper. The levels of evaluation presented are: (1) documentation, (2) formative evaluation, (3) assessment of immediate learner effectiveness, and (4) impact…

  16. A Program for the Identification of the Enterobacteriaceae for Use in Teaching the Principles of Computer Identification of Bacteria.

    ERIC Educational Resources Information Center

    Hammonds, S. J.

    1990-01-01

    A technique for the numerical identification of bacteria using normalized likelihoods calculated from a probabilistic database is described, and the principles of the technique are explained. The listing of the computer program is included. Specimen results from the program, and examples of how they should be interpreted, are given. (KR)
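
    The normalized-likelihood idea described above can be sketched in a few lines, assuming a probabilistic database of P(test positive | taxon). The taxa, tests, and probabilities below are invented for illustration and do not reproduce the program's database.

        # Normalized likelihoods for an unknown isolate, Bayesian style:
        # likelihood of the observed test pattern under each taxon,
        # rescaled so the scores sum to one.
        probs = {
            "Bacteroides":   {"indole": 0.10, "catalase": 0.90, "glucose": 0.95},
            "Clostridium":   {"indole": 0.60, "catalase": 0.05, "glucose": 0.90},
            "Fusobacterium": {"indole": 0.85, "catalase": 0.02, "glucose": 0.40},
        }

        def normalized_likelihoods(results):
            # results maps test name -> True/False for the unknown isolate.
            scores = {}
            for taxon, table in probs.items():
                L = 1.0
                for test, positive in results.items():
                    p = table[test]
                    L *= p if positive else (1.0 - p)
                scores[taxon] = L
            total = sum(scores.values())
            return {t: s / total for t, s in scores.items()}

        print(normalized_likelihoods({"indole": True, "catalase": False, "glucose": True}))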

  17. Computer ethics and tertiary level education in Hong Kong

    SciTech Connect

    Wong, E.Y.W.; Davison, R.M.; Wade, P.W.

    1994-12-31

    This paper seeks to highlight some ethical issues relating to the increasing proliferation of Information Technology into our everyday lives. The authors explain their understanding of computer ethics, and give some reasons why the study of computer ethics is becoming increasingly pertinent. The paper looks at some of the problems that arise in attempting to develop appropriate ethical concepts in a constantly changing environment, and explores some of the ethical dilemmas arising from the increasing use of computers. Some initial research undertaken to explore the ideas and understanding of tertiary level students in Hong Kong on a number of ethical issues of interest is described, and our findings discussed. We hope that presenting this paper and eliciting subsequent discussion will enable us to draw up more comprehensive guidelines for the teaching of computer related ethics to tertiary level students, as well as reveal some directions for future research.

  18. Rugoscopy: Human identification by computer-assisted photographic superimposition technique

    PubMed Central

    Mohammed, Rezwana Begum; Patil, Rajendra G.; Pammi, V. R.; Sandya, M. Pavana; Kalyan, Siva V.; Anitha, A.

    2013-01-01

    Background: Human identification has been studied since the fourteenth century and has gradually advanced for forensic purposes. Traditional methods such as dental, fingerprint, and DNA comparisons are probably the most common techniques used in this context, allowing fast and secure identification processes. But in circumstances where identification of an individual by fingerprint or dental record comparison is difficult, palatal rugae may be considered as an alternative source of material. Aim: The present study was done to evaluate the individualistic nature and use of palatal rugae patterns for personal identification and also to test the efficiency of computerized software for forensic identification by photographic superimposition of palatal photographs obtained from casts. Materials and Methods: Two sets of alginate impressions were made from the upper arches of 100 individuals (50 males and 50 females) with a one-month interval in between, and the casts were poured. All the teeth except the incisors were removed to ensure that only the palate could be used in the identification process. In one set of the casts, the palatal rugae were highlighted with a graphite pencil. All 200 casts were randomly numbered and then photographed with a 10.1-megapixel Kodak digital camera using a standardized method. Using computerized software, the digital photographs of the models without highlighted palatal rugae were overlapped with transparent images of the casts with highlighted palatal rugae, in order to identify the pairs by the superimposition technique. Incisors were retained and used as landmarks to determine the magnification required to bring the two sets of photographs to the same size, in order to make a perfect superimposition of images. Results: Overlapping the digital photographs of highlighted palatal rugae over the normal set of models without highlighting resulted in 100% positive identification. Conclusion

  19. Computing NLTE Opacities -- Node Level Parallel Calculation

    SciTech Connect

    Holladay, Daniel

    2015-09-11

    Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability, compute opacities, and study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.

  20. Enhanced Fault-Tolerant Quantum Computing in d-Level Systems

    NASA Astrophysics Data System (ADS)

    Campbell, Earl T.

    2014-12-01

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n = d − 1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find that performance is always enhanced by increasing d.

  1. Variance and bias computation for enhanced system identification

    NASA Technical Reports Server (NTRS)

    Bergmann, Martin; Longman, Richard W.; Juang, Jer-Nan

    1989-01-01

    A study is made of the use of a series of variance and bias confidence criteria recently developed for the eigensystem realization algorithm (ERA) identification technique. The criteria are shown to be very effective, not only for indicating the accuracy of the identification results (especially in terms of confidence intervals), but also for helping the ERA user to obtain better results. They help determine the best sample interval, the true system order, how much data to use and whether to introduce gaps in the data used, what dimension Hankel matrix to use, and how to limit the bias or correct for bias in the estimates.

  2. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with the identification of the contours of the elements within them. This paper deals with the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of algorithms for dental 2D imagery.

  3. Teaching Higher Level Thinking Skills through Computer Courseware.

    ERIC Educational Resources Information Center

    Edwards, Lois

    A rationale is presented for teaching gifted students to gain computer literacy, learn programing, use utility software (e.g., word processing packages), and use interactive educational courseware containing drills, simulations, or educational strategy games to develop higher level and creative thinking skills. Evaluation of courseware for gifted…

  4. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We are interested in cyber security, information technology (IT), and SCADA control professionals reviewing the hardware level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline provides for background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permissions bits to each cache memory bank and memory address, the OSFA provides hardware level computer security.

  5. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a splice collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  6. Logic as Marr's Computational Level: Four Case Studies.

    PubMed

    Baggio, Giosuè; van Lambalgen, Michiel; Hagoort, Peter

    2015-04-01

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition. PMID:25417838

  7. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  8. Correlations of Electrophysiological Measurements with Identification Levels of Ancient Chinese Characters

    PubMed Central

    Qi, Zhengyang; Wang, Xiaolong; Hao, Shuang; Zhu, Chuanlin; He, Weiqi; Luo, Wenbo

    2016-01-01

    Studies of event-related potential (ERP) in the human brain have shown that the N170 component can reliably distinguish among different object categories. However, it is unclear whether this is true for different identifiable levels within a single category. In the present study, we used ERP recording to examine the neural response to different identification levels and orientations (upright vs. inverted) of Chinese characters. The results showed that P1, N170, and P250 were modulated by different identification levels of Chinese characters. Moreover, time frequency analysis showed similar results, indicating that identification levels were associated with object recognition, particularly during processing of a single categorical stimulus. PMID:26982215

  9. 24 CFR 990.175 - Utilities expense level: Computation of the current consumption level.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Utilities expense level: Computation of the current consumption level. 990.175 Section 990.175 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF HOUSING AND...

  10. 24 CFR 990.180 - Utilities expense level: Computation of the rolling base consumption level.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Utilities expense level: Computation of the rolling base consumption level. 990.180 Section 990.180 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF HOUSING...

  11. 24 CFR 990.175 - Utilities expense level: Computation of the current consumption level.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Utilities expense level: Computation of the current consumption level. 990.175 Section 990.175 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT (CONTINUED) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF HOUSING AND...

  12. 24 CFR 990.180 - Utilities expense level: Computation of the rolling base consumption level.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Utilities expense level: Computation of the rolling base consumption level. 990.180 Section 990.180 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT (CONTINUED) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF HOUSING...

  13. Reservoir Computing approach to Great Lakes water level forecasting

    NASA Astrophysics Data System (ADS)

    Coulibaly, Paulin

    2010-02-01

    The use of echo state networks (ESN) for dynamical system modeling is known as Reservoir Computing and has been shown to be effective for a number of applications, including signal processing, learning grammatical structure, time series prediction, and motor/system control. However, the performance of the Reservoir Computing approach on hydrological time series remains largely unexplored. This study investigates the potential of ESN, or Reservoir Computing, for long-term prediction of lake water levels. Great Lakes water levels from 1918 to 2005 are used to develop and evaluate the ESN models. The forecast performance of the ESN-based models is compared with the results obtained from two benchmark models, the conventional recurrent neural network (RNN) and the Bayesian neural network (BNN). The test results indicate a strong ability of ESN models to provide improved lake level forecasts up to 10 months ahead, suggesting that the inherent structure and innovative learning approach of the ESN are suitable for hydrological time series modeling. Another particular advantage of the ESN learning approach is that it simplifies the network training complexity and avoids the limitations inherent to the gradient descent optimization method. Overall, it is shown that the ESN can be a good alternative method for improved lake level forecasting, performing better than both the RNN and the BNN on the four selected Great Lakes time series, namely, Lakes Erie, Huron-Michigan, Ontario, and Superior.
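
    A minimal echo state network along the lines this abstract sketches is shown below: a fixed random reservoir is driven by the input series and only the linear readout is trained. The reservoir size, spectral radius, and the synthetic sine series are illustrative stand-ins for the Great Lakes data, not the study's configuration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_res, n_in = 100, 1

        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1

        series = np.sin(np.linspace(0, 20 * np.pi, 1000))  # stand-in for lake levels

        # Drive the fixed random reservoir and collect its states.
        states, x = [], np.zeros(n_res)
        for u in series[:-1]:
            x = np.tanh(W_in @ np.array([u]) + W @ x)
            states.append(x.copy())
        X, y = np.array(states[100:]), series[101:]        # drop warm-up states

        # Only the linear readout is trained (ridge regression).
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
        print("one-step prediction:", X[-1] @ W_out, "target:", y[-1])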

  14. GC/IR computer-aided identification of anaerobic bacteria

    NASA Astrophysics Data System (ADS)

    Ye, Hunian; Zhang, Feng S.; Yang, Hua; Li, Zhu; Ye, Song

    1993-09-01

    A new method was developed to identify anaerobic bacteria by using pattern recognition. The method is depended on GC / JR data. The system is intended for use as a precise rapid and reproduceable aid in the identification of unknown isolates. Key Words: Anaerobic bacteria Pattern recognition Computeraided identification GC / JR 1 . TNTRODUCTTON A major problem in the field of anaerobic bacteriology is the difficulty in accurately precisely and rapidly identifying unknown isolates. Tn the proceedings of the Third International Symposium on Rapid Methods and Automation in Microbiology C. M. Moss said: " Chromatographic analysis is a new future for clinical microbiology" . 12 years past and so far it seems that this is an idea whose time has not get come but it close. Now two major advances that have brought the technology forword in terms ofmaking it appropriate for use in the clinical laboratory can aldo be cited. One is the development and implementation of fused silica capillary columns. In contrast to packed columns and those of'' greater width these columns allow reproducible recovery of hydroxey fatty acids with the same carbon chain length. The second advance is the efficient data processing afforded by modern microcomputer systems. On the other hand the practical steps for sample preparation also are an advance in the clinical laboratory. Chromatographic Analysis means mainly of analysis of fatty acids. The most common

  15. A new computer-assisted technique to aid personal identification.

    PubMed

    De Angelis, Danilo; Sala, Remo; Cantatore, Angela; Grandi, Marco; Cattaneo, Cristina

    2009-07-01

    The paper describes a procedure aimed at identification from two-dimensional (2D) images (video-surveillance tapes, for example) by comparison with a three-dimensional (3D) facial model of a suspect. The application is intended to provide a tool which can help in analyzing compatibility or incompatibility between a criminal and a suspect's facial traits. The authors apply the concept of "geometrically compatible images". The idea is to use a scanner to reconstruct a 3D facial model of a suspect and to compare it to a frame extracted from the video-surveillance sequence which shows the face of the perpetrator. Repositioning and reorientation of the 3D model according to subject's face framed in the crime scene photo are manually accomplished, after automatic resizing. Repositioning and reorientation are performed in correspondence of anthropometric landmarks, distinctive for that person and detected both on the 2D face and on the 3D model. In this way, the superimposition between the original two-dimensional facial image and the three-dimensional one is obtained and a judgment is formulated by an expert on the basis of the fit between the anatomical facial districts of the two subjects. The procedure reduces the influence of face orientation and may be a useful tool in identification. PMID:19082838

  16. Computer-guided drug repurposing: identification of trypanocidal activity of clofazimine, benidipine and saquinavir.

    PubMed

    Bellera, Carolina L; Balcazar, Darío E; Vanrell, M Cristina; Casassa, A Florencia; Palestro, Pablo H; Gavernet, Luciana; Labriola, Carlos A; Gálvez, Jorge; Bruno-Blanch, Luis E; Romano, Patricia S; Carrillo, Carolina; Talevi, Alan

    2015-03-26

    In spite of remarkable advances in the knowledge of Trypanosoma cruzi biology, no medications to treat Chagas disease have been approved in the last 40 years, and almost 8 million people remain infected. Since the public sector and non-profit organizations play a significant role in the research efforts on Chagas disease, it is important to implement research strategies that promote translation of basic research into clinical practice. Recent international public-private initiatives address the potential of drug repositioning (i.e., finding second or further medical uses for known medications), which can substantially improve the success at clinical trials and the innovation in the pharmaceutical field. In this work, we present the computer-aided identification of the approved drugs clofazimine, benidipine and saquinavir as potential trypanocidal compounds and test their effects at the biochemical as well as the cellular level on different parasite stages. Based on the obtained results, we discuss the biopharmaceutical, toxicological and physiopathological criteria applied when deciding to move clofazimine and benidipine into the preclinical phase, in an acute model of infection. The article illustrates the potential of computer-guided drug repositioning to integrate and optimize drug discovery and preclinical development; it also proposes rational rules to select which among repositioned candidates should advance to investigational drug status and offers new insight on clofazimine and benidipine as candidate treatments for Chagas disease. One Sentence Summary: We present the computer-guided drug repositioning of three approved drugs as potential new treatments for Chagas disease, integrating computer-aided drug screening and biochemical, cellular and preclinical tests. PMID:25707014

  17. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations, one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan, because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute the looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm whose measure of complexity is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · O(maxflow(N))), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and O(maxflow(N)) is the measure of complexity (and thus of cost) of a maximum flow
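
    The inner primitive this abstract leans on is a maximum-flow computation over a network derived from the temporal and resource constraints. The sketch below demonstrates only that primitive with networkx on a toy producer/consumer graph; it is not the envelope algorithm itself, and the graph is an invented example.

        import networkx as nx

        G = nx.DiGraph()
        # source -> producers, consumers -> sink; capacities are resource amounts
        G.add_edge("s", "produce_a", capacity=3)
        G.add_edge("s", "produce_b", capacity=2)
        G.add_edge("produce_a", "consume_c", capacity=3)
        G.add_edge("produce_b", "consume_c", capacity=2)
        G.add_edge("consume_c", "t", capacity=4)

        flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
        print(flow_value)  # 4: the most production that pending consumption can absorb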

  18. Bytes and Bugs: Integrating Computer Programming with Bacteria Identification.

    ERIC Educational Resources Information Center

    Danciger, Michael

    1986-01-01

    By using a computer program to identify bacteria, students sharpen their analytical skills and gain familiarity with procedures used in laboratories outside the university. Although it is ideal for identifying a bacterium, the program can be adapted to many other disciplines. (Author)

  19. DEVELOPMENT OF COMPUTATIONAL TOOLS FOR OPTIMAL IDENTIFICATION OF BIOLOGICAL NETWORKS

    EPA Science Inventory

    Following the theoretical analysis and computer simulations, the next step for the development of SNIP will be a proof-of-principle laboratory application. Specifically, we have obtained a synthetic transcriptional cascade (harbored in Escherichia coli...

  20. A Simple Computer Application for the Identification of Conifer Genera

    ERIC Educational Resources Information Center

    Strain, Steven R.; Chmielewski, Jerry G.

    2010-01-01

    The National Science Education Standards prescribe that an understanding of the importance of classifying organisms be one component of a student's educational experience in the life sciences. The use of a classification scheme to identify organisms is one way of addressing this goal. We describe Conifer ID, a computer application that assists…

  1. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide-to-spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
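
    One of the global error-rate procedures surveyed above, target-decoy FDR estimation, is simple enough to sketch directly: decoy matches above a score threshold estimate the number of false target matches. The scores below are synthetic placeholders; in practice they come from a database search engine.

        import numpy as np

        rng = np.random.default_rng(1)
        target_scores = rng.normal(2.0, 1.0, 1000)  # PSMs against real sequences
        decoy_scores = rng.normal(0.0, 1.0, 1000)   # PSMs against reversed sequences

        def fdr_at(threshold):
            targets = (target_scores >= threshold).sum()
            decoys = (decoy_scores >= threshold).sum()
            return decoys / max(targets, 1)         # decoys estimate false targets

        for t in (1.0, 2.0, 3.0):
            print(f"score >= {t}: estimated FDR = {fdr_at(t):.3f}")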

  2. A color and texture based multi-level fusion scheme for ethnicity identification

    NASA Astrophysics Data System (ADS)

    Du, Hongbo; Salah, Sheerko Hma; Ahmed, Hawkar O.

    2014-05-01

    Ethnicity identification from face images is of interest in many areas of application. Different from face recognition of individuals, ethnicity identification classifies faces according to the common features of a specific ethnic group. This paper presents a multi-level fusion scheme for ethnicity identification that combines texture features of local areas of a face, using local binary patterns, with color features using HSV binning. The scheme fuses the decisions from a k-nearest neighbor classifier and a support vector machine classifier into a final identification decision. We have tested the scheme on a collection of face images from a number of publicly available databases. The results demonstrate the effectiveness of the combined features and the improvement in identification accuracy achieved by the fusion scheme over identification using individual features and other state-of-the-art techniques.
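
    A hedged sketch of the two feature channels and the late fusion is given below: uniform local binary patterns for texture, an HSV hue histogram for color, and an average of kNN and SVM posteriors as a simple stand-in for the paper's fusion rule. The images, bin counts, and labels are placeholders, not the databases used in the study.

        import numpy as np
        from skimage.color import rgb2gray, rgb2hsv
        from skimage.feature import local_binary_pattern
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def features(img):
            # Texture channel: histogram of uniform local binary patterns.
            gray = (rgb2gray(img) * 255).astype(np.uint8)
            lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
            tex, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            # Color channel: histogram of HSV hue values.
            hue = rgb2hsv(img)[..., 0]
            col, _ = np.histogram(hue, bins=16, range=(0, 1), density=True)
            return np.concatenate([tex, col])

        rng = np.random.default_rng(0)
        imgs = rng.random((20, 32, 32, 3))   # stand-ins for face crops
        y = np.repeat([0, 1], 10)            # two hypothetical group labels

        X = np.array([features(im) for im in imgs])
        knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
        svm = SVC(probability=True).fit(X, y)

        # Decision-level fusion: average the two classifiers' posteriors.
        p = (knn.predict_proba(X) + svm.predict_proba(X)) / 2
        print("training accuracy after fusion:", (p.argmax(axis=1) == y).mean())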

  3. Computational Identification of Active Enhancers in Model Organisms

    PubMed Central

    Wang, Chengqi; Zhang, Michael Q.; Zhang, Zhihua

    2013-01-01

    As a class of cis-regulatory elements, enhancers were first identified as the genomic regions that are able to markedly increase the transcription of genes nearly 30 years ago. Enhancers can regulate gene expression in a cell-type specific and developmental stage specific manner. Although experimental technologies have been developed to identify enhancers genome-wide, the design principle of the regulatory elements and the way they rewire the transcriptional regulatory network tempo-spatially are far from clear. At present, developing predictive methods for enhancers, particularly for the cell-type specific activity of enhancers, is central to computational biology. In this review, we survey the current computational approaches for active enhancer prediction and discuss future directions. PMID:23685394

  4. Computed tomography identification of an exophytic colonic liposarcoma.

    PubMed

    Chou, Chung Kuao; Chen, Sung-Ting

    2016-09-01

    It may be difficult to ascertain the relationship between a large intra-abdominal tumor and the adjacent organs if they are close together. In the current case, a definitive preoperative diagnosis of an exophytic colonic tumor was obtained by the demonstration of obtuse angles between the tumor and colon and by distinct recognition of the mucosa-submucosa of the colonic wall on computed tomography; the accuracy of this preoperative diagnosis was subsequently confirmed by pathologic findings. PMID:27594941

  5. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
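
    The elementary-effects idea behind the screening can be sketched with a one-at-a-time design: perturb each parameter from random base points and average the absolute response change. The toy model below is a placeholder for mHM, and this radial design is a simplification of the published sequential scheme, not a reproduction of it.

        import numpy as np

        def model(p):                      # stand-in for an expensive model run
            return np.sin(p[0]) + 5.0 * p[1] + 0.001 * p[2]

        rng = np.random.default_rng(2)
        n_par, n_traj, delta = 3, 10, 0.1
        effects = np.zeros((n_traj, n_par))

        for t in range(n_traj):
            base = rng.random(n_par)
            f0 = model(base)
            for i in range(n_par):         # one-at-a-time perturbations
                step = base.copy()
                step[i] += delta
                effects[t, i] = abs(model(step) - f0) / delta

        mu_star = effects.mean(axis=0)     # mean absolute elementary effect
        print(mu_star)                     # a small value flags a non-informative parameter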

  6. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  7. All-memristive neuromorphic computing with level-tuned neurons

    NASA Astrophysics Data System (ADS)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-01

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.

  8. All-memristive neuromorphic computing with level-tuned neurons.

    PubMed

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-01

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors. PMID:27455898

  9. MAX--An Interactive Computer Program for Teaching Identification of Clay Minerals by X-ray Diffraction.

    ERIC Educational Resources Information Center

    Kohut, Connie K.; And Others

    1993-01-01

    Discusses MAX, an interactive computer program for teaching identification of clay minerals based on standard x-ray diffraction characteristics. The program provides tutorial-type exercises for identification of 16 clay standards, self-evaluation exercises, diffractograms of 28 soil clay minerals, and identification of nonclay minerals. (MDH)

  10. Computational identification of MoRFs in protein sequences

    PubMed Central

    Malhis, Nawar; Gsponer, Jörg

    2015-01-01

    Motivation: Intrinsically disordered regions of proteins play an essential role in the regulation of various biological processes. Key to their regulatory function is the binding of molecular recognition features (MoRFs) to globular protein domains in a process known as a disorder-to-order transition. Predicting the location of MoRFs in protein sequences with high accuracy remains an important computational challenge. Method: In this study, we introduce MoRFCHiBi, a new computational approach for fast and accurate prediction of MoRFs in protein sequences. MoRFCHiBi combines the outcomes of two support vector machine (SVM) models that take advantage of two different kernels with high noise tolerance. The first, SVMS, is designed to extract maximal information from the general contrast in amino acid compositions between MoRFs, their surrounding regions (Flanks), and the remainders of the sequences. The second, SVMT, is used to identify similarities between regions in a query sequence and MoRFs of the training set. Results: We evaluated the performance of our predictor by comparing its results with those of two currently available MoRF predictors, MoRFpred and ANCHOR. Using three test sets that have previously been collected and used to evaluate MoRFpred and ANCHOR, we demonstrate that MoRFCHiBi outperforms the other predictors with respect to different evaluation metrics. In addition, MoRFCHiBi is downloadable and fast, which makes it useful as a component in other computational prediction tools. Availability and implementation: http://www.chibi.ubc.ca/morf/. Contact: gsponer@chibi.ubc.ca. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25637562
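
    In the spirit of the two-component design described above, the sketch below trains two SVMs with different kernels and multiplies their posterior scores. The features, labels, and blending rule are illustrative assumptions, not the published MoRFCHiBi models or their kernels.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        X = rng.random((60, 20))             # stand-in residue-window features
        y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

        # Two SVM components with different, noise-tolerant kernels.
        svm_s = SVC(kernel="sigmoid", probability=True).fit(X, y)
        svm_t = SVC(kernel="rbf", probability=True).fit(X, y)

        # Combine the two components' posterior scores multiplicatively.
        score = svm_s.predict_proba(X)[:, 1] * svm_t.predict_proba(X)[:, 1]
        print("predicted MoRF-like windows:", np.where(score > 0.5)[0][:5])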

  11. Cloud identification using genetic algorithms and massively parallel computation

    NASA Technical Reports Server (NTRS)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

    As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used, although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). Given that one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated; therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user

  12. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples are given, together with results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts.

  13. Parallel Computation of the Topology of Level Sets

    SciTech Connect

    Pascucci, V; Cole-McLaughlin, K

    2004-12-16

    This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to preprocess the domain mesh to allow optimal computation of isosurfaces with minimal overhead storage. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. Data exploration time is reduced since the user understands the evolution of level set components with changing isovalue. The Augmented Contour Tree provides even more accurate information, segmenting the range space of the scalar field into portions of invariant topology. The exploration time for a single isosurface is also improved since its genus is known in advance. Our first new algorithm augments any given Contour Tree with the Betti numbers of all possible corresponding isocontours in time linear in the size of the tree. Moreover, we show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus, we improve the time complexity of our previous approach [10] from O(m log m) to O(n log n + m), where m is the number of cells and n is the number of vertices in the domain of F. Our second contribution is a new divide-and-conquer algorithm that computes the Augmented Contour Tree with improved efficiency. The approach computes the output Contour Tree by merging two intermediate Contour Trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an independent function that computes the tree for a single cell. We have implemented this function for the trilinear interpolant and plan to replace it with higher order interpolants when needed. The time complexity is O(n + t log n), where t is the number of critical points of F. For the first time
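
    The sweep-and-merge idea underlying contour tree construction can be illustrated with a union-find pass over vertices ordered by scalar value, which is the join-tree half of such algorithms. The 1D "mesh" below is a toy stand-in for a 3D scalar field, not the paper's data structure.

        import numpy as np

        F = np.array([3.0, 1.0, 4.0, 1.5, 5.0, 0.5])     # scalar value per vertex
        edges = [(i, i + 1) for i in range(len(F) - 1)]  # toy mesh connectivity

        parent = list(range(len(F)))
        def find(a):
            # Union-find with path halving.
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        components = 0
        for v in np.argsort(F):            # sweep from low to high values
            components += 1                # each new vertex starts a component
            for a, b in edges:
                if v in (a, b):
                    other = b if v == a else a
                    # Merge with neighbours already below the sweep value.
                    if F[other] <= F[v] and find(other) != find(v):
                        parent[find(other)] = find(v)
                        components -= 1    # two components merge (a join event)
        print("components at top of sweep:", components)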

  14. Bridging Levels of Understanding in Schizophrenia Through Computational Modeling

    PubMed Central

    Anticevic, Alan; Murray, John D.; Barch, Deanna M.

    2015-01-01

    Schizophrenia is an illness with a remarkably complex symptom presentation that has thus far been out of reach of neuroscientific explanation. This presents a fundamental problem for developing better treatments that target specific symptoms or root causes. One promising path forward is the incorporation of computational neuroscience, which provides a way to formalize experimental observations and, in turn, make theoretical predictions for subsequent studies. We review three complementary approaches: (a) biophysically based models developed to test cellular-level and synaptic hypotheses, (b) connectionist models that give insight into large-scale neural-system-level disturbances in schizophrenia, and (c) models that provide a formalism for observations of complex behavioral deficits, such as negative symptoms. We argue that harnessing all of these modeling approaches represents a productive approach for better understanding schizophrenia. We discuss how blending these approaches can allow the field to progress toward a more comprehensive understanding of schizophrenia and its treatment. PMID:25960938

  15. Application of replica plating and computer analysis for rapid identification of bacteria in some foods. I. Identification scheme.

    PubMed

    Corlett, D A; Lee, J S; Sinnhuber, R O

    1965-09-01

    A method was devised and tested for a quantitative identification of microbial flora in foods. The colonies developing on the initial isolation plates were picked with sterile toothpicks and inoculated on a master plate in prearranged spacing and order. The growth on the master plates was then replicated on a series of solid-agar plates containing differential or selective agents. The characteristic growth and physiological responses of microbial isolates to penicillin, tylosin, vancomycin, streptomycin, chloramphenicol, neomycin, colistin, and to S S Agar, Staphylococcus Medium No. 110, and Potato Dextrose Agar were recorded, together with Gram reaction and cell morphology. This information was then fed into an IBM 1410 digital computer which grouped and analyzed each isolate into 10 microbial genera, or groups, according to the identification key. The identification scheme was established by use of reference culture studies and from the literature. This system was used to analyze the microbial flora in dover sole (Microstomus pacificus) and ground beef. The method described in this article enables one to examine large numbers of microbial isolates with simplicity. PMID:5325942

  16. Computationally Inexpensive Identification of Non-Informative Model Parameters

    NASA Astrophysics Data System (ADS)

    Mai, J.; Cuntz, M.; Kumar, R.; Zink, M.; Samaniego, L. E.; Schaefer, D.; Thober, S.; Rakovec, O.; Musuuza, J. L.; Craven, J. R.; Spieler, D.; Schrön, M.; Prykhodko, V.; Dalmasso, G.; Langenberg, B.; Attinger, S.

    2014-12-01

    Sensitivity analysis is used, for example, to identify parameters which induce the largest variability in model output and are thus informative during calibration. Variance-based techniques are employed for this purpose, but they unfortunately require a large number of model evaluations and are thus infeasible for complex environmental models. We therefore developed a computationally inexpensive screening method, based on Elementary Effects, that automatically separates informative and non-informative model parameters. The method was tested using the mesoscale hydrologic model (mHM) with 52 parameters. The model was applied in three European catchments with different hydrological characteristics, i.e., the Neckar (Germany), Sava (Slovenia), and Guadalquivir (Spain). The method identified the same informative parameters as the standard Sobol method but with less than 1% of the model runs. In Germany and Slovenia, 22 of 52 parameters were informative, mostly in the formulations of evapotranspiration, interflow and percolation. In Spain, 19 of 52 parameters were informative, with an increased importance of soil parameters. We showed further that Sobol' indexes calculated for the subset of informative parameters are practically the same as Sobol' indexes before the screening, while the number of model runs was reduced by more than 50%. The model mHM was then calibrated twice in the three test catchments. First, all 52 parameters were taken into account, and then only the informative parameters were calibrated while all others were kept fixed. The Nash-Sutcliffe efficiencies were 0.87 and 0.83 in Germany, 0.89 and 0.88 in Slovenia, and 0.86 and 0.85 in Spain, respectively. This minor loss of at most 4% in model performance comes along with a substantial decrease of at least 65% in model evaluations. In summary, we propose an efficient screening method to identify non-informative model parameters that can be discarded during further applications. We have shown that sensitivity

  17. Frequency domain transfer function identification using the computer program SYSFIT

    SciTech Connect

    Trudnowski, D.J.

    1992-12-01

    Because the primary application of SYSFIT for BPA involves studying power system dynamics, this investigation was geared toward simulating the effects that might be encountered in studying electromechanical oscillations in power systems. Although the intended focus of this work is power system oscillations, the studies are sufficiently generic that the results can be applied to many types of oscillatory systems with closely spaced modes. In general, there are two possible ways of solving the optimization problem. One is to use a least-squares optimization function and to write the system in such a form that the problem becomes one of linear least squares; the solution can then be obtained using a standard least-squares technique. The other method involves using a search method to obtain the optimal model. This method allows considerably more freedom in forming the optimization function and model, but it requires an initial guess of the system parameters. SYSFIT employs this second approach. Detailed investigations were conducted in three main areas: (1) fitting to exact frequency response data of a linear system; (2) fitting to the discrete Fourier transform of noisy data; and (3) fitting to multi-path systems. The first area consisted of investigating the effects of alternative optimization cost function options; using different optimization search methods; incorrect model order; missing response data; closely spaced poles; and closely spaced pole-zero pairs. Within the second area, different noise colorations and levels were studied. In the third area, methods were investigated for improving fitting results by incorporating more than one system path. The following is a list of guidelines and properties developed from the study for fitting a transfer function to the frequency response of a system using optimization search methods.
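
    The search-based fitting strategy described above can be sketched with a generic nonlinear least-squares solver: stack the real and imaginary residuals between a parametric transfer function and measured frequency response data, then search from an initial guess. The second-order model and noise-free "data" below are assumptions for illustration, not SYSFIT itself.

        import numpy as np
        from scipy.optimize import least_squares

        w = np.logspace(-1, 1, 50)                   # frequency grid, rad/s

        def H(p, w):
            # Second-order model: K / (s^2 + 2*zeta*wn*s + wn^2).
            K, zeta, wn = p
            s = 1j * w
            return K / (s**2 + 2 * zeta * wn * s + wn**2)

        data = H([2.0, 0.1, 3.0], w)                 # stand-in "measured" response

        def residuals(p):
            e = H(p, w) - data
            return np.concatenate([e.real, e.imag])  # solver needs real residuals

        fit = least_squares(residuals, x0=[1.0, 0.5, 1.0])
        print(fit.x)                                 # should approach [2.0, 0.1, 3.0]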

  18. Computer experiments in preparation of system identification from transient rotor model tests, part 2

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Yin, S. K.

    1974-01-01

    System identification methods were developed that can extract model rotor parameters with reasonable accuracy from noise-polluted blade flapping transient measurements. Usually, parameter identification requires data on the state variables, that is, on deflections and on rates of deflection. The small size of rotor models, however, makes it difficult to measure more than the blade flapping deflections. For the computer experiments it was therefore assumed that only noisy deflection measurements are available. Parameter identifications were performed for one and two unknown parameters. Both rotating coordinates and multiblade coordinates were used. It was found that data processing with a digital filter allowed, by numerical differentiation, a sufficiently accurate determination of the rates of deflection and of the accelerations to obtain reasonable parameter estimates with a simple linear estimator.
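
    The filter-then-differentiate step the abstract describes can be sketched with a Savitzky-Golay filter, which smooths and differentiates noisy measurements in one pass. The signal, noise level, and filter settings below are illustrative assumptions, not those of the original study.

        import numpy as np
        from scipy.signal import savgol_filter

        t = np.linspace(0, 1, 500)
        dt = t[1] - t[0]
        flap = np.sin(2 * np.pi * 4 * t)             # stand-in blade flapping angle
        noisy = flap + np.random.default_rng(4).normal(0, 0.05, t.size)

        # Smooth and differentiate in one pass to recover the flapping rate.
        rate = savgol_filter(noisy, window_length=51, polyorder=3, deriv=1, delta=dt)
        true_rate = 2 * np.pi * 4 * np.cos(2 * np.pi * 4 * t)
        print("max rate error:", np.abs(rate - true_rate).max())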

  19. Bouc-Wen model parameter identification for a MR fluid damper using computationally efficient GA.

    PubMed

    Kwok, N M; Ha, Q P; Nguyen, M T; Li, J; Samali, B

    2007-04-01

    A non-symmetrical Bouc-Wen model is proposed in this paper for magnetorheological (MR) fluid dampers. The model considers the effect of non-symmetrical hysteresis, which was not taken into account in the original Bouc-Wen model. The model parameters are identified with a Genetic Algorithm (GA), exploiting its flexibility in the identification of complex dynamics. The computational efficiency of the proposed GA is improved by absorbing the selection stage into the crossover and mutation operations. Crossover and mutation are also made adaptive to the fitness values so that their probabilities need not be user-specified. Instead of using a sufficient number of generations or a pre-determined fitness value, the algorithm termination criterion is formulated on the basis of a statistical hypothesis test, thus enhancing the performance of the parameter identification. Experimental test data of the damper displacement and force are used to verify the proposed approach, with satisfactory parameter identification results. PMID:17349644
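
    A plain generational GA for this kind of parameter identification is sketched below on a toy hysteresis-like model fitted to synthetic "measured" force data. The adaptive crossover/mutation and hypothesis-test termination of the paper are not reproduced, and the tanh model is a stand-in for the Bouc-Wen equations.

        import numpy as np

        rng = np.random.default_rng(5)
        x = np.linspace(-1, 1, 200)                   # displacement samples
        true = 3.0 * np.tanh(8.0 * x) + 0.5 * x       # stand-in hysteretic force
        measured = true + rng.normal(0, 0.05, x.size)

        def fitness(p):
            # Negative mean squared error between model and measured force.
            a, b, c = p
            return -np.mean((a * np.tanh(b * x) + c * x - measured) ** 2)

        pop = rng.uniform(0, 10, (40, 3))             # random initial population
        for _ in range(100):
            scores = np.array([fitness(p) for p in pop])
            elite = pop[np.argsort(scores)[-20:]]     # keep the best half
            kids = elite[rng.integers(0, 20, 20)].copy()
            kids += rng.normal(0, 0.1, kids.shape)    # mutation
            pop = np.vstack([elite, kids])

        best = pop[np.argmax([fitness(p) for p in pop])]
        print(best)                                   # should approach [3.0, 8.0, 0.5]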

  20. Assessing the precision of high-throughput computational and laboratory approaches for the genome-wide identification of protein subcellular localization in bacteria

    PubMed Central

    Rey, Sébastien; Gardy, Jennifer L; Brinkman, Fiona SL

    2005-01-01

    Background Identification of a bacterial protein's subcellular localization (SCL) is important for genome annotation, function prediction and drug or vaccine target identification. Subcellular fractionation techniques combined with recent proteomics technology permits the identification of large numbers of proteins from distinct bacterial compartments. However, the fractionation of a complex structure like the cell into several subcellular compartments is not a trivial task. Contamination from other compartments may occur, and some proteins may reside in multiple localizations. New computational methods have been reported over the past few years that now permit much more accurate, genome-wide analysis of the SCL of protein sequences deduced from genomes. There is a need to compare such computational methods with laboratory proteomics approaches to identify the most effective current approach for genome-wide localization characterization and annotation. Results In this study, ten subcellular proteome analyses of bacterial compartments were reviewed. PSORTb version 2.0 was used to computationally predict the localization of proteins reported in these publications, and these computational predictions were then compared to the localizations determined by the proteomics study. By using a combined approach, we were able to identify a number of contaminants and proteins with dual localizations, and were able to more accurately identify membrane subproteomes. Our results allowed us to estimate the precision level of laboratory subproteome studies and we show here that, on average, recent high-precision computational methods such as PSORTb now have a lower error rate than laboratory methods. Conclusion We have performed the first focused comparison of genome-wide proteomic and computational methods for subcellular localization identification, and show that computational methods have now attained a level of precision that is exceeding that of high-throughput laboratory

  1. The role of computational methods in the identification of bioactive compounds.

    PubMed

    Glick, Meir; Jacoby, Edgar

    2011-08-01

    Computational methods play an ever-increasing role in lead finding. A vast repertoire of molecular design and virtual screening methods emerged in the past two decades and are today routinely used. There is increasing awareness that there is no single best computational protocol, and correspondingly there is a shift toward recommending the combination of complementary methods. A promising trend for the application of computational methods in lead finding is to take advantage of the vast amounts of HTS (High Throughput Screening) data to allow lead assessment by detailed systems-based data analysis, especially for phenotypic screens where the identification of compound-target pairs is the primary goal. Herein, we review trends and provide examples of successful applications of computational methods in lead finding. PMID:21411361

  2. Identification of Restrictive Computer and Software Variables among Preoperational Users of a Computer Learning Center.

    ERIC Educational Resources Information Center

    Kozubal, Diane K.

    While manufacturers have produced a wide variety of software said to be easy for even the youngest child to use, there are conflicting perspectives on computer issues such as ease of use, influence on meeting educational objectives, effects on procedural learning, and rationale for use with young children. Addressing these concerns, this practicum…

  3. Automatic Measurement of Water Levels by Using Image Identification Method in Open Channel

    NASA Astrophysics Data System (ADS)

    Chung Yang, Han; Xue Yang, Jia

    2014-05-01

    Water level data are indispensable to hydrology research and provide essential information for hydraulic engineering and the overall utilization of water resources. Water level information can be transmitted over a network to the management office so that it may know whether the river level is exceeding the warning line. Existing water level measurement methods can only present water levels as numbers, without any images: the data remain mere data, lacking any sense of the real scene, and events such as a rising or overflowing river cannot be captured at the same time. This research therefore employs a new, improved method for water level measurement. A video surveillance system records the scene on site; an image of the water surface is snapped, pre-processed, and compared with an altitude reference to obtain the water level. With ever-advancing technology, image identification is being applied ever more widely, and this research uses it to analyze water levels automatically. The image observation method used here is a non-contact water level gauge, but one quite different from the others: it is inexpensive, and its facilities can be installed on a river embankment or near houses, so the influence of external factors is significantly reduced while a live picture is transmitted wirelessly. In dynamic-flow tests in an indoor experimental channel, the error of water level identification was below 2% in every case, showing that image identification works at different water levels. This new measurement method can offer instant river levels together with on-site video…
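    A minimal sketch of the underlying image step (hypothetical thresholding and calibration values; the paper's actual pre-processing is not specified here): locate the waterline row in one grayscale image column as the strongest vertical intensity change, then map that row to an altitude using two reference marks of known level.

        import numpy as np

        def water_level_from_column(gray, column, ref_rows, ref_levels):
            """Estimate water level from one column of a grayscale image.
            ref_rows/ref_levels: image rows and surveyed altitudes of two
            reference marks (a simplified stand-in for full calibration)."""
            profile = gray[:, column].astype(float)
            row = int(np.argmax(np.abs(np.diff(profile))))   # waterline row index
            (r0, r1), (h0, h1) = ref_rows, ref_levels
            return h0 + (row - r0) * (h1 - h0) / (r1 - r0)   # linear interpolation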

  4. The Use of Computer-Assisted Identification of ARIMA Time-Series.

    ERIC Educational Resources Information Center

    Brown, Roger L.

    This study was conducted to determine the effects of using various levels of tutorial statistical software for the tentative identification of nonseasonal ARIMA models, a statistical technique proposed by Box and Jenkins for the interpretation of time-series data. The Box-Jenkins approach is an iterative process encompassing several stages of…

  5. Computational Identification of Key Regulators in Two Different Colorectal Cancer Cell Lines

    PubMed Central

    Wlochowitz, Darius; Haubrock, Martin; Arackal, Jetcy; Bleckmann, Annalen; Wolff, Alexander; Beißbarth, Tim; Wingender, Edgar; Gültas, Mehmet

    2016-01-01

    Transcription factors (TFs) are gene regulatory proteins that are essential for an effective regulation of the transcriptional machinery. Today, it is known that their expression plays an important role in several types of cancer. Computational identification of key players in specific cancer cell lines is still an open challenge in cancer research. In this study, we present a systematic approach which combines colorectal cancer (CRC) cell lines, namely 1638N-T1 and CMT-93, and well-established computational methods in order to compare these cell lines on the level of transcriptional regulation as well as on a pathway level, i.e., the cancer cell-intrinsic pathway repertoire. For this purpose, we firstly applied the Trinity platform to detect signature genes, and then applied analyses of the geneXplain platform to these for detection of upstream transcriptional regulators and their regulatory networks. We created a CRC-specific position weight matrix (PWM) library based on the TRANSFAC database (release 2014.1) to minimize the rate of false predictions in the promoter analyses. Using our proposed workflow, we specifically focused on revealing the similarities and differences in transcriptional regulation between the two CRC cell lines, and report a number of well-known, cancer-associated TFs with significantly enriched binding sites in the promoter regions of the signature genes. We show that, although the signature genes of both cell lines show no overlap, they may still be regulated by common TFs in CRC. Based on our findings, we suggest that canonical Wnt signaling is activated in 1638N-T1, but inhibited in CMT-93 through cross-talks of Wnt signaling with the VDR signaling pathway and/or LXR-related pathways. Furthermore, our findings provide indication of several master regulators being present such as MLK3 and Mapk1 (ERK2) which might be important in cell proliferation, migration, and invasion of 1638N-T1 and CMT-93, respectively. Taken together, we provide…
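    The promoter analyses rest on scanning promoter sequences with position weight matrices. A minimal generic sketch (the log-odds PWM and the threshold are placeholders; this is not the geneXplain implementation):

        import numpy as np

        BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

        def pwm_hits(sequence, pwm, threshold):
            """Slide a log-odds position weight matrix (shape 4 x width) over an
            uppercase ACGT promoter sequence; report windows above threshold."""
            width, hits = pwm.shape[1], []
            for i in range(len(sequence) - width + 1):
                window = sequence[i:i + width]
                score = sum(pwm[BASES[b], j] for j, b in enumerate(window))
                if score >= threshold:
                    hits.append((i, window, round(score, 2)))
            return hits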

  7. Computer scoring of the Levels of Emotional Awareness Scale.

    PubMed

    Barchard, Kimberly A; Bajgar, Jane; Leaf, Duncan Ermini; Lane, Richard D

    2010-05-01

    The Levels of Emotional Awareness Scale (LEAS; Lane, Quinlan, Schwartz, Walker, & Zeitlan, 1990) is the most commonly used measure of differentiation and complexity in the use of emotion words and is associated with important clinical outcomes. Hand scoring the LEAS is time consuming. Existing programs for scoring open-ended responses cannot mimic LEAS hand scoring. Therefore, Leaf and Barchard (2006) developed the Program for Open-Ended Scoring (POES) to score the LEAS. In this article, we report a study in which the reliability and validity of POES scoring were examined. In the study, we used three participant types (adult community members, university students, children), three LEAS versions (paper based, computer based, and the LEAS for children), and a diverse set of criterion variables. Across this variety of conditions, the four POES scoring methods had internal consistencies and validities that were comparable to hand scoring, indicating that POES scoring can be used in clinical practice and other applied settings in which hand scoring is impractical. PMID:20479190

  8. Computer-assisted photo identification outperforms visible implant elastomers in an endangered salamander, Eurycea tonkawae.

    PubMed

    Bendik, Nathan F; Morrison, Thomas A; Gluesenkamp, Andrew G; Sanders, Mark S; O'Donnell, Lisa J

    2013-01-01

    Despite recognition that nearly one-third of the 6300 amphibian species are threatened with extinction, our understanding of the general ecology and population status of many amphibians is relatively poor. A widely-used method for monitoring amphibians involves injecting captured individuals with unique combinations of colored visible implant elastomer (VIE). We compared VIE identification to a less-invasive method - computer-assisted photographic identification (photoID) - in endangered Jollyville Plateau salamanders (Eurycea tonkawae), a species with a known range limited to eight stream drainages in central Texas. We based photoID on the unique pigmentation patterns on the dorsal head region of 1215 individual salamanders using identification software Wild-ID. We compared the performance of photoID methods to VIEs using both 'high-quality' and 'low-quality' images, which were taken using two different camera types and technologies. For high-quality images, the photoID method had a false rejection rate of 0.76% compared to 1.90% for VIEs. Using a comparable dataset of lower-quality images, the false rejection rate was much higher (15.9%). Photo matching scores were negatively correlated with time between captures, suggesting that evolving natural marks could increase misidentification rates in longer term capture-recapture studies. Our study demonstrates the utility of large-scale capture-recapture using photo identification methods for Eurycea and other species with stable natural marks that can be reliably photographed. PMID:23555669

  10. Evolutionary Computation for the Identification of Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2009-01-01

    Over the past several years the Center for Evolutionary Computation and Automated Design at the Jet Propulsion Laboratory has developed a technique based on Evolutionary Computational Methods (ECM) that allows for the automated optimization of complex computationally modeled systems. An important application of this technique is for the identification of emergent behaviors in autonomous systems. Mobility platforms such as rovers or airborne vehicles are now being designed with autonomous mission controllers that can find trajectories over a solution space that is larger than can reasonably be tested. It is critical to identify control behaviors that are not predicted and can have surprising results (both good and bad). These emergent behaviors need to be identified, characterized and either incorporated into or isolated from the acceptable range of control characteristics. We use cluster analysis of automatically retrieved solutions to identify isolated populations of solutions with divergent behaviors.
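    A sketch of the final step under stated assumptions (each evolved solution is summarized as a row of behavior metrics; k-means with a small-cluster rule stands in for the unspecified cluster analysis):

        import numpy as np
        from sklearn.cluster import KMeans

        def flag_divergent(solutions, n_clusters=5, isolation_quantile=0.1):
            """Cluster evolved solutions and flag small, isolated clusters as
            candidate emergent behaviors worth human inspection."""
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(solutions)
            counts = np.bincount(km.labels_, minlength=n_clusters)
            small = counts <= max(1, int(isolation_quantile * len(solutions)))
            return [i for i in range(n_clusters) if small[i]], km.labels_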

  11. The Influence of an Individual's Cognitive Style upon Concept Identification at Varying Levels of Complexity.

    ERIC Educational Resources Information Center

    Davis, J. K.

    This experiment examined the extent to which an individual's cognitive style influenced his performance on concept identification problems of varying levels of complexity. Cognitive style was operationally defined in terms of an individual's performance on the Hidden Figures Test (HFT). It was assumed that subjects (Ss) able to identify the hidden…

  12. Levels of Conformity to Islamic Values and the Process of Identification.

    ERIC Educational Resources Information Center

    Nassir, Balkis

    This study was conducted to measure the conformity levels and the identification process among university women students in an Islamic culture. Identity/conformity tests and costume identity tests were administered to 129 undergraduate female students at King Abdulaziz University in Saudi Arabia. The Photographic Costume Identity Test and the…

  13. Computational Identification of MoRFs in Protein Sequences Using Hierarchical Application of Bayes Rule

    PubMed Central

    Malhis, Nawar; Wong, Eric T. C.; Nassar, Roy; Gsponer, Jörg

    2015-01-01

    Motivation Intrinsically disordered regions of proteins play an essential role in the regulation of various biological processes. Key to their regulatory function is often the binding to globular protein domains via sequence elements known as molecular recognition features (MoRFs). Development of computational tools for the identification of candidate MoRF locations in amino acid sequences is an important task and an area of growing interest. Given the relative sparseness of MoRFs in protein sequences, the accuracy of the available MoRF predictors is often inadequate for practical usage, which leaves a significant need and room for improvement. In this work, we introduce MoRFCHiBi_Web, which predicts MoRF locations in protein sequences with higher accuracy compared to current MoRF predictors. Methods Three distinct and largely independent property scores are computed with component predictors and then combined to generate the final MoRF propensity scores. The first score reflects the likelihood of sequence windows to harbour MoRFs and is based on amino acid composition and sequence similarity information. It is generated by MoRFCHiBi using small windows of up to 40 residues in size. The second score identifies long stretches of protein disorder and is generated by ESpritz with the DisProt option. Lastly, the third score reflects residue conservation and is assembled from PSSM files generated by PSI-BLAST. These propensity scores are processed and then hierarchically combined using Bayes rule to generate the final MoRFCHiBi_Web predictions. Results MoRFCHiBi_Web was tested on three datasets. Results show that MoRFCHiBi_Web outperforms previously developed predictors by generating less than half the false positive rate for the same true positive rate at practical threshold values. This level of accuracy paired with its relatively high processing speed makes MoRFCHiBi_Web a practical tool for MoRF prediction. Availability http://morf.chibi.ubc.ca:8080/morf/.
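    The hierarchical combination can be pictured with a naive-Bayes-style sketch (this assumes conditional independence of the component scores and a single shared prior; the published combination is more elaborate):

        def combine_bayes(p_scores, prior=0.05):
            """Combine per-residue propensity scores with Bayes rule.
            Each score is read as P(MoRF | that predictor's evidence);
            posterior odds = prior odds times the per-predictor likelihood ratios."""
            prior_odds = prior / (1.0 - prior)
            odds = prior_odds
            for p in p_scores:
                p = min(max(p, 1e-6), 1.0 - 1e-6)        # guard against exact 0 or 1
                odds *= (p / (1.0 - p)) / prior_odds     # remove double-counted prior
            return odds / (1.0 + odds)

        print(combine_bayes([0.6, 0.7, 0.3]))            # three component scores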

  14. Automatic de-identification of electronic medical records using token-level and character-level conditional random fields.

    PubMed

    Liu, Zengjian; Chen, Yangxin; Tang, Buzhou; Wang, Xiaolong; Chen, Qingcai; Li, Haodi; Wang, Jingfeng; Deng, Qiwen; Zhu, Suisong

    2015-12-01

    De-identification, identifying and removing all protected health information (PHI) present in clinical data including electronic medical records (EMRs), is a critical step in making clinical data publicly available. The 2014 i2b2 (Center of Informatics for Integrating Biology and Bedside) clinical natural language processing (NLP) challenge sets up a track for de-identification (track 1). In this study, we propose a hybrid system based on both machine learning and rule approaches for the de-identification track. In our system, PHI instances are first identified by two (token-level and character-level) conditional random fields (CRFs) and a rule-based classifier, and then are merged by some rules. Experiments conducted on the i2b2 corpus show that our system submitted for the challenge achieves the highest micro F-scores of 94.64%, 91.24% and 91.63% under the "token", "strict" and "relaxed" criteria respectively, which is among top-ranked systems of the 2014 i2b2 challenge. After integrating some refined localization dictionaries, our system is further improved with F-scores of 94.83%, 91.57% and 91.95% under the "token", "strict" and "relaxed" criteria respectively. PMID:26122526
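    A minimal sketch of the merging step (a hypothetical union rule that keeps the longer of two overlapping spans; the system's actual merge rules are more refined). PHI spans are given as sets of (start, end, label) tuples over character offsets:

        def merge_phi(token_spans, char_spans, rule_spans):
            """Merge PHI spans proposed by a token-level CRF, a character-level
            CRF, and a rule-based classifier: take the union, extending a span
            whenever the next one overlaps it."""
            merged = []
            for span in sorted(token_spans | char_spans | rule_spans):
                if merged and span[0] <= merged[-1][1]:            # overlap
                    last = merged.pop()
                    merged.append((last[0], max(last[1], span[1]), last[2]))
                else:
                    merged.append(span)
            return merged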

  15. Identifying the Computer Competency Levels of Recreation Department Undergraduates

    ERIC Educational Resources Information Center

    Zorba, Erdal

    2011-01-01

    Computer-based and web-based applications serve as major instructional tools to increase undergraduates' motivation at school. In the recreation field, the use of computer- and internet-based recreational applications has become more prevalent as a way to present visual and interactive entertainment activities. Recreation department undergraduates…

  16. Improving protein identification from peptide mass fingerprinting through a parameterized multi-level scoring algorithm and an optimized peak detection.

    PubMed

    Gras, R; Müller, M; Gasteiger, E; Gay, S; Binz, P A; Bienvenut, W; Hoogland, C; Sanchez, J C; Bairoch, A; Hochstrasser, D F; Appel, R D

    1999-12-01

    We have developed a new algorithm to identify proteins by means of peptide mass fingerprinting. Starting from the matrix-assisted laser desorption/ionization-time-of-flight (MALDI-TOF) spectra and environmental data such as species, isoelectric point and molecular weight, as well as chemical modifications or number of missed cleavages of a protein, the program performs a fully automated identification of the protein. The first step is a peak detection algorithm, which allows precise and fast determination of peptide masses, even if the peaks are of low intensity or they overlap. In the second step the masses and environmental data are used by the identification algorithm to search in protein sequence databases (SWISS-PROT and/or TrEMBL) for protein entries that match the input data. Consequently, a list of candidate proteins is selected from the database, and a score calculation provides a ranking according to the quality of the match. To define the most discriminating scoring calculation we analyzed the respective role of each parameter in two directions. The first is based on filtering and exploratory effects, while the second focuses on the levels where the parameters intervene in the identification process. Thus, according to our analysis, all input parameters contribute to the score, however with different weights. Since it is difficult to estimate the weights in advance, they have been computed with a genetic algorithm, using a training set of 91 protein spectra with their environmental data. We tested the resulting scoring calculation on a test set of ten proteins and compared the identification results with those of other peptide mass fingerprinting programs. PMID:10612280
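    The core matching step can be sketched as follows (the ppm tolerance and the two weights are placeholders for the trained weighting scheme described above):

        def pmf_score(measured, theoretical, tol_ppm=100.0, weights=(1.0, 1.0)):
            """Score a candidate database protein against a MALDI-TOF peak list:
            count measured peptide masses matched within a ppm tolerance by the
            protein's theoretical digest, then weight count and coverage."""
            matched = sum(
                any(abs(m - t) / t * 1e6 <= tol_ppm for t in theoretical)
                for m in measured
            )
            coverage = matched / max(len(measured), 1)
            w_match, w_cov = weights
            return w_match * matched + w_cov * coverage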

  17. Social Identification and Interpersonal Communication in Computer-Mediated Communication: What You Do versus Who You Are in Virtual Groups

    ERIC Educational Resources Information Center

    Wang, Zuoming; Walther, Joseph B.; Hancock, Jeffrey T.

    2009-01-01

    This study investigates the influence of interpersonal communication and intergroup identification on members' evaluations of computer-mediated groups. Participants (N= 256) in 64 four-person groups interacted through synchronous computer chat. Subgroup assignments to minimal groups instilled significantly greater in-group versus out-group…

  18. Computational aspects of hot-wire identification of thermal conductivity and diffusivity under high temperature

    NASA Astrophysics Data System (ADS)

    Vala, Jiří; Jarošová, Petra

    2016-07-01

    Development of advanced materials resistant to high temperature, needed namely for the design of heat storage for low-energy and passive buildings, requires simple, inexpensive and reliable methods of identifying their temperature-sensitive thermal conductivity and diffusivity, covering both a well-considered experimental setup and the implementation of robust and effective computational algorithms. Special geometrical configurations offer the possibility of quasi-analytical evaluation of temperature development for direct problems, whereas inverse problems of simultaneous evaluation of thermal conductivity and diffusivity must be handled carefully, using least-squares (minimum variance) arguments. This paper demonstrates a proper mathematical and computational approach to such a model problem, exploiting the radial symmetry of hot-wire measurements, including its numerical implementation.
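    For an ideal line source, the hot-wire temperature rise has the quasi-analytical form dT(r, t) = q/(4*pi*lambda) * E1(r^2/(4*a*t)), and the inverse problem is a least-squares fit of conductivity lambda and diffusivity a. A sketch under assumed probe geometry and heating power:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import exp1            # exponential integral E1

        R, Q = 1.0e-3, 15.0                       # sensor offset [m], line power [W/m] (assumed)

        def hot_wire_rise(t, lam, a):
            """Ideal line-source temperature rise at radius R and time t."""
            return Q / (4 * np.pi * lam) * exp1(R**2 / (4 * a * t))

        def identify(t_meas, dT_meas):
            """Least-squares identification of (lambda, a) from a heating record."""
            p0 = (1.0, 1e-6)                      # initial guess for the solver
            (lam, a), _ = curve_fit(hot_wire_rise, t_meas, dT_meas, p0=p0)
            return lam, a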

  19. Identification of oxygen-related midgap level in GaAs

    NASA Technical Reports Server (NTRS)

    Lagowski, J.; Lin, D. G.; Gatos, H. C.; Aoyama, T.

    1984-01-01

    An oxygen-related deep level ELO was identified in GaAs employing Bridgman-grown crystals with controlled oxygen doping. The activation energy of ELO is almost the same as that of the dominant midgap level: EL2. This fact impedes the identification of ELO by standard deep level transient spectroscopy. However, it was found that the electron capture cross section of ELO is about four times greater than that of EL2. This characteristic served as the basis for the separation and quantitative investigation of ELO employing detailed capacitance transient measurements in conjunction with reference measurements on crystals grown without oxygen doping and containing only EL2.

  20. Computational Prediction of Electron Ionization Mass Spectra to Assist in GC/MS Compound Identification.

    PubMed

    Allen, Felicity; Pon, Allison; Greiner, Russ; Wishart, David

    2016-08-01

    We describe a tool, competitive fragmentation modeling for electron ionization (CFM-EI) that, given a chemical structure (e.g., in SMILES or InChI format), computationally predicts an electron ionization mass spectrum (EI-MS) (i.e., the type of mass spectrum commonly generated by gas chromatography mass spectrometry). The predicted spectra produced by this tool can be used for putative compound identification, complementing measured spectra in reference databases by expanding the range of compounds able to be considered when availability of measured spectra is limited. The tool extends CFM-ESI, a recently developed method for computational prediction of electrospray tandem mass spectra (ESI-MS/MS), but unlike CFM-ESI, CFM-EI can handle odd-electron ions and isotopes and incorporates an artificial neural network. Tests on EI-MS data from the NIST database demonstrate that CFM-EI is able to model fragmentation likelihoods in low-resolution EI-MS data, producing predicted spectra whose dot product scores are significantly better than full enumeration "bar-code" spectra. CFM-EI also outperformed previously reported results for MetFrag, MOLGEN-MS, and Mass Frontier on one compound identification task. It also outperformed MetFrag in a range of other compound identification tasks involving a much larger data set, containing both derivatized and nonderivatized compounds. While replicate EI-MS measurements of chemical standards are still a more accurate point of comparison, CFM-EI's predictions provide a much-needed alternative when no reference standard is available for measurement. CFM-EI is available at https://sourceforge.net/projects/cfm-id/ for download and http://cfmid.wishartlab.com as a web service. PMID:27381172
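    Spectrum comparison by dot product can be sketched as follows (the m/z and intensity weighting exponents are typical NIST-style choices, not necessarily those used in the CFM-EI evaluation):

        import numpy as np

        def spectrum_dot(spec_a, spec_b, n_bins=2000, mz_w=3.0, int_w=0.6):
            """Weighted cosine (dot-product) similarity between two EI mass
            spectra given as {m/z: intensity} dicts, binned to unit m/z."""
            def vec(spec):
                v = np.zeros(n_bins)
                for mz, inten in spec.items():
                    v[int(round(mz)) % n_bins] += (mz ** mz_w) * (inten ** int_w)
                return v
            a, b = vec(spec_a), vec(spec_b)
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom else 0.0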

  1. Matrix Transformations in Lower Level Computer Graphics Course.

    ERIC Educational Resources Information Center

    Ying, Dao-ning

    1982-01-01

    Presents computer programs (Applesoft Basic) for: (1) 2-D rotation about any point through any angle; (2) matrix transformation for 2-D rotation; (3) 3-D translation; (4) 3-D rotation; and (5) hyperboloid rotated in 2-D space. Includes background information and sample output for the matrix transformation subroutines. (JN)
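    The listed 2-D rotation about any point is the composition translate-rotate-translate; presumably the BASIC subroutines implement the same matrix product. An equivalent sketch in Python with homogeneous matrices:

        import numpy as np

        def rotate_about(points, cx, cy, theta):
            """Rotate 2-D points about (cx, cy) by angle theta (radians):
            translate the pivot to the origin, rotate, translate back."""
            c, s = np.cos(theta), np.sin(theta)
            T_in  = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]])
            R     = np.array([[c, -s, 0], [s,  c, 0], [0, 0, 1]])
            T_out = np.array([[1, 0,  cx], [0, 1,  cy], [0, 0, 1]])
            M = T_out @ R @ T_in
            pts = np.c_[points, np.ones(len(points))]     # homogeneous coordinates
            return (pts @ M.T)[:, :2]

        # (2, 1) rotated 90 degrees about (1, 1) lands at (1, 2).
        print(rotate_about(np.array([[2.0, 1.0]]), 1.0, 1.0, np.pi / 2))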

  2. Efficient Computation of the Topology of Level Sets

    SciTech Connect

    Pascucci, V; Cole-McLaughlin, K

    2002-07-19

    This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve on the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computing the Contour Tree in linear time in many practical cases, when t = O(n^(1-e)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.

  3. The Role of Language Comprehension and Computation in Mathematical Word Problem Solving among Students with Different Levels of Computation Achievement

    ERIC Educational Resources Information Center

    Guerriero, Tara Stringer

    2010-01-01

    The purpose of this study was to examine how selected linguistic components (including consistency of relational terms and extraneous information) impact performance at each stage of mathematical word problem solving (comprehension, equation construction, and computation accuracy) among students with different levels of computation achievement. …

  4. High level waste storage tanks 242-A evaporator standards/requirement identification document

    SciTech Connect

    Biebesheimer, E.

    1996-01-01

    This document, the Standards/Requirements Identification Document (S/RIDS) for the subject facility, represents the necessary and sufficient requirements to provide an adequate level of protection of the worker, public health and safety, and the environment. It lists those source documents from which requirements were extracted, and those requirements documents considered but from which no requirements were taken. Documents considered as source documents included State and Federal Regulations, DOE Orders, and DOE Standards.

  5. Assigning unique identification numbers to new user accounts and groups in a computing environment with multiple registries

    DOEpatents

    DeRobertis, Christopher V.; Lu, Yantian T.

    2010-02-23

    A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
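    The claimed behavior can be sketched as follows (the registry objects with existing_ids() and create_account() methods are hypothetical stand-ins for the operating system's registry interfaces):

        def assign_unique_id(registries, target, candidate=1000, limit=65535):
            """Assign an identifier unique across every configured registry:
            advance the candidate past any value already taken anywhere, then
            create the account in the target registry."""
            taken = set().union(*(r.existing_ids() for r in registries))
            while candidate in taken:
                candidate += 1
                if candidate > limit:
                    raise RuntimeError("identifier space exhausted")
            target.create_account(candidate)
            return candidate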

  6. An artificial-neural-network method for the identification of saturated turbogenerator parameters based on a coupled finite-element/state-space computational algorithm

    SciTech Connect

    Chaudhry, S.R.; Ahmed-Zaid, S.; Demerdash, N.A.

    1995-12-01

    An artificial neural network (ANN) is used in the identification of saturated synchronous machine parameters under diverse operating conditions. The training data base for the ANN is generated by a time-stepping coupled finite-element/state-space (CFE-SS) modeling technique which is used in the computation of the saturated parameters of a 20-kV, 733-MVA, 0.85 pf (lagging) turbogenerator at discrete load points in the P-Q capability plane for three different levels of terminal voltage. These computed parameters constitute a learning data base for a multilayer ANN structure which is successfully trained using the back-propagation algorithm. Results indicate that the trained ANN can identify saturated machine reactances for arbitrary load points in the P-Q plane with an error less than 2% of those values obtained directly from the CFE-SS algorithm. Thus, significant savings in computational time are obtained in such parameter computation tasks.

  7. A conceptual framework of computations in mid-level vision

    PubMed Central

    Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P.

    2014-01-01

    If a picture is worth a thousand words, as an English idiom goes, what should those words—or, rather, descriptors—capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model's behavior and its limitations. PMID:25566044

  8. New Fe I Level Energies and Line Identifications from Stellar Spectra

    SciTech Connect

    Peterson, Ruth C.; Kurucz, Robert L.

    2015-01-01

    The spectrum of the Fe I atom is critical to many areas of astrophysics and beyond. Measurements of the energies of its high-lying levels remain woefully incomplete, however, despite extensive laboratory and solar analysis. In this work, we use high-resolution archival absorption-line ultraviolet and optical spectra of stars whose warm temperatures favor moderate Fe I excitation. We derive the energy for a particular upper level in Kurucz's semiempirical calculations by adopting a trial value that yields the same wavelength for a given line predicted to be about as strong as that of a strong unidentified spectral line observed in the stellar spectra, then checking the new wavelengths of other strong predicted transitions that share the same upper level for coincidence with other strong observed unidentified lines. To date, this analysis has provided the upper energies of 66 Fe I levels. Many new energy levels are higher than those accessible to laboratory experiments; several exceed the Fe I ionization energy. These levels provide new identifications for over 2000 potentially detectable lines. Almost all of the new levels of odd parity include UV lines that were detected but unclassified in laboratory Fe I absorption spectra, providing an external check on the energy values. We motivate and present the procedure, provide the resulting new energy levels and their uncertainties, list all the potentially detectable UV and optical new Fe I line identifications and their gf values, point out new lines of astrophysical interest, and discuss the prospects for additional Fe I energy level determinations.
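    The arithmetic behind the matching is simple: a transition's vacuum wavelength in Angstroms is 10^8 divided by the level-energy difference in cm^-1. A sketch of the trial-energy check (the tolerance is a placeholder, and air/vacuum wavelength conversion is glossed over):

        def vacuum_wavelength_angstrom(e_upper_cm1, e_lower_cm1):
            """lambda [Angstrom] = 1e8 / (E_upper - E_lower), energies in cm^-1."""
            return 1.0e8 / (e_upper_cm1 - e_lower_cm1)

        def match_unidentified(e_trial, lower_levels, observed, tol=0.05):
            """For a trial upper-level energy, predict wavelengths to known lower
            levels and collect observed-but-unidentified lines within tolerance."""
            return [
                (e_low, lam) for e_low in lower_levels
                for lam in [vacuum_wavelength_angstrom(e_trial, e_low)]
                if any(abs(lam - obs) <= tol for obs in observed)
            ]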

  9. Assessing Pre-Service Teachers' Computer Phobia Levels in Terms of Gender and Experience, Turkish Sample

    ERIC Educational Resources Information Center

    Ursavas, Omer Faruk; Karal, Hasan

    2009-01-01

    In this study it is aimed to determine the level of pre-service teachers' computer phobia. Whether or not computer phobia meaningfully varies statistically according to gender and computer experience has been tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize/Turkey. Data in the study were…

  10. Single-board computer based control system for a portable Raman device with integrated chemical identification

    NASA Astrophysics Data System (ADS)

    Mobley, Joel; Cullum, Brian M.; Wintenberg, Alan L.; Shane Frank, S.; Maples, Robert A.; Stokes, David L.; Vo-Dinh, Tuan

    2004-06-01

    We report the development of a battery-powered portable chemical identification device for field use consisting of an acousto-optic tunable filter (AOTF)-based Raman spectrometer with integrated data processing and analysis software. The various components and custom circuitry are integrated into a self-contained instrument by control software that runs on an embedded single-board computer (SBC), which communicates with the various instrument modules through a 48-line bidirectional TTL bus. The user interacts with the instrument via a touch-sensitive liquid crystal display unit (LCD) that provides soft buttons for user control as well as visual feedback (e.g., spectral plots, stored data, instrument settings, etc.) from the instrument. The control software manages all operational aspects of the instrument with the exception of the power management module that is run by embedded firmware. The SBC-based software includes both automated and manual library searching capabilities, permitting rapid identification of samples in the field. The use of the SBC in tandem with the LCD touchscreen for interfacing and control provides the instrument with a great deal of flexibility as its function can be customized to specific users or tasks via software modifications alone. The instrument, as currently configured, can be operated as a research-grade Raman spectrometer for scientific applications and as a "black-box" chemical identification system for field use. The instrument can acquire 198-point spectra over a spectral range of 238-1620 cm-1, perform a library search, and display the results in less than 14 s. The operating modes of the instrument are demonstrated illustrating the utility and flexibility afforded the system by the SBC-LCD control module.

  11. Computed Tomography (CT) Scanning Facilitates Early Identification of Neonatal Cystic Fibrosis Piglets

    PubMed Central

    Guillon, Antoine; Chevaleyre, Claire; Barc, Celine; Berri, Mustapha; Adriaensen, Hans; Lecompte, François; Villemagne, Thierry; Pezant, Jérémy; Delaunay, Rémi; Moënne-Loccoz, Joseph; Berthon, Patricia; Bähr, Andrea; Wolf, Eckhard; Klymiuk, Nikolai; Attucci, Sylvie; Ramphal, Reuben; Sarradin, Pierre; Buzoni-Gatel, Dominique; Si-Tahar, Mustapha; Caballero, Ignacio

    2015-01-01

    Background Cystic Fibrosis (CF) is the most prevalent autosomal recessive disease in the Caucasian population. A cystic fibrosis transmembrane conductance regulator knockout (CFTR-/-) pig that displays most of the features of the human CF disease has recently been developed. However, CFTR-/- pigs present a 100% prevalence of meconium ileus that leads to death in the first hours after birth, requiring rapid diagnosis and surgical intervention to relieve the intestinal obstruction. Identification of CFTR-/- piglets is usually performed by PCR genotyping, a procedure that takes 4 to 6 h. Here, we aimed to develop a procedure for rapid identification of CFTR-/- piglets that allows placing them under intensive care soon after birth and proceeding immediately with the surgical correction. Methods and Principal Findings Male and female CFTR+/- pigs were crossed and the progeny were examined by computed tomography (CT) scan to detect the presence of meconium ileus and facilitate a rapid post-natal surgical intervention. Genotype was confirmed by PCR. CT scan presented a 94.4% sensitivity in diagnosing CFTR-/- piglets. Diagnosis by CT scan reduced the birth-to-surgery time from a minimum of 10 h down to a minimum of 2.5 h and increased the survival of CFTR-/- piglets to a maximum of 13 days post-surgery, as opposed to just 66 h with later surgery. Conclusion CT scan imaging of meconium ileus is an accurate method for rapid identification of CFTR-/- piglets. Early CT detection of meconium ileus may help to extend the lifespan of CFTR-/- piglets and thus improve experimental research on CF, still an incurable disease. PMID:26600426

  12. Young Children's Perspectives on Immigration within the Italian School Context: An Analysis of the Identification Level of Integration

    ERIC Educational Resources Information Center

    Pellegrini, Lucia

    2010-01-01

    The research explores the level of integration of a group of young children attending a primary school in Italy, a new immigration country, focusing on their identification of racial, ethnic and cultural pluralism. Specifically the study takes into account two processes of identification: the mechanism of attributions and the expression of…

  13. Identification of Yersinia enterocolitica at the Species and Subspecies Levels by Fourier Transform Infrared Spectroscopy

    PubMed Central

    Kuhm, Andrea Elisabeth; Suter, Daniel; Felleisen, Richard; Rau, Jörg

    2009-01-01

    Yersinia enterocolitica and other Yersinia species, such as Y. pseudotuberculosis, Y. bercovieri, and Y. intermedia, were differentiated using Fourier transform infrared spectroscopy (FT-IR) combined with artificial neural network analysis. A set of well defined Yersinia strains from Switzerland and Germany was used to create a method for FT-IR-based differentiation of Yersinia isolates at the species level. The isolates of Y. enterocolitica were also differentiated by FT-IR into the main biotypes (biotypes 1A, 2, and 4) and serotypes (serotypes O:3, O:5, O:9, and “non-O:3, O:5, and O:9”). For external validation of the constructed methods, independently obtained isolates of different Yersinia species were used. A total of 79.9% of Y. enterocolitica sensu stricto isolates were identified correctly at the species level. The FT-IR analysis allowed the separation of all Y. bercovieri, Y. intermedia, and Y. rohdei strains from Y. enterocolitica, which could not be differentiated by the API 20E test system. The probability for correct biotype identification of Y. enterocolitica isolates was 98.3% (41 externally validated strains). For correct serotype identification, the probability was 92.5% (42 externally validated strains). In addition, the presence or absence of the ail gene, one of the main pathogenicity markers, was demonstrated using FT-IR. The probability for correct identification of isolates concerning the ail gene was 98.5% (51 externally validated strains). This indicates that it is possible to obtain information about genus, species, and in the case of Y. enterocolitica also subspecies type with a single measurement. Furthermore, this is the first example of the identification of specific pathogenicity using FT-IR. PMID:19617388

  15. An integrated transcriptomic and computational analysis for biomarker identification in gastric cancer

    PubMed Central

    Cui, Juan; Chen, Yunbo; Chou, Wen-Chi; Sun, Liankun; Chen, Li; Suo, Jian; Ni, Zhaohui; Zhang, Ming; Kong, Xiaoxia; Hoffman, Lisabeth L.; Kang, Jinsong; Su, Yingying; Olman, Victor; Johnson, Darryl; Tench, Daniel W.; Amster, I. Jonathan; Orlando, Ron; Puett, David; Li, Fan; Xu, Ying

    2011-01-01

    This report describes an integrated study on identification of potential markers for gastric cancer in patients’ cancer tissues and sera based on: (i) genome-scale transcriptomic analyses of 80 paired gastric cancer/reference tissues and (ii) computational prediction of blood-secretory proteins supported by experimental validation. Our findings show that: (i) 715 and 150 genes exhibit significantly differential expressions in all cancers and early-stage cancers versus reference tissues, respectively; and a substantial percentage of the alteration is found to be influenced by age and/or by gender; (ii) 21 co-expressed gene clusters have been identified, some of which are specific to certain subtypes or stages of the cancer; (iii) the top-ranked gene signatures give better than 94% classification accuracy between cancer and the reference tissues, some of which are gender-specific; and (iv) 136 of the differentially expressed genes were predicted to have their proteins secreted into blood, 81 of which were detected experimentally in the sera of 13 validation samples and 29 found to have differential abundances in the sera of cancer patients versus controls. Overall, the novel information obtained in this study has led to identification of promising diagnostic markers for gastric cancer and can benefit further analyses of the key (early) abnormalities during its development. PMID:20965966

  16. Computational methods for the identification of spatially varying stiffness and damping in beams

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1986-01-01

    A numerical approximation scheme for the estimation of functional parameters in Euler-Bernoulli models for the transverse vibration of flexible beams with tip bodies is developed. The method permits the identification of spatially varying flexural stiffness and Voigt-Kelvin viscoelastic damping coefficients which appear in the hybrid system of ordinary and partial differential equations and boundary conditions describing the dynamics of such structures. An inverse problem is formulated as a least squares fit to data subject to constraints in the form of a vector system of abstract first order evolution equations. Spline-based finite element approximations are used to finite dimensionalize the problem. Theoretical convergence results are given and numerical studies carried out on both conventional (serial) and vector computers are discussed.

  17. Features preferred in identification system based on computer mouse movements

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Kolařík, Martin

    2016-06-01

    Biometric identification systems build features describing people from various data. Usually it is not known in advance which features should be chosen, so a dedicated process called feature selection is applied to resolve this uncertainty. The relevant features selected by this process then describe the people in the system as well as possible. It is likely that the relevance of the selected features also means that they capture the important aspects of the measured behavior and partly reveal how that behavior works. This paper builds on this idea and evaluates the results of many feature selection runs in a system that identifies people by their computer mouse movements. It was found that the most frequently selected features recur between runs, and that these features describe similar aspects of the movements.
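    The evaluation reduces to tallying how often each feature survives repeated selection runs. A small sketch with hypothetical feature names:

        from collections import Counter

        def feature_frequencies(runs):
            """Tally how often each mouse-movement feature is chosen across many
            feature-selection runs; persistently selected features are taken as
            the ones that describe the behavior."""
            counts = Counter(f for selected in runs for f in selected)
            return counts.most_common()

        runs = [{"pause_time", "curvature"}, {"curvature", "speed"}, {"curvature"}]
        print(feature_frequencies(runs))   # "curvature" recurs in every run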

  18. Computer game as a tool for training the identification of phonemic length.

    PubMed

    Pennala, Riitta; Richardson, Ulla; Ylinen, Sari; Lyytinen, Heikki; Martin, Maisa

    2014-12-01

    Computer-assisted training of Finnish phonemic length was conducted with 7-year-old Russian-speaking second-language learners of Finnish. Phonemic length plays a different role in these two languages. The training included game activities with two- and three-syllable word and pseudo-word minimal pairs with prototypical vowel durations. The lowest accuracy scores were recorded for two-syllable words. Accuracy scores were higher for the minimal pairs with larger rather than smaller differences in duration. Accuracy scores were lower for long duration than for short duration. The ability to identify quantity degree was generalized to stimuli used in the identification test in two of the children. Ideas for improving the game are introduced. PMID:23841573

  19. VISPA: a computational pipeline for the identification and analysis of genomic vector integration sites.

    PubMed

    Calabria, Andrea; Leo, Simone; Benedicenti, Fabrizio; Cesana, Daniela; Spinozzi, Giulio; Orsini, Massimilano; Merella, Stefania; Stupka, Elia; Zanetti, Gianluigi; Montini, Eugenio

    2014-01-01

    The analysis of the genomic distribution of viral vector genomic integration sites is a key step in hematopoietic stem cell-based gene therapy applications, making it possible to assess both the safety and the efficacy of the treatment and to study basic aspects of hematopoiesis and stem cell biology. Identifying vector integration sites requires ad-hoc bioinformatics tools with stringent requirements in terms of computational efficiency, flexibility, and usability. We developed VISPA (Vector Integration Site Parallel Analysis), a pipeline for automated integration site identification and annotation based on a distributed environment with a simple Galaxy web interface. VISPA was successfully used for the bioinformatics analysis of the follow-up of two lentiviral vector-based hematopoietic stem-cell gene therapy clinical trials. Our pipeline provides a reliable and efficient tool to assess the safety and efficacy of integrating vectors in clinical settings. PMID:25342980

  20. NLSCIDNT user's guide: maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
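    Under Gaussian measurement noise, maximum likelihood reduces to (weighted) least squares, so the scheme can be sketched with SciPy's Levenberg-Marquardt driver (one hypothetical state and two parameters here, instead of the program's 21 states and 40 measurement variables):

        import numpy as np
        from scipy.optimize import least_squares

        def simulate(theta, u, dt, x0=0.0):
            """Euler-integrate a one-state model xdot = a*x + b*u (illustrative)."""
            a, b = theta
            x, xk = np.empty(u.size), x0
            for k, uk in enumerate(u):
                xk += dt * (a * xk + b * uk)
                x[k] = xk
            return x

        def identify(u, y_meas, dt, theta0=(-1.0, 1.0)):
            """Minimize output residuals; method='lm' selects Levenberg-Marquardt,
            which blends steepest descent (far from the minimum) with a
            Gauss-Newton step (near it), as described in the abstract."""
            residual = lambda th: simulate(th, u, dt) - y_meas
            return least_squares(residual, theta0, method="lm").x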

  1. The identification of a selective dopamine D2 partial agonist, D3 antagonist displaying high levels of brain exposure.

    PubMed

    Holmes, Ian P; Blunt, Richard J; Lorthioir, Olivier E; Blowers, Stephen M; Gribble, Andy; Payne, Andrew H; Stansfield, Ian G; Wood, Martyn; Woollard, Patrick M; Reavill, Charlie; Howes, Claire M; Micheli, Fabrizio; Di Fabio, Romano; Donati, Daniele; Terreni, Silvia; Hamprecht, Dieter; Arista, Luca; Worby, Angela; Watson, Steve P

    2010-03-15

    The identification of a highly selective D2 partial agonist, D3 antagonist tool molecule which demonstrates high levels of brain exposure and selectivity against an extensive range of dopamine, serotonin, adrenergic, histamine, and muscarinic receptors is described. PMID:20153647

  2. Identification of Nonlinear Micron-Level Mechanics for a Precision Deployable Joint

    NASA Technical Reports Server (NTRS)

    Bullock, S. J.; Peterson, L. D.

    1994-01-01

    The experimental identification of micron-level nonlinear joint mechanics and dynamics for a pin-clevis joint used in a precision, adaptive, deployable space structure is investigated. The force-state mapping method is used to identify the behavior of the joint under a preload. The results of applying a single tension-compression cycle to the joint under a tensile preload are presented. The observed micron-level behavior is highly nonlinear and involves all six rigid-body motion degrees of freedom of the joint. It also suggests that, at micron levels of motion, modelling of the joint mechanics and dynamics must include the interactions between all internal components, such as the pin, bushings, and the joint node.
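    Force-state mapping regresses measured force on functions of the displacement and rate states. A one-axis sketch (the cubic-plus-friction basis is chosen for illustration, not taken from the paper):

        import numpy as np

        def force_state_fit(x, xdot, f_meas):
            """Fit F = k1*x + k3*x**3 + c1*xdot + cf*sign(xdot) by least squares,
            capturing stiffening and friction-like nonlinearity in one axis."""
            basis = np.column_stack([x, x**3, xdot, np.sign(xdot)])
            coeffs, *_ = np.linalg.lstsq(basis, f_meas, rcond=None)
            return dict(zip(["k1", "k3", "c1", "cf"], coeffs))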

  3. Crop species identification using machine vision of computer extracted individual leaves

    NASA Astrophysics Data System (ADS)

    Camargo Neto, João; Meyer, George E.

    2005-11-01

    An unsupervised method for plant species identification was developed which uses computer-extracted individual whole leaves from color images of crop canopies. Green canopies were isolated from soil/residue backgrounds using a modified Excess Green and Excess Red separation method. Connected components of isolated green regions of interest were changed into pixel fragments using the Gustafson-Kessel fuzzy clustering method. The fragments were reassembled as individual leaves using a genetic optimization algorithm and a fitness method. Pixels of whole leaves were then analyzed using elliptic Fourier shape analysis and Haralick's classical textural feature analysis. A binary template was constructed to represent each selected leaf region of interest. Elliptic Fourier descriptors were generated from a chain encoding of the leaf boundary. Leaf template orientation was corrected by rotating each extracted leaf to a standard horizontal position, using information provided by the first harmonic set of coefficients. Textural features were computed from the grayscale co-occurrence matrix of the leaf pixel set. Standardized leaf orientation significantly improved the leaf textural venation results. Principal component analysis from SAS® was used to select the best Fourier descriptors and textural indices. Indices of local homogeneity and entropy were found to contribute to improved classification rates. A SAS classification model was developed and correctly classified 83% of redroot pigweed, 100% of sunflower, 83% of soybean, and 73% of velvetleaf species. An overall plant species correct classification rate of 86% was attained.

  4. A Computational Model for the Identification of Biochemical Pathways in the Krebs Cycle

    SciTech Connect

    Oliveira, Joseph S.; Bailey, Colin G.; Jones-Oliveira, Janet B.; Dixon, David A.; Gull, Dean W.; Chandler, Mary L.

    2003-03-01

    We have applied an algorithmic methodology which provably decomposes any complex network into a complete family of principal subcircuits to study the minimal circuits that describe the Krebs cycle. Every operational behavior that the network is capable of exhibiting can be represented by some combination of these principal subcircuits, and this computational decomposition is linearly efficient. We have developed a computational model that can be applied to biochemical reaction systems and that accurately renders pathways of such reactions via directed hypergraphs (Petri nets). We have applied the model to the citric acid cycle (Krebs cycle). The Krebs cycle, which oxidizes the acetyl group of acetyl CoA to CO2 and reduces NAD and FAD to NADH and FADH2, is a complex interacting set of nine subreaction networks. The Krebs cycle was selected because of its familiarity to the biological community and because it exhibits enough complexity to make it an interesting introduction for this novel analytic approach. This study validates the algorithmic methodology for the identification of significant biochemical signaling subcircuits, based solely upon the mathematical model and not upon prior biological knowledge. The utility of the algebraic-combinatorial model for identifying the complete set of biochemical subcircuits as a data set is demonstrated for this important metabolic process.
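
    As an illustration of the underlying data structure (not the authors' decomposition algorithm), a reaction network can be encoded as a Petri-net-style directed hypergraph in a few lines; the two reactions shown are simplified stand-ins for Krebs-cycle steps:

    ```python
    # Sketch: a biochemical network as a Petri net (directed hypergraph).
    # Places = metabolites, transitions = reactions; entries are illustrative.
    from collections import Counter

    reactions = {
        # reaction: (consumed metabolites, produced metabolites)
        "citrate_synthase": (["acetyl-CoA", "oxaloacetate"], ["citrate", "CoA"]),
        "isocitrate_dh":    (["isocitrate", "NAD+"], ["alpha-ketoglutarate", "NADH", "CO2"]),
    }

    def fire(marking, reaction):
        """Fire one transition if every input place holds a token; return new marking."""
        inputs, outputs = reactions[reaction]
        if any(marking[m] < 1 for m in inputs):
            return None  # transition not enabled
        new = Counter(marking)
        for m in inputs:
            new[m] -= 1
        for m in outputs:
            new[m] += 1
        return new

    marking = Counter({"acetyl-CoA": 1, "oxaloacetate": 1, "isocitrate": 1, "NAD+": 1})
    print(fire(marking, "citrate_synthase"))
    ```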

  5. Computer Literacy in Pennsylvania Community Colleges. Competencies in a Beginning Level College Computer Literacy Course.

    ERIC Educational Resources Information Center

    Tortorelli, Ann Eichorn

    A study was conducted at the 14 community colleges (17 campuses) in Pennsylvania to assess the perceptions of faculty about the relative importance of course content items in a beginning credit course in computer literacy, and to survey courses currently being offered. A detailed questionnaire consisting of 96 questions based on MECC (Minnesota…

  6. Identify Skills and Proficiency Levels Necessary for Entry-Level Employment for All Vocational Programs Using Computers to Process Data. Final Report.

    ERIC Educational Resources Information Center

    Crowe, Jacquelyn

    This study investigated computer and word processing operator skills necessary for employment in today's high technology office. The study was comprised of seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…

  7. Computer experiments on periodic systems identification using rotor blade transient flapping-torsion responses at high advance ratio

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Prelewicz, D. A.

    1974-01-01

    Systems identification methods have recently been applied to rotorcraft to estimate stability derivatives from transient flight control response data. While these applications assumed a linear constant coefficient representation of the rotorcraft, the computer experiments described in this paper used transient responses in flap-bending and torsion of a rotor blade at high advance ratio, which constitutes a rapidly time-varying periodic system.

  8. Bladed-shrouded-disc aeroelastic analyses: Computer program updates in NASTRAN level 17.7

    NASA Technical Reports Server (NTRS)

    Gallo, A. M.; Elchuri, V.; Skalski, S. C.

    1981-01-01

    In October 1979, a computer program based on the state-of-the-art compressor and structural technologies applied to bladed-shrouded-discs was developed. The program was made operational in NASTRAN Level 16. The bladed disc computer program was updated for operation in NASTRAN Level 17.7. The supersonic cascade unsteady aerodynamics routine UCAS, delivered as part of the NASTRAN Level 16 program, was recoded to improve its execution time. These improvements are presented.

  9. [Evaluation of the GNF computer-coding system for the identification of non-fermentative Gram-negative bacilli].

    PubMed

    Tarng, C M; Chen, M M; Tsai, W C

    1983-05-01

    In order to evaluate the effectiveness of the GNF computer-coding system for the identification of glucose non-fermenting gram-negative bacilli, we employed 406 strains of bacteria, including 367 clinical isolates and 39 standard strains, for testing. These strains were inoculated into the following eleven conventional biochemical test media: Triple Sugar Iron Agar, Simmon's Citrate Agar, Christensen's Urea Agar, Sulfide-Indole-Motility Medium, Semisolid Voges-Proskauer Test Medium, Moeller's Ornithine Decarboxylase Test Medium, Pyocyanin Test Medium, Oxidation/Fermentation (O/F) Glucose, O/F Fructose, Nitrate Broth, and Moeller's Arginine Dihydrolase Test Medium. The results of these tests, plus those from the hanging-drop motility test and the oxidase test, were converted into a bacterial code number and then checked against the GNF computer-coding system. It was found that the first preference of agreement was 75.6%, the second 15.3%, the third 5.9%, and the fourth or lower 3.2%. In regard to the speed of bacterial identification using the GNF system together with information from the hemolysis pattern and flagella stain, 84.7% of strains would be correctly identified within 36-48 hours after isolation. If more confirmatory tests were employed, the accurate identification rate would reach 98.7% after 4 days of isolation. In addition, the use of the GNF computer-coding system can standardize identification procedures, shorten the identification period, and save costs in terms of materials supply, inoculation time, media preparation and media-storing space. Therefore, we conclude that the GNF computer-coding system is an effective tool for the identification of glucose non-fermenting gram-negative bacilli. PMID:6617315
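
    The coding step resembles other octal biochemical coding schemes (e.g., API strips): test results are read in triplets and each triplet is condensed into one digit. A sketch of that conversion, with an invented test ordering and weights:

    ```python
    # Sketch: condensing binary biochemical test results into a code number,
    # in the style of octal coding systems; the test order/weights are invented.
    def code_number(results):
        """results: list of 0/1 test outcomes, grouped in triplets.
        Each triplet contributes one octal digit (weights 1, 2, 4)."""
        digits = []
        for i in range(0, len(results), 3):
            triplet = results[i:i + 3]
            digit = sum(r * w for r, w in zip(triplet, (1, 2, 4)))
            digits.append(str(digit))
        return "".join(digits)

    # 13 tests, as in the GNF scheme, would be padded to a multiple of 3.
    print(code_number([1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0]))  # -> "5432"
    ```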

  10. A Unique Automation Platform for Measuring Low Level Radioactivity in Metabolite Identification Studies

    PubMed Central

    Krauser, Joel; Walles, Markus; Wolf, Thierry; Graf, Daniel; Swart, Piet

    2012-01-01

    Generation and interpretation of biotransformation data on drugs, i.e. identification of physiologically relevant metabolites, defining metabolic pathways and elucidation of metabolite structures, have become increasingly important to the drug development process. Profiling using a 14C or 3H radiolabel is defined as the chromatographic separation and quantification of drug-related material in a given biological sample derived from an in vitro, preclinical in vivo or clinical study. Metabolite profiling is a very time-intensive activity, particularly for preclinical in vivo or clinical studies, which have defined limitations on radiation burden and exposure levels. A clear gap exists for certain studies which do not require specialized high-volume automation technologies, yet would still clearly benefit from automation. The use of radiolabeled compounds in preclinical and clinical ADME studies, specifically for metabolite profiling and identification, is a very good example. The current lack of automation for measuring low-level radioactivity in metabolite profiling requires substantial capacity, personal attention and resources from laboratory scientists. To help address these challenges and improve efficiency, we have designed, developed and implemented a novel and flexible automation platform that integrates a robotic plate-handling platform, an HPLC or UPLC system, a mass spectrometer and an automated fraction collector. PMID:22723932

  11. Rapid identification of mycobacteria to the species level by polymerase chain reaction and restriction enzyme analysis.

    PubMed Central

    Telenti, A; Marchesi, F; Balz, M; Bally, F; Böttger, E C; Bodmer, T

    1993-01-01

    A method for the rapid identification of mycobacteria to the species level was developed on the basis of evaluation by the polymerase chain reaction (PCR) of the gene encoding the 65-kDa protein. The method involves restriction enzyme analysis of PCR products obtained with primers common to all mycobacteria. Using two restriction enzymes, BstEII and HaeIII, medically relevant and other frequent laboratory isolates were differentiated to the species or subspecies level by PCR-restriction enzyme pattern analysis. PCR-restriction enzyme pattern analysis was performed on isolates (n = 330) from solid and fluid culture media, including BACTEC, or from frozen and lyophilized stocks. The procedure does not involve hybridization steps or the use of radioactivity and can be completed within 1 working day. PMID:8381805

  12. The Potential Use of Radio Frequency Identification Devices for Active Monitoring of Blood Glucose Levels

    PubMed Central

    Moore, Bert

    2009-01-01

    Imagine a diabetes patient receiving a text message on his mobile phone warning him that his blood glucose level is too low or a patient's mobile phone calling an emergency number when the patient goes into diabetic shock. Both scenarios depend on automatic, continuous monitoring of blood glucose levels and transmission of that information to a phone. The development of advanced biological sensors and integration with passive radio frequency identification technologies are the key to this. These hold the promise of being able to free patients from finger stick sampling or externally worn devices while providing continuous blood glucose monitoring that allows patients to manage their health more actively. To achieve this promise, however, a number of technical issues need to be addressed. PMID:20046663

  13. Multivariate Effects of Level of Education, Computer Ownership, and Computer Use on Female Students' Attitudes towards CALL

    ERIC Educational Resources Information Center

    Rahimi, Mehrak; Yadollahi, Samaneh

    2012-01-01

    The aim of this study was investigating Iranian female students' attitude towards CALL and its relationship with their level of education, computer ownership, and frequency of use. One hundred and forty-two female students (50 junior high-school students, 49 high-school students and 43 university students) participated in this study. They filled…

  14. Identification, Recovery, and Refinement of Hitherto Undescribed Population-Level Genomes from the Human Gastrointestinal Tract

    PubMed Central

    Laczny, Cedric C.; Muller, Emilie E. L.; Heintz-Buschart, Anna; Herold, Malte; Lebrun, Laura A.; Hogan, Angela; May, Patrick; de Beaufort, Carine; Wilmes, Paul

    2016-01-01

    Linking taxonomic identity and functional potential at the population-level is important for the study of mixed microbial communities and is greatly facilitated by the availability of microbial reference genomes. While the culture-independent recovery of population-level genomes from environmental samples using the binning of metagenomic data has expanded available reference genome catalogs, several microbial lineages remain underrepresented. Here, we present two reference-independent approaches for the identification, recovery, and refinement of hitherto undescribed population-level genomes. The first approach is aimed at genome recovery of varied taxa and involves multi-sample automated binning using CANOPY CLUSTERING complemented by visualization and human-augmented binning using VIZBIN post hoc. The second approach is particularly well-suited for the study of specific taxa and employs VIZBIN de novo. Using these approaches, we reconstructed a total of six population-level genomes of distinct and divergent representatives of the Alphaproteobacteria class, the Mollicutes class, the Clostridiales order, and the Melainabacteria class from human gastrointestinal tract-derived metagenomic data. Our results demonstrate that, while automated binning approaches provide great potential for large-scale studies of mixed microbial communities, these approaches should be complemented with informative visualizations because expert-driven inspection and refinements are critical for the recovery of high-quality population-level genomes. PMID:27445992

  15. A Study of Effectiveness of Computer Assisted Instruction (CAI) over Classroom Lecture (CRL) at ICS Level

    ERIC Educational Resources Information Center

    Kaousar, Tayyeba; Choudhry, Bushra Naoreen; Gujjar, Aijaz Ahmed

    2008-01-01

    This study aimed to evaluate the effectiveness of CAI vs. classroom lecture for computer science at ICS level. The objectives were to compare the learning effects of two groups with classroom lecture and computer-assisted instruction studying the same curriculum and the effects of CAI and CRL in terms of cognitive development. Hypotheses of…

  16. Computational techniques in tribology and material science at the atomic level

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.

  17. An electrical device for computing theoretical draw-downs of ground-water levels

    USGS Publications Warehouse

    Remson, Irwin; Halstead, M.H.

    1955-01-01

    The construction, calibration and use of an electrical "slide rule" for computing theoretical drawdowns of ground-water levels are described. The instrument facilitates the computation of drawdowns under given conditions of discharge or recharge by means of the Theis nonequilibrium equation. It is simple to construct and use and can be a valuable aid in ground-water studies.   
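
    The Theis nonequilibrium equation that the analog device evaluates, s = (Q / 4πT) W(u) with u = r²S / (4Tt), is easy to compute numerically today; W is the exponential-integral well function. A sketch with illustrative parameter values:

    ```python
    # Theis nonequilibrium drawdown, the relation the electrical analog solved.
    # Parameter values below are illustrative, in consistent units (m, s).
    import numpy as np
    from scipy.special import exp1  # W(u) = exp1(u), the Theis well function

    def theis_drawdown(Q, T, S, r, t):
        """Drawdown s at radial distance r and time t for pumping rate Q,
        transmissivity T, and storativity S."""
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # Example: 0.01 m^3/s well, T = 5e-3 m^2/s, S = 2e-4, 100 m away, after 1 day.
    print(theis_drawdown(Q=0.01, T=5e-3, S=2e-4, r=100.0, t=86400.0))
    ```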

  18. Computer-Assisted Instruction in Elementary Logic at the University Level. Technical Report No. 239.

    ERIC Educational Resources Information Center

    Goldberg, Adele; Suppes, Patrick

    Earlier research by the authors in the design and use of computer-assisted instructional systems and curricula for teaching mathematical logic to gifted elementary school students has been extended to the teaching of university-level courses. This report is a description of the curriculum and problem types of a computer-based course offered at…

  19. Computer-aided identification of polymorphism sets diagnostic for groups of bacterial and viral genetic variants

    PubMed Central

    Price, Erin P; Inman-Bamber, John; Thiruvenkataswamy, Venugopal; Huygens, Flavia; Giffard, Philip M

    2007-01-01

    Background Single nucleotide polymorphisms (SNPs) and genes that exhibit presence/absence variation have provided informative marker sets for bacterial and viral genotyping. Identification of marker sets optimised for these purposes has been based on maximal generalized discriminatory power as measured by Simpson's Index of Diversity, or on the ability to identify specific variants. Here we describe the Not-N algorithm, which is designed to identify small sets of genetic markers diagnostic for user-specified subsets of known genetic variants. The algorithm does not treat the user-specified subset and the remaining genetic variants equally. Rather, Not-N analysis is designed to underpin assays that provide 0% false negatives, which is very important, e.g., for diagnostic procedures for clinically significant subgroups within microbial species. Results The Not-N algorithm has been incorporated into the "Minimum SNPs" computer program and used to derive genetic markers diagnostic for multilocus sequence typing-defined clonal complexes, hepatitis C virus (HCV) subtypes, and phylogenetic clades defined by comparative genome hybridization (CGH) data for Campylobacter jejuni, Yersinia enterocolitica and Clostridium difficile. Conclusion Not-N analysis is effective for identifying small sets of genetic markers diagnostic for microbial sub-groups. The best results to date have been obtained with CGH data from several bacterial species, and HCV sequence data. PMID:17672919
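
    The published algorithm's details are in the paper; as a rough, hypothetical illustration of the general idea (picking markers that never miss a target-group variant while excluding as many non-targets as possible), consider this greedy sketch over toy SNP profiles, where all names and alleles are invented:

    ```python
    # Toy sketch of selecting markers diagnostic for a target subset with zero
    # false negatives: keep only positions where all targets share one allele,
    # then greedily add positions that exclude the most non-target variants.
    def select_markers(profiles, targets):
        """profiles: {variant: allele string}; targets: set of variant names."""
        n = len(next(iter(profiles.values())))
        target_set = {v: profiles[v] for v in targets}
        others = {v: p for v, p in profiles.items() if v not in targets}

        # Candidate positions: every target carries the same allele there.
        candidates = {i: next(iter(target_set.values()))[i] for i in range(n)
                      if len({p[i] for p in target_set.values()}) == 1}

        chosen, remaining = [], set(others)
        while remaining and candidates:
            # Pick the position excluding the most still-unexcluded non-targets.
            best = max(candidates, key=lambda i: sum(others[v][i] != candidates[i]
                                                     for v in remaining))
            excluded = {v for v in remaining if others[v][best] != candidates[best]}
            if not excluded:
                break  # no remaining candidate separates the rest
            chosen.append((best, candidates.pop(best)))
            remaining -= excluded
        return chosen, remaining  # remaining non-targets would be false positives

    profiles = {"t1": "ACGT", "t2": "ACGA", "o1": "TCGT", "o2": "AGGT"}
    print(select_markers(profiles, {"t1", "t2"}))
    ```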

  20. Level sequence and splitting identification of closely spaced energy levels by angle-resolved analysis of fluorescence light

    NASA Astrophysics Data System (ADS)

    Wu, Z. W.; Volotka, A. V.; Surzhykov, A.; Dong, C. Z.; Fritzsche, S.

    2016-06-01

    The angular distribution and linear polarization of the fluorescence light following the resonant photoexcitation is investigated within the framework of density matrix and second-order perturbation theory. Emphasis has been placed on "signatures" for determining the level sequence and splitting of intermediate (partially) overlapping resonances, if analyzed as a function of photon energy of incident light. Detailed computations within the multiconfiguration Dirac-Fock method have been performed, especially for the 1s²2s²2p⁶3s, J_i = 1/2 + γ₁ → (1s²2s2p⁶3s)₁3p₃/₂, J = 1/2, 3/2 → 1s²2s²2p⁶3s, J_f = 1/2 + γ₂ photoexcitation and subsequent fluorescence emission of atomic sodium. A remarkably strong dependence of the angular distribution and linear polarization of the γ₂ fluorescence emission is found upon the level sequence and splitting of the intermediate (1s²2s2p⁶3s)₁3p₃/₂, J = 1/2, 3/2 overlapping resonances owing to their finite lifetime (linewidth). We therefore suggest that accurate measurements of the angular distribution and linear polarization might help identify the sequence and small splittings of closely spaced energy levels, even if they cannot be spectroscopically resolved.

  1. Studying channelopathies at the functional level using a system identification approach

    NASA Astrophysics Data System (ADS)

    Faisal, A. Aldo

    2007-09-01

    The electrical activity of our brain's neurons is controlled by voltage-gated ion channels. Mutations in these ion channels have recently been associated with clinical conditions, so-called channelopathies. The involved ion channels have been well characterised at a molecular and biophysical level. However, the impact of these mutations on neuron function has been studied only rudimentarily. It remains unclear how operation and performance (in terms of input-output characteristics and reliability) are affected. Here, I show how system identification techniques provide neuronal performance measures that allow the impact of channelopathies to be quantitatively assessed by comparing whole-cell input-output relationships. I illustrate the feasibility of this approach by comparing the effects on neuronal signalling of two human sodium channel mutations (NaV 1.1 W1204R, R1648H), linked to generalized epilepsy with febrile seizures, to the wild-type NaV 1.1 channel.

  2. [Tri-Level Infrared Spectroscopic Identification of Hot Melting Reflective Road Marking Paint].

    PubMed

    Li, Hao; Ma, Fang; Sun, Su-qin

    2015-12-01

    In order to detect road marking paint from trace evidence at traffic accident scenes, and to differentiate between brands, we used tri-level infrared spectroscopic identification, which employs Fourier transform infrared spectroscopy (FTIR), second-derivative infrared spectroscopy (SD-IR), and two-dimensional correlation infrared spectroscopy (2D-IR), to identify three selected domestic brands of hot melting reflective road marking paints and the raw materials in their formulas. The experimental results show that the ATR and FTIR spectrograms of the three brands of coatings are very similar in shape and differ only in absorption peak wave numbers; all have wide, strong absorption peaks near 1435 cm⁻¹ and strong absorption peaks near 879, 2955, 2919, and 2870 cm⁻¹. After enlarging partial areas of the spectrograms and comparing them with the spectrograms of each raw material in the formulas, the brands can be distinguished. In the regions 700-970 and 1370-1660 cm⁻¹ the spectrograms mainly reflect the different relative contents of heavy calcium carbonate in the three brands of paints, and in the region 2800-2960 cm⁻¹ those of polyethylene wax (PE wax), ethylene vinyl acetate resin (EVA), and dioctyl phthalate (DOP). The SD-IR not only verifies the FTIR analysis but also further magnifies the microscopic differences, reflecting the different relative contents of quartz sand in the 512-799 cm⁻¹ region. Within the range of 1351 to 1525 cm⁻¹, the 2D-IR spectra show more significant differences in the positions and numbers of auto-peaks. Therefore, tri-level infrared spectroscopic identification, with its stepwise improvement in apparent resolution, is a fast and effective method for distinguishing hot melting road marking paints. PMID:26964206

  3. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    PubMed

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, the processed data capped at the speed limit and the unprocessed data retaining the original speed, were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature. PMID:26722989
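
    The model family compared here can be sketched generically. The following hedged example, assuming the PyMC library and invented placeholder data, sets up a Negative Binomial crash-frequency model with segment-level random intercepts, the "multi-level" ingredient the study exploits; it is not the authors' exact specification:

    ```python
    # Hedged sketch: multi-level Negative Binomial crash-frequency model in PyMC.
    # Arrays and priors are illustrative placeholders, not the paper's model.
    import numpy as np
    import pymc as pm

    segments = np.array([0, 0, 1, 1, 2, 2])            # segment index per observation
    log_aadt = np.log(np.array([3.2e4, 3.3e4, 4.1e4, 4.0e4, 2.8e4, 2.9e4]))
    speed_var = np.array([4.1, 6.3, 3.2, 5.0, 7.8, 6.9])  # unprocessed speed variation
    crashes = np.array([2, 4, 1, 3, 6, 5])

    with pm.Model() as model:
        # Hyperpriors for segment-level random intercepts (the multi-level part).
        mu_a = pm.Normal("mu_a", 0.0, 5.0)
        sigma_a = pm.HalfNormal("sigma_a", 2.0)
        a = pm.Normal("a", mu_a, sigma_a, shape=3)

        b_aadt = pm.Normal("b_aadt", 0.0, 2.0)
        b_sv = pm.Normal("b_sv", 0.0, 2.0)

        mu = pm.math.exp(a[segments] + b_aadt * log_aadt + b_sv * speed_var)
        alpha = pm.Exponential("alpha", 1.0)  # NB dispersion
        pm.NegativeBinomial("y", mu=mu, alpha=alpha, observed=crashes)

        idata = pm.sample(1000, tune=1000, chains=2)
    ```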

  4. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... use, reproduction or disclosure. 227.7203-3 Section 227.7203-3 Federal Acquisition Regulations System... restrictions on use, reproduction or disclosure. (a) Use the provision at 252.227-7017, Identification and... for which restrictions, other than copyright, on use, modification, reproduction, release,...

  5. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... use, reproduction or disclosure. 227.7203-3 Section 227.7203-3 Federal Acquisition Regulations System... restrictions on use, reproduction or disclosure. (a) Use the provision at 252.227-7017, Identification and... for which restrictions, other than copyright, on use, modification, reproduction, release,...

  6. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... use, reproduction or disclosure. 227.7203-3 Section 227.7203-3 Federal Acquisition Regulations System... restrictions on use, reproduction or disclosure. (a) Use the provision at 252.227-7017, Identification and... for which restrictions, other than copyright, on use, modification, reproduction, release,...

  7. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... use, reproduction or disclosure. 227.7203-3 Section 227.7203-3 Federal Acquisition Regulations System... restrictions on use, reproduction or disclosure. (a) Use the provision at 252.227-7017, Identification and... for which restrictions, other than copyright, on use, modification, reproduction, release,...

  8. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... use, reproduction or disclosure. 227.7203-3 Section 227.7203-3 Federal Acquisition Regulations System... restrictions on use, reproduction or disclosure. (a) Use the provision at 252.227-7017, Identification and... for which restrictions, other than copyright, on use, modification, reproduction, release,...

  9. Factors Influencing the Integration of Computer Algebra Systems into University-Level Mathematics Education

    ERIC Educational Resources Information Center

    Lavicza, Zsolt

    2007-01-01

    Computer Algebra Systems (CAS) are increasing components of university-level mathematics education. However, little is known about the extent of CAS use and the factors influencing its integration into university curricula. Pre-university level studies suggest that beyond the availability of technology, teachers' conceptions and cultural elements…

  10. High level language for measurement complex control based on the computer E-100I

    NASA Technical Reports Server (NTRS)

    Zubkov, B. V.

    1980-01-01

    A high-level language was designed to control the process of conducting an experiment using the computer "Elektronika-100I". Program examples are given to control the measuring and actuating devices. The procedure for including these programs in the suggested high-level language is described.

  11. Identification and Endodontic Management of Middle Mesial Canal in Mandibular Second Molar Using Cone Beam Computed Tomography.

    PubMed

    Paul, Bonny; Dube, Kavita

    2015-01-01

    Endodontic treatments are routinely done with the help of radiographs. However, radiographs represent only a two-dimensional image of an object. Failure to identify aberrant anatomy can lead to endodontic failure. This case report presents the use of three-dimensional imaging with cone beam computed tomography (CBCT) as an adjunct to digital radiography in identification and management of mandibular second molar with three mesial canals. PMID:26664763

  12. Identification and Endodontic Management of Middle Mesial Canal in Mandibular Second Molar Using Cone Beam Computed Tomography

    PubMed Central

    Paul, Bonny; Dube, Kavita

    2015-01-01

    Endodontic treatments are routinely done with the help of radiographs. However, radiographs represent only a two-dimensional image of an object. Failure to identify aberrant anatomy can lead to endodontic failure. This case report presents the use of three-dimensional imaging with cone beam computed tomography (CBCT) as an adjunct to digital radiography in identification and management of mandibular second molar with three mesial canals. PMID:26664763

  13. Low-bandwidth and non-compute intensive remote identification of microbes from raw sequencing reads.

    PubMed

    Gautier, Laurent; Lund, Ole

    2013-01-01

    Cheap DNA sequencing may soon become routine not only for human genomes but also for practically anything requiring the identification of living organisms from their DNA: tracking of infectious agents, control of food products, bioreactors, or environmental samples. We propose a novel general approach to the analysis of sequencing data where a reference genome does not have to be specified. Using a distributed architecture we are able to query a remote server for hints about what the reference might be, transferring a relatively small amount of data. Our system consists of a server with known reference DNA indexed, and a client with raw sequencing reads. The client sends a sample of unidentified reads, and in return receives a list of matching references. Sequences for the references can be retrieved and used for exhaustive computation on the reads, such as alignment. To demonstrate this approach we have implemented a web server, indexing tens of thousands of publicly available genomes and genomic regions from various organisms and returning lists of matching hits from query sequencing reads. We have also implemented two clients: one running in a web browser, and one as a python script. Both are able to handle a large number of sequencing reads, even from portable devices (the browser-based client running on a tablet), perform their task within seconds, and consume an amount of bandwidth compatible with mobile broadband networks. Such client-server approaches could develop in the future, allowing fully automated processing of sequencing data and routine instant quality checks of sequencing runs from desktop sequencers. A web access is available at http://tapir.cbs.dtu.dk. The source code for a python command-line client, a server, and supplementary data are available at http://bit.ly/1aURxkc. PMID:24391826
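
    The client workflow lends itself to a short sketch: sample a subset of reads and post them to an identification server. The endpoint URL and the response schema below are hypothetical stand-ins, not the actual tapir.cbs.dtu.dk API:

    ```python
    # Hedged sketch of the client side: sample a few reads from a FASTQ file and
    # query a remote identification server. The endpoint and response format are
    # hypothetical stand-ins for the real service.
    import random
    import requests

    def sample_reads(fastq_path, n=100):
        """Reservoir-sample n read sequences from a FASTQ file."""
        sample = []
        with open(fastq_path) as fh:
            for i, line in enumerate(fh):
                if i % 4 == 1:  # sequence lines in FASTQ
                    k = i // 4  # 0-based read index
                    if len(sample) < n:
                        sample.append(line.strip())
                    elif random.random() < n / (k + 1):
                        sample[random.randrange(n)] = line.strip()
        return sample

    reads = sample_reads("run01.fastq", n=100)
    resp = requests.post("https://example.org/identify",  # hypothetical endpoint
                         json={"reads": reads})
    for hit in resp.json()["matches"]:  # assumed response schema
        print(hit["reference"], hit["score"])
    ```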

  14. Low-Bandwidth and Non-Compute Intensive Remote Identification of Microbes from Raw Sequencing Reads

    PubMed Central

    Gautier, Laurent; Lund, Ole

    2013-01-01

    Cheap DNA sequencing may soon become routine not only for human genomes but also for practically anything requiring the identification of living organisms from their DNA: tracking of infectious agents, control of food products, bioreactors, or environmental samples. We propose a novel general approach to the analysis of sequencing data where a reference genome does not have to be specified. Using a distributed architecture we are able to query a remote server for hints about what the reference might be, transferring a relatively small amount of data. Our system consists of a server with known reference DNA indexed, and a client with raw sequencing reads. The client sends a sample of unidentified reads, and in return receives a list of matching references. Sequences for the references can be retrieved and used for exhaustive computation on the reads, such as alignment. To demonstrate this approach we have implemented a web server, indexing tens of thousands of publicly available genomes and genomic regions from various organisms and returning lists of matching hits from query sequencing reads. We have also implemented two clients: one running in a web browser, and one as a python script. Both are able to handle a large number of sequencing reads, even from portable devices (the browser-based client running on a tablet), perform their task within seconds, and consume an amount of bandwidth compatible with mobile broadband networks. Such client-server approaches could develop in the future, allowing fully automated processing of sequencing data and routine instant quality checks of sequencing runs from desktop sequencers. A web access is available at http://tapir.cbs.dtu.dk. The source code for a python command-line client, a server, and supplementary data are available at http://bit.ly/1aURxkc. PMID:24391826

  15. Pixel-Level Analysis Techniques for False-Positive Identification in Kepler Data

    NASA Astrophysics Data System (ADS)

    Bryson, Steve; Jenkins, J.; Gilliland, R.; Batalha, N.; Gautier, T. N.; Rowe, J.; Dunham, E.; Latham, D.; Caldwell, D.; Twicken, J.; Tenenbaum, P.; Clarke, B.; Li, J.; Wu, H.; Quintana, E.; Ciardi, D.; Torres, G.; Dotson, J.; Still, M.

    2011-01-01

    The Kepler mission seeks to identify Earth-size exoplanets by detecting transits of their parent star. The resulting transit signature will be small (~100 ppm). Several astrophysical phenomena can mimic an Earth-size transit signature, most notably background eclipsing binaries (BGEBs). As part of a larger false-positive identification effort, pixel-level analysis of the Kepler data has proven crucial in assessing the likelihood of these confounding signals. Pixel-level analysis is primarily useful for the case of the transit being a BGEB. Several analysis techniques are presented, including: (1) measurement of centroid motion in and out of transit, compared with detailed modeling of expected centroid motion, including an estimate of the transit source location; (2) transit source location determination through a high-precision PSF fit of the difference between in- and out-of-transit pixels, directly measuring the location of the transit source; and (3) source location determination through fitting the observed summed flux time series (or the light curve derived from the transit model) to each pixel's time series data. These techniques have been automated and are being considered for inclusion in the Kepler Science Operations Center Data Analysis Pipeline. They are supplemented by various diagnostic plots of the Kepler data as well as comparison with background stars identified by the Kepler Follow-up Observing Program (FOP). The final determination of whether an observed transit is a false positive integrates several sources, including pixel-level analysis and FOP results. Pixel-level techniques can identify BGEBs that are separated from the Kepler target star by more than a certain radius, called the "radius of confusion". The determination of the radius of confusion, and the role it plays in assigning the probability of the transit being due to a planet, is briefly discussed. The statistics from the latest false-positive list are provided. Funding for this mission provided

  16. Identification of Low-level Point Radioactive Sources using a sensor network

    SciTech Connect

    Chin, J. C.; Rao, Nageswara S.; Yao, David K. Y.; Shankar, Mallikarjun; Yang, Yong; Hou, J. C.; Srivathsan, Sri; Iyengar, S. Sitharama

    2010-09-01

    Identification of a low-level point radioactive source amidst background radiation is achieved by a network of radiation sensors using a two-step approach. Based on measurements from three or more sensors, a geometric difference triangulation method or an N-sensor localization method is used to estimate the location and strength of the source. Then a sequential probability ratio test based on current measurements and estimated parameters is employed to finally decide: (1) the presence of a source with the estimated parameters, or (2) the absence of the source, or (3) the insufficiency of measurements to make a decision. This method achieves specified levels of false alarm and missed detection probabilities, while ensuring a close-to-minimal number of measurements for reaching a decision. This method minimizes the ghost-source problem of current estimation methods, and achieves a lower false alarm rate compared with current detection methods. This method is tested and demonstrated using: (1) simulations, and (2) a test-bed that utilizes the scaling properties of point radioactive sources to emulate high intensity ones that cannot be easily and safely handled in laboratory experiments.
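
    The decision step, a sequential probability ratio test, is standard and compact. For Poisson-distributed counts it compares a background-only rate against a background-plus-source rate and accumulates the log-likelihood ratio until a threshold is crossed; the rates, error targets, and count stream below are illustrative:

    ```python
    # Sketch: Wald's sequential probability ratio test for Poisson counts,
    # deciding "source present" (rate lam1) vs. "background only" (rate lam0).
    import math

    def sprt(counts, lam0, lam1, alpha=0.01, beta=0.01):
        """Return 'source', 'background', or 'undecided' after the given counts."""
        upper = math.log((1 - beta) / alpha)   # accept H1 (source) above this
        lower = math.log(beta / (1 - alpha))   # accept H0 (background) below this
        llr = 0.0
        for c in counts:
            # Poisson log-likelihood ratio for one measurement interval.
            llr += c * math.log(lam1 / lam0) - (lam1 - lam0)
            if llr >= upper:
                return "source"
            if llr <= lower:
                return "background"
        return "undecided"  # more measurements needed

    print(sprt([7, 9, 8, 10, 9], lam0=5.0, lam1=8.0))
    ```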

  17. Pipasic: similarity and expression correction for strain-level identification and quantification in metaproteomics

    PubMed Central

    Penzlin, Anke; Lindner, Martin S.; Doellinger, Joerg; Dabrowski, Piotr Wojtek; Nitsche, Andreas; Renard, Bernhard Y.

    2014-01-01

    Motivation: Metaproteomic analysis allows studying the interplay of organisms or functional groups and has become increasingly popular also for diagnostic purposes. However, difficulties arise owing to the high sequence similarity between related organisms. Further, the state of conservation of proteins between species can be correlated with their expression level, which can lead to significant bias in results and interpretation. These challenges are similar but not identical to the challenges arising in the analysis of metagenomic samples and require specific solutions. Results: We introduce Pipasic (peptide intensity-weighted proteome abundance similarity correction) as a tool that corrects identification and spectral counting-based quantification results using peptide similarity estimation and expression level weighting within a non-negative lasso framework. Pipasic has distinct advantages over approaches only regarding unique peptides or aggregating results to the lowest common ancestor, as demonstrated on examples of viral diagnostics and an acid mine drainage dataset. Availability and implementation: Pipasic source code is freely available from https://sourceforge.net/projects/pipasic/. Contact: RenardB@rki.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931978

  18. Simple distortion-invariant optical identification tag based on encrypted binary-phase computer-generated hologram for real-time vehicle identification and verification

    NASA Astrophysics Data System (ADS)

    Kim, Cheol-Su

    2010-11-01

    A simple distortion-invariant optical identification (ID) tag is presented for real-time vehicle identification and verification. The proposed scheme is composed of image encryption, ID tag creation, image decryption, and optical correlation for verification. To create the ID tag, a binary-phase computer-generated hologram (BPCGH) of a symbol image representing a vehicle is created using a simulated annealing algorithm. The BPCGH is then encrypted using an XOR operation and enlargement transformed into polar coordinates. The resulting ID tag is then attached to the vehicle. As the BPCGH consists of only binary phase values, it is robust to external distortions. To identify and verify the vehicle, several reverse processes are required, such as capturing the ID tag with a camera, extracting the ID tag from the captured image, transformation of the ID tag into rectangular coordinates, decryption, an inverse Fourier transform, and correlation. Computer simulation and experimental results confirm that the proposed optical ID tag is secure and robust to such distortions as scaling, rotation, cropping (scratches), and random noise. The ID tag can also be easily implemented, as it consists of only binary phase components.

  19. Realization of a holonomic quantum computer in a chain of three-level systems

    NASA Astrophysics Data System (ADS)

    Gürkan, Zeynep Nilhan; Sjöqvist, Erik

    2015-12-01

    Holonomic quantum computation is the idea to use non-Abelian geometric phases to implement universal quantum gates that are robust to fluctuations in control parameters. Here, we propose a compact design for a holonomic quantum computer based on coupled three-level systems. The scheme does not require adiabatic evolution and can be implemented in arrays of atoms or ions trapped in tailored standing wave potentials.

  20. Assessment of Social Vulnerability Identification at Local Level around Merapi Volcano - A Self Organizing Map Approach

    NASA Astrophysics Data System (ADS)

    Lee, S.; Maharani, Y. N.; Ki, S. J.

    2015-12-01

    The application of the Self-Organizing Map (SOM) to analyze social vulnerability and recognize the resilience within sites is a challenging task. The aim of this study is to propose a computational method to identify sites according to their similarity and to determine the most relevant variables characterizing the social vulnerability in each cluster. For this purpose, the SOM is considered an effective platform for the analysis of high-dimensional data. By considering the cluster structure, the characteristics of the social vulnerability of the identified sites can be fully understood. In this study, the social vulnerability variable set is constructed from 17 variables, i.e. 12 independent variables representing socio-economic concepts and 5 dependent variables representing the damage and losses due to the Merapi eruption in 2010. These variables collectively represent the local situation of the study area, based on fieldwork conducted in September 2013. By using both independent and dependent variables, we can identify whether the social vulnerability is reflected in the actual situation, in this case, the Merapi eruption of 2010. However, social vulnerability analysis in local communities involves a number of variables representing their socio-economic condition, and some of the variables employed in this study might be more or less redundant. Therefore, the SOM is used to reduce the redundant variable(s) by selecting representative variables using the component planes and the correlation coefficients between variables in order to find the effective sample size. The selected dataset was then effectively clustered according to its similarities. Finally, this approach can produce reliable estimates of clustering, recognize the most significant variables, and could be useful for social vulnerability assessment, especially for stakeholders as decision makers. This research was supported by a grant 'Development of Advanced Volcanic Disaster Response System considering
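
    A minimal sketch of the clustering step, assuming the MiniSom library and a placeholder matrix of standardized indicators (17 variables per site, as in the study):

    ```python
    # Hedged sketch: clustering sites by social-vulnerability profile with a SOM.
    # Uses the MiniSom library; the data matrix is a random placeholder for the
    # 17 standardized socio-economic/damage variables per site.
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(0)
    data = rng.normal(size=(120, 17))  # 120 sites x 17 indicators (placeholder)

    som = MiniSom(6, 6, input_len=17, sigma=1.0, learning_rate=0.5, random_seed=0)
    som.random_weights_init(data)
    som.train_random(data, num_iteration=2000)

    # Each site maps to its best-matching unit; units act as cluster prototypes.
    clusters = [som.winner(x) for x in data]
    print(clusters[:5])
    ```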

  1. Evaluation of the Wider System, a New Computer-Assisted Image-Processing Device for Bacterial Identification and Susceptibility Testing

    PubMed Central

    Cantón, Rafael; Pérez-Vázquez, María; Oliver, Antonio; Sánchez Del Saz, Begoña; Gutiérrez, M. Olga; Martínez-Ferrer, Manuel; Baquero, Fernando

    2000-01-01

    The Wider system is a newly developed computer-assisted image-processing device for both bacterial identification and antimicrobial susceptibility testing. It has been adapted to be able to read and interpret commercial MicroScan panels. Two hundred forty-four fresh consecutive clinical isolates (138 isolates of the family Enterobacteriaceae, 25 nonfermentative gram-negative rods [NFGNRs], and 81 gram-positive cocci) were tested. In addition, 100 enterobacterial strains with known β-lactam resistance mechanisms (22 strains with chromosomal AmpC β-lactamase, 8 strains with chromosomal class A β-lactamase, 21 broad-spectrum and IRT β-lactamase-producing strains, 41 extended-spectrum β-lactamase-producing strains, and 8 permeability mutants) were tested. API galleries and National Committee for Clinical Laboratory Standards (NCCLS) microdilution methods were used as reference methods. The Wider system correctly identified 97.5% of the clinical isolates at the species level. Overall essential agreement (±1 log2 dilution for 3,719 organism-antimicrobial drug combinations) was 95.6% (isolates of the family Enterobacteriaceae, 96.6%; NFGNRs, 88.0%; gram-positive cocci, 95.6%). The lowest essential agreement was observed with Enterobacteriaceae versus imipenem (84.0%), NFGNR versus piperacillin (88.0%) and cefepime (88.0%), and gram-positive isolates versus penicillin (80.4%). The category error rate (NCCLS criteria) was 4.2% (2.0% very major errors, 0.6% major errors, and 1.5% minor errors). Essential agreement and interpretive error rates for eight β-lactam antibiotics against isolates of the family Enterobacteriaceae with known β-lactam resistance mechanisms were 94.8 and 5.4%, respectively. Interestingly, the very major error rate was only 0.8%. Minor errors (3.6%) were mainly observed with amoxicillin-clavulanate and cefepime against extended-spectrum β-lactamase-producing isolates. The Wider system is a new reliable tool which applies the image

  2. Automatic Identification of the Repolarization Endpoint by Computing the Dominant T-wave on a Reduced Number of Leads.

    PubMed

    Giuliani, C; Agostinelli, A; Di Nardo, F; Fioretti, S; Burattini, L

    2016-01-01

    Electrocardiographic (ECG) T-wave endpoint (Tend) identification suffers from a lack of reliability due to the presence of noise and variability among leads. Tend identification can be improved by using global repolarization waveforms obtained by combining several leads. The dominant T-wave (DTW) is a global repolarization waveform that has been shown to improve Tend identification when computed using the 15 (I to III, aVr, aVl, aVf, V1 to V6, X, Y, Z) leads usually available in clinics, of which only 8 (I, II, V1 to V6) are independent. The aim of the present study was to evaluate whether the 8 independent leads are sufficient to obtain a DTW which allows a reliable Tend identification. To this aim, Tend measures automatically identified from 15-dependent-lead DTWs of 46 control healthy subjects (CHS) and 103 acute myocardial infarction patients (AMIP) were compared with those obtained from 8-independent-lead DTWs. Results indicate that the Tend distributions have median values that are not statistically different (CHS: 340 ms vs. 340 ms, respectively; AMIP: 325 ms vs. 320 ms, respectively), besides being strongly correlated (CHS: ρ=0.97, AMIP: 0.88; P<10^-27). Thus, measuring Tend from the 15-dependent-lead DTWs is statistically equivalent to measuring Tend from the 8-independent-lead DTWs. In conclusion, for the clinical purpose of automatic Tend identification from the DTW, the 8 independent leads can be used without a statistically significant loss of accuracy but with a significant decrement in computational effort. The lead dependence of 7 out of 15 leads does not introduce a significant bias in the Tend determination from 15-dependent-lead DTWs. PMID:27347218
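
    A DTW is commonly obtained as the leading principal component of the multi-lead repolarization segment; the following sketch assumes that construction (the paper's exact recipe may differ) and uses a synthetic 8-lead placeholder signal:

    ```python
    # Hedged sketch: a dominant T-wave as the first principal component of the
    # T-wave segments from the 8 independent leads; the signal is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 200)
    template = np.exp(-((t - 0.5) ** 2) / 0.02)            # T-wave-like bump
    leads = np.outer(rng.uniform(0.5, 1.5, 8), template)   # 8 leads x 200 samples
    leads += 0.05 * rng.normal(size=leads.shape)           # measurement noise

    # First singular vectors give the lead weights and the dominant T-wave.
    U, s, Vt = np.linalg.svd(leads - leads.mean(axis=1, keepdims=True),
                             full_matrices=False)
    dominant_t_wave = s[0] * Vt[0]

    # Tend would then be located on this single, less noisy global waveform.
    print(dominant_t_wave.shape)
    ```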

  3. Automatic Identification of the Repolarization Endpoint by Computing the Dominant T-wave on a Reduced Number of Leads

    PubMed Central

    Giuliani, C.; Agostinelli, A.; Di Nardo, F.; Fioretti, S.; Burattini, L.

    2016-01-01

    Electrocardiographic (ECG) T-wave endpoint (Tend) identification suffers from a lack of reliability due to the presence of noise and variability among leads. Tend identification can be improved by using global repolarization waveforms obtained by combining several leads. The dominant T-wave (DTW) is a global repolarization waveform that has been shown to improve Tend identification when computed using the 15 (I to III, aVr, aVl, aVf, V1 to V6, X, Y, Z) leads usually available in clinics, of which only 8 (I, II, V1 to V6) are independent. The aim of the present study was to evaluate whether the 8 independent leads are sufficient to obtain a DTW which allows a reliable Tend identification. To this aim, Tend measures automatically identified from 15-dependent-lead DTWs of 46 control healthy subjects (CHS) and 103 acute myocardial infarction patients (AMIP) were compared with those obtained from 8-independent-lead DTWs. Results indicate that the Tend distributions have median values that are not statistically different (CHS: 340 ms vs. 340 ms, respectively; AMIP: 325 ms vs. 320 ms, respectively), besides being strongly correlated (CHS: ρ=0.97, AMIP: 0.88; P<10^-27). Thus, measuring Tend from the 15-dependent-lead DTWs is statistically equivalent to measuring Tend from the 8-independent-lead DTWs. In conclusion, for the clinical purpose of automatic Tend identification from the DTW, the 8 independent leads can be used without a statistically significant loss of accuracy but with a significant decrement in computational effort. The lead dependence of 7 out of 15 leads does not introduce a significant bias in the Tend determination from 15-dependent-lead DTWs. PMID:27347218

  4. User-microprogrammable, local host computer with low-level parallelism

    SciTech Connect

    Tomita, S.; Shibayama, K.; Kitamura, T.; Nakata, T.; Hagiwara, H.

    1983-01-01

    This paper describes the architecture of a dynamically microprogrammable computer with low-level parallelism, called QA-2, which is designed as a high-performance, local host computer for laboratory use. The architectural principle of the QA-2 is the marriage of the high-speed, parallel processing capability offered by four powerful arithmetic and logic units (ALUs) with the architectural flexibility provided by large-scale, dynamic user-microprogramming. By changing its writable control storage dynamically, the QA-2 can be tailored to a wide spectrum of research-oriented applications covering high-level language processing and real-time processing. 11 references.

  5. Computational identification of conserved microRNAs and their targets from expression sequence tags of blueberry (Vaccinium corybosum)

    PubMed Central

    Li, Xuyan; Hou, Yanming; Zhang, Li; Zhang, Wenhao; Quan, Chen; Cui, Yuhai; Bian, Shaomin

    2014-01-01

    MicroRNAs (miRNAs) are a class of endogenous non-coding RNAs, approximately 21 nt in length, which mediate the expression of target genes primarily at post-transcriptional levels. miRNAs play critical roles in almost all plant cellular and metabolic processes. Although numerous miRNAs have been identified in the plant kingdom, the miRNAs in blueberry, an economically important small fruit crop, remain totally unknown. In this study, we report a computational identification of miRNAs and their targets in blueberry. By conducting an EST-based comparative genomics approach, 9 potential vco-miRNAs were discovered from 22,402 blueberry ESTs according to a series of filtering criteria, designated as vco-miR156-5p, vco-miR156-3p, vco-miR1436, vco-miR1522, vco-miR4495, vco-miR5120, vco-miR5658, vco-miR5783, and vco-miR5986. Based on sequence complementarity between miRNAs and their target transcripts, 34 target ESTs from blueberry and 70 targets from other species were identified for the vco-miRNAs. The targets were found to be involved in transcription, RNA splicing and binding, DNA duplication, signal transduction, transport and trafficking, stress response, as well as synthesis and metabolic processes. These findings will greatly contribute to future research on the functions and regulatory mechanisms of blueberry miRNAs. PMID:25763692
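
    EST-based screens of this kind typically apply hairpin filters such as the minimal folding free energy index, MFEI = (|MFE| / length x 100) / GC%; whether this study used exactly this criterion is an assumption. A sketch, with the MFE value and candidate sequence invented:

    ```python
    # Sketch: the MFEI filter commonly used in EST-based miRNA screens.
    # The MFE value would normally come from a folding tool such as RNAfold;
    # the candidate sequence and value below are invented.
    def mfei(sequence, mfe_kcal_mol):
        """Minimal folding free energy index: (|MFE|/length*100) / GC%."""
        gc = 100.0 * sum(sequence.count(b) for b in "GC") / len(sequence)
        amfe = abs(mfe_kcal_mol) / len(sequence) * 100.0
        return amfe / gc

    candidate = "CUGGAAGCUUGGAGCCCUUUGGGAUCCCAAGGCUUCCAG"  # invented pre-miRNA-like seq
    value = mfei(candidate, mfe_kcal_mol=-32.5)
    # A common published cutoff treats MFEI > 0.85 as miRNA-like.
    print(f"MFEI = {value:.2f} -> {'keep' if value > 0.85 else 'discard'}")
    ```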

  6. Species-level identification of staphylococci isolated from bovine mastitis in Brazil using partial 16S rRNA sequencing.

    PubMed

    Lange, Carla C; Brito, Maria A V P; Reis, Daniele R L; Machado, Marco A; Guimarães, Alessandro S; Azevedo, Ana L S; Salles, Érica B; Alvim, Mariana C T; Silva, Fabiana S; Meurer, Igor R

    2015-04-17

    Staphylococci isolated from bovine milk and not classified as Staphylococcus aureus represent a heterogeneous group of microorganisms that are frequently associated with bovine mastitis. The identification of these microorganisms is important, although it is difficult and relatively costly. Genotypic methods add precision to the identification of Staphylococcus species. In the present study, partial 16S rRNA sequencing was used for the species identification of coagulase-positive and coagulase-negative staphylococci isolated from bovine mastitis. Two hundred and two (95%) of the 213 isolates were successfully identified at the species level. The assignment of an isolate to a particular species was based on ≥99% identity with 16S rRNA sequences deposited in GenBank. The identified isolates belonged to 13 different Staphylococcus species; Staphylococcus chromogenes, S. aureus and Staphylococcus epidermidis were the most frequently identified species. Eight isolates could not be assigned to a single species, as the obtained sequences showed 99% or 100% similarity to sequences from two or three different Staphylococcus species. The relatedness of these isolates to the other isolates and reference strains was visualized using a cladogram. In conclusion, 16S rRNA sequencing was an objective and accurate method for the proper identification of Staphylococcus species isolated from bovine mastitis. Additional target genes could be used in inconclusive cases for the species-level identification of these microorganisms. PMID:25704228
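
    The assignment rule reduces to a threshold on percent identity. This sketch uses a naive equal-length comparison in place of a real alignment (which would be done with BLAST against GenBank), and the reference sequences are invented:

    ```python
    # Sketch of the >=99%-identity species assignment rule. The "alignment" is a
    # naive position-by-position comparison of equal-length fragments; a real
    # pipeline would align the query against GenBank references with BLAST.
    def percent_identity(a, b):
        matches = sum(x == y for x, y in zip(a, b))
        return 100.0 * matches / max(len(a), len(b))

    def assign_species(query, references, threshold=99.0):
        hits = {sp: percent_identity(query, seq) for sp, seq in references.items()}
        best = {sp for sp, pid in hits.items() if pid >= threshold}
        if len(best) == 1:
            return best.pop()  # unambiguous species-level call
        return f"ambiguous: {sorted(best)}" if best else "no call"

    references = {"S. chromogenes": "ACGTACGTGG", "S. aureus": "ACGTACGTCC"}
    print(assign_species("ACGTACGTGG", references))
    ```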

  7. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 4

    SciTech Connect

    Not Available

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 4) presents the standards and requirements for the following sections: Radiation Protection and Operations.

  8. High level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 6

    SciTech Connect

    Not Available

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 6) outlines the standards and requirements for the sections on: Environmental Restoration and Waste Management, Research and Development and Experimental Activities, and Nuclear Safety.

  9. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID)

    SciTech Connect

    Not Available

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 3) presents the standards and requirements for the following sections: Safeguards and Security, Engineering Design, and Maintenance.

  10. System-level identification of transcriptional circuits underlying mammalian circadian clocks.

    PubMed

    Ueda, Hiroki R; Hayashi, Satoko; Chen, Wenbin; Sano, Motoaki; Machida, Masayuki; Shigeyoshi, Yasufumi; Iino, Masamitsu; Hashimoto, Seiichi

    2005-02-01

    Mammalian circadian clocks consist of complexly integrated regulatory loops, making it difficult to elucidate them without both the accurate measurement of system dynamics and the comprehensive identification of network circuits. Toward a system-level understanding of this transcriptional circuitry, we identified clock-controlled elements on 16 clock and clock-controlled genes in a comprehensive surveillance of evolutionarily conserved cis elements and measurement of their transcriptional dynamics. Here we report the roles of E/E' boxes, DBP/E4BP4 binding elements and RevErbA/ROR binding elements in nine, seven and six genes, respectively. Our results indicate that circadian transcriptional circuits are governed by two design principles: regulation of E/E' boxes and RevErbA/ROR binding elements follows a repressor-precedes-activator pattern, resulting in delayed transcriptional activity, whereas regulation of DBP/E4BP4 binding elements follows a repressor-antiphasic-to-activator mechanism, which generates high-amplitude transcriptional activity. Our analysis further suggests that regulation of E/E' boxes is a topological vulnerability in mammalian circadian clocks, a concept that has been functionally verified using in vitro phenotype assay systems. PMID:15665827

  11. Scalability of a Base Level Design for an On-Board-Computer for Scientific Missions

    NASA Astrophysics Data System (ADS)

    Treudler, Carl Johann; Schroder, Jan-Carsten; Greif, Fabian; Stohlmann, Kai; Aydos, Gokce; Fey, Gorschwin

    2014-08-01

    Facing a wide range of mission requirements and the integration of diverse payloads requires extreme flexibility in the on-board computing infrastructure for scientific missions. We show that scalability is difficult in principle. We address this issue by proposing a base-level design and showing how adaptation to different needs is achieved. Inter-dependencies between scaling different aspects and their impact on different levels of the design are discussed.

  12. Neuromotor recovery from stroke: computational models at central, functional, and muscle synergy level

    PubMed Central

    Casadio, Maura; Tamagnone, Irene; Summa, Susanna; Sanguineti, Vittorio

    2013-01-01

    Computational models of neuromotor recovery after a stroke might help to unveil the underlying physiological mechanisms and might suggest how to make recovery faster and more effective. At least in principle, these models could serve: (i) To provide testable hypotheses on the nature of recovery; (ii) To predict the recovery of individual patients; (iii) To design patient-specific “optimal” therapy, by setting the treatment variables for maximizing the amount of recovery or for achieving a better generalization of the learned abilities across different tasks. Here we review the state of the art of computational models for neuromotor recovery through exercise, and their implications for treatment. We show that to properly account for the computational mechanisms of neuromotor recovery, multiple levels of description need to be taken into account. The review specifically covers models of recovery at central, functional and muscle synergy level. PMID:23986688

  13. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... with 24 CFR 941.606 for a mixed-finance transaction, then the project covered by the mixed-finance... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Computation of project expense level (PEL). 990.165 Section 990.165 Housing and Urban Development Regulations Relating to Housing...

  14. Do Students Benefit Equally from Interactive Computer Simulations Regardless of Prior Knowledge Levels?

    ERIC Educational Resources Information Center

    Park, Seong Ik; Lee, Gyumin; Kim, Meekyoung

    2009-01-01

    The purposes of this study were to examine the effects of two types of interactive computer simulations and of prior knowledge levels on concept comprehension, cognitive load, and learning efficiency. Seventy-two 5th grade students were sampled from two elementary schools. They were divided into two groups (high and low) based on prior knowledge…

  15. The Relationship between Internet and Computer Game Addiction Level and Shyness among High School Students

    ERIC Educational Resources Information Center

    Ayas, Tuncay

    2012-01-01

    This study is conducted to determine the relationship between the internet and computer games addiction level and the shyness among high school students. The participants of the study consist of 365 students attending high schools in Giresun city centre during 2009-2010 academic year. As a result of the study a positive, meaningful, and high…

  16. Can Synchronous Computer-Mediated Communication (CMC) Help Beginning-Level Foreign Language Learners Speak?

    ERIC Educational Resources Information Center

    Ko, Chao-Jung

    2012-01-01

    This study investigated the possibility that initial-level learners may acquire oral skills through synchronous computer-mediated communication (SCMC). Twelve Taiwanese French as a foreign language (FFL) students, divided into three groups, were required to conduct a variety of tasks in one of the three learning environments (video/audio, audio,…

  17. BICYCLE: a computer code for calculating levelized life-cycle costs

    SciTech Connect

    Hardie, R.W.

    1980-08-01

    This report serves as a user's manual for the BICYCLE computer code. BICYCLE was specifically designed to calculate levelized life-cycle costs for plants that produce electricity, heat, gaseous fuels, or liquid fuels. Included in this report are (1) derivations of the equations used by BICYCLE, (2) input instructions, (3) sample case input, and (4) sample case output.
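
    As a flavor of what such a code computes, the sketch below implements the generic levelized-cost identity (discounted lifetime cost divided by discounted lifetime output) in Python. It is a minimal stand-in for illustration, not the actual BICYCLE equations, and all input figures are invented.

```python
# Generic levelized life-cycle cost: a hypothetical sketch, not BICYCLE itself.

def levelized_cost(costs, outputs, discount_rate):
    """Levelized cost = discounted total cost / discounted total output.

    costs   -- annual costs (e.g. dollars/year) over the plant life
    outputs -- annual production (e.g. kWh/year) over the plant life
    """
    pv_cost = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs, 1))
    pv_out = sum(q / (1 + discount_rate) ** t for t, q in enumerate(outputs, 1))
    return pv_cost / pv_out

# Example: flat costs and output over a 30-year life at a 5% discount rate.
print(levelized_cost([1.2e8] * 30, [2.0e9] * 30, 0.05))  # dollars per kWh
```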

  18. BICYCLE II: a computer code for calculating levelized life-cycle costs

    SciTech Connect

    Hardie, R.W.

    1981-11-01

    This report describes the BICYCLE computer code. BICYCLE was specifically designed to calculate levelized life-cycle costs for plants that produce electricity, heat, gaseous fuels, or liquid fuels. Included are (1) derivations of the equations used by BICYCLE, (2) input instructions, (3) sample case input, and (4) sample case output.

  19. Computational identification of microRNAs in Anatid herpesvirus 1 genome

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) are a group of short (~22 nt) noncoding RNAs that specifically regulate gene expression at the post-transcriptional level. miRNA precursors (pre-miRNAs), which are imperfect stem loop structures of ~70 nt, are processed into mature miRNAs by cellular RNases III. To date, thousands of miRNAs have been identified in different organisms, and several viruses have been reported to encode miRNAs. Findings Here, we extended the analysis of miRNA-encoding potential to Anatid herpesvirus 1 (AHV-1). Using computational approaches, we found that AHV-1 putatively encodes 12 mature miRNAs. We then compared the 12 mature miRNA candidates with all known miRNAs of the herpesvirus family. Interestingly, the “seed sequences” (nt 2 to 8) of 2 of the miRNAs were predicted to be highly conserved, in position and/or sequence, with 2 miRNAs of Marek’s disease virus type 1 (MDV-1). Additionally, we searched for targets among viral mRNAs. Conclusions Using computational approaches, we found that AHV-1 putatively encodes 12 mature miRNAs, 2 of which are highly conserved with 2 miRNAs of MDV-1. This result suggests that AHV-1 and MDV-1 are closely related evolutionarily, providing valuable evidence for the classification of AHV-1. Additionally, seven viral gene targets were found, suggesting that AHV-1 miRNAs could regulate the virus's own gene expression. PMID:22584005
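
    The seed-conservation check described above is simple to reproduce. The sketch below compares the nt 2-8 seed regions of two mature miRNA candidates; the sequences are invented placeholders, not the actual AHV-1 or MDV-1 miRNAs.

```python
# Minimal seed-conservation check: the "seed" of a mature miRNA is taken as
# nucleotides 2-8, and two candidates are called conserved when seeds match.

def seed(mirna: str) -> str:
    """Return the seed region, nucleotides 2-8 (1-based, inclusive)."""
    return mirna[1:8]

candidate = "UGGCAGUGUAUUGUUAGCUGGU"  # hypothetical AHV-1 candidate
reference = "AGGCAGUGUUGUUAGCUGGUUG"  # hypothetical MDV-1 miRNA

if seed(candidate) == seed(reference):
    print("seed conserved:", seed(candidate))
else:
    print("seeds differ:", seed(candidate), seed(reference))
```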

  20. A novel computer-aided detection system for pulmonary nodule identification in CT images

    NASA Astrophysics Data System (ADS)

    Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong

    2014-03-01

    Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ missed only 2 of the 207 nodules at agreement level 1, and INC detection took about 30 seconds per scan on average. Expert filtering reduced FPs by a factor of more than 18 while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training separate SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable; at its optimal operating point, our CADe system achieved a sensitivity of 89.4% at a specificity of 86.8%.
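
    To illustrate the intensity-clustering idea behind the first-stage VQ, the sketch below uses KMeans (scikit-learn) as a simple stand-in for the paper's vector quantizer to label the low-density, air-filled lung voxels; the second-stage INC extraction, expert filtering, and SVM step are not reproduced.

```python
# Rough sketch: cluster voxel intensities so the low-density lung separates
# from the denser chest wall. KMeans stands in for the paper's VQ scheme.
import numpy as np
from sklearn.cluster import KMeans

def extract_lung_mask(ct_volume, n_classes=2):
    """Label each voxel by intensity class; the lowest-intensity class
    (air-filled lung) becomes the candidate lung mask."""
    intensities = ct_volume.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_classes, n_init=10).fit(intensities)
    lung_class = np.argmin(km.cluster_centers_.ravel())
    return (km.labels_ == lung_class).reshape(ct_volume.shape)

# Example on a toy "volume" of Hounsfield-like values.
toy = np.array([[[-900, -850], [40, 60]], [[-880, -870], [30, 50]]])
print(extract_lung_mask(toy))
```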

  1. Identification of masses in digital mammogram using gray level co-occurrence matrices

    PubMed Central

    Mohd. Khuzi, A; Besar, R; Wan Zaki, WMD; Ahmad, NN

    2009-01-01

    Digital mammography has become the most effective modality for early breast cancer detection. A digital mammogram takes an electronic image of the breast and stores it directly in a computer. The aim of this study is to develop an automated system for assisting the analysis of digital mammograms. Computer image processing techniques are applied to enhance the images, followed by segmentation of the region of interest (ROI). Subsequently, textural features are extracted from the ROI and used to classify it as either mass or non-mass. In this study, normal breast images and breast images with masses from the Mammographic Image Analysis Society (MIAS) digital mammogram database were used as the standard input to the proposed system. In the MIAS database, masses are grouped as spiculated, circumscribed or ill-defined; additional information includes the location of mass centres and mass radii. The textural features of each ROI are extracted using gray level co-occurrence matrices (GLCM) constructed at four different directions. The results show that GLCMs at 0°, 45°, 90° and 135° with a block size of 8×8 give significant texture information for discriminating between mass and non-mass tissues. Analysis of the GLCM properties contrast, energy and homogeneity resulted in a receiver operating characteristic (ROC) curve area of Az = 0.84 for Otsu's method, Az = 0.82 for the thresholding method and Az = 0.70 for K-means clustering; an ROC curve area of 0.8-0.9 is rated as good. The proposed method contains no complicated algorithm: detection is based on a decision tree with five criteria to be analysed. This simplicity leads to less computational time, making the approach suitable for an automated real-time breast cancer diagnosis system. PMID:21611053
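
    The GLCM feature-extraction step can be sketched with scikit-image (assumed available; older releases spell the functions greycomatrix/greycoprops). The segmentation, MIAS handling, and decision tree of the paper are not reproduced.

```python
# Minimal GLCM texture features at the four directions used above.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_8bit):
    """GLCM contrast, energy and homogeneity at 0, 45, 90 and 135 degrees."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(roi_8bit, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "energy", "homogeneity")}

# Example on a random 8x8 block standing in for an ROI tile.
rng = np.random.default_rng(0)
print(glcm_features(rng.integers(0, 256, (8, 8), dtype=np.uint8)))
```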

  2. Reshaping Computer Literacy Teaching in Higher Education: Identification of Critical Success Factors

    ERIC Educational Resources Information Center

    Taylor, Estelle; Goede, Roelien; Steyn, Tjaart

    2011-01-01

    Purpose: Acquiring computer skills is more important today than ever before, especially in a developing country. Teaching of computer skills, however, has to adapt to new technology. This paper aims to model factors influencing the success of the learning of computer literacy by means of an e-learning environment. The research question for this…

  3. Identification of Cognitive Processes of Effective and Ineffective Students during Computer Programming

    ERIC Educational Resources Information Center

    Renumol, V. G.; Janakiram, Dharanipragada; Jayaprakash, S.

    2010-01-01

    Identifying the set of cognitive processes (CPs) a student can go through during computer programming is an interesting research problem. It can provide a better understanding of the human aspects in computer programming process and can also contribute to the computer programming education in general. The study identified the presence of a set of…

  4. Man-Computer Symbiosis Through Interactive Graphics: A Survey and Identification of Critical Research Areas.

    ERIC Educational Resources Information Center

    Knoop, Patricia A.

    The purpose of this report was to determine the research areas that appear most critical to achieving man-computer symbiosis. An operational definition of man-computer symbiosis was developed by: (1) reviewing and summarizing what others have said about it, and (2) attempting to distinguish it from other types of man-computer relationships. From…

  5. Computational identification of miRNAs in medicinal plant Senecio vulgaris (Groundsel).

    PubMed

    Sahu, Sarika; Khushwaha, Anjana; Dixit, Rekha

    2011-01-01

    RNA interference plays a very important role in gene silencing. In vitro identification of miRNAs is a slow process, as they are difficult to isolate. Nucleotide sequences of miRNAs are highly conserved among plants, and this forms the key feature behind the identification of miRNAs in plant species by homology alignment. In silico identification of miRNAs from EST databases is emerging as a novel, faster and reliable approach. Here, EST sequences of Senecio vulgaris (groundsel) were searched against known miRNA sequences using the BLASTN tool. A total of 10 miRNAs were identified from 1956 EST sequences and 115 GSS sequences; the most stable miRNA identified is svu-mir-1. This approach will accelerate further research into the regulation of gene expression in groundsel by interfering RNAs. PMID:22347777

  7. Level set discrete element method for three-dimensional computations with triaxial case study

    NASA Astrophysics Data System (ADS)

    Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.

    2016-06-01

    In this paper, we outline the level set discrete element method (LS-DEM) which is a discrete element method variant able to simulate systems of particles with arbitrary shape using level set functions as a geometric basis. This unique formulation allows seamless interfacing with level set-based characterization methods as well as computational ease in contact calculations. We then apply LS-DEM to simulate two virtual triaxial specimens generated from XRCT images of experiments and demonstrate LS-DEM's ability to quantitatively capture and predict stress-strain and volume-strain behavior observed in the experiments.

  8. Cooperative gene regulation by microRNA pairs and their identification using a computational workflow

    PubMed Central

    Schmitz, Ulf; Lai, Xin; Winter, Felix; Wolkenhauer, Olaf; Vera, Julio; Gupta, Shailendra K.

    2014-01-01

    MicroRNAs (miRNAs) are an integral part of gene regulation at the post-transcriptional level. Recently, it has been shown that pairs of miRNAs can repress the translation of a target mRNA in a cooperative manner, which leads to an enhanced effectiveness and specificity in target repression. However, it remains unclear which miRNA pairs can synergize and which genes are target of cooperative miRNA regulation. In this paper, we present a computational workflow for the prediction and analysis of cooperating miRNAs and their mutual target genes, which we refer to as RNA triplexes. The workflow integrates methods of miRNA target prediction; triplex structure analysis; molecular dynamics simulations and mathematical modeling for a reliable prediction of functional RNA triplexes and target repression efficiency. In a case study we analyzed the human genome and identified several thousand targets of cooperative gene regulation. Our results suggest that miRNA cooperativity is a frequent mechanism for an enhanced target repression by pairs of miRNAs facilitating distinctive and fine-tuned target gene expression patterns. Human RNA triplexes predicted and characterized in this study are organized in a web resource at www.sbi.uni-rostock.de/triplexrna/. PMID:24875477

  9. Energy Use and Power Levels in New Monitors and Personal Computers

    SciTech Connect

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria, as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in, and opportunities to reduce, power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC

  10. Integrating computational activities into the upper-level Paradigms in Physics curriculum at Oregon State University

    NASA Astrophysics Data System (ADS)

    McIntyre, David H.; Tate, Janet; Manogue, Corinne A.

    2008-04-01

    The Paradigms in Physics project at Oregon State University has reformed the entire upper-level physics curriculum. The reform has involved a rearrangement of content to better reflect the way physicists think about the field and the use of several new pedagogies that place responsibility for learning more firmly in the hands of the students. In particular, we employ a wide variety of computational examples and problems throughout the courses. Students use MAPLE, MATHEMATICA, JAVA, and other software packages to do calculations, visualizations, and simulations that develop their intuition and physical reasoning. These computational activities are indispensable to the success of the curriculum.

  11. Computation of the intervals of uncertainties about the parameters found for identification

    NASA Technical Reports Server (NTRS)

    Mereau, P.; Raymond, J.

    1982-01-01

    A modeling method for calculating the intervals of uncertainty of parameters found by identification is described. The region of confidence and the general approach to the calculation of these intervals are discussed. The general subprograms for the determination of dimensions are described, along with their organizational charts, the tests carried out, and listings of the different subprograms.

  12. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    SciTech Connect

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in each code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites.

  13. Parallelization of low-level computer vision algorithms on a multi-DSP system

    NASA Astrophysics Data System (ADS)

    Liu, Huaida; Jia, Pingui; Li, Lijian; Yang, Yiping

    2011-06-01

    Parallel hardware has become a commonly used approach to satisfying the intensive computation demands of computer vision systems. A multiprocessor architecture based on a hypercube interconnection of digital signal processors (DSPs) is described to exploit temporal and spatial parallelism. This paper presents a parallel implementation of low-level vision algorithms designed on this multi-DSP system. The convolution operation has been parallelized by using redundant boundary partitioning. Performance of the parallel convolution operation is investigated by varying the image size, mask size and the number of processors. Experimental results show that the speedup is close to the ideal value. However, load imbalance among processors can significantly affect the computation time and speedup of the multi-DSP system.

  14. K-nearest neighbors based methods for identification of different gear crack levels under different motor speeds and loads: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong

    2016-03-01

    Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal-processing-based methods mainly require expertise to interpret gear fault signatures, which is usually not easy for ordinary users. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally demonstrated that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. At last, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments is investigated to demonstrate that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical
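
    A rough sketch of the feature pipeline, assuming PyWavelets and scikit-learn: statistical features are taken from wavelet-packet nodes and fed to a K-nearest neighbors classifier. PyWavelets bundles Daubechies filters only up to db38, so db4 stands in for the paper's db44, the dimensionality-reduction step is omitted, and the vibration data are synthetic.

```python
# Wavelet-packet statistical features plus KNN, as a simplified stand-in.
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.neighbors import KNeighborsClassifier

def wp_statistical_features(signal, level=3):
    """Mean, std and kurtosis of every wavelet-packet node at `level`."""
    wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        d = node.data
        feats.extend([np.mean(d), np.std(d), kurtosis(d)])
    return feats

# Toy example: two vibration "classes" distinguished by noise amplitude.
rng = np.random.default_rng(1)
X = [wp_statistical_features(rng.normal(0, s, 1024)) for s in [1] * 10 + [5] * 10]
y = [0] * 10 + [1] * 10
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([wp_statistical_features(rng.normal(0, 5, 1024))]))
```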

  15. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and anticipatory data migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate these issues. The results show that the high-level program interface makes developing complex scientific applications on large-scale platforms easier, though a little overhead is unavoidable. Also, the anticipatory data migration mechanism can improve the efficiency of platforms that need to process big-data-based scientific applications. PMID:24574931

  16. Computational Psychiatry of ADHD: Neural Gain Impairments across Marrian Levels of Analysis

    PubMed Central

    Hauser, Tobias U.; Fiore, Vincenzo G.; Moutoussis, Michael; Dolan, Raymond J.

    2016-01-01

    Attention-deficit hyperactivity disorder (ADHD), one of the most common psychiatric disorders, is characterised by unstable response patterns across multiple cognitive domains. However, the neural mechanisms that explain these characteristic features remain unclear. Using a computational multilevel approach, we propose that ADHD is caused by impaired gain modulation in systems that generate this phenotypic increased behavioural variability. Using Marr's three levels of analysis as a heuristic framework, we focus on this variable behaviour, detail how it can be explained algorithmically, and how it might be implemented at a neural level through catecholamine influences on corticostriatal loops. This computational, multilevel, approach to ADHD provides a framework for bridging gaps between descriptions of neuronal activity and behaviour, and provides testable predictions about impaired mechanisms. PMID:26787097

  17. Identification of tumor-associated cassette exons in human cancer through EST-based computational prediction and experimental validation

    PubMed Central

    2010-01-01

    Background Much evidence indicates that alternative splicing, the mechanism that produces mRNAs and proteins with different structures and functions from the same gene, is altered in cancer cells. Thus, the identification and characterization of cancer-specific splice variants may give strong impetus to the discovery of novel diagnostic and prognostic tumour biomarkers, as well as of new targets for more selective and effective therapies. Results We present here a genome-wide analysis of the alternative splicing pattern of human genes through a computational analysis of normal and cancer-specific ESTs from seventeen anatomical groups, using data available in ASPicDB, a database resource for the analysis of alternative splicing in human. Using a statistical methodology, normal and cancer-specific genes, splice sites and cassette exons were predicted in silico. The condition association of some of the novel normal/tumoral cassette exons was experimentally verified by RT-qPCR assays in the same anatomical system where they were predicted. Remarkably, the presence in vivo of the predicted alternative transcripts, specific for the nervous system, was confirmed in patients affected by glioblastoma. Conclusion This study presents a novel computational methodology for the identification of tumor-associated transcript variants to be used as cancer molecular biomarkers, provides its experimental validation, and reports specific biomarkers for glioblastoma. PMID:20813049

  18. Zebra tape identification for the instantaneous angular speed computation and angular resampling of motorbike valve train measurements

    NASA Astrophysics Data System (ADS)

    Rivola, Alessandro; Troncossi, Marco

    2014-02-01

    An experimental test campaign was performed on the valve train of a racing motorbike engine in order to gain insight into the dynamics of the system. In particular, the valve motion was acquired in cold test conditions by means of a laser vibrometer able to acquire displacement and velocity signals. The time-dependent valve measurements needed to be referred to the camshaft angular position in order to analyse the data in the angular domain, as is usually done for rotating machines. To this purpose the camshaft was fitted with a zebra tape whose dark and light stripes were tracked by means of an optical probe. Unfortunately, manufacturing and mounting imperfections of the zebra tape, resulting in stripes with slightly different widths, precluded directly obtaining the correct relationship between camshaft angular position and time. To overcome this problem, the identification of the zebra tape was performed by means of the original and practical procedure that is the focus of the present paper. The method consists of three main steps: an ad-hoc test corresponding to special operating conditions, the computation of the instantaneous angular speed, and the final association of the stripes with the corresponding shaft angular positions. The results reported in the paper demonstrate the suitability of this simple procedure for zebra tape identification, performed with the final purpose of implementing a computed order tracking technique for the data analysis.
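
    The instantaneous-angular-speed step reduces to one division per stripe once the stripe widths are known from the identification procedure. The sketch below illustrates this with invented stripe angles and edge times.

```python
# Instantaneous angular speed from zebra-tape pulse intervals: each stripe's
# subtended angle divided by the time between its edges.
import numpy as np

def instantaneous_speed(edge_times, stripe_angles):
    """IAS (rad/s) per stripe: angle subtended / time between edges."""
    dt = np.diff(edge_times)
    return stripe_angles / dt

# 4 stripes of slightly unequal width (the imperfection discussed above),
# summing to roughly one shaft revolution for this toy example.
angles = np.array([1.55, 1.62, 1.58, 1.53])               # rad, sum ~ 2*pi
edges = np.array([0.0, 0.0100, 0.0204, 0.0305, 0.0403])   # s, pulse edges
print(instantaneous_speed(edges, angles))                 # ~155 rad/s each
```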

  19. System-level tools and reconfigurable computing for next-generation HWIL systems

    NASA Astrophysics Data System (ADS)

    Stark, Derek; McAulay, Derek; Cantle, Allan J.; Devlin, Malachy

    2001-08-01

    Previous work has been presented on the creation of computing architectures called DIME, which addressed the particular computing demands of hardware-in-the-loop systems. These demands include low latency, high data rates and interfacing. While it is essential to have a capable platform for handling and processing the data streams, the tools must complement this so that a systems engineer is able to construct the final system. This paper will present work in the area of integrating system-level design tools, such as MATLAB and SIMULINK, with a reconfigurable computing platform. This will demonstrate how algorithms can be implemented and simulated in a familiar rapid application development environment before they are automatically transposed for downloading directly to the computing platform. This complements the established control tools, which handle the configuration and control of the processing systems, leading to a tool suite for system development and implementation. As the development tools have evolved, the core processing platform has also been enhanced. These improved platforms are based on dynamically reconfigurable computing, utilizing FPGA technologies, and parallel processing methods that more than double the performance and data bandwidth capabilities. This offers support for the processing of images in Infrared Scene Projectors with 1024 × 1024 resolution at 400 Hz frame rates. The processing elements will use the latest generation of FPGAs, which implies that the presented systems will be rated in terms of Tera (10^12) operations per second.

  20. Handbook for the Identification and Assessment of Computer Courseware for the Adult Learner.

    ERIC Educational Resources Information Center

    Paul, Daniel M., Comp.

    This handbook provides evaluation guidelines, information on acquiring courseware, and evaluations and recommendations regarding available instructional computer software appropriate to the needs of adult learners enrolled in adult basic education or General Education Development. Section 1 addresses computer hardware problems and limitations,…

  1. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    SciTech Connect

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case.

  2. Real time computation of in vivo drug levels during drug self-administration experiments.

    PubMed

    Tsibulsky, Vladimir L; Norman, Andrew B

    2005-05-01

    A growing body of evidence suggests that the drug concentration in the effect compartment of the body is the major factor regulating self-administration behavior. A novel computer-based protocol was developed to facilitate studies on mechanisms of drug addiction by determining correlations between drug levels and behavior during multiple drug injections and infusions. The core of the system is a user's program written in Medstate Notation language (Med-Associates, Inc.), which runs the self-administration session (with MED-PC software and hardware, Med-Associates, Inc.) and calculates the levels of infused and/or injected drugs in real time during the session. From the comparison of classical exponential and simple linear models of first-order kinetics, it is concluded that exponential solutions for the appropriate differential equations may be replaced with linear equations if the cycle of computation is much shorter than the shortest half-life for the drug. The choice between particular computation equations depends on assumptions about the pharmacokinetics of the particular drug: (i) one-, two- or three-compartment model, (ii) zero-, first- or second-order process of elimination, (iii) the constants of distribution and elimination half-lives of the drug are known or can be reasonably assumed, (iv) dependence of the constants on the drug level, and (v) temporal stability of all parameters during the session. This method of drug level computation can be employed not only for self-administration but also for other behavioral paradigms to advance pharmacokinetic/pharmacodynamic modeling. PMID:15878149
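
    The linear replacement for the exponential decay is easy to state concretely: for a one-compartment model with first-order elimination, C(t+dt) ≈ C(t)(1 - k·dt) whenever dt is much shorter than the elimination half-life. The sketch below simulates a session this way; all constants are illustrative, not drug-specific.

```python
# Real-time drug-level bookkeeping with a linear stand-in for exp(-K*DT).
import math

HALF_LIFE = 600.0          # s, assumed elimination half-life
K = math.log(2) / HALF_LIFE
DT = 0.1                   # s, computation cycle (DT << HALF_LIFE)

level = 0.0                # drug level in the effect compartment
for step in range(36000):  # one hour of session time
    if step % 6000 == 0:   # a self-administered infusion every 10 min
        level += 1.0       # unit dose, instantaneous mixing assumed
    level -= K * level * DT  # linear first-order elimination update
print(round(level, 4))
```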

  3. Variance and bias computations for improved modal identification using ERA/DC

    NASA Technical Reports Server (NTRS)

    Longman, Richard W.; Lew, Jiann-Shiun; Tseng, Dong-Huei; Juang, Jer-Nan

    1991-01-01

    Variance and bias confidence criteria were recently developed for the eigensystem realization algorithm (ERA) identification technique. These criteria are extended for the modified version of ERA based on data correlation, ERA/DC, and also for the Q-Markov cover algorithm. The importance and usefulness of the variance and bias information are demonstrated in numerical studies. The criteria are shown to be very effective not only by indicating the accuracy of the identification results, especially in terms of confidence intervals, but also by helping the ERA user to obtain better results by seeing the effect of changing the sample time, adjusting the Hankel matrix dimension, choosing how many singular values to retain, deciding the model order, etc.

  4. Computational analysis of the curvature distribution and power losses of metal strip in tension levellers

    NASA Astrophysics Data System (ADS)

    Steinwender, L.; Kainz, A.; Krimpelstätter, K.; Zeman, K.

    2010-06-01

    Tension levelling is employed in strip processing lines to minimise residual stresses and thereby improve strip flatness by inducing small elasto-plastic deformations. To improve the design of such machines, precise calculation models are essential to reliably predict tension losses due to plastic dissipation, power requirements of the driven bridle rolls (located upstream and downstream), reaction forces on levelling rolls, as well as strains and stresses in the strip. FEM (Finite Element Method) simulations of the tension levelling process (based on Updated Lagrangian concepts) incur high computational costs due to the necessity of very fine meshes as well as the severely non-linear characteristics of contact, material and geometry. In an evaluation of hierarchical models (models with different modelling levels), the reliability of both 3D and 2D modelling concepts (based on continuum and structural elements) was verified by extensive analyses as well as consistency checks against measurement data from an industrial tension leveller. To exploit the potential for computational cost savings, a customised modelling approach based on the principle of virtual work has been elaborated, which yields a drastic reduction in degrees of freedom compared to simulations using commercial FEM packages.

  5. Evaluation of Staf-Sistem 18-R for identification of staphylococcal clinical isolates to the species level.

    PubMed Central

    Piccolomini, R; Catamo, G; Picciani, C; D'Antonio, D

    1994-01-01

    The accuracy and efficiency of Staf-Sistem 18-R (Liofilchem s.r.l., Roseto degli Abruzzi, Teramo, Italy) were compared with those of conventional biochemical methods to identify 523 strains belonging to 16 different human Staphylococcus species. Overall, 491 strains (93.9%) were correctly identified (identification percentage ≥ 90.0%), with 28 (5.4%) requiring supplementary tests for complete identification. For 14 isolates (2.8%), the strains did not correspond to any key in the codebook and could not be identified by the manufacturer's computer service. Only 18 isolates (3.4%) were misidentified. The system is simple to use, easy to handle, gives highly reproducible results, and is inexpensive. With the inclusion of more discriminating tests and adjustment of supplementary code numbers for some species, such as Staphylococcus lugdunensis and Staphylococcus schleiferi, Staf-Sistem 18-R is a suitable alternative for identification of human coagulase-positive and coagulase-negative Staphylococcus species in microbiological laboratories. PMID:8195373

  6. Computer models for defining eustatic sea level fluctuations in carbonate rocks

    SciTech Connect

    Read, J.F.; Elrick, M.E.; Osleger, D.A. )

    1990-05-01

    One- and two-dimensional computer models of carbonate sequences help define the amplitudes and periods of 20,000-year to 1-m.y. and 1-3 m.y. sea level fluctuations that have affected carbonate rocks. The models show that with low-amplitude 20-100 k.y. sea level fluctuations, tidal flats are likely to extend across the platform during short-term regressions, and vadose diagenesis is limited because sea level rarely drops far below the platform surface. With high-amplitude 20-100 k.y. sea level fluctuations, tidal flats are confined to coastal locations, very deep-water facies are overlain by shallow-water beds, and during regression sea level falls far below the platform, leaving karstic surfaces. The models also allow testing of random vs. Milankovitch-driven sea level changes: the modeling shows that autocyclic processes under static sea level are a less likely source of cyclic sedimentation than Milankovitch climatic forcing. Models also help define the relative dominance of 100 k.y. vs. 20 or 40 k.y. sea level oscillations. The presence of shallow-ramp vs. deep-ramp upward-shallowing cycles, which are common on many platforms, provides valuable constraints on the modeling, in that the sea level fluctuations generating the shallow cycles also have to be able to generate the deeper ramp cycles. Sea level fluctuations of 1-3 m.y. are constrained by the modeling because overestimated amplitudes result in sequences and high-frequency cycles that are far too thick. Rates of long-term falls are constrained by the modeling because where the fall rate exceeds driving subsidence, the outer platform becomes unconformable, whereas it remains conformable where the fall rate is below the driving subsidence rate. Quantitative modeling techniques thus provide a means of constraining the amplitudes and frequencies of eustatic sea level fluctuations in ancient carbonate sequences.
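
    A toy one-dimensional version of this class of model is sketched below, with invented rates: the platform subsides steadily, eustatic sea level oscillates at a Milankovitch-band period, and carbonate accumulates only when the platform top is flooded, filling no higher than sea level.

```python
# One-dimensional carbonate-platform stacking toy model (all rates invented).
import math

SUBSIDENCE = 0.05e-3   # m/yr, driving subsidence
ACCUM = 0.5e-3         # m/yr, carbonate accumulation when flooded
AMP, PERIOD = 20.0, 100e3   # eustatic amplitude (m) and period (yr)
DT = 100.0             # yr, time step

top = -5.0             # platform top, m relative to mean sea level datum
column = []            # water depth recorded for each deposited increment
for step in range(int(1e6 / DT)):           # 1 m.y. run
    sea = AMP * math.sin(2 * math.pi * step * DT / PERIOD)
    top -= SUBSIDENCE * DT                   # subsidence lowers the platform
    if top < sea:                            # flooded: deposit, capped at sea level
        add = min(ACCUM * DT, sea - top)
        top += add
        column.append(sea - top)             # water depth at deposition
exposure = sum(d < 0.01 for d in column) / len(column)
print(f"{len(column)} increments, {exposure:.0%} deposited at tidal-flat depths")
```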

  7. Compiling high-level languages for configurable computers: applying lessons from heterogeneous processing

    NASA Astrophysics Data System (ADS)

    Weaver, Glen E.; Weems, Charles C.; McKinley, Kathryn S.

    1996-10-01

    Configurable systems offer increased performance by providing hardware that matches the computational structure of a problem. This hardware is currently programmed with CAD tools and explicit library calls. To attain widespread acceptance, configurable computing must become transparently accessible from high-level programming languages, but the changeable nature of the target hardware presents a major challenge to traditional compiler technology. A compiler for a configurable computer should optimize the use of functions embedded in hardware and schedule hardware reconfigurations. The hurdles to be overcome in achieving this capability are similar in some ways to those facing compilation for heterogeneous systems. For example, current traditional compilers have neither an interface to accept new primitive operators, nor a mechanism for applying optimizations to new operators. We are building a compiler for heterogeneous computing, called Scale, which replaces the traditional monolithic compiler architecture with a flexible framework. Scale has three main parts: a translation director, a compilation library, and a persistent store which holds our intermediate representation as well as other data structures. The translation director exploits the framework's flexibility by using architectural information to build a plan to direct each compilation. The compilation library serves as a toolkit for use by the translation director. Our compiler intermediate representation, Score, facilitates the addition of new IR nodes by distinguishing features used in defining nodes from properties on which transformations depend. In this paper, we present an overview of the Scale architecture and its capabilities for dealing with heterogeneity, followed by a discussion of how those capabilities apply to problems in configurable computing. We then address aspects of configurable computing that are likely to require extensions to our approach and propose some extensions.

  8. Task analysis for computer-aided design (CAD) at a keystroke level.

    PubMed

    Chi, C F; Chung, K L

    1996-08-01

    The purpose of this research was to develop a new model to describe and predict a computerized task. AutoCAD was utilized as the experimental tool to collect operating procedure and time data at a keystroke level for a computer-aided design (CAD) task. Six undergraduate students participated in the experiment. They were required to complete one simple and one complex engineering drawing. A model which characterizes task performance by software commands and predicts task execution time using keystroke-level model operators was proposed and applied to the analysis of the dialogue data. This task parameter model adopts software commands, e.g. LINE and OFFSET in AutoCAD, to describe the function of a task unit, and uses up to five parameters to indicate the number of keystrokes, the chosen function for a command, and the ways of starting and ending a command. Each task unit in the task parameter model can be replaced by a number of primitive operators, as in the keystroke-level model, to predict the task execution time. The observed task execution times of all task units were found to be highly correlated with the times predicted by the keystroke-level model. Therefore, the task parameter model proved to be a usable analytical tool for evaluating the human-computer interface (HCI). PMID:15677066
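
    Prediction at the keystroke level is a straightforward sum of operator times. The sketch below uses the classic Card, Moran and Newell operator averages; the decomposition of the AutoCAD command into operators is invented for illustration, not the paper's actual coding.

```python
# Keystroke-level model (KLM) time prediction for one hypothetical task unit.
KLM = {"K": 0.20, "P": 1.10, "H": 0.40, "M": 1.35}  # s per operator

def predict_time(operators: str) -> float:
    """Sum keystroke-level operator times, e.g. 'MKKKK' -> seconds."""
    return sum(KLM[op] for op in operators if op in KLM)

# Hypothetical unit: think, type 'LINE<Enter>', home to mouse, point twice.
sequence = "M" + "K" * 5 + "H" + "MP" + "MP"
print(f"{predict_time(sequence):.2f} s")  # 1.35 + 1.0 + 0.4 + 2*(1.35+1.1)
```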

  9. A level set segmentation for computer-aided dental x-ray analysis

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Fevens, Thomas; Krzyzak, Adam; Li, Song

    2005-04-01

    A level-set-based segmentation framework for Computer Aided Dental X-ray Analysis (CADXA) is proposed. In this framework, we first employ level set methods to segment the dental X-ray image into three regions: Normal Region (NR), Potential Abnormal Region (PAR), and Abnormal and Background Region (ABR). The segmentation results are then used to build uncertainty maps based on a proposed uncertainty measurement method, and an analysis scheme is applied. The level set segmentation method consists of two stages: a training stage and a segmentation stage. During the training stage, manually chosen representative images are segmented using hierarchical level set region detection, and the segmentation results are used to train a support vector machine (SVM) classifier. During the segmentation stage, a dental X-ray image is first classified by the trained SVM. The classifier provides an initial contour close to the correct boundary for the coupled level set method, which is then used to further segment the image. Different dental X-ray images are used to test the framework. Experimental results show that the proposed framework achieves faster level set segmentation and provides more detailed information and indications of possible problems to the dentist. To the best of our knowledge, this is one of the first results on CADXA using level set methods.
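
    The paper's coupled, hierarchical level set is not reproduced here, but the flavor of a level-set stage can be shown with scikit-image's morphological Chan-Vese active contour on a synthetic image:

```python
# Morphological Chan-Vese active contour (scikit-image) on a synthetic blob,
# standing in for the paper's own coupled level set formulation.
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic image: a bright disc (tooth-like region) on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
image = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
image += 0.1 * np.random.default_rng(0).normal(size=image.shape)

mask = morphological_chan_vese(image, 50)  # 50 iterations, default init
print(mask.sum(), "pixels inside the evolved contour")
```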

  10. A Comparative Study to Evaluate the Effectiveness of Computer Assisted Instruction (CAI) versus Class Room Lecture (CRL) for Computer Science at ICS Level

    ERIC Educational Resources Information Center

    Kausar, Tayyaba; Choudhry, Bushra Naoreen; Gujjar, Aijaz Ahmed

    2008-01-01

    This study was aimed to evaluate the effectiveness of CAI vs. classroom lecture for computer science at ICS level. The objectives were to compare the learning effects of two groups with class room lecture and computer assisted instruction studying the same curriculum and the effects of CAI and CRL in terms of cognitive development. Hypothesis of…

  12. PCR detection, identification to species level, and fingerprinting of Campylobacter jejuni and Campylobacter coli direct from diarrheic samples.

    PubMed Central

    Linton, D; Lawson, A J; Owen, R J; Stanley, J

    1997-01-01

    Three sets of primers were designed for PCR detection and differentiation of Campylobacter jejuni and Campylobacter coli. The first PCR assay was designed to coidentify C. jejuni and C. coli based on their 16S rRNA gene sequences. The second PCR assay, based on the hippuricase gene sequence, identified all tested reference strains of C. jejuni and also strains of that species which lack detectable hippuricase activity. The third PCR assay, based on the sequence of a cloned (putative) aspartokinase gene and the downstream open reading frame, identified all tested reference strains of C. coli. The assays will find immediate application in the rapid identification to species level of isolates. The assays combine with a protocol for purification of total DNA from fecal samples to allow reproducible PCR identification of campylobacters directly from stools. Of 20 clinical samples from which campylobacters had been cultured, we detected C. jejuni in 17, C. coli in 2, and coinfection of C. jejuni and Campylobacter hyointestinalis in 1. These results were concordant with culture and phenotypic identification to species level. Strain typing by PCR-restriction fragment length polymorphism of the flagellin (flaA) gene detected identical flaA types in fecal DNA and the corresponding campylobacter isolate. Twenty-five Campylobacter-negative stool samples gave no reaction with the PCR assays. These PCR assays can rapidly define the occurrence, species incidence, and flaA genotypes of enteropathogenic campylobacters. PMID:9316909

  13. Effect of Computer Simulations at the Particulate and Macroscopic Levels on Students' Understanding of the Particulate Nature of Matter

    ERIC Educational Resources Information Center

    Tang, Hui; Abraham, Michael R.

    2016-01-01

    Computer-based simulations can help students visualize chemical representations and understand chemistry concepts, but simulations at different levels of representation may vary in effectiveness on student learning. This study investigated the influence of computer activities that simulate chemical reactions at different levels of representation…

  14. Diagnostic reference levels and patient doses in computed tomography examinations in Greece.

    PubMed

    Simantirakis, G; Hourdakis, C J; Economides, S; Kaisas, I; Kalathaki, M; Koukorava, C; Manousaridis, G; Pafilis, C; Tritakis, P; Vogiatzi, S; Kamenopoulou, V; Dimitriou, P

    2015-02-01

    The purpose of this study is to present a national survey that was performed in Greece for the establishment of national Diagnostic Reference Levels (DRLs) for seven common adult Computed Tomography (CT) examinations. Volumetric computed tomography dose index and dose-length product values were collected from the post-data page of 65 'modern' systems that incorporate tube current modulation. Moreover, phantom dose measurements were performed on 26 'older' systems. Finally, the effective dose to the patient from a typical acquisition during these examinations was estimated. The suggested national DRLs are generally comparable with respective published values from similar European studies, with the exception of sinus CT, which presents significantly higher values. This fact, along with the large variation of the systems' dose values that was observed even for scanners of the same type, indicates a need for further patient protection optimisation without compromising the clinical outcome. PMID:24891405

  15. High-level waste storage tank farms/242-A evaporator standards/requirements identification document (S/RID), Vol. 7

    SciTech Connect

    Not Available

    1994-04-01

    This Requirements Identification Document (RID) describes an Occupational Health and Safety Program as defined through the Relevant DOE Orders, regulations, industry codes/standards, industry guidance documents and, as appropriate, good industry practice. The definition of an Occupational Health and Safety Program as specified by this document is intended to address Defense Nuclear Facilities Safety Board Recommendations 90-2 and 91-1, which call for the strengthening of DOE complex activities through the identification and application of relevant standards which supplement or exceed requirements mandated by DOE Orders. This RID applies to the activities, personnel, structures, systems, components, and programs involved in maintaining the facility and executing the mission of the High-Level Waste Storage Tank Farms.

  16. Dose Assessment in Computed Tomography Examination and Establishment of Local Diagnostic Reference Levels in Mazandaran, Iran

    PubMed Central

    Janbabanezhad Toori, A.; Shabestani-Monfared, A.; Deevband, M.R.; Abdi, R.; Nabahati, M.

    2015-01-01

    Background Medical X-rays are the largest man-made source of public exposure to ionizing radiation. While the benefits of Computed Tomography (CT) in accurate diagnosis are well known, those benefits are not risk-free: CT delivers a higher patient dose in comparison with other conventional radiographic procedures. Objective This study is aimed at evaluating the radiation dose to patients from Computed Tomography (CT) examinations in Mazandaran hospitals and defining local diagnostic reference levels (DRLs). Methods Patient-related data on the CT protocols for four common CT examinations, including brain, sinus, chest and abdomen & pelvis, were collected. In each center, Computed Tomography Dose Index (CTDI) measurements were performed using a pencil ionization chamber and a CT dosimetry phantom, according to AAPM report No. 96, for those techniques. Then, the Weighted Computed Tomography Dose Index (CTDIw), Volume Computed Tomography Dose Index (CTDIvol) and Dose Length Product (DLP) were calculated. Results The CTDIw for brain, sinus, chest and abdomen & pelvis ranged over (15.6-73), (3.8-25.8), (4.5-16.3) and (7-16.3) mGy, respectively. Values of DLP had ranges of (197.4-981), (41.8-184), (131-342.3) and (283.6-486) mGy·cm for brain, sinus, chest and abdomen & pelvis, respectively. The third quartile of the CTDIw distribution for each examination is the proposed quantity for the DRL. The DRLs of brain, sinus, chest and abdomen & pelvis are 59.5, 17, 7.8 and 11 mGy, respectively. Conclusion Results of this study demonstrated wide dose variations for the same examination among different centers. For all examinations, our values were lower than international reference doses. PMID:26688796
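
    The DRL convention used above (third quartile of the dose distribution across scanners) is a one-liner; the CTDIw values below are invented.

```python
# Proposed DRL as the 75th percentile of per-scanner dose values.
import numpy as np

ctdi_w = np.array([42.1, 55.3, 61.0, 48.7, 59.5, 73.0, 50.2])  # mGy, brain CT
print("proposed DRL:", np.percentile(ctdi_w, 75), "mGy")
```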

  17. A Framework for Different Levels of Integration of Computational Models Into Web-Based Virtual Patients

    PubMed Central

    Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil

    2014-01-01

    Background Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients’ interactivity by enriching them with computational models of physiological and pathological processes. Objective The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of the effect of changing framework variables on perceptions of integration. Methods The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. Results The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element comprises three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; and advanced, including dynamic solution of the

  18. TPASS: a gamma-ray spectrum analysis and isotope identification computer code

    SciTech Connect

    Dickens, J.K.

    1981-03-01

    The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.

  19. A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data

    PubMed Central

    Baur, Brittany; Bozdag, Serdar

    2016-01-01

    DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding its mechanism of action is important for downstream analysis. In the Illumina Infinium HumanMethylation450K array, there are tens of probes associated with each gene. Given the methylation intensities of all these probes, it is necessary to compute which of them are most representative of the gene-centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilizes different classification methods to compute gene-centric DNA methylation from probe-level DNA methylation data. We compared our algorithm to other feature selection algorithms, such as support vector machines with recursive feature elimination, genetic algorithms and ReliefF, evaluating all methods on the predictive power of the selected probes for mRNA expression levels, and found that K-nearest neighbors classification using the sequential forward selection algorithm performed better than the other algorithms on all metrics. We also observed that the transcriptional activities of certain genes are more sensitive to DNA methylation changes than those of other genes; our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes are enriched in Gene Ontology terms related to the regulation of various biological processes. PMID:26872146
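
    A generic sketch of the core idea, assuming scikit-learn: sequential forward selection wrapped around a K-nearest neighbors classifier picks the probes most predictive of expression. The data here are synthetic stand-ins for 450K probe intensities.

```python
# Sequential forward selection with a KNN model over synthetic probe data.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_samples, n_probes = 60, 12
X = rng.random((n_samples, n_probes))          # probe-level methylation
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)      # expression driven by 2 probes

sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                n_features_to_select=2, direction="forward")
sfs.fit(X, y)
print("selected probes:", np.flatnonzero(sfs.get_support()))
```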

  20. Computer-aided identification of the water diffusion coefficient for maize kernels dried in a thin layer

    NASA Astrophysics Data System (ADS)

    Kujawa, Sebastian; Weres, Jerzy; Olek, Wiesław

    2016-07-01

    Uncertainties in mathematical modelling of water transport in cereal grain kernels during drying and storage are mainly due to implementing unreliable values of the water diffusion coefficient and simplifying the geometry of kernels. In the present study an attempt was made to reduce the uncertainties by developing a method for computer-aided identification of the water diffusion coefficient and more accurate 3D geometry modelling for individual kernels using original inverse finite element algorithms. The approach was exemplified by identifying the water diffusion coefficient for maize kernels subjected to drying. On the basis of the developed method, values of the water diffusion coefficient were estimated, 3D geometry of a maize kernel was represented by isoparametric finite elements, and the moisture content inside maize kernels dried in a thin layer was predicted. Validation of the results against experimental data showed significantly lower error values than in the case of results obtained for the water diffusion coefficient values available in the literature.
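
    The inverse identification can be illustrated in simplified form by idealizing the kernel as a sphere (the paper uses true 3D finite-element geometry) and fitting D so that the analytic thin-layer drying curve matches measured moisture ratios; the data and radius below are invented.

```python
# Inverse fit of a water diffusion coefficient D for an idealized spherical
# kernel, using the analytic series solution for the moisture ratio.
import numpy as np
from scipy.optimize import minimize_scalar

R = 4e-3  # m, equivalent kernel radius (assumed)

def moisture_ratio(t, D, terms=50):
    """MR(t) = (6/pi^2) * sum_n exp(-n^2 pi^2 D t / R^2) / n^2."""
    n = np.arange(1, terms + 1)[:, None]
    series = np.exp(-(n * np.pi) ** 2 * D * t / R ** 2) / n ** 2
    return (6 / np.pi ** 2) * series.sum(axis=0)

t_meas = np.array([0.0, 600, 1800, 3600, 7200])       # s, drying times
mr_meas = np.array([1.0, 0.82, 0.58, 0.38, 0.17])     # invented measurements

res = minimize_scalar(
    lambda D: np.sum((moisture_ratio(t_meas, D) - mr_meas) ** 2),
    bounds=(1e-12, 1e-8), method="bounded")
print("identified D =", res.x, "m^2/s")
```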

  1. ROLE OF COMPUTATIONAL CHEMISTRY IN SUPPORT OF HAZARD IDENTIFICATION (ID) MECHANISM-BASED SARS

    EPA Science Inventory

    A mechanism-based structure-activity relationship (SAR) study attempts to determine a structural basis for activity by targeting a single or a few stages in a postulated mechanism of action. Computational chemistry approaches provide a valuable complement to experiment for probing...

  2. Computation of expectation values from vibrational coupled-cluster at the two-mode coupling level

    NASA Astrophysics Data System (ADS)

    Zoccante, Alberto; Seidler, Peter; Christiansen, Ove

    2011-04-01

    In this work we show how the vibrational coupled-cluster method at the two-mode coupling level can be used to calculate zero-point vibrational averages of properties. A technique is presented in which any expectation value can be calculated using a single set of Lagrangian multipliers, computed by iteratively solving a single linear set of equations. Sample calculations show that the resulting algorithm scales only with the third power of the number of modes, thereby making large systems accessible. Moreover, we present applications to water, pyrrole, and para-nitroaniline.
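
    In generic coupled-cluster Lagrangian notation (a sketch of the standard machinery, not transcribed from the paper), the zero-point average of an operator A and the single linear system fixing the multipliers take the form

      \langle A \rangle \;=\; \big\langle \Phi \big|\, (1+\Lambda)\, e^{-T} A\, e^{T} \big|\, \Phi \big\rangle ,
      \qquad
      \boldsymbol{\eta} \;+\; \boldsymbol{\lambda}^{T}\mathbf{J} \;=\; \mathbf{0} ,

    where T is the cluster operator, J the coupled-cluster Jacobian, and eta the derivative of the energy with respect to the amplitudes; because the multipliers do not depend on A, one solve serves every expectation value.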

  3. Computer Aided Drug Design Approaches for Identification of Novel Autotaxin (ATX) Inhibitors.

    PubMed

    Vrontaki, Eleni; Melagraki, Georgia; Kaffe, Eleanna; Mavromoustakos, Thomas; Kokotos, George; Aidinis, Vassilis; Afantitis, Antreas

    2016-01-01

    Autotaxin (ATX) has become an attractive target of considerable pharmacological and pharmacochemical interest in LPA-related diseases, and to date many small organic molecules have been explored as potential ATX inhibitors. In silico methods serve as an important and valuable aid in the effort to identify novel, effective ATX inhibitors. In particular, Virtual Screening (VS) has recently received increased attention due to the large datasets made available, the development of advanced VS techniques, and the encouraging fact that VS has contributed to the discovery of several compounds that have either reached the market or entered clinical trials. Different techniques and workflows have been reported in the literature with the goal of prioritizing possible potent hits. In this review article, several deployed virtual screening strategies for the identification of novel potent ATX inhibitors are described. PMID:26997151
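
    As a flavor of the simplest VS workflow covered by such reviews, the sketch below ranks a compound library by fingerprint similarity to a known active, using RDKit. The query SMILES and the two-compound library are placeholders, not actual ATX inhibitors, and a real campaign would layer docking, pharmacophore, or QSAR filters on top:

      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      def morgan_fp(smiles):
          return AllChem.GetMorganFingerprintAsBitVect(
              Chem.MolFromSmiles(smiles), 2, nBits=2048)

      query_fp = morgan_fp("CC(=O)Oc1ccccc1C(=O)O")          # placeholder query
      library = {"cmpd_1": "c1ccccc1O",                      # toy library
                 "cmpd_2": "CC(=O)Nc1ccc(O)cc1"}

      ranked = sorted(((name, DataStructs.TanimotoSimilarity(query_fp, morgan_fp(smi)))
                       for name, smi in library.items()),
                      key=lambda kv: kv[1], reverse=True)
      print(ranked)      # highest-similarity compounds are prioritized as hits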

  4. IUE data reduction: Wavelength determinations and line identifications using a VAX/750 computer

    NASA Astrophysics Data System (ADS)

    Davidson, J. P.; Bord, D. J.

    A fully automated, interactive system for determining the wavelengths of features in extracted IUE spectra is described. Wavelengths are recorded from video displays of expanded plots of individual orders using a movable cursor and then corrected for IUE wavelength scale errors. The estimated accuracy of an individual wavelength in the final tabulation is 0.050 A. Such lists are ideally suited for line identification work using the method of wavelength coincidence statistics (WCS). The results of WCS studies of the ultraviolet spectra of the chemically peculiar (CP) stars iota Coronae Borealis and kappa Cancri are presented. Aside from confirming a number of previously reported aspects of the abundance patterns in these stars, the searches produced some interesting new discoveries, notably the presence of Hf in the spectrum of kappa Cancri. The implications of this work for theories designed to account for anomalous abundances in chemically peculiar stars are discussed.
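
    The WCS method itself is easy to sketch: count how many observed lines fall within a tolerance of a species' laboratory wavelengths, then compare that count against counts obtained after shifting the whole observed list by random offsets. Tolerances, shift ranges, and names below are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      def n_coincidences(observed, lab, tol=0.05):            # tol in Angstroms
          observed, lab = np.asarray(observed), np.asarray(lab)
          return int(sum(np.any(np.abs(lab - w) <= tol) for w in observed))

      def wcs_zscore(observed, lab, tol=0.05, trials=1000, max_shift=2.0):
          """Coincidence count and its significance versus randomly shifted lists."""
          observed = np.asarray(observed)
          hits = n_coincidences(observed, lab, tol)
          random_hits = [n_coincidences(observed + rng.uniform(-max_shift, max_shift),
                                        lab, tol) for _ in range(trials)]
          return hits, (hits - np.mean(random_hits)) / (np.std(random_hits) + 1e-12)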

  5. A comparative analysis of computational approaches and algorithms for protein subcomplex identification

    PubMed Central

    Zaki, Nazar; Mora, Antonio

    2014-01-01

    High-throughput AP-MS methods have allowed the identification of many protein complexes. However, most post-processing methods for this type of data have focused on the detection of protein complexes rather than their subcomplexes. Here, we review the results of some existing methods that may allow subcomplex detection and propose alternative methods for detecting subcomplexes from AP-MS data. We assessed and compared the use of overlapping clustering methods, methods based on the core-attachment model, and our own prediction strategy (TRIBAL). The hypothesis behind TRIBAL is that subcomplex-building information may be concealed in the multiple edges generated by an interaction repeated in different contexts in the raw data. The CACHET method offered the best results when the predicted subcomplexes were evaluated using both the hypergeometric and geometric scores. TRIBAL offered the best performance when using a strict meet-min score. PMID:24584908
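
    The hypergeometric score used in such evaluations can be sketched directly: the probability of drawing at least the observed overlap between a predicted subcomplex and a reference complex by chance. Names and numbers are illustrative:

      from scipy.stats import hypergeom

      def hypergeom_score(predicted, reference, n_proteins):
          """P(overlap >= observed) when |predicted| proteins are drawn at random
          from a universe of n_proteins containing the reference complex."""
          predicted, reference = set(predicted), set(reference)
          overlap = len(predicted & reference)
          return hypergeom.sf(overlap - 1, n_proteins, len(reference), len(predicted))

      # 4 of 5 predicted members inside a 6-member reference complex, out of a
      # 1000-protein universe -> a tiny p-value, i.e. a strong match.
      print(hypergeom_score(["A", "B", "C", "D", "E"],
                            ["A", "B", "C", "D", "F", "G"], n_proteins=1000))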

  6. A cascadic monotonic time-discretized algorithm for finite-level quantum control computation

    NASA Astrophysics Data System (ADS)

    Ditz, P.; Borzì, A.

    2008-03-01

    A computer package (CNMS) is presented, aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics, where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in the most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals.

    Program summary
    Program title: CNMS
    Catalogue identifier: ADEB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 770
    No. of bytes in distributed program, including test data, etc.: 7098
    Distribution format: tar.gz
    Programming language: MATLAB 6
    Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM
    Operating system: Microsoft Windows XP
    Word size: 32
    Classification: 4.9
    Nature of problem: Quantum control
    Solution method: Iterative
    Running time: 60-600 sec
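
    A typical finite-level quantum optimal control problem of the kind such monotonic schemes solve, written generically (terminal-state objective plus a laser-fluence penalty; not transcribed from the CNMS documentation):

      \min_{u}\; J(\psi,u) \;=\; \tfrac{1}{2}\,\big\|\psi(T)-\psi_d\big\|^{2}
      \;+\; \tfrac{\gamma}{2}\int_{0}^{T} u(t)^{2}\,dt
      \quad\text{subject to}\quad
      i\,\dot{\psi}(t) \;=\; \big(H_0 + u(t)\,H_1\big)\,\psi(t),
      \qquad \psi(0)=\psi_0 ,

    where \psi_d is the desired target state, u(t) the laser control, and \gamma weights the limited-laser-resources constraint.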

  7. The Effects of Linear Microphone Array Changes on Computed Sound Exposure Level Footprints

    NASA Technical Reports Server (NTRS)

    Mueller, Arnold W.; Wilson, Mark R.

    1997-01-01

    Airport land planning commissions often are faced with determining how much area around an airport is affected by the sound exposure levels (SELs) associated with helicopter operations. This paper presents a study of the effects that changing the size and composition of a microphone array has on the computed SEL contour (ground footprint) areas used by such commissions. Descent flight acoustic data measured by a fifteen-microphone array were reprocessed for five different combinations of microphones within this array, resulting in data for six different arrays for which SEL contours were computed. The fifteen-microphone array was defined as the 'baseline' array since it contained the greatest amount of data. The computations used a newly developed technique, the Acoustic Re-propagation Technique (ART), which uses parts of the NASA noise prediction program ROTONET. After the areas of the SEL contours were calculated, the differences between the areas were determined. The area differences show that a five- and a three-microphone array (with spacing typical of that required by the FAA FAR Part 36 noise certification procedure) compare well with the fifteen-microphone array. All data were obtained from a database resulting from a joint project conducted by NASA and U.S. Army researchers at Langley and Ames Research Centers. A brief description of the joint project test design, microphone array set-up, and data reduction methodology associated with the database is given.
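
    The per-microphone quantity underlying such footprints is the sound exposure level: the event's acoustic energy referenced to a one-second duration. A short sketch of its computation from a sampled A-weighted level time history, with synthetic data:

      import numpy as np

      def sel_db(la_t, dt, t_ref=1.0):
          """SEL = 10 log10( integral of 10^(L_A(t)/10) dt / t_ref )."""
          return 10.0 * np.log10(np.sum(10.0 ** (la_t / 10.0)) * dt / t_ref)

      t = np.arange(0.0, 30.0, 0.5)                        # 30 s event, 0.5 s samples
      la = 65.0 + 20.0 * np.exp(-((t - 15.0) / 5.0) ** 2)  # synthetic flyover history
      print("SEL = %.1f dB" % sel_db(la, dt=0.5))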

  8. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

    We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). The system keeps a single microorganism in the middle of the visual field under a microscope by visually servoing an automated stage. We propose a new energy function for the level set method that constrains the amount of light intensity inside the detected object contour in order to control the number of detected objects. The algorithm is implemented in the CPV system, and the computational time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be kept in track even when other paramecia appear in the visual field and contact the tracked paramecium.
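
    A rough sketch of one explicit level-set update combining a Chan-Vese-style region term with a penalty that ties the intensity enclosed by the contour to a target amount, echoing the energy constraint described above. This is not the CPV hardware implementation, and all parameters are illustrative:

      import numpy as np

      def level_set_step(phi, img, target_mass, dt=0.1, lam=1.0, mu=0.5):
          """One explicit update of the level-set field phi on image img."""
          inside = phi > 0
          c_in, c_out = img[inside].mean(), img[~inside].mean()
          region = (img - c_out) ** 2 - (img - c_in) ** 2   # grow where pixels match inside
          mass_err = img[inside].sum() - target_mass        # enclosed-intensity constraint
          gy, gx = np.gradient(phi)
          grad_mag = np.hypot(gx, gy) + 1e-8
          return phi + dt * grad_mag * (lam * region - mu * np.sign(mass_err))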

  9. Identification of a unique cause of ring artifact seen in computed tomography trans-axial images.

    PubMed

    Jha, Ashish Kumar; Purandare, Nilendu C; Shah, Sneha; Agrawal, Archi; Puranik, Ameya D; Rangarajan, Venkatesh

    2013-10-01

    Artifacts present in computed tomography (CT) images often degrade image quality and, ultimately, the diagnostic outcome. A ring artifact in the trans-axial image is caused by either a miscalibrated or a defective detector element in the detector row and is usually categorized as a scanner-based artifact. Here, a ring artifact detected on the trans-axial CT image of a positron emission tomography/computed tomography (PET/CT) study was caused by contamination of the CT tube aperture by a droplet of injectable contrast medium. The artifact was corrected by removing the contrast droplet from the CT tube aperture. The ring artifact itself is common and widely cited in the literature; our case puts forward an uncommon cause and its method of correction, neither of which has been described previously. PMID:24379535

  10. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.

    2012-12-01

    A computationally intensive groundwater modelling case study developed with the FEFLOW software is used to evaluate the proposed methodology. Multiple surrogates of this computationally intensive model, with different levels of fidelity, are created and applied. The dynamically dimensioned search (DDS) optimization algorithm is used as the search engine in the surrogate-enabled calibration framework. Results show that this framework can substantially reduce the number of original model evaluations required for calibration by intelligently utilizing faster-to-run surrogates in the course of optimization. Results also demonstrate that the compromise between efficiency (reduced run time) and fidelity of a surrogate model is critically important to the success of the framework, as a surrogate with unreasonably low fidelity, despite being fast, might be quite misleading in calibrating the original model of interest.
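
    For reference, the DDS search step is compact enough to sketch: early iterations perturb most decision variables (global search), later ones perturb few (local refinement), with greedy acceptance. The objective, the bounds, and the clipping of out-of-range values below are placeholders; r = 0.2 is the commonly used neighborhood parameter:

      import numpy as np

      def dds(objective, lo, hi, n_iter=500, r=0.2, seed=0):
          """lo, hi: numpy arrays of lower/upper bounds for the decision variables."""
          rng = np.random.default_rng(seed)
          x_best = rng.uniform(lo, hi)
          f_best = objective(x_best)
          for i in range(1, n_iter + 1):
              # probability of perturbing each variable decays with iteration count
              p = max(1.0 - np.log(i) / np.log(n_iter), 1.0 / len(lo))
              mask = rng.random(len(lo)) < p
              if not mask.any():
                  mask[rng.integers(len(lo))] = True   # perturb at least one variable
              x = x_best.copy()
              x[mask] += r * (hi[mask] - lo[mask]) * rng.standard_normal(mask.sum())
              x = np.clip(x, lo, hi)
              fx = objective(x)
              if fx < f_best:                          # greedy acceptance
                  x_best, f_best = x, fx
          return x_best, f_best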