Sample records for automatic script identification

  1. Automatic script identification from images using cluster-based templates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Kerns, L.; Kelly, P.

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
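
    A minimal Python sketch of the cluster-based template idea described in this abstract, assuming binary page arrays (1 = ink), reducing symbol extraction to connected components and the clustering to k-means; all names and parameters (size, k, min_cluster) are illustrative assumptions, not the authors' implementation.

      # Minimal sketch (Python + SciPy) of template training and matching.
      import numpy as np
      from scipy.ndimage import label, find_objects
      from scipy.cluster.vq import kmeans2

      def extract_symbols(page, size=16):
          """Connected components, each rescaled to a fixed size x size patch."""
          labelled, _ = label(page)
          symbols = []
          for sl in find_objects(labelled):
              patch = page[sl].astype(float)
              rows = np.linspace(0, patch.shape[0] - 1, size).astype(int)
              cols = np.linspace(0, patch.shape[1] - 1, size).astype(int)
              symbols.append(patch[np.ix_(rows, cols)].ravel())
          return np.array(symbols)

      def train_templates(pages, k=50, min_cluster=5):
          """Cluster training symbols; keep centroids of the major clusters."""
          syms = np.vstack([extract_symbols(p) for p in pages])
          centroids, assignment = kmeans2(syms, k, minit='++', seed=0)
          counts = np.bincount(assignment, minlength=k)
          return centroids[counts >= min_cluster]   # prune minor clusters

      def identify_script(page, templates_by_script):
          """Pick the script whose templates best match the page's symbols."""
          syms = extract_symbols(page)
          def score(templates):
              d = np.linalg.norm(syms[:, None, :] - templates[None], axis=2)
              return d.min(axis=1).mean()           # mean best-template distance
          return min(templates_by_script,
                     key=lambda s: score(templates_by_script[s]))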

  2. Trial-Based Functional Analysis Informs Treatment for Vocal Scripting.

    PubMed

    Rispoli, Mandy; Brodhead, Matthew; Wolfe, Katie; Gregori, Emily

    2018-05-01

    Research on trial-based functional analysis has primarily focused on socially maintained challenging behaviors. However, procedural modifications may be necessary to clarify ambiguous assessment results. The purposes of this study were to evaluate the utility of iterative modifications to trial-based functional analysis on the identification of putative reinforcement and subsequent treatment for vocal scripting. For all participants, modifications to the trial-based functional analysis identified a primary function of automatic reinforcement. The structure of the trial-based format led to identification of social attention as an abolishing operation for vocal scripting. A noncontingent attention treatment was evaluated using withdrawal designs for each participant. This noncontingent attention treatment resulted in near zero levels of vocal scripting for all participants. Implications for research and practice are presented.

  3. [Development of a Software for Automatically Generated Contours in Eclipse TPS].

    PubMed

    Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin

    2015-03-01

    The automatic generation of planning target and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop a software tool for automatically generating contours in Eclipse TPS. The software, named Contour Auto Margin (CAM), is composed of contour operation functions, script generation with visualization, and script file operations. Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, scripts generated by the software could not only generate contours automatically but also perform contour post-processing. For the different cancers, there was no difference between automatically generated contours and manually created contours. CAM is a user-friendly and powerful tool that can generate contours quickly and automatically in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists improved.

  4. Running Gaussian16 Software Jobs on the Peregrine System | High-Performance

    Science.gov Websites

    … parallel setup is taken care of automatically based on settings in the PBS script example below. … filesystem called /dev/shm. This scratch space is set automatically by the example script below. … An example script for batch submission is given below: #!/bin/bash #PBS -l nodes=2 #PBS -l …

  5. Page segmentation using script identification vectors: A first look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Cannon, M.; Kelly, P.

    1997-07-01

    Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
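
    The visualization step lends itself to a short sketch: given one 13-dimensional script identification vector per connected component, project onto the first three principal components and scale them to red, green, and blue. The shapes and names below are assumptions for illustration, not the paper's code.

      # Sketch: per-component script-ID vectors -> first three PCs -> RGB.
      import numpy as np

      def vectors_to_rgb(id_vectors):
          """id_vectors: (n_components, 13) distances to per-script templates."""
          X = id_vectors - id_vectors.mean(axis=0)
          _, _, Vt = np.linalg.svd(X, full_matrices=False)    # principal axes
          pcs = X @ Vt[:3].T                  # first three principal components
          lo, hi = pcs.min(axis=0), pcs.max(axis=0)
          rgb = (pcs - lo) / np.where(hi > lo, hi - lo, 1.0)  # scale to [0, 1]
          return rgb                          # one (r, g, b) per component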

  6. Proteomics to go: Proteomatic enables the user-friendly creation of versatile MS/MS data evaluation workflows.

    PubMed

    Specht, Michael; Kuhlgert, Sebastian; Fufezan, Christian; Hippler, Michael

    2011-04-15

    We present Proteomatic, an operating-system-independent and user-friendly platform that enables the construction and execution of MS/MS data evaluation pipelines using free and commercial software. Required external programs, such as those for peptide identification, are downloaded automatically in the case of free software. Due to a strict separation of functionality and presentation, and support for multiple scripting languages, new processing steps can be added easily. Proteomatic is implemented in C++/Qt; scripts are implemented in Ruby, Python, and PHP. All source code is released under the LGPL. Source code and installers for Windows, Mac OS X, and Linux are freely available at http://www.proteomatic.org. Contact: michael.specht@uni-muenster.de. Supplementary data are available at Bioinformatics online.

  7. Formatting scripts with computers and Extended BASIC.

    PubMed

    Menning, C B

    1984-02-01

    A computer program, written in the language of Extended BASIC, is presented which enables scripts for educational media to be quickly written in a nearly unformatted style. From the resulting script file, stored on magnetic tape or disk, the computer program formats the script into either a storyboard, a presentation, or a narrator's script. Script headings and page and paragraph numbers are automatic features of the word processing. Suggestions are given for making personal modifications to the computer program.

  8. Semi-automatic indexing of PostScript files using Medical Text Indexer in medical education.

    PubMed

    Mollah, Shamim Ara; Cimino, Christopher

    2007-10-11

    At Albert Einstein College of Medicine a large part of the online lecture materials consists of PostScript files. As the collection grows it becomes essential to create a digital library to have easy access to relevant sections of the lecture material that is full-text indexed; to create this index it is necessary to extract all the text from the document files that constitute the originals of the lectures. In this study we present a semi-automatic indexing method using a robust technique for extracting text from PostScript files and the National Library of Medicine's Medical Text Indexer (MTI) program for indexing the text. This model can be applied to other medical schools for indexing purposes.

  9. Early Market Site Identification Data

    DOE Data Explorer

    Levi Kilcher

    2016-04-01

    This data was compiled for the 'Early Market Opportunity Hot Spot Identification' project. The data and scripts included were used in the 'MHK Energy Site Identification and Ranking Methodology' Reports (Part I: Wave, NREL Report #66038; Part II: Tidal, NREL Report #66079). The Python scripts use the Excel data files to generate a set of results, some of which were described in the reports. The scripts depend on the 'score_site' package, and the score_site package depends on a number of standard Python libraries (see the score_site install instructions).

  10. Saving Time with Automated Account Management

    ERIC Educational Resources Information Center

    School Business Affairs, 2013

    2013-01-01

    Thanks to intelligent solutions, schools, colleges, and universities no longer need to manage user account life cycles by using scripts or tedious manual procedures. The solutions house the scripts and manual procedures. Accounts can be automatically created, modified, or deleted in all applications within the school. This article describes how an…

  11. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    NASA Astrophysics Data System (ADS)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model, generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
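
    The coverage-driven generation can be illustrated with a small Python sketch that derives event sequences achieving edge coverage over a plain state machine; the real tool consumes UML state machine models and emits a vendor scripting language, both elided here as assumptions of this toy version.

      # Toy edge-coverage test generation over a plain state machine.
      from collections import deque

      def edge_coverage_tests(transitions, start):
          """transitions: dict state -> list of (event, next_state) pairs."""
          uncovered = {(s, e, t) for s, outs in transitions.items()
                       for e, t in outs}
          tests = []
          while uncovered:
              seen, queue, found = {start}, deque([(start, [])]), None
              while queue and found is None:          # BFS to an uncovered edge
                  state, path = queue.popleft()
                  for event, nxt in transitions.get(state, []):
                      step = path + [(state, event, nxt)]
                      if (state, event, nxt) in uncovered:
                          found = step
                          break
                      if nxt not in seen:
                          seen.add(nxt)
                          queue.append((nxt, step))
              if found is None:                       # rest unreachable
                  break
              uncovered.difference_update(found)      # path edges now covered
              tests.append([event for _, event, _ in found])
          return tests

      # e.g. edge_coverage_tests({'idle': [('start', 'run')],
      #                           'run':  [('stop', 'idle')]}, 'idle')
      # -> [['start'], ['start', 'stop']]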

  12. galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.

    PubMed

    Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M

    2004-06-12

    The prevalent use of similarity searches such as BLAST to identify sequences and species implicitly assumes that the reference database offers an extensive sequence sampling. This is often not the case, limiting the reliability of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies and consequently sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se

  13. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    NASA Astrophysics Data System (ADS)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof, and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing for automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.

  14. Simplifying Chandra aperture photometry with srcflux

    NASA Astrophysics Data System (ADS)

    Glotfelty, Kenny

    2014-11-01

    This poster will highlight some of the features of the srcflux script in CIAO. This script combines many threads and tools to compute photometric properties for sources: counts, rates, various fluxes, and confidence intervals or upper limits. Beginning and casual X-ray astronomers benefit greatly from the simple interface: just specify the event file and a celestial location. Power users and X-ray astronomy experts can take advantage of all the parameters to automatically produce catalogs for entire fields. Current limitations and future enhancements of the script will also be presented.

  15. Development of visual expertise for reading: rapid emergence of visual familiarity for an artificial script

    PubMed Central

    Maurer, Urs; Blau, Vera C.; Yoncheva, Yuliya N.; McCandliss, Bruce D.

    2010-01-01

    Adults produce left-lateralized N170 responses to visual words relative to control stimuli, even within tasks that do not require active reading. This specialization begins in preschoolers as a right-lateralized N170 effect. We investigated whether this developmental shift reflects an early learning phenomenon, such as attaining visual familiarity with a script, by training adults in an artificial script and measuring N170 responses before and afterward. Training enhanced the N170 response, especially over the right hemisphere. This suggests N170 sensitivity to visual familiarity with a script before reading becomes sufficiently automatic to drive left-lateralized effects in a shallow encoding task. PMID:20614357

  16. 75 FR 38026 - Medicare Program; Identification of Backward Compatible Version of Adopted Standard for E...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... Programs (NCPDP) Prescriber/Pharmacist Interface SCRIPT standard, Implementation Guide, Version 10... Prescriber/Pharmacist Interface SCRIPT standard, Version 8, Release 1 and its equivalent NCPDP Prescriber/Pharmacist Interface SCRIPT Implementation Guide, Version 8, Release 1 (hereinafter referred to as the...

  17. An Open-Source Automated Peptide Synthesizer Based on Arduino and Python.

    PubMed

    Gali, Hariprasad

    2017-10-01

    The development of the first open-source automated peptide synthesizer, PepSy, using an Arduino UNO and readily available components is reported. PepSy was primarily designed to synthesize small peptides on a relatively small scale (<100 µmol). Scripts to operate PepSy in fully automatic or manual mode were written in Python. The fully automatic script includes functions to carry out resin swelling, resin washing, single coupling, double coupling, Fmoc deprotection, ivDde deprotection, on-resin oxidation, end capping, and amino acid/reagent line cleaning. Several small peptides and peptide conjugates were successfully synthesized on PepSy with reasonably good yields and purity depending on the complexity of the peptide.

  18. Writers Identification Based on Multiple Windows Features Mining

    NASA Astrophysics Data System (ADS)

    Fadhil, Murad Saadi; Alkawaz, Mohammed Hazim; Rehman, Amjad; Saba, Tanzila

    2016-03-01

    Nowadays, writer identification, which identifies the original writer of a script with high accuracy, is in high demand. One of the main challenges in writer identification is how to extract discriminative features from different authors' scripts so as to classify them precisely. In this paper, an adaptive division method for offline Latin script has been implemented using several variant window sizes. From fragments of binarized text, a set of features is extracted and classified into clusters in the form of groups or classes. Finally, the proposed approach has been tested with various parameters for text division and window size. It is observed that selection of the right window size yields a well-positioned window division. The proposed approach is tested on the IAM standard dataset (IAM, Institut für Informatik und angewandte Mathematik, University of Bern, Bern, Switzerland), which is a constraint-free script database. Finally, the achieved results are compared with several techniques reported in the literature.
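
    A schematic Python rendering of the windowed feature-mining idea, assuming a binarized text image; the window features (ink density and centroid) and the k-means clustering are stand-ins for the paper's discriminative features and classifier.

      # Schematic windowed feature mining on a binarized text image (1 = ink).
      import numpy as np
      from scipy.cluster.vq import kmeans2

      def window_features(img, win=32):
          feats = []
          h, w = img.shape
          for y in range(0, h - win + 1, win):
              for x in range(0, w - win + 1, win):
                  patch = img[y:y + win, x:x + win]
                  if patch.any():                     # skip empty windows
                      ys, xs = np.nonzero(patch)
                      feats.append([patch.mean(),     # ink density
                                    ys.mean() / win,  # vertical centroid
                                    xs.mean() / win]) # horizontal centroid
          return np.array(feats)

      def cluster_fragments(img, win=32, k=8):
          feats = window_features(img, win)
          centroids, labels = kmeans2(feats, k, minit='++', seed=0)
          return centroids, labels                    # fragment groups/classes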

  19. SU-F-P-36: Automation of Linear Accelerator Star Shot Measurement with Advanced XML Scripting and Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, N; Knutson, N; Schmidt, M

    Purpose: To verify a method used to automatically acquire jaw, MLC, collimator, and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator, and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to a fixed imager geometry, a method utilizing a 5 mm wide steel ruler placed on the table and centered within a 15×15 cm² open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC, and couch) were obtained using our proposed as well as the traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC, and couch) measurements were obtained in a single 5-minute delivery using the TBDM XML script method, compared to 60 minutes for equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3 mm of each other. Conclusion: The presented automatic approach of acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach, with equivalent results.

  20. Cross-Language Transfer of Word Reading Accuracy and Word Reading Fluency in Spanish-English and Chinese-English Bilinguals: Script-Universal and Script-Specific Processes

    ERIC Educational Resources Information Center

    Pasquarella, Adrian; Chen, Xi; Gottardo, Alexandra; Geva, Esther

    2015-01-01

    This study examined cross-language transfer of word reading accuracy and word reading fluency in Spanish-English and Chinese-English bilinguals. Participants included 51 Spanish-English and 64 Chinese-English bilinguals. Both groups of children completed parallel measures of phonological awareness, rapid automatized naming, word reading accuracy,…

  1. Automated Sequence Processor: Something Old, Something New

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry

    2012-01-01

    High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process, and a system engineering task for ASP revitalization was organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK, and other scripting languages. ASP processes, checks, and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP checks that commands are non-interactive, processes the commands through a command simulator, and then packages them if there are no errors. ASP must be active 24 hours a day, 7 days a week.

  2. Text block identification in restoration process of Javanese script damage

    NASA Astrophysics Data System (ADS)

    Himamunanto, A. R.; Setyowati, E.

    2018-05-01

    Generally, a sheet of a document contains two objects of information, namely text and image. The text block area in a sheet of manuscript is a vital object because the restoration process is carried out only within this object. Text block or text area identification therefore becomes an important step before restoration. This paper describes the steps leading to the restoration of Javanese script damage. The process stages are: pre-processing, text block identification, segmentation, damage identification, and restoration. The test results based on the input manuscript “Hamong Tani” show that the system works with a success rate of 82.07%.
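
    Text block identification, the step this paper highlights, is often done with projection profiles on a binarized page; the following Python sketch illustrates that generic step under assumed inputs and is not the authors' method.

      # Generic text-band location via a horizontal projection profile (1 = ink).
      import numpy as np

      def text_bands(page, min_ink=1):
          """Return (top, bottom) row ranges whose ink count reaches min_ink."""
          inked = page.sum(axis=1) >= min_ink        # ink per row
          bands, start = [], None
          for row, on in enumerate(inked):
              if on and start is None:
                  start = row
              elif not on and start is not None:
                  bands.append((start, row))
                  start = None
          if start is not None:
              bands.append((start, len(inked)))
          return bands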

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C

    Purpose: To implement a novel, automatic, institutionally customizable DVH quantities evaluation and PDF report tool on the Philips Pinnacle treatment planning system (TPS). Methods: An add-on program (P3DVHStats) was developed by us to enable automatic evaluation of DVH quantities (including both volume- and dose-based quantities, such as V98, V100, and D2) and automatic PDF report generation, for EMR convenience. The implementation is based on a combination of the Philips Pinnacle scripting tool and the Java language pre-installed on each Pinnacle Sun Solaris workstation. A single Pinnacle script provides the user convenient access to the program when needed. The activated script will first export DVH data for user-selected ROIs from the current Pinnacle plan trial; a Java program then provides a simple GUI, utilizes the data to compute any user-requested DVH quantities, and compares them with preset institutional DVH planning goals; if accepted by the user, the program will also generate a PDF report of the results and export it from Pinnacle to an EMR import folder via FTP. Results: The program was tested thoroughly and has been released for clinical use at our institution (Pinnacle Enterprise server with both thin clients and P3PC access), for all dosimetry and physics staff, with excellent feedback. It used to take a few minutes to calculate these DVH quantities for IMRT/VMAT plans with an MS-Excel worksheet and manually save them as a PDF report; with the new program, it literally takes a few mouse clicks and less than 30 seconds to complete the same tasks. Conclusion: A Pinnacle scripting and Java language based program was successfully implemented and customized to our institutional needs. It is shown to dramatically reduce the time and effort needed for DVH quantities computation and EMR reporting.
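
    The DVH arithmetic the tool automates can be sketched briefly; the stand-in below is Python rather than the paper's Pinnacle script plus Java, and it assumes a cumulative DVH sampled as an increasing dose axis with a non-increasing volume-fraction curve.

      # Stand-in for the DVH arithmetic: dose_gy increasing, volfrac the
      # cumulative (non-increasing) fraction of structure volume at that dose.
      import numpy as np

      def v_at_dose(dose_gy, volfrac, d):
          """V_D: fraction of volume receiving at least dose d."""
          return np.interp(d, dose_gy, volfrac)

      def d_at_volume(dose_gy, volfrac, v):
          """D_V: dose received by the hottest volume fraction v (e.g. D2)."""
          return np.interp(v, volfrac[::-1], dose_gy[::-1])

      # e.g., with a 60 Gy prescription:
      # v98 = 100 * v_at_dose(dose, vol, 0.98 * 60.0)   # % volume at 98% of Rx
      # d2  = d_at_volume(dose, vol, 0.02)              # Gy to hottest 2%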

  4. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    NASA Astrophysics Data System (ADS)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques, Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising, and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
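
    Of the optimizers compared, PSO is compact enough to sketch; the Python snippet below assumes a user-supplied misfit function (for example, travel-time residuals in a homogeneous velocity model) and illustrative swarm parameters, not the authors' tuned workflow.

      # Compact particle swarm optimizer over a bounded search volume.
      import numpy as np

      rng = np.random.default_rng(0)

      def locate_pso(misfit, lo, hi, n=40, iters=200, w=0.7, c1=1.5, c2=1.5):
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x = rng.uniform(lo, hi, (n, len(lo)))      # particle positions
          v = np.zeros_like(x)
          pbest = x.copy()
          pcost = np.array([misfit(p) for p in x])
          g = pbest[pcost.argmin()]                  # global best location
          for _ in range(iters):
              r1, r2 = rng.random((2,) + x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              cost = np.array([misfit(p) for p in x])
              better = cost < pcost
              pbest[better], pcost[better] = x[better], cost[better]
              g = pbest[pcost.argmin()]
          return g, pcost.min()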

  5. Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century

    NASA Astrophysics Data System (ADS)

    Bartsch, H.; Erlebacher, G.

    2003-12-01

    amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.

  6. World Wide Web-based system for the calculation of substituent parameters and substituent similarity searches.

    PubMed

    Ertl, P

    1998-02-01

    Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. In Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.

  7. Visual word form familiarity and attention in lateral difference during processing Japanese Kana words.

    PubMed

    Nakagawa, A; Sukigara, M

    2000-09-01

    The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, to which subjects performed lexical decisions. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar script condition did increased stimulus presentation time affect each visual field differently. To examine this lateral difference during the processing of unfamiliar scripts as related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, while orthographically familiar Kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form. Copyright 2000 Academic Press.

  8. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.

    1998-12-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.

  9. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, Judith G.; Kelly, Patrick M.; Thomas, Timothy R.

    1998-01-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script.

  10. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    PubMed Central

    2012-01-01

    Background: The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods: This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results: Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions: This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945

  11. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.

    PubMed

    Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz

    2012-09-24

    The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.

  12. Chain of evidence generation for contrast enhancement in digital image forensics

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela

    2010-01-01

    The quality of the images obtained by digital cameras has improved greatly since the early days of digital photography. Unfortunately, it is not unusual in image forensics to encounter wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also to backlight conditions. To bring out otherwise invisible details, a stretch of the image contrast is required. The forensic rules for producing evidence require complete documentation of the processing steps, enabling the replication of the entire process. The automation of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents an automatic procedure to find contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step which extracts the features of the image and selects correction parameters. The parameters are then saved in JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in image forensics analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.

  13. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.

    PubMed

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open source MIT license at http://sourceforge.net/projects/seqpig/

  14. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
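
    The blackbox idea, enumerating every input a grammar can derive up to a bound, can be shown in miniature with a Python sketch; the grammar below is invented for illustration, whereas the actual work targets the SCL grammar through JPF.

      # Enumerate every string a toy grammar derives, up to a depth bound.
      def derive(grammar, symbol, depth):
          if symbol not in grammar:                  # terminal symbol
              yield symbol
              return
          if depth == 0:                             # expansion budget spent
              return
          for production in grammar[symbol]:
              parts = [list(derive(grammar, s, depth - 1)) for s in production]
              results = ['']
              for options in parts:                  # cartesian product
                  results = [r + o for r in results for o in options]
              yield from results

      toy_grammar = {                                # hypothetical mini-grammar
          'CMD':  [['VERB', ' ', 'ARG']],
          'VERB': [['SET'], ['GET']],
          'ARG':  [['MODE'], ['POWER']],
      }
      # sorted(set(derive(toy_grammar, 'CMD', 3)))
      # -> ['GET MODE', 'GET POWER', 'SET MODE', 'SET POWER']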

  15. Identification and feasibility test of specialized rural pedestrian safety training. Volume 4, PEDSAFE audiovisual scripts

    DOT National Transportation Integrated Search

    1981-03-01

    This report (Volume 4 of four volumes) provides the scripts for all audiovisuals employed in the PEDSAFE Program. Volume 1 of this report describes the conduct and results of the evaluation of the entire PEDSAFE Program and provides recommendations c...

  16. A Chinese Character Teaching System Using Structure Theory and Morphing Technology

    PubMed Central

    Sun, Linjia; Liu, Min; Hu, Jiajia; Liang, Xiaohui

    2014-01-01

    This paper proposes a Chinese character teaching system using Chinese character structure theory and 2D contour morphing technology. This system, comprising an offline phase and an online phase, automatically generates animation for the same Chinese character across different writing stages to intuitively show the evolution of shape and topology in the teaching of Chinese characters. The offline phase builds the component model database for the same script and the component correspondence database for different scripts. Given two or several different scripts of the same Chinese character, the online phase first divides the Chinese characters into components by using the process of Chinese character parsing, and then generates the evolution animation by using the process of Chinese character morphing. Finally, two writing stages of Chinese characters, i.e., seal script and clerical script, are used in an experiment to show the ability of the system. The result of the user experience study shows that the system can successfully guide students to improve their learning of Chinese characters. And the users agree that the system is interesting and can motivate them to learn. PMID:24978171

  17. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for model application in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and a SCADA database is proposed. The kernel algorithm of automatic parameter identification is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameters; the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, a typical water pipe network was selected as a case study of automatic model parameter identification, and satisfactory results were achieved.
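
    A schematic Python rendering of the RSA/Monte Carlo loop described here: sample candidate parameters, run the network model, and keep the behavioural sets whose simulated pressures match the SCADA observations. The simulate function and the threshold are placeholders; a real study would drive a hydraulic solver.

      # Keep "behavioural" parameter sets whose simulated pressures fit the data.
      import numpy as np

      rng = np.random.default_rng(1)

      def identify(simulate, observed, lo, hi, n=10000, tol=1.0):
          """simulate(params) -> simulated pressures at the SCADA points."""
          kept = []
          for _ in range(n):
              params = rng.uniform(lo, hi)           # Monte Carlo sample
              err = np.sqrt(np.mean((simulate(params) - observed) ** 2))
              if err <= tol:                         # behavioural threshold
                  kept.append((err, params))
          return sorted(kept, key=lambda t: t[0])    # best-fitting sets first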

  18. Developing Matlab scripts for image analysis and quality assessment

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, A. D.

    2011-11-01

    Image processing is a very helpful tool in many fields of modern science that involve digital imaging examination and interpretation. Processed images, however, often need to be correlated with the original image, in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy, and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
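
    Two of the indices named in the abstract, written here in Python rather than the author's Matlab, as a self-contained illustration of what such a quality-assessment script computes for an original/processed image pair.

      # Correlation coefficient and histogram entropy for an image pair.
      import numpy as np

      def correlation_coefficient(a, b):
          a, b = a.ravel().astype(float), b.ravel().astype(float)
          return float(np.corrcoef(a, b)[0, 1])

      def entropy(img, bins=256):
          """Shannon entropy (bits) of the image histogram."""
          hist, _ = np.histogram(img, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())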

  19. Experimental research control software system

    NASA Astrophysics Data System (ADS)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system intended for the automation of small-scale research has been developed. The software allows one to control equipment, acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, thus significantly reducing experimental setup automation efforts. In particular, minimal programming skills are required and supervisors have no reviewing troubles. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, control is performed by an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library performs interaction with external interfaces. While the most widely used interfaces are already implemented, a simple framework is provided for fast implementation of new software and hardware interfaces. While the software is in continuous development, with new features being implemented, it is already used in our laboratory for automation of helium-3 cryostat control and data acquisition. The software is open source and distributed under the GNU Public License.

  20. Possibilities for retracing of copyright violations on current video game consoles by optical disk analysis

    NASA Astrophysics Data System (ADS)

    Irmler, Frank; Creutzburg, Reiner

    2014-02-01

    This paper deals with the possibilities of retracing copyright violations on current video game consoles (e.g. Microsoft Xbox, Sony PlayStation, ...) by studying the corresponding optical storage media, DVD and Blu-ray. The possibilities of forensic investigation of DVD and Blu-ray Discs are presented. It is shown which information can be read by using freeware and commercial software for forensic examination. A detailed analysis is given of the visualization of hidden content and the possibility of finding out information about the burning hardware used for writing to the optical discs. In connection with a forensic analysis of the Windows registry of a suspect's PC, a detailed overview of the crime scene for forged DVD and Blu-ray Discs can be obtained. Optical discs are examined under forensic aspects and the obtained results are implemented in automatic analysis scripts for the commercial forensics program EnCase Forensic. It is shown that, for optical storage media, the drive used for writing can be identified. In particular, Blu-ray Discs contain the serial number of the burner. These and other findings were incorporated into the creation of various EnCase scripts for professional forensic investigation with EnCase Forensic. Furthermore, a detailed flowchart for a forensic investigation of copyright infringement was developed.

  1. 2DB: a Proteomics database for storage, analysis, presentation, and retrieval of information from mass spectrometric experiments.

    PubMed

    Allmer, Jens; Kuhlgert, Sebastian; Hippler, Michael

    2008-07-07

    The amount of information stemming from proteomics experiments involving (multidimensional) separation techniques, mass spectrometric analysis, and computational analysis is ever-increasing. Data from such an experimental workflow needs to be captured, related, and analyzed. Biological experiments within this scope produce heterogeneous data ranging from pictures of one- or two-dimensional protein maps and spectra recorded by tandem mass spectrometry to text-based identifications made by algorithms which analyze these spectra. Additionally, peptide and corresponding protein information needs to be displayed. In order to handle the large amount of data from computational processing of mass spectrometric experiments, automatic import scripts are available and the necessity for manual input to the database has been minimized. Information is in a generic format which abstracts from the specific software tools typically used in such an experimental workflow. The software is therefore capable of storing and cross-analysing results from many algorithms. A novel feature and a focus of this database is to facilitate protein identification by using peptides identified from mass spectrometry and link this information directly to respective protein maps. Additionally, our application employs spectral counting for quantitative presentation of the data. All information can be linked to hot spots on images to place the results into an experimental context. A summary of identified proteins, containing all relevant information per hot spot, is automatically generated, usually upon either a change in the underlying protein models or due to newly imported identifications. The supporting information for this report can be accessed in multiple ways using the user interface provided by the application. We present a proteomics database which aims to greatly reduce the evaluation time of results from mass spectrometric experiments and enhance result quality by allowing consistent data handling. Import functionality, automatic protein detection, and summary creation act together to facilitate data analysis. In addition, supporting information for these findings is readily accessible via the graphical user interface provided. The database schema and the implementation, which can easily be installed on virtually any server, can be downloaded in the form of a compressed file from our project webpage.

  2. Automation of radiation treatment planning : Evaluation of head and neck cancer patient plans created by the Pinnacle3 scripting and Auto-Planning functions.

    PubMed

    Speer, Stefan; Klein, Andreas; Kober, Lukas; Weiss, Alexander; Yohannes, Indra; Bert, Christoph

    2017-08-01

    Intensity-modulated radiotherapy (IMRT) techniques are now standard practice. IMRT or volumetric-modulated arc therapy (VMAT) allow treatment of the tumor while simultaneously sparing organs at risk. Nevertheless, treatment plan quality still depends on the physicist's individual skills, experiences, and personal preferences. It would therefore be advantageous to automate the planning process. This possibility is offered by the Pinnacle 3 treatment planning system (Philips Healthcare, Hamburg, Germany) via its scripting language or Auto-Planning (AP) module. AP module results were compared to in-house scripts and manually optimized treatment plans for standard head and neck cancer plans. Multiple treatment parameters were scored to judge plan quality (100 points = optimum plan). Patients were initially planned manually by different physicists and re-planned using scripts or AP. Script-based head and neck plans achieved a mean of 67.0 points and were, on average, superior to manually created (59.1 points) and AP plans (62.3 points). Moreover, they are characterized by reproducibility and lower standard deviation of treatment parameters. Even less experienced staff are able to create at least a good starting point for further optimization in a short time. However, for particular plans, experienced planners perform even better than scripts or AP. Experienced-user input is needed when setting up scripts or AP templates for the first time. Moreover, some minor drawbacks exist, such as the increase of monitor units (+35.5% for scripted plans). On average, automatically created plans are superior to manually created treatment plans. For particular plans, experienced physicists were able to perform better than scripts or AP; thus, the benefit is greatest when time is short or staff inexperienced.

  3. Comparing the Effects of Augmented Reality Phonics and Scripted Phonics Approaches on Achievement of At-Risk Kindergarten Students

    ERIC Educational Resources Information Center

    Ladd, Melissa

    2016-01-01

    This study strived to determine the effectiveness of the AR phonics program relative to the effectiveness of the scripted phonics program for developing the letter identification, sound verbalization, and blending abilities of kindergarten students considered at-risk based on state assessments. The researcher was interested in pretest and posttest…

  4. Automated Sequence Generation Process and Software

    NASA Technical Reports Server (NTRS)

    Gladden, Roy

    2007-01-01

    "Automated sequence generation" (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences.

  5. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop

    PubMed Central

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig’s scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054

  6. A nonlinear disturbance-decoupled elevation axis controller for the Multiple Mirror Telescope

    NASA Astrophysics Data System (ADS)

    Clark, Dusty; Trebisky, Tom; Powell, Keith

    2008-07-01

    The Multiple Mirror Telescope (MMT), upgraded in 2000 to a monolithic 6.5 m primary mirror from its original array of six 1.8 m primary mirrors, was commissioned with axis controllers designed early in the upgrade process without regard to structural resonances or the possibility of the need for digital filtering of the control axis signal path. Post-commissioning performance issues led us to investigate replacement of the original control system with a more modern digital controller with full control over the system filters and gain paths. This work, from system identification through controller design iteration by simulation and pre-deployment hardware-in-the-loop testing, was performed using latest-generation tools with Matlab® and Simulink®. Using Simulink's Real Time Workshop toolbox to automatically generate C source code for the controller from the Simulink diagram and a custom target build script, we were able to deploy the new controller into our existing software infrastructure running Wind River's VxWorks™ real-time operating system. This paper describes the process of the controller design, including system identification data collection, with discussion of the implementation of non-linear control modes and disturbance decoupling, which became necessary to obtain acceptable wind buffeting rejection.

  7. Texture for script identification.

    PubMed

    Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha

    2005-11-01

    The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.

  8. Search and Determine Integrated Environment (SADIE)

    NASA Astrophysics Data System (ADS)

    Sabol, C.; Schumacher, P.; Segerman, A.; Coffey, S.; Hoskins, A.

    2012-09-01

    A new and integrated high performance computing software applications package called the Search and Determine Integrated Environment (SADIE) is being jointly developed and refined by the Air Force and Naval Research Laboratories (AFRL and NRL) to automatically resolve uncorrelated tracks (UCTs) and build a more complete space object catalog for improved Space Situational Awareness (SSA). The motivation for SADIE is to respond to very challenging needs identified by, and guidance received from, Air Force Space Command (AFSPC) and other senior leaders to develop this technology in support of the evolving Joint Space Operations Center (JSpOC) and Alternate Space Control Center (ASC2)-Dahlgren. The JSpOC and JMS SSA mission requirements and threads flow down from the United States Strategic Command (USSTRATCOM). The SADIE suite includes modification and integration of legacy applications and software components, including Search And Determine (SAD), Satellite Identification (SID), and Parallel Catalog (Parcat), as well as other utilities and scripts that enable end-to-end catalog building and maintenance in a parallel processing environment. SADIE is being developed to handle large catalog building challenges in all orbit regimes and includes the automatic processing of radar, fence, and optical data. Real data results are provided for the processing of Air Force Space Surveillance System fence observations and for the processing of Space Surveillance Telescope optical data.

  9. 33 CFR 401.20 - Automatic Identification System.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...' maritime Differential Global Positioning System radiobeacon services; or (7) The use of a temporary unit... Identification System. (a) Each of the following vessels must use an Automatic Identification System (AIS... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Automatic Identification System...

  10. Interaction Quality During Partner Reading

    ERIC Educational Resources Information Center

    Meisinger, Elizabeth B.; Schwanenflugel, Paula J.; Bradley, Barbara A.; Stahl, Steven A.

    2004-01-01

    The influence of social relationships, positive interdependence, and teacher structure on the quality of partner reading interactions was examined. Partner reading, a scripted cooperative learning strategy, is often used in classrooms to promote the development of fluent and automatic reading skills. Forty-three pairs of second grade children were…

  11. 47 CFR 80.231 - Technical Requirements for Class B Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Identification System (AIS) equipment. 80.231 Section 80.231 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... § 80.231 Technical Requirements for Class B Automatic Identification System (AIS) equipment. (a) Class B Automatic Identification System (AIS) equipment must meet the technical requirements of IEC 62287...

  12. 47 CFR 80.231 - Technical Requirements for Class B Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Identification System (AIS) equipment. 80.231 Section 80.231 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... § 80.231 Technical Requirements for Class B Automatic Identification System (AIS) equipment. (a) Class B Automatic Identification System (AIS) equipment must meet the technical requirements of IEC 62287...

  13. 47 CFR 80.231 - Technical Requirements for Class B Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Identification System (AIS) equipment. 80.231 Section 80.231 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... § 80.231 Technical Requirements for Class B Automatic Identification System (AIS) equipment. (a) Class B Automatic Identification System (AIS) equipment must meet the technical requirements of IEC 62287...

  14. Brain model of text animation as a data mining strategy.

    PubMed

    Astakhova, Tamara; Astakhov, Vadim

    2009-01-01

    Imagination is a critical element in the development of realistic artificial intelligence (AI) systems. One way to approach imagination is to simulate its properties and operations. We developed two models, "Brain Network Hierarchy of Languages" and "Semantical Holographic Calculus," and a simulation system, ScriptWriter, that emulates the process of imagination through the automatic animation of English texts. The purpose of this paper is to demonstrate the model and present the "ScriptWriter" system http://nvo.sdsc.edu/NVO/JCSG/get_SRB_mime_file2.cgi//home/tamara.sdsc/test/demo.zip?F=/home/tamara.sdsc/test/demo.zip&M=application/x-gtar for simulation of the imagination.

  15. Integrated cluster management at Manchester

    NASA Astrophysics Data System (ADS)

    McNab, Andrew; Forti, Alessandra

    2012-12-01

    We describe an integrated management system using third-party, open source components used in operating a large Tier-2 site for particle physics. This system tracks individual assets and records their attributes such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors the services on each host according to the records of what should be running; and cross-references tickets with asset records and per-asset monitoring pages. In addition, scripts which detect problems and automatically remove hosts record these new states in the database, making them available to operators immediately through the same interface as tickets and monitoring.
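
    The derive-configurations-from-database idea can be made concrete with a short sketch. The snippet below, assuming a hypothetical SQLite assets table with hostname, mac and ip columns, emits ISC-DHCP host stanzas from asset records; it illustrates the pattern only, not the site's actual tooling.

      # Minimal sketch: derive ISC-DHCP host stanzas from an asset database.
      # The table schema and file path here are illustrative assumptions.
      import sqlite3

      def dhcp_stanza(hostname, mac, ip):
          return (f"host {hostname} {{\n"
                  f"  hardware ethernet {mac};\n"
                  f"  fixed-address {ip};\n"
                  f"}}\n")

      def generate_dhcp_config(db_path="assets.db"):
          conn = sqlite3.connect(db_path)
          rows = conn.execute("SELECT hostname, mac, ip FROM assets ORDER BY hostname")
          config = "".join(dhcp_stanza(*row) for row in rows)
          conn.close()
          return config

      if __name__ == "__main__":
          print(generate_dhcp_config())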

  16. Department of Combat Medic Training-Technology Enhancement

    DTIC Science & Technology

    2011-04-15

    …determined to be exempt from IRB protocol per Appendix 1.3. What this report says: Section 1 – Executive Summary (this section); Section 2… with automatic conversion to digital text (conversion of handwriting to text) or use of pre-scripted comments from a drop-down menu. b. Validation of…

  17. 33 CFR 164.03 - Incorporation by reference.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... radiocommunication equipment and systems—Automatic identification systems (AIS)—part 2: Class A shipborne equipment of the universal automatic identification system (AIS)—Operational and performance requirements..., Recommendation on Performance Standards for a Universal Shipborne Automatic Identification System (AIS), adopted...

  18. 33 CFR 164.03 - Incorporation by reference.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... radiocommunication equipment and systems—Automatic identification systems (AIS)—part 2: Class A shipborne equipment of the universal automatic identification system (AIS)—Operational and performance requirements..., Recommendation on Performance Standards for a Universal Shipborne Automatic Identification System (AIS), adopted...

  19. 33 CFR 164.03 - Incorporation by reference.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... radiocommunication equipment and systems—Automatic identification systems (AIS)—part 2: Class A shipborne equipment of the universal automatic identification system (AIS)—Operational and performance requirements..., Recommendation on Performance Standards for a Universal Shipborne Automatic Identification System (AIS), adopted...

  20. Periodic, On-Demand, and User-Specified Information Reconciliation

    NASA Technical Reports Server (NTRS)

    Kolano, Paul

    2007-01-01

    Automated sequence generation (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences. APGEN includes a graphical user interface that facilitates scheduling of activities on a time line and affords a capability to automatically expand, decompose, and schedule activities.
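
    A minimal sketch of the kind of expansion such a tool performs (illustrative only; this is not the actual autogen/APGEN code): high-level activities are decomposed into timestamped commands and merged into one time-ordered sequence. All activity and command names below are invented.

      # Illustrative sketch: expand planned activities into a time-ordered
      # command sequence, in the spirit of the process the abstract describes.
      from dataclasses import dataclass

      @dataclass
      class Activity:
          name: str
          start: float    # seconds from sequence epoch
          commands: list  # (offset_seconds, command) pairs relative to start

      def expand(activities):
          """Decompose each activity into timestamped commands, sorted by time."""
          seq = [(act.start + off, cmd) for act in activities
                 for off, cmd in act.commands]
          return sorted(seq)

      plan = [
          Activity("WARMUP", 0.0, [(0.0, "PWR_ON HEATER"), (60.0, "CHECK TEMP")]),
          Activity("IMAGING", 30.0, [(0.0, "PT_CAMERA TARGET"), (5.0, "SHUTTER OPEN")]),
      ]

      for t, cmd in expand(plan):
          print(f"{t:8.1f}  {cmd}")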

  1. Research on time synchronization scheme of MES systems in manufacturing enterprise

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Wu, Kun; Sui, Changhao; Gu, Jin

    2018-04-01

    With the spread of informatization and automated production in manufacturing enterprises, data interaction between business systems has become more and more frequent, and requirements on time accuracy are correspondingly increasing. However, NTP network time synchronization methods lack corresponding redundancy and monitoring mechanisms: when a failure occurs, it can only be remedied after the event, which strongly affects production data and system interaction. To address this, the paper proposes an RHCS-based NTP server architecture that automatically detects NTP status and fails over by script.
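
    A minimal sketch of the kind of status-detection script the abstract describes, using ntpdate's standard query-only mode; the server names and offset threshold are illustrative, and a production script would repoint ntpd/chronyd and raise an RHCS failover rather than just print.

      import subprocess

      SERVERS = ["ntp-primary.example.com", "ntp-backup.example.com"]
      MAX_OFFSET_S = 0.5  # assumed health threshold in seconds

      def ntp_offset(server):
          """Query a server with ntpdate in query-only mode; return its offset or None."""
          try:
              out = subprocess.run(["ntpdate", "-q", server], capture_output=True,
                                   text=True, timeout=10).stdout
              # ntpdate -q prints e.g. "server 1.2.3.4, stratum 2, offset -0.001234, delay 0.025"
              words = out.replace(",", " ").split()
              if "offset" in words:
                  return float(words[words.index("offset") + 1])
          except (subprocess.TimeoutExpired, FileNotFoundError, ValueError, IndexError):
              pass
          return None

      def select_server():
          """Return the first healthy server; a real script would repoint the NTP daemon."""
          for server in SERVERS:
              offset = ntp_offset(server)
              if offset is not None and abs(offset) <= MAX_OFFSET_S:
                  return server
          raise RuntimeError("no healthy NTP server reachable; trigger failover")

      if __name__ == "__main__":
          print("healthy NTP server:", select_server())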

  2. Surface-Source Downhole Seismic Analysis in R

    USGS Publications Warehouse

    Thompson, Eric M.

    2007-01-01

    This report discusses a method, originally presented by Boore (2003), for interpreting a layered slowness or velocity model from surface-source downhole seismic data. I have implemented this method in the statistical computing language R (R Development Core Team, 2007), so that it is freely and easily available to researchers and practitioners who may find it useful. I originally applied an early version of these routines to seismic cone penetration test data (SCPT) to analyze the horizontal variability of shear-wave velocity within the sediments in the San Francisco Bay area (Thompson et al., 2006). A more recent version of these codes was used to analyze the influence of interface-selection and model assumptions on velocity/slowness estimates and the resulting differences in site amplification (Boore and Thompson, 2007). The R environment has many benefits for scientific and statistical computation; I have chosen R to disseminate these routines because it is versatile enough to program specialized routines, is highly interactive, which aids in the analysis of data, and is freely and conveniently available to install on a wide variety of computer platforms. These scripts are useful for the interpretation of layered velocity models from surface-source downhole seismic data such as deep boreholes and SCPT data. The inputs are the travel-time data and the offset of the source at the surface. The travel-time arrivals for the P- and S-waves must already be picked from the original data. An option in the inversion is to include estimates of the standard deviation of the travel-time picks for a weighted inversion of the velocity profile. The standard deviation of each travel-time pick is defined relative to the standard deviation of the best pick in a profile and is based on the accuracy with which the travel-time measurement could be determined from the seismogram. The analysis of the travel-time data consists of two parts: the identification of layer interfaces, and the inversion for the velocity of each layer. The analyst usually picks layer interfaces by visual inspection of the travel-time data. I have also developed an algorithm that automatically finds boundaries, which can save a significant amount of time when analyzing a large number of sites. The results of the automatic routines should be reviewed to check that they are reasonable. The interactivity of these scripts allows the user to add and remove layers quickly, thus allowing rapid feedback on how the residuals are affected by each additional parameter in the inversion. In addition, the script allows many models to be compared at the same time.
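
    The inversion step can be sketched compactly (here in Python rather than the report's R, and simplified to vertical rays with no offset correction): given picked interfaces, each layer's slowness follows from the travel times by least squares. The data below are synthetic.

      # Simplified sketch of the layered inversion: solve for per-layer slowness
      # from picked travel times, assuming vertical ray paths.
      import numpy as np

      def layer_path_matrix(depths, interfaces):
          """Row i holds the vertical path length of receiver i through each layer."""
          tops = np.concatenate(([0.0], interfaces))
          bots = np.concatenate((interfaces, [np.inf]))
          d = np.asarray(depths)[:, None]
          return np.clip(np.minimum(d, bots) - tops, 0.0, None)

      depths = np.array([2.0, 4.0, 6.0, 8.0, 10.0])               # receiver depths (m)
      interfaces = np.array([5.0])                                 # one picked boundary (m)
      t_obs = np.array([0.0100, 0.0200, 0.0275, 0.0325, 0.0375])   # picked travel times (s)

      G = layer_path_matrix(depths, interfaces)
      slowness, *_ = np.linalg.lstsq(G, t_obs, rcond=None)         # one slowness per layer
      print("layer velocities (m/s):", 1.0 / slowness)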

  3. iHOPerator: user-scripting a personalized bioinformatics Web, starting with the iHOP website

    PubMed Central

    Good, Benjamin M; Kawas, Edward A; Kuo, Byron Yu-Lin; Wilkinson, Mark D

    2006-01-01

    Background User-scripts are programs stored in Web browsers that can manipulate the content of websites prior to display in the browser. They provide a novel mechanism by which users can conveniently gain increased control over the content and the display of the information presented to them on the Web. As the Web is the primary medium by which scientists retrieve biological information, any improvements in the mechanisms that govern the utility or accessibility of this information may have profound effects. GreaseMonkey is a Mozilla Firefox extension that facilitates the development and deployment of user-scripts for the Firefox web-browser. We utilize this to enhance the content and the presentation of the iHOP (information Hyperlinked Over Proteins) website. Results The iHOPerator is a GreaseMonkey user-script that augments the gene-centred pages on iHOP by providing a compact, configurable visualization of the defining information for each gene and by enabling additional data, such as biochemical pathway diagrams, to be collected automatically from third party resources and displayed in the same browsing context. Conclusion This open-source script provides an extension to the iHOP website, demonstrating how user-scripts can personalize and enhance the Web browsing experience in a relevant biological setting. The novel, user-driven controls over the content and the display of Web resources made possible by user-scripts, such as the iHOPerator, herald the beginning of a transition from a resource-centric to a user-centric Web experience. We believe that this transition is a necessary step in the development of Web technology that will eventually result in profound improvements in the way life scientists interact with information. PMID:17173692

  4. Ditching the Script: Moving beyond "Automatic Thinking" in Introductory Political Science Courses

    ERIC Educational Resources Information Center

    Glover, Robert W.; Tagliarina, Daniel

    2011-01-01

    Political science is a challenging field, particularly when it comes to undergraduate teaching. If we are to engage in something more than uncritical ideological instruction, it demands from the student a willingness to approach alien political ideas with intellectual generosity. Yet, students within introductory classes often harbor inherited…

  5. Technical Note: A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, M.; Schulz-Hanke, M.; Garcia Alba, J.; Jurisch, N.; Hagemann, U.; Sachs, T.; Sommer, M.; Augustin, J.

    2015-08-01

    Processes driving the production, transformation and transport of methane (CH4) in wetland ecosystems are highly complex, posing serious challenges for mechanistic process understanding, the identification of potential environmental drivers, and the calculation of reliable CH4 emission estimates. We present a simple calculation algorithm to separate open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components, which facilitates the identification of underlying dynamics and potential environmental drivers. Flux separation is based on ebullition-related sudden concentration changes during single measurements. A variable ebullition filter is applied, using the lower and upper quartiles and the interquartile range (IQR). Data processing is automated using an established R script, adjusted for the purpose of CH4 flux calculation. The algorithm was tested using flux measurement data (July to September 2013) from a former fen grassland site converted into a shallow lake by rewetting. Ebullition and diffusion contributed 46 and 55 %, respectively, to total CH4 emissions, which is comparable to values previously reported in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period.
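
    The quartile-based separation idea can be illustrated in a few lines (Python here rather than the paper's R script; the filter constant and data are illustrative): per-step concentration changes falling outside an IQR band are attributed to ebullition, the remainder to diffusion.

      # Minimal sketch of IQR-based flux separation within one chamber measurement.
      import numpy as np

      def separate_fluxes(conc, k=1.5):
          """Split per-step concentration increments into diffusive and ebullitive parts."""
          dc = np.diff(conc)
          q1, q3 = np.percentile(dc, [25, 75])
          iqr = q3 - q1
          bubble = (dc < q1 - k * iqr) | (dc > q3 + k * iqr)  # sudden jumps = ebullition
          return dc[~bubble].sum(), dc[bubble].sum()

      conc = np.array([1.80, 1.82, 1.84, 1.86, 2.10, 2.12, 2.14])  # ppm CH4, synthetic
      diff_part, ebul_part = separate_fluxes(conc)
      print(f"diffusion: {diff_part:.2f} ppm, ebullition: {ebul_part:.2f} ppm")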

  6. MultiDrizzle: An Integrated Pyraf Script for Registering, Cleaning and Combining Images

    NASA Astrophysics Data System (ADS)

    Koekemoer, A. M.; Fruchter, A. S.; Hook, R. N.; Hack, W.

    We present the new PyRAF-based `MultiDrizzle' script, which is aimed at providing a one-step approach to combining dithered HST images. The purpose of this script is to allow easy interaction with the complex suite of tasks in the IRAF/STSDAS `dither' package, as well as the new `PyDrizzle' task, while at the same time retaining the flexibility of these tasks through a number of parameters. These parameters control the various individual steps, such as sky subtraction, image registration, `drizzling' onto separate output images, creation of a clean median image, transformation of the median with `blot' and creation of cosmic ray masks, as well as the final image combination step using `drizzle'. The default parameters of all the steps are set so that the task will work automatically for a wide variety of different types of images, while at the same time allowing adjustment of individual parameters for special cases. The script currently works for both ACS and WFPC2 data, and is now being tested on STIS and NICMOS images. We describe the operation of the script and the effect of various parameters, particularly in the context of combining images from dithered observations using ACS and WFPC2. Additional information is also available at the `MultiDrizzle' home page: http://www.stsci.edu/~koekemoe/multidrizzle/

  7. snpTree--a web-server to identify and construct SNP trees from whole genome sequence data.

    PubMed

    Leekitcharoenphon, Pimlapas; Kaas, Rolf S; Thomsen, Martin Christen Frølund; Friis, Carsten; Rasmussen, Simon; Aarestrup, Frank M

    2012-01-01

    The advances and decreasing cost of whole genome sequencing (WGS) will soon make this technology available for routine infectious disease epidemiology. In epidemiological studies, outbreak isolates have very little diversity and require extensive genomic analysis to differentiate and classify isolates. One of the most successful and broadly used methods is analysis of single nucleotide polymorphisms (SNPs). Currently, there are different tools and methods to identify SNPs, with various options and cut-off values. Furthermore, all current methods require bioinformatic skills. Thus, we lack a standard and simple automatic tool to determine SNPs and construct a phylogenetic tree from WGS data. Here we introduce snpTree, a server for online automatic SNP analysis. This tool is composed of different SNP analysis suites and perl and python scripts. snpTree can identify SNPs and construct phylogenetic trees from WGS as well as from assembled genomes or contigs. WGS data in fastq format are aligned to reference genomes by BWA, while contigs in fasta format are processed by Nucmer. SNPs are concatenated based on position on the reference genome and a tree is constructed from the concatenated SNPs using FastTree and a perl script. The online server was implemented in HTML, Java and python script. The server was evaluated using four published bacterial WGS data sets (V. cholerae, S. aureus CC398, S. Typhimurium and M. tuberculosis). The evaluation results for the first three cases were consistent and concordant for both raw reads and assembled genomes. In the latter case the original publication involved extensive filtering of SNPs, which could not be repeated using snpTree. The snpTree server is an easy-to-use option for rapid, standardised and automatic SNP analysis in epidemiological studies, also for users with limited bioinformatic experience. The web server is freely accessible at http://www.cbs.dtu.dk/services/snpTree-1.0/.
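
    The concatenation step lends itself to a short sketch, assuming SNP calls have already been made per isolate; the data, file name and input format below are illustrative, not snpTree's internal representation.

      # Sketch: join per-isolate SNP calls by reference position into one
      # alignment that a tree builder such as FastTree could consume.
      snp_calls = {  # isolate -> {reference position: base}, illustrative data
          "isolateA": {101: "A", 250: "G", 377: "T"},
          "isolateB": {101: "A", 250: "T", 377: "T"},
          "isolateC": {101: "C", 250: "G", 377: "T"},
      }

      positions = sorted({p for calls in snp_calls.values() for p in calls})

      with open("concatenated_snps.fasta", "w") as out:
          for isolate, calls in snp_calls.items():
              seq = "".join(calls.get(p, "-") for p in positions)  # '-' = no call
              out.write(f">{isolate}\n{seq}\n")
      # The resulting file could then be passed to a tree builder, e.g.:
      #   FastTree -nt concatenated_snps.fasta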

  8. Recent advances in the Lesser Antilles observatories Part 2 : WebObs - an integrated web-based system for monitoring and networks management

    NASA Astrophysics Data System (ADS)

    Beauducel, François; Bosson, Alexis; Randriamora, Frédéric; Anténor-Habazac, Christian; Lemarchand, Arnaud; Saurel, Jean-Marie; Nercessian, Alexandre; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie

    2010-05-01

    Seismological and volcanological observatories have common needs and often common practical problems in multidisciplinary data monitoring applications. Access to integrated data in real time and estimation of measurement uncertainties are key to efficient interpretation, but the variety of instruments and the heterogeneity of data sampling and acquisition systems lead to difficulties that may hinder crisis management. At the Guadeloupe observatory, we have developed over recent years an operational system that attempts to address these issues in the context of a multi-instrumental observatory. Based on a single computer server, open source scripts (Matlab, Perl, Bash, Nagios) and a Web interface, the system offers: an extended database for networks management, stations and sensors (maps, station files with log history, technical characteristics, meta-data, photos and associated documents); web-form interfaces for manual data input/editing and export (such as geochemical analyses and some of the deformation measurements); routine data processing with dedicated automatic scripts for each technique, production of validated data outputs, static graphs on preset moving time intervals, and possible e-mail alarms; and automatic status checks of computers, acquisition processes, stations and individual sensors using simple criteria (file updates and signal quality), displayed as synthetic pages for technical control. In the special case of seismology, WebObs includes a digital stripchart multichannel continuous seismogram associated with the EarthWorm acquisition chain (see companion paper, Part 1), an event classification database, location scripts, automatic shakemaps and a regional catalog with associated hypocenter maps accessed through a user request form. This system provides real-time Internet access for integrated monitoring, has become a strong support for exchanges between scientists and technicians, and is widely open to interdisciplinary real-time modeling. It has been set up at the Martinique observatory, installation is planned this year at the Montserrat Volcano Observatory, and it is also in production at the geomagnetic observatory of Addis Abeba in Ethiopia.

  9. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...

  10. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...

  11. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...

  12. The AST3 controlling and operating software suite for automatic sky survey

    NASA Astrophysics Data System (ADS)

    Hu, Yi; Shang, Zhaohui; Ma, Bin; Hu, Keliang

    2016-07-01

    We have developed a specialized software package, called ast3suite, to provide remote control and automatic sky survey capabilities for AST3 (Antarctic Survey Telescope) from scratch. It includes several daemon servers and many basic commands. Each program performs a single task, and they work together to make AST3 a robotic telescope. A survey script calls the basic commands to carry out an automatic sky survey. Ast3suite was carefully tested in Mohe, China in 2013 and has been used at Dome A, Antarctica in 2015 and 2016 with the real hardware for practical sky surveys. Both the test results and practical use showed that ast3suite worked very well without any manual assistance, as we expected.

  13. Predicting Reading in Vowelized and Unvowelized Arabic Script: An Investigation of Reading in First and Second Grades

    ERIC Educational Resources Information Center

    Asadi, Ibrahim A.; Khateb, Asaid

    2017-01-01

    This study examined the orthographic transparency of Arabic by investigating the contribution of phonological awareness (PA), vocabulary, and Rapid Automatized Naming (RAN) to reading vowelized and unvowelized words. The results from first and second grade children showed that PA contribution was similar in the vowelized and unvowelized…

  14. The Effects of Different Computer-Supported Collaboration Scripts on Students' Learning Processes and Outcome in a Simulation-Based Collaborative Learning Environment

    ERIC Educational Resources Information Center

    Wieland, Kristina

    2010-01-01

    Students benefit from collaborative learning activities, but they do not automatically reach desired learning outcomes when working together (Fischer, Kollar, Mandl, & Haake, 2007; King, 2007). Learners need instructional support to increase the quality of collaborative processes and individual learning outcomes. The core challenge is to find…

  15. [Development of a Compared Software for Automatically Generated DVH in Eclipse TPS].

    PubMed

    Xie, Zhao; Luo, Kelin; Zou, Lian; Hu, Jinyou

    2016-03-01

    This study aims to automatically calculate the dose volume histogram (DVH) for a treatment plan and compare it with the requirements of the doctor's prescription. The scripting language AutoHotkey and the programming language C# were used to develop a comparison software for automatically generated DVHs in Eclipse TPS. This software is named Show Dose Volume Histogram (ShowDVH) and is composed of prescription document generation, DVH operation functions, software visualization and DVH comparison report generation. Ten cases of different cancers were separately selected; in Eclipse TPS 11.0, ShowDVH could not only automatically generate DVH reports but also accurately determine whether treatment plans meet the requirements of the doctor's prescription, and the reports then gave direction for setting optimization parameters in intensity-modulated radiation therapy. ShowDVH is a user-friendly and powerful software tool that can quickly and automatically generate DVH comparison reports in Eclipse TPS 11.0. With the help of ShowDVH, it greatly saves plan design time and improves the working efficiency of radiation therapy physicists.

  16. A review of automatic patient identification options for public health care centers with restricted budgets.

    PubMed

    García-Betances, Rebeca I; Huerta, Mónica K

    2012-01-01

    A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one- (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations.

  17. A Review of Automatic Patient Identification Options for Public Health Care Centers with Restricted Budgets

    PubMed Central

    García-Betances, Rebeca I.; Huerta, Mónica K.

    2012-01-01

    A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies’ backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one- (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones’ present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations. PMID:23569629

  18. Integration of a clinical trial database with a PACS

    NASA Astrophysics Data System (ADS)

    van Herk, M.

    2014-03-01

    Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data are augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates using HTML with a gateway server inside the hospital's firewall; 2) on this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip and transmit the selected data to a central trial server; 4) there, data are stored in a DICOM archive which allows authorized ECRF users to view and download the anonymized images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides to use the gateway in passive (receiving) mode or in an active mode going out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
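
    The gateway's anonymize-and-forward step might look roughly like the following pydicom sketch; the tag list, file paths and pseudonym scheme are assumptions for illustration, not the paper's actual script.

      # Hedged sketch of DICOM anonymization before export to the trial server.
      import pydicom

      IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate"]

      def anonymize(in_path, out_path, pseudo_id):
          """Blank direct identifiers, assign a trial pseudonym, and save a copy."""
          ds = pydicom.dcmread(in_path)
          for keyword in IDENTIFYING_TAGS:
              if hasattr(ds, keyword):
                  setattr(ds, keyword, "")
          ds.PatientID = pseudo_id      # stable pseudonymous trial identifier
          ds.save_as(out_path)
          return ds.SOPInstanceUID      # DICOM identifier to store in the ECRF event

      if __name__ == "__main__":
          uid = anonymize("scan.dcm", "scan_anon.dcm", "TRIAL-0042")
          print("exported instance", uid)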

  19. SU-F-T-94: Plan2pdf - a Software Tool for Automatic Plan Report for Philips Pinnacle TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C

    Purpose: To implement an automatic electronic PDF plan reporting tool for the Philips Pinnacle treatment planning system (TPS). Methods: We developed an electronic treatment plan reporting software tool that enables fully automatic PDF reporting from Pinnacle TPS to external EMR programs such as MOSAIQ. The tool is named "plan2pdf". plan2pdf is implemented using Pinnacle scripts, Java and UNIX shell scripts, without any external program needed. plan2pdf supports a full auto mode and a manual reporting mode. In full auto mode, with a single mouse click, plan2pdf generates a detailed Pinnacle plan report in PDF format, which includes a customizable cover page, the Pinnacle plan summary, orthogonal views through each plan POI and the maximum dose point, a DRR for each beam, serial transverse views captured throughout the dose grid at a user-specified interval, and the DVH and scorecard windows. The final PDF report is also automatically bookmarked for each section above for convenient plan review. The final PDF report can either be saved in a user-specified folder on Pinnacle, or automatically exported to an EMR import folder via a user-configured FTP service. In manual capture mode, plan2pdf allows users to capture any Pinnacle plan by full screen, individual window or a rectangular ROI drawn on screen. Furthermore, to avoid possible mix-up of patients' plans during auto-mode reporting, a user conflict check feature is included in plan2pdf: it prompts the user to wait if another patient is being exported by another user. Results: plan2pdf was tested extensively and successfully at our institution, which consists of 5 centers, 15 dosimetrists and 10 physicists, running Pinnacle version 9.10 on Enterprise servers. Conclusion: plan2pdf provides a highly efficient, user-friendly and clinically proven platform for all Philips Pinnacle users to generate detailed plan reports in PDF format for external EMR systems.

  20. SU-E-T-406: Use of TrueBeam Developer Mode and API to Increase the Efficiency and Accuracy of Commissioning Measurements for the Varian EDGE Stereotactic Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, S; Gulam, M; Song, K

    2014-06-01

    Purpose: The Varian EDGE machine is a new stereotactic platform, combining Calypso and VisionRT localization systems with a stereotactic linac. The system includes TrueBeam Developer Mode, making possible the use of XML scripting for automation of linac-related tasks. This study details the use of Developer Mode to automate commissioning tasks for the Varian EDGE, thereby improving efficiency and measurement consistency. Methods: XML scripting was used for various commissioning tasks, including couch model verification, beam scanning, and isocenter verification. For couch measurements, point measurements were acquired for several field sizes (2×2, 4×4, 10×10 cm²) at 42 gantry angles for two couch models. Measurements were acquired with variations in couch position (rails in/out, couch shifted in each of the motion axes) and compared to treatment planning system (TPS)-calculated values, which were logged automatically through advanced planning interface (API) scripting functionality. For beam scanning, XML scripts were used to create custom MLC apertures. For isocenter verification, XML scripts were used to automate various Winston-Lutz-type tests. Results: For couch measurements, the time required for each set of angles was approximately 9 minutes; without scripting, each set required approximately 12 minutes. Automated measurements required only one physicist, while manual measurements required at least two physicists to handle linac positions/beams and data recording. MLC apertures were generated outside of the TPS, and with the .xml file format, double-checking without use of the TPS/operator console was possible. Similar time efficiency gains were found for isocenter verification measurements. Conclusion: The use of XML scripting in TrueBeam Developer Mode allows for efficient and accurate data acquisition during commissioning. The efficiency improvement is most pronounced for iterative measurements, exemplified by the time savings for couch modeling measurements (approximately 10 hours). The scripting also allowed for creation of the files in advance without requiring access to the TPS. The API scripting functionality enabled efficient creation/mining of TPS data. Finally, automation reduces the potential for human error in entering linac values at the machine console, and the script provides a log of measurements acquired for each session. This research was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, O; Yang, F; Ford, E

    Purpose: The physics plan check verifies various aspects of a treatment plan after dosimetrists have finished creating the plan. Some errors in the plan which are caught by the physics check could be caught earlier in the departmental workflow. The purpose of this project was to evaluate a plan checking script that can be run within the treatment planning system (TPS) by dosimetrists prior to plan approval and export to the record and verify system. Methods: A script was created in the Pinnacle TPS to automatically check 15 aspects of a plan for clinical practice conformity. The script outputs a list of checks which the plan has passed and a list of checks which the plan has failed so that appropriate adjustments can be made. For this study, the script was run on a total of 108 plans: IMRT (46/108), VMAT (35/108) and SBRT (27/108). Results: Of the plans checked by the script, 77/108 (71%) failed at least one of the fifteen checks. IMRT plans resulted in more failed checks (91%) than VMAT (51%) or SBRT (63%), due to the high failure rate of an IMRT-specific check, which verifies that no IMRT segment has fewer than 5 MU. The dose grid size and couch removal checks caught errors in 10% and 14% of all plans, respectively – errors that ultimately may have resulted in harm to the patient. Conclusion: Approximately three-fourths of the plans examined contained errors that could be caught by dosimetrists running an automated script embedded in the TPS. The results of this study will improve the departmental workflow by cutting down on the number of plans that, due to these types of errors, necessitate re-planning and re-approval, increase dosimetrist and physician workload and, in urgent cases, inconvenience patients by causing treatment delays.
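
    Pinnacle's scripting language is proprietary, so the following Python stand-in only illustrates the pattern the abstract describes: a table of named checks is run against a plan and a pass/fail list is reported. The 5 MU segment rule is taken from the abstract; the dose-grid limit shown is an assumed policy.

      def check_min_segment_mu(plan):
          """Abstract's IMRT-specific rule: no segment below 5 MU."""
          return all(seg["mu"] >= 5.0
                     for beam in plan["beams"] for seg in beam["segments"])

      def check_dose_grid(plan):
          """Dose grid resolution check; the 4 mm limit here is an assumption."""
          return plan["dose_grid_mm"] <= 4.0

      CHECKS = {
          "no IMRT segment < 5 MU": check_min_segment_mu,
          "dose grid <= 4 mm": check_dose_grid,
      }

      def run_checks(plan):
          """Print PASS/FAIL per check so the dosimetrist can adjust before approval."""
          results = {name: fn(plan) for name, fn in CHECKS.items()}
          for name, passed in results.items():
              print("PASS" if passed else "FAIL", "-", name)
          return results

      plan = {"dose_grid_mm": 3.0,
              "beams": [{"segments": [{"mu": 6.2}, {"mu": 4.1}]}]}
      run_checks(plan)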

  2. Maestro Workflow Conductor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Natale, Francesco

    2017-06-01

    MaestroWF is a Python tool and software package for loading YAML study specifications that represent a simulation campaign. The package is capable of parameterizing a study, pulling dependencies automatically, formatting output directories, and managing the flow and execution of the campaign. MaestroWF also provides a set of abstracted objects that can be used to develop user-specific scripts for launching simulation campaigns.

  3. An Intelligent Automation Platform for Rapid Bioprocess Design.

    PubMed

    Wu, Tianyi; Zhou, Yuhong

    2014-08-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.

  4. An Intelligent Automation Platform for Rapid Bioprocess Design

    PubMed Central

    Wu, Tianyi

    2014-01-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user’s inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. PMID:24088579

  5. Reproducible research in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined, then saved as a self-contained configuration which can be re-run without human interaction. PuffinPlot can thus be used as a component of a larger scientific workflow, integrated with workflow management tools such as Kepler, without compromising its capabilities as an exploratory tool. Since both PuffinPlot and the platform it runs on (Java) are Free/Open Source software, even the most fundamental components of an analysis can be verified and reproduced.

  6. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    PubMed

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards, users can make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in an independent multi-part image automatic identification test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
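
    The Gabor-feature step mentioned in the abstract can be sketched with OpenCV; the kernel parameters, orientations and file name below are illustrative choices, not AFIS1.0's published settings.

      # Sketch: texture features from a bank of Gabor filters at several orientations.
      import cv2
      import numpy as np

      def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
          """Mean and standard deviation of Gabor responses per orientation."""
          feats = []
          for theta in thetas:
              kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                          lambd=10.0, gamma=0.5)
              resp = cv2.filter2D(image, cv2.CV_32F, kernel)
              feats.extend([resp.mean(), resp.std()])
          return np.array(feats)

      img = cv2.imread("wing.png", cv2.IMREAD_GRAYSCALE)  # illustrative input image
      if img is not None:
          print(gabor_features(img))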

  7. Auto identification technology and its impact on patient safety in the Operating Room of the Future.

    PubMed

    Egan, Marie T; Sandberg, Warren S

    2007-03-01

    Automatic identification technologies, such as bar coding and radio frequency identification, are ubiquitous in everyday life but virtually nonexistent in the operating room. User expectations, based on everyday experience with automatic identification technologies, have generated much anticipation that these systems will improve readiness, workflow, and safety in the operating room, with minimal training requirements. We report, in narrative form, a multi-year experience with various automatic identification technologies in the Operating Room of the Future Project at Massachusetts General Hospital. In each case, the additional human labor required to make these 'labor-saving' technologies function in the medical environment has proved to be their undoing. We conclude that while automatic identification technologies show promise, significant barriers to realizing their potential still exist. Nevertheless, overcoming these obstacles is necessary if the vision of an operating room of the future in which all processes are monitored, controlled, and optimized is to be achieved.

  8. Creating and virtually screening databases of fluorescently-labelled compounds for the discovery of target-specific molecular probes

    NASA Astrophysics Data System (ADS)

    Kamstra, Rhiannon L.; Dadgar, Saedeh; Wigg, John; Chowdhury, Morshed A.; Phenix, Christopher P.; Floriano, Wely B.

    2014-11-01

    Our group has recently demonstrated that virtual screening is a useful technique for the identification of target-specific molecular probes. In this paper, we discuss some of our proof-of-concept results involving two biologically relevant target proteins, and report the development of a computational script to generate large databases of fluorescence-labelled compounds for computer-assisted molecular design. The virtual screening of a small library of 1,153 fluorescently-labelled compounds against two targets, and the experimental testing of selected hits reveal that this approach is efficient at identifying molecular probes, and that the screening of a labelled library is preferred over the screening of base compounds followed by conjugation of confirmed hits. The automated script for library generation explores the known reactivity of commercially available dyes, such as NHS-esters, to create large virtual databases of fluorescence-tagged small molecules that can be easily synthesized in a laboratory. A database of 14,862 compounds, each tagged with the ATTO680 fluorophore was generated with the automated script reported here. This library is available for downloading and it is suitable for virtual ligand screening aiming at the identification of target-specific fluorescent molecular probes.
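
    The library-generation idea, exploiting the known reactivity of NHS-ester dyes, can be sketched with RDKit; the reaction SMARTS, placeholder "dye" and base compounds below are illustrative stand-ins (the paper's script uses the ATTO680 fluorophore, not the toy molecule shown here).

      # Hedged sketch: enumerate labelled compounds by coupling an NHS-ester
      # "dye" with primary amines to form amides (N-hydroxysuccinimide leaves).
      from rdkit import Chem
      from rdkit.Chem import AllChem

      rxn = AllChem.ReactionFromSmarts(
          "[C:1](=[O:2])ON1C(=O)CCC1=O.[NX3;H2:3]>>[C:1](=[O:2])[N:3]")

      dye = Chem.MolFromSmiles("CC(=O)ON1C(=O)CCC1=O")  # placeholder NHS ester
      bases = [Chem.MolFromSmiles(s) for s in ("NCCc1ccccc1", "NCCO")]

      for base in bases:
          for products in rxn.RunReactants((dye, base)):
              labelled = products[0]
              Chem.SanitizeMol(labelled)
              print(Chem.MolToSmiles(labelled))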

  9. Tidal analysis and Arrival Process Mining Using Automatic Identification System (AIS) Data

    DTIC Science & Technology

    2017-01-01

    …files, organized by location. The data were processed using the Python programming language (van Rossum and Drake 2001) and the Pandas data analysis… [Report ERDC/CHL TR-17-2, Coastal Inlets Research Program, January 2017: Tidal Analysis and Arrival Process Mining Using Automatic Identification System (AIS) Data, Brandan M. Scully]
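
    In the report's stated Python/pandas toolchain, one elementary AIS operation, flagging an arrival when a vessel's reported speed over ground first drops below a threshold, might look like the sketch below; the column names, threshold and data are assumptions.

      # Illustrative arrival detection on AIS position reports with pandas.
      import pandas as pd

      data = pd.DataFrame({
          "mmsi":      [367001000] * 5,
          "timestamp": pd.to_datetime(["2017-01-01 00:00", "2017-01-01 01:00",
                                       "2017-01-01 02:00", "2017-01-01 03:00",
                                       "2017-01-01 04:00"]),
          "sog_knots": [12.0, 10.5, 3.0, 0.4, 0.2],
      })

      THRESHOLD = 0.5  # knots: below this the vessel is considered stopped

      def arrivals(df):
          df = df.sort_values("timestamp")
          stopped = df["sog_knots"] < THRESHOLD
          # an arrival is the first stopped fix after a moving fix
          events = stopped & ~stopped.shift(fill_value=False)
          return df.loc[events, ["mmsi", "timestamp"]]

      print(arrivals(data))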

  10. Video indexing based on image and sound

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed; it should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  11. Multi- and hyperspectral scene modeling

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use the public domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
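
    Driving POV-Ray per spectral band from a script, as the paper's approach suggests, can be sketched as follows; the scene, reflectance values and paths are illustrative (POV-Ray's +I/+O/+W/+H switches select input, output and image size).

      # Sketch: write a scene with band-specific reflectance, then invoke POV-Ray.
      import subprocess

      SCENE = """
      #declare LeafReflectance = {refl};
      camera {{ location <0, 2, -5> look_at <0, 1, 0> }}
      light_source {{ <10, 10, -10> color rgb 1 }}
      sphere {{ <0, 1, 0>, 1
        pigment {{ color rgb LeafReflectance }}
        finish {{ diffuse 1 }}
      }}
      """

      bands = {"red": 0.08, "nir": 0.45}  # canopy-like reflectance per band, assumed

      for band, refl in bands.items():
          pov = f"scene_{band}.pov"
          with open(pov, "w") as f:
              f.write(SCENE.format(refl=refl))
          subprocess.run(["povray", f"+I{pov}", f"+Oband_{band}.png",
                          "+W640", "+H480"], check=True)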

  12. Identification of Barramundi (Lates calcarifer) DC-SCRIPT, a Specific Molecular Marker for Dendritic Cells in Fish

    PubMed Central

    Zoccola, Emmanuelle; Delamare-Deboutteville, Jérôme; Barnes, Andrew C.

    2015-01-01

    Antigen presentation is a critical step bridging innate immune recognition and specific immune memory. In mammals, the process is orchestrated by dendritic cells (DCs) in the lymphatic system, which initiate clonal proliferation of antigen-specific lymphocytes. However, fish lack a classical lymphatic system and there are currently no cellular markers for DCs in fish, thus antigen-presentation in fish is poorly understood. Recently, antigen-presenting cells similar in structure and function to mammalian DCs were identified in various fish, including rainbow trout (Oncorhynchus mykiss) and zebrafish (Danio rerio). The present study aimed to identify a potential molecular marker for DCs in fish and therefore targeted DC-SCRIPT, a well-conserved zinc finger protein that is preferentially expressed in all sub-types of human DCs. Putative dendritic cells were obtained in culture by maturation of spleen and pronephros-derived monocytes. DC-SCRIPT was identified in barramundi by homology using RACE PCR and genome walking. Specific expression of DC-SCRIPT was detected in barramundi cells by Stellaris mRNA FISH, in combination with MHCII expression when exposed to bacterial derived peptidoglycan, suggesting the presence of DCs in L. calcarifer. Moreover, morphological identification was achieved by light microscopy of cytospins prepared from these cultures. The cultured cells were morphologically similar to mammalian and trout DCs. Migration assays determined that these cells have the ability to move towards pathogens and pathogen associated molecular patterns, with a preference for peptidoglycans over lipopolysaccharides. The cells were also strongly phagocytic, engulfing bacteria and rapidly breaking them down. Barramundi DCs induced significant proliferation of responder populations of T-lymphocytes, supporting their role as antigen presenting cells. DC-SCRIPT expression in head kidney was higher 6 and 24 h following intraperitoneal challenge with peptidoglycan and lipopolysaccharide and declined after 3 days relative to PBS-injected controls. Relative expression was also lower in the spleen at 3 days post challenge but increased again at 7 days. As DC-SCRIPT is a constitutively expressed nuclear receptor, independent of immune activation, this may indicate initial migration of immature DCs from head kidney and spleen to the injection site, followed by return to the spleen for maturation and antigen presentation. DC-SCRIPT may be a valuable tool in the investigation of antigen presentation in fish and facilitate optimisation of vaccines and adjuvants for aquaculture. PMID:26173015

  13. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting rings, adjust the beam geometry, select inverse planning objectives and their associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually in a repeated trial-and-error process during optimization. To improve IMRT planning efficiency in these more complex cases, an iterative method that automates some of these adjustments in a planning script was designed, implemented, and validated. In particular, regional optimization is applied iteratively to reduce hot and cold spots: each pass defines and automatically segments the hot and cold spots, introduces new objectives and their relative weights into inverse planning, and repeats until termination criteria are met. The method has been applied to three clinical sites (prostate with pelvic nodes, head and neck, and anal canal cancers) and has been shown to reduce IMRT planning time significantly while improving plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393
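
    The iterative loop lends itself to a compact sketch. The toy below stands in for the planning-system calls with a simple numerical relaxation toward the prescription; only the loop structure (auto-segment hot/cold spots, raise objective weights there, re-optimize, stop when no spots remain) mirrors the method described, and all numbers are illustrative.

        import numpy as np

        # Toy version of the iterative regional-optimization loop: hot/cold voxels
        # are auto-segmented each pass and fed back as higher-weighted objectives.
        # The "inverse planning" step here is a deliberately simple relaxation
        # toward the prescription; a real script would call the planning system.
        def regional_optimization(dose, prescription=1.0, hot=1.05, cold=0.95,
                                  max_iters=20, step=0.5):
            weights = np.ones_like(dose)
            for _ in range(max_iters):
                hot_mask = dose > hot * prescription      # auto-segmented hot spots
                cold_mask = dose < cold * prescription    # auto-segmented cold spots
                if not hot_mask.any() and not cold_mask.any():
                    break                                 # termination criterion
                weights[hot_mask | cold_mask] += 1.0      # raise objective weights there
                # surrogate inverse-planning pass: pull weighted voxels toward target
                dose += step * weights * (prescription - dose) / weights.max()
            return dose

        dose = np.random.default_rng(0).normal(1.0, 0.1, size=1000)
        print(regional_optimization(dose).std())  # spread shrinks as spots are ironed out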

  14. 47 CFR 80.275 - Technical Requirements for Class A Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Compulsory Ships § 80.275 Technical Requirements for Class A Automatic Identification System (AIS) equipment. (a) Prior to submitting a certification application for a Class A AIS device, the following... Identification System (AIS) equipment. 80.275 Section 80.275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  15. 47 CFR 80.275 - Technical Requirements for Class A Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Compulsory Ships § 80.275 Technical Requirements for Class A Automatic Identification System (AIS) equipment. (a) Prior to submitting a certification application for a Class A AIS device, the following... Identification System (AIS) equipment. 80.275 Section 80.275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  16. 47 CFR 80.275 - Technical Requirements for Class A Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Compulsory Ships § 80.275 Technical Requirements for Class A Automatic Identification System (AIS) equipment. (a) Prior to submitting a certification application for a Class A AIS device, the following... Identification System (AIS) equipment. 80.275 Section 80.275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  17. 47 CFR 80.275 - Technical Requirements for Class A Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Compulsory Ships § 80.275 Technical Requirements for Class A Automatic Identification System (AIS) equipment. (a) Prior to submitting a certification application for a Class A AIS device, the following... Identification System (AIS) equipment. 80.275 Section 80.275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  18. Space Particle Hazard Specification, Forecasting, and Mitigation

    DTIC Science & Technology

    2007-11-30

    Automated FTP scripts permitted users to automatically update their global input parameter data set directly from the National Oceanic and...of CEASE capabilities. The angular field-of-view for CEASE is relatively large and will not allow for pitch angle resolved measurements. However... angular zones spanning 120° in the plane containing the magnetic field with an approximate 4° width in the direction perpendicular to the look-plane

  19. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
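
    PEM itself is written in Tcl/Tk with dp-tcl, so the following Python sketch is only a conceptual analogue of its parallel execution with join and abort: a shared event gives cooperative abort, and thread joins play the role of PEM's join operation. The procedure names and steps are made up.

        import threading

        abort_event = threading.Event()

        def procedure(name, steps):
            for step in steps:
                if abort_event.is_set():          # cooperative abort, as in PEM
                    print(f"{name}: aborted")
                    return
                step()

        def run_parallel(procs):
            threads = [threading.Thread(target=procedure, args=(n, s)) for n, s in procs]
            for t in threads:
                t.start()
            for t in threads:
                t.join()                          # PEM-style join on all branches

        run_parallel([("ring_startup", [lambda: print("PAR rf on")]),
                      ("transport_line", [lambda: print("LET magnets ramped")])])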

  20. Provision of an X-environment using the HEPiX-X11 scripts

    NASA Astrophysics Data System (ADS)

    Jones, R. W. L.; Cons, L.; Taddei, A.

    1997-02-01

    At CERN, we have created a user X11 environment within the HEPiX framework. Customisation is possible at the HEPiX, site, cluster, machine, group and user level, in order of increasing priority. The management of the X11 session is divorced from the window management. FVWM is the default window manager, being light on system resources while providing most of the desired functionality. The assembly of a correctly ordered .fvwmrc is done automatically by the scripts, with customisation allowed at all of the above levels. Two tools are provided to query aspects of that environment; these may be used either at the start of the X-session or when starting any application. The first is guesskbd, a tool to identify the user's keyboard. The second provides useful information about a given display.

  1. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix- and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
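
    The paper's pipeline is built in Wolfram Mathematica, but the core symbolic-to-stencil step can be suggested with SymPy. The sketch below derives the classic second-order central-difference weights for a 1D second derivative, the kind of coefficients the generated C++ matrix rules would contain; the choice of equation and stencil is purely illustrative.

        import sympy as sp

        # From a symbolic 1D Poisson-type term u''(x) we derive finite-difference
        # weights on the nodes x-h, x, x+h; index [2][-1] selects the
        # second-derivative weights at the highest available accuracy.
        x, h = sp.symbols("x h")
        weights = sp.finite_diff_weights(2, [x - h, x, x + h], x)[2][-1]
        print([sp.simplify(w * h**2) for w in weights])   # -> [1, -2, 1]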

  2. Automatic identification of species with neural networks.

    PubMed

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% true-positive identifications for fish, 92.87% for plants, and 93.25% for butterflies. Our results highlight how neural networks can complement traditional approaches to species identification.
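
    A rough sketch of the features-plus-ANN pattern follows, with a synthetic feature matrix standing in for the geometry, morphology, and texture descriptors (which the abstract does not specify) and scikit-learn's MLP as the network; the sizes and scores are illustrative, not the study's.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        # Synthetic stand-in for image-derived feature vectors and species labels.
        X, y = make_classification(n_samples=2000, n_features=30, n_classes=5,
                                   n_informative=15, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")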

  3. Composable languages for bioinformatics: the NYoSh experiment

    PubMed Central

    Simi, Manuele

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh, is distributed at http://nyosh.campagnelab.org. PMID:24482760

  4. Composable languages for bioinformatics: the NYoSh experiment.

    PubMed

    Simi, Manuele; Campagne, Fabien

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh, is distributed at http://nyosh.campagnelab.org.

  5. The role of interword spacing in reading Japanese: an eye movement study.

    PubMed

    Sainio, Miia; Hyönä, Jukka; Bingushi, Kazuo; Bertram, Raymond

    2007-09-01

    The present study investigated the role of interword spacing in a naturally unspaced language, Japanese. Eye movements of native Japanese readers were registered while they read pure Hiragana (syllabic) and mixed Kanji-Hiragana (ideographic and syllabic) text in spaced and unspaced conditions. Interword spacing facilitated both word identification and eye guidance when reading syllabic script, but not when the script contained ideographic characters. We conclude that in reading Hiragana interword spacing serves as an effective segmentation cue. In contrast, spacing information in mixed Kanji-Hiragana text is redundant, since the visually salient Kanji characters serve as effective segmentation cues by themselves.

  6. Suspect/foil identification in actual crimes and in the laboratory: a reality monitoring analysis.

    PubMed

    Behrman, Bruce W; Richards, Regina E

    2005-06-01

    Four reality monitoring variables were used to discriminate suspect from foil identifications in 183 actual criminal cases. Four hundred sixty-one identification attempts based on five- and six-person lineups were analyzed. These identification attempts resulted in 238 suspect identifications and 68 foil identifications. Confidence, automatic processing, eliminative processing, and feature use comprised the set of reality monitoring variables. Thirty-five verbal confidence phrases taken from police reports were assigned numerical values on a 10-point confidence scale. Automatic processing identifications were those that occurred "immediately" or "without hesitation." Eliminative processing identifications occurred when witnesses compared or eliminated persons in the lineups. Confidence, automatic processing, and eliminative processing were significant predictors, but feature use was not. Confidence was the most effective discriminator. In cases that involved substantial evidence extrinsic to the identification, 43% of the suspect identifications were made with high confidence, whereas only 10% of the foil identifications were made with high confidence. The results of a laboratory study using the same predictors generally paralleled the archival results. Forensic implications are discussed.

  7. Automatic Car Identification - an Evaluation

    DOT National Transportation Integrated Search

    1972-03-01

    In response to a Federal Railroad Administration request, the Transportation Systems Center evaluated the Automatic Car Identification System (ACI) used on the nation's railroads. The ACI scanner was found to be adequate for reliable data output whil...

  8. GrayStar: Web-based pedagogical stellar modeling

    NASA Astrophysics Data System (ADS)

    Short, C. Ian

    2017-01-01

    GrayStar is a web-based pedagogical stellar model. It approximates stellar atmospheric and spectral line modeling in JavaScript with visualization in HTML. It is suitable for a wide range of education and public outreach levels depending on which optional plots and print-outs are turned on. All plots and renderings are pure basic HTML and the plotting module contains original HTML procedures for automatically scaling and graduating x- and y-axes.

  9. Optical Automatic Car Identification (OACI) : Volume 1. Advanced System Specification.

    DOT National Transportation Integrated Search

    1978-12-01

    A performance specification is provided in this report for an Optical Automatic Car Identification (OACI) scanner system which features 6% improved readability over existing industry scanner systems. It also includes the analysis and rationale which ...

  10. Estimating spatial travel times using automatic vehicle identification data

    DOT National Transportation Integrated Search

    2001-01-01

    Prepared ca. 2001. The paper describes an algorithm that was developed for estimating reliable and accurate average roadway link travel times using Automatic Vehicle Identification (AVI) data. The algorithm presented is unique in two aspects. First, ...

  11. Scripting Module for the Satellite Orbit Analysis Program (SOAP)

    NASA Technical Reports Server (NTRS)

    Carnright, Robert; Paget, Jim; Coggi, John; Stodden, David

    2008-01-01

    This add-on module to the SOAP software can perform changes to simulation objects based on the occurrence of specific conditions. This allows the software to encompass the simulation response to scheduled or physical events. Users can manipulate objects in the simulation environment under programmatic control. Inputs to the scripting module are Actions, Conditions, and the Script. Actions are arbitrary modifications to constructs such as Platform Objects (i.e., satellites), Sensor Objects (representing instruments or communication links), or Analysis Objects (user-defined logical or numeric variables). Examples of actions include changes to a satellite orbit (Δv), changing a sensor-pointing direction, and the manipulation of a numerical expression. Conditions represent the circumstances under which Actions are performed and can be couched in If-Then-Else logic, like performing a Δv at specific times or adding to the spacecraft power only when it is being illuminated by the Sun. The SOAP script represents the entire set of conditions being considered over a specific time interval. The output of the scripting module is a series of events, which are changes to objects at specific times. As the SOAP simulation clock runs forward, the scheduled events are performed. If the user sets the clock back in time, the events within that interval are automatically undone. The scripting module offers an interface for defining scripts in which the user does not have to remember the vocabulary of various keywords. Actions can be captured by employing the same user interface that is used to define the objects themselves. Conditions can be set to invoke Actions by selecting them from pull-down lists. Users define the script by selecting from the pool of defined conditions. Many space systems have to react to arbitrary events that can occur from scheduling or from the environment. For example, an instrument may cease to draw power when the area that it is tasked to observe is not in view. The contingency of the planetary body blocking the line of sight is a condition upon which the power being drawn is set to zero. It remains at zero until the observation objective is again in view. Computing the total power drawn by the instrument over a period of days or weeks can now take such factors into consideration. What makes the architecture especially powerful is that the scripting module can look ahead and behind in simulation time, and this temporal versatility can be leveraged in displays such as x-y plots. For example, a plot of a satellite's altitude as a function of time can take changes to the orbit into account.
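
    The Actions/Conditions/Script pattern can be gestured at in a few lines. The sketch below is a generic condition-action event recorder whose rewind undoes later events, mirroring the clock-rollback behavior described; it is not the SOAP module's API, and the sunlight/power rule is hypothetical.

        import bisect

        class Script:
            def __init__(self):
                self.rules = []        # (condition(t, state), action(state)) pairs
                self.events = []       # (time, description), kept in time order

            def add_rule(self, condition, action):
                self.rules.append((condition, action))

            def step(self, t, state):
                for cond, act in self.rules:
                    if cond(t, state):                     # condition triggers action
                        bisect.insort(self.events, (t, act(state)))

            def rewind(self, t):
                # setting the clock back undoes events later than the new time
                self.events = [e for e in self.events if e[0] <= t]

        script = Script()
        state = {"in_sunlight": False}
        # hypothetical rule: draw instrument power only while illuminated
        script.add_rule(lambda t, s: s["in_sunlight"], lambda s: "instrument power on")
        for t in range(3):
            state["in_sunlight"] = (t == 1)
            script.step(t, state)
        print(script.events)   # -> [(1, 'instrument power on')]
        script.rewind(0)
        print(script.events)   # -> []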

  12. The Stroop Effect in Kana and Kanji Scripts in Native Japanese Speakers: An fMRI Study

    PubMed Central

    Coderre, Emily L.; Filippi, Christopher G.; Newhouse, Paul A.; Dumas, Julie A.

    2008-01-01

    Prior research has shown that the two writing systems of the Japanese orthography are processed differently: kana (syllabic symbols) are processed like other phonetic languages such as English, while kanji (a logographic writing system) are processed like other logographic languages such as Chinese. Previous work done with the Stroop task in Japanese has shown that these differences in processing strategies create differences in Stroop effects. This study investigated the Stroop effect in kanji and kana using functional magnetic resonance imaging (fMRI) to examine the similarities and differences in brain processing between logographic and phonetic languages. Nine native Japanese speakers performed the Stroop task both in kana and kanji scripts during fMRI. Both scripts individually produced significant Stroop effects as measured by the behavioral reaction time data. The imaging data for both scripts showed brain activation in the anterior cingulate gyrus, an area involved in inhibiting automatic processing. Though behavioral data showed no significant differences between the Stroop effects in kana and kanji, there were differential areas of activation found in fMRI for each writing system. In fMRI, the Stroop task activated an area in the left inferior parietal lobule during the kana task and the left inferior frontal gyrus during the kanji task. The results of the present study suggest that the Stroop task in Japanese kana and kanji elicits differential activation in brain regions involved in conflict detection and resolution for syllabic and logographic writing systems. PMID:18325582

  13. Offline Arabic handwriting recognition: a survey.

    PubMed

    Lorigo, Liana M; Govindaraju, Venu

    2006-05-01

    The automatic recognition of text on scanned images has enabled many applications such as searching for words in large volumes of documents, automatic sorting of postal mail, and convenient editing of previously printed documents. The domain of handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different methods have been proposed and applied to various types of images. This paper provides a comprehensive review of these methods. It is the first survey to focus on Arabic handwriting recognition and the first Arabic character recognition survey to provide recognition rates and descriptions of test data for the approaches discussed. It includes background on the field, discussion of the methods, and future research directions.

  14. Roadway system assessment using bluetooth-based automatic vehicle identification travel time data.

    DOT National Transportation Integrated Search

    2012-12-01

    This monograph is an exposition of several practice-ready methodologies for automatic vehicle identification (AVI) data collection : systems. This includes considerations in the physical setup of the collection system as well as the interpretation of...

  15. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  16. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.
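
    A schematic rendition of that chain follows, with each Goddard tool replaced by a hypothetical stand-in function so the stages (ODL extraction, ODL-to-XML, XML-to-HTML, publication) are visible end to end; none of these function names come from the actual scripts.

        # Illustrative Python rendering of the shell-script pipeline; every stage
        # below is a hypothetical stand-in for the corresponding Goddard tool.
        def extract_odl(hdf_path):       # stands in for the ODL metadata extractor
            return {"ShortName": "EXAMPLE", "Granule": hdf_path}

        def odl_to_xml(meta):
            items = "".join(f"<{k}>{v}</{k}>" for k, v in meta.items())
            return f"<metadata>{items}</metadata>"

        def xml_to_html(xml):
            return f"<html><body><pre>{xml}</pre></body></html>"

        def publish(path, text):         # stands in for the copy to the Web/OPeNDAP server
            with open(path, "w") as f:
                f.write(text)

        meta = extract_odl("granule.hdf")
        publish("granule.html", xml_to_html(odl_to_xml(meta)))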

  17. Formal Methods for Automated Diagnosis of Autosub 6000

    NASA Technical Reports Server (NTRS)

    Ernits, Juhan; Dearden, Richard; Pebody, Miles

    2009-01-01

    This is a progress report on applying formal methods in the context of building an automated diagnosis and recovery system for Autosub 6000, an Autonomous Underwater Vehicle (AUV). The diagnosis task involves building abstract models of the control system of the AUV. The diagnosis engine is based on Livingstone 2, a model-based diagnoser originally built for aerospace applications. Large parts of the diagnosis model can be built without concrete knowledge about each mission, but actual mission scripts and configuration parameters that carry important information for diagnosis are changed for every mission. Thus we use formal methods for generating the mission control part of the diagnosis model automatically from the mission script and perform a number of invariant checks to validate the configuration. After the diagnosis model is augmented with the generated mission control component model, it needs to be validated using verification techniques.

  18. jsPsych: a JavaScript library for creating behavioral experiments in a Web browser.

    PubMed

    de Leeuw, Joshua R

    2015-03-01

    Online experiments are growing in popularity, and the increasing sophistication of Web technology has made it possible to run complex behavioral experiments online using only a Web browser. Unlike with offline laboratory experiments, however, few tools exist to aid in the development of browser-based experiments. This makes the process of creating an experiment slow and challenging, particularly for researchers who lack a Web development background. This article introduces jsPsych, a JavaScript library for the development of Web-based experiments. jsPsych formalizes a way of describing experiments that is much simpler than writing the entire experiment from scratch. jsPsych then executes these descriptions automatically, handling the flow from one task to another. The jsPsych library is open-source and designed to be expanded by the research community. The project is available online at www.jspsych.org.

  19. Phylo.io: Interactive Viewing and Comparison of Large Phylogenetic Trees on the Web.

    PubMed

    Robinson, Oscar; Dylus, David; Dessimoz, Christophe

    2016-08-01

    Phylogenetic trees are pervasively used to depict evolutionary relationships. Increasingly, researchers need to visualize large trees and compare multiple large trees inferred for the same set of taxa (reflecting uncertainty in the tree inference or genuine discordance among the loci analyzed). Existing tree visualization tools are however not well suited to these tasks. In particular, side-by-side comparison of trees can prove challenging beyond a few dozen taxa. Here, we introduce Phylo.io, a web application to visualize and compare phylogenetic trees side-by-side. Its distinctive features are: highlighting of similarities and differences between two trees, automatic identification of the best matching rooting and leaf order, scalability to large trees, high usability, multiplatform support via standard HTML5 implementation, and possibility to store and share visualizations. The tool can be freely accessed at http://phylo.io and can easily be embedded in other web servers. The code for the associated JavaScript library is available at https://github.com/DessimozLab/phylo-io under an MIT open source license. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  20. The epidural needle guidance with an intelligent and automatic identification system for epidural anesthesia

    NASA Astrophysics Data System (ADS)

    Kao, Meng-Chun; Ting, Chien-Kun; Kuo, Wen-Chuan

    2018-02-01

    Incorrect placement of the needle causes medical complications in the epidural block, such as dural puncture or spinal cord injury. This study proposes a system that combines an optical coherence tomography (OCT) imaging probe with an automatic identification (AI) system to objectively identify the position of the epidural needle tip. The automatic identification system uses three image features as parameters to distinguish the different tissues, compared across three classifiers. We found that the support vector machine (SVM) classifier had the highest accuracy, specificity, and sensitivity, reaching 95%, 98%, and 92%, respectively.
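
    The classifier-comparison step can be sketched with scikit-learn. The three OCT-derived features and the labels below are synthetic stand-ins (the paper's feature definitions are not reproduced here); the sketch only shows how an SVM's accuracy, sensitivity, and specificity would be computed.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 3))                          # three image-derived features
        y = (X @ np.array([1.0, -0.5, 0.8]) > 0).astype(int)   # 1 = epidural space (synthetic)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        sens = recall_score(y_te, pred)                        # sensitivity
        spec = recall_score(y_te, pred, pos_label=0)           # specificity
        print(f"accuracy={clf.score(X_te, y_te):.0%} "
              f"sensitivity={sens:.0%} specificity={spec:.0%}")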

  1. Automatic Publication of a MIS Product to GeoNetwork: Case of the AIS Indexer

    DTIC Science & Technology

    2012-11-01

    An Automatic Identification System (AIS) reception indexer Java application was developed in the summer of 2011, based on the work of Lapinski and... The report also provides instructions for installing and configuring the supporting software packages, Java 1.6 and MySQL 5.5.

  2. Associative priming in a masked perceptual identification task: evidence for automatic processes.

    PubMed

    Pecher, Diane; Zeelenberg, René; Raaijmakers, Jeroen G W

    2002-10-01

    Two experiments investigated the influence of automatic and strategic processes on associative priming effects in a perceptual identification task in which prime-target pairs are briefly presented and masked. In this paradigm, priming is defined as a higher percentage of correctly identified targets for related pairs than for unrelated pairs. In Experiment 1, priming was obtained for mediated word pairs. This mediated priming effect was affected neither by the presence of direct associations nor by the presentation time of the primes, indicating that automatic priming effects play a role in perceptual identification. Experiment 2 showed that the priming effect was not affected by the proportion (.90 vs. .10) of related pairs if primes were presented briefly to prevent their identification. However, a large proportion effect was found when primes were presented for 1000 ms so that they were clearly visible. These results indicate that priming in a masked perceptual identification task is the result of automatic processes and is not affected by strategies. The present paradigm provides a valuable alternative to more commonly used tasks such as lexical decision.

  3. Abbreviation definition identification based on automatic precision estimates.

    PubMed

    Sohn, Sunghwan; Comeau, Donald C; Kim, Won; Wilbur, W John

    2008-09-25

    The rapid growth of biomedical literature presents challenges for automatic text processing, and one of the challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE, only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition, our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. On the Medstract corpus our algorithm produced 97% precision and 85% recall, which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well-known Schwartz and Hearst algorithm. We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic.
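
    For contrast with the strategy-based algorithm described, here is a compact rendition of the core of the Schwartz and Hearst baseline it is compared against: match the short form's characters right to left inside a bounded window before the parenthesized abbreviation. This is a simplified reading of that heuristic, not the paper's method.

        def find_definition(text, short):
            # Compact rendition of the Schwartz & Hearst right-to-left matcher;
            # the search window below is a crude version of the published bound.
            pos = text.rfind(f"({short})")
            if pos < 0:
                return None
            words = text[:pos].split()
            candidate = " ".join(words[-len(short) * 2:])   # bounded search window
            i = len(candidate)
            for ch in reversed(short.lower()):
                i = candidate.lower().rfind(ch, 0, i)
                if i < 0:
                    return None                             # characters don't align
            start = candidate.rfind(" ", 0, i) + 1          # snap to a word boundary
            return candidate[start:]

        print(find_definition(
            "Unrecognized magnetic resonance imaging (MRI) terms hinder indexing.",
            "MRI"))                                         # -> magnetic resonance imaging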

  4. Pydna: a simulation and documentation tool for DNA assembly strategies using python.

    PubMed

    Pereira, Filipa; Azevedo, Flávio; Carvalho, Ângela; Ribeiro, Gabriela F; Budde, Mark W; Johansson, Björn

    2015-05-02

    Recent advances in synthetic biology have provided tools to efficiently construct complex DNA molecules which are an important part of many molecular biology and biotechnology projects. The planning of such constructs has traditionally been done manually using a DNA sequence editor which becomes error-prone as scale and complexity of the construction increase. A human-readable formal description of cloning and assembly strategies, which also allows for automatic computer simulation and verification, would therefore be a valuable tool. We have developed pydna, an extensible, free and open source Python library for simulating basic molecular biology DNA unit operations such as restriction digestion, ligation, PCR, primer design, Gibson assembly and homologous recombination. A cloning strategy expressed as a pydna script provides a description that is complete, unambiguous and stable. Execution of the script automatically yields the sequence of the final molecule(s) and that of any intermediate constructs. Pydna has been designed to be understandable for biologists with limited programming skills by providing interfaces that are semantically similar to the description of molecular biology unit operations found in literature. Pydna simplifies both the planning and sharing of cloning strategies and is especially useful for complex or combinatorial DNA molecule construction. An important difference compared to existing tools with similar goals is the use of Python instead of a specifically constructed language, providing a simulation environment that is more flexible and extensible by the user.
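
    As a taste of the kind of unit operation pydna simulates, the toy function below performs an EcoRI-style restriction digest on a plain sequence string. It deliberately does not imitate pydna's actual API (to avoid misquoting it); it only illustrates that a digest is a deterministic, simulatable computation over the sequence.

        def digest(seq, site="GAATTC", cut_offset=1):
            """Cut seq at every EcoRI-style site (G^AATTC: cut after offset 1)."""
            fragments, start, i = [], 0, seq.find(site)
            while i != -1:
                fragments.append(seq[start:i + cut_offset])
                start = i + cut_offset
                i = seq.find(site, i + 1)
            fragments.append(seq[start:])
            return fragments

        plasmid = "ATGCGAATTCTTAACGGAATTCCA"
        print(digest(plasmid))  # -> ['ATGCG', 'AATTCTTAACGG', 'AATTCCA']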

  5. Optical Automatic Car Identification (OACI) Field Test Program

    DOT National Transportation Integrated Search

    1976-05-01

    The results of the Optical Automatic Car Identification (OACI) tests at Chicago conducted from August 16 to September 4, 1975 are presented. The main purpose of this test was to determine the suitability of optics as a principle of operation for an a...

  6. A discrete optimization approach for locating automatic vehicle identification readers for the provision of roadway travel times

    DOT National Transportation Integrated Search

    2002-11-01

    This paper develops an algorithm for optimally locating surveillance technologies with an emphasis on Automatic Vehicle Identification tag readers by maximizing the benefit that would accrue from measuring travel times on a transportation network. Th...

  7. 47 CFR 25.281 - Automatic Transmitter Identification System (ATIS).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... identified through the use of an automatic transmitter identification system as specified below. (a.... (3) The ATIS signal as a minimum shall consist of the following: (i) The FCC assigned earth station... (ATIS). 25.281 Section 25.281 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON...

  8. 47 CFR 25.281 - Automatic Transmitter Identification System (ATIS).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... identified through the use of an automatic transmitter identification system as specified below. (a.... (3) The ATIS signal as a minimum shall consist of the following: (i) The FCC assigned earth station... (ATIS). 25.281 Section 25.281 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON...

  9. 47 CFR 25.281 - Automatic Transmitter Identification System (ATIS).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... identified through the use of an automatic transmitter identification system as specified below. (a.... (3) The ATIS signal as a minimum shall consist of the following: (i) The FCC assigned earth station... (ATIS). 25.281 Section 25.281 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON...

  10. 47 CFR 25.281 - Automatic Transmitter Identification System (ATIS).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... identified through the use of an automatic transmitter identification system as specified below. (a.... (3) The ATIS signal as a minimum shall consist of the following: (i) The FCC assigned earth station... (ATIS). 25.281 Section 25.281 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON...

  11. Automatically identifying health outcome information in MEDLINE records.

    PubMed

    Demner-Fushman, Dina; Few, Barbara; Hauser, Susan E; Thoma, George

    2006-01-01

    Understanding the effect of a given intervention on the patient's health outcome is one of the key elements in providing optimal patient care. This study presents a methodology for automatic identification of outcomes-related information in medical text and evaluates its potential in satisfying clinical information needs related to health care outcomes. An annotation scheme based on an evidence-based medicine model for critical appraisal of evidence was developed and used to annotate 633 MEDLINE citations. Textual, structural, and meta-information features essential to outcome identification were learned from the created collection and used to develop an automatic system. Accuracy of automatic outcome identification was assessed in an intrinsic evaluation and in an extrinsic evaluation, in which ranking of MEDLINE search results obtained using PubMed Clinical Queries relied on identified outcome statements. The accuracy and positive predictive value of outcome identification were calculated. Effectiveness of the outcome-based ranking was measured using mean average precision and precision at rank 10. Automatic outcome identification achieved 88% to 93% accuracy. The positive predictive value of individual sentences identified as outcomes ranged from 30% to 37%. Outcome-based ranking improved retrieval accuracy, tripling mean average precision and achieving 389% improvement in precision at rank 10. Preliminary results in outcome-based document ranking show potential validity of the evidence-based medicine-model approach in timely delivery of information critical to clinical decision support at the point of service.

  12. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

    The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.

  13. Tracking delays in report availability caused by incorrect exam status with Web-based issue tracking: a quality initiative.

    PubMed

    Awan, Omer Abdulrehman; van Wagenberg, Frans; Daly, Mark; Safdar, Nabile; Nagy, Paul

    2011-04-01

    Many radiology information systems (RIS) cannot accept a final report from a dictation reporting system before the exam has been completed in the RIS by a technologist. A radiologist can still render a report in a reporting system once images are available, but the RIS and ancillary systems may not get the results because of the study's uncompleted status. This delay in completing the study caused an alarming number of delayed reports and was undetected by conventional RIS reporting techniques. We developed a Web-based reporting tool to monitor uncompleted exams and automatically page section supervisors when a report was being delayed by its incomplete status in the RIS. Institutional Review Board exemption was obtained. At four imaging centers, a Python script was developed to poll the dictation system every 10 min for exams in five different modalities that were signed by the radiologist but could not be sent to the RIS. This script logged the exams into an existing Web-based tracking tool using PHP and a MySQL database. The script also text-paged the modality supervisor. The script logged the time at which the report was finally sent, and statistics were aggregated onto a separate Web-based reporting tool. Over a 1-year period, the average number of uncompleted exams per month and time to problem resolution decreased at every imaging center and in almost every imaging modality. Automated feedback provides a vital link in improving technologist performance and patient care without assigning a human resource to manage report queues.
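
    The workflow reduces to a small polling daemon. In the sketch below the dictation-system query URL, the tracker endpoint, and the paging mechanism are hypothetical placeholders; only the 10-minute interval and the overall poll-log-page flow come from the description above.

        import time
        import urllib.request

        DICTATION_QUERY = "http://dictation.example/api/signed_but_unsent"   # placeholder
        TRACKER_POST = "http://tracker.example/api/log_issue"                # placeholder

        def poll_once():
            with urllib.request.urlopen(DICTATION_QUERY) as resp:
                stuck_exams = resp.read().decode().splitlines()
            for exam in stuck_exams:
                # log the stuck exam to the Web-based tracking tool ...
                urllib.request.urlopen(TRACKER_POST, data=exam.encode())
                # ... and text-page the modality supervisor (e.g., email-to-SMS gateway)
                print(f"PAGE supervisor: exam {exam} signed but not completed in RIS")

        while True:
            poll_once()
            time.sleep(600)   # the paper's 10-minute polling interval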

  14. Java Application Shell: A Framework for Piecing Together Java Applications

    NASA Technical Reports Server (NTRS)

    Miller, Philip; Powers, Edward I. (Technical Monitor)

    2001-01-01

    This session describes the architecture of Java Application Shell (JAS), a Swing-based framework for developing interactive Java applications. Java Application Shell is being developed by Commerce One, Inc. for NASA Goddard Space Flight Center Code 588. The purpose of JAS is to provide a framework for the development of Java applications, providing features that enable the development process to be more efficient, consistent and flexible. Fundamentally, JAS is based upon an architecture where an application is considered a collection of 'plug-ins'. In turn, a plug-in is a collection of Swing actions defined using XML and packaged in a jar file. Plug-ins may be local to the host platform or remotely accessible through HTTP. Local and remote plug-ins are automatically discovered by JAS upon application startup; plug-ins may also be loaded dynamically without having to re-start the application. Using Extensible Markup Language (XML) to define actions, as opposed to hardcoding them in application logic, allows easier customization of application-specific operations by separating application logic from presentation. Through XML, a developer defines an action that may appear on any number of menus, toolbars, and buttons. Actions maintain and propagate enable/disable states and specify icons, tool-tips, titles, etc. Furthermore, JAS allows actions to be implemented using various scripting languages through the use of IBM's Bean Scripting Framework. Scripted action implementation is seamless to the end-user. In addition to action implementation, scripts may be used for application and unit-level testing. In the case of application-level testing, JAS has hooks to assist a script in simulating end-user input. JAS also provides property and user preference management, JavaHelp, Undo/Redo, Multi-Document Interface, Single-Document Interface, printing, and logging. Finally, Jini technology has also been incorporated into the framework by means of a Jini services browser and the ability to associate services with actions. Several Java technologies have been incorporated into JAS, including Swing, Internal Frames, Java Beans, XML, JavaScript, JavaHelp, and Jini. Additional information is contained in the original extended abstract.

  15. Review of Software Platforms for Agent Based Models

    DTIC Science & Technology

    2008-04-01

    EINSTein (section 4.3.2): Battlefield; scripting via Python (optional, for batch runs). MANA (4.3.3): Battlefield; N/A. MASON (4.3.4): General; Java. NetLogo (4.3.5): General; Logo-variant. ...through the use of relatively simple Python scripts. It also has built-in functions for parameter sweeps, and can plot the resulting fitness landscape ac... Nonetheless its ease of use, and support for automatic drawing of agents in 2D or 3D (only in the...), makes this a suitable platform for beginner programmers.

  16. Contingency Software in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn; Patterson-Hine, Ann

    2006-01-01

    This viewgraph presentation reviews the development of contingency software for autonomous systems. Autonomous vehicles currently have a limited capacity to diagnose and mitigate failures. There is a need to be able to handle a broader range of contingencies. The goals of the project are: 1. Speed up diagnosis and mitigation of anomalous situations. 2. Automatically handle contingencies, not just failures. 3. Enable projects to select a degree of autonomy consistent with their needs and to incrementally introduce more autonomy. 4. Augment on-board fault protection with verified contingency scripts.

  17. Motor signatures of emotional reactivity in frontotemporal dementia.

    PubMed

    Marshall, Charles R; Hardy, Chris J D; Russell, Lucy L; Clark, Camilla N; Bond, Rebecca L; Dick, Katrina M; Brotherhood, Emilie V; Mummery, Cath J; Schott, Jonathan M; Rohrer, Jonathan D; Kilner, James M; Warren, Jason D

    2018-01-18

    Automatic motor mimicry is essential to the normal processing of perceived emotion, and disrupted automatic imitation might underpin socio-emotional deficits in neurodegenerative diseases, particularly the frontotemporal dementias. However, the pathophysiology of emotional reactivity in these diseases has not been elucidated. We studied facial electromyographic responses during emotion identification on viewing videos of dynamic facial expressions in 37 patients representing canonical frontotemporal dementia syndromes versus 21 healthy older individuals. Neuroanatomical associations of emotional expression identification accuracy and facial muscle reactivity were assessed using voxel-based morphometry. Controls showed characteristic profiles of automatic imitation, and this response predicted correct emotion identification. Automatic imitation was reduced in the behavioural and right temporal variant groups, while the normal coupling between imitation and correct identification was lost in the right temporal and semantic variant groups. Grey matter correlates of emotion identification and imitation were delineated within a distributed network including primary visual and motor, prefrontal, insular, anterior temporal and temporo-occipital junctional areas, with common involvement of supplementary motor cortex across syndromes. Impaired emotional mimesis may be a core mechanism of disordered emotional signal understanding and reactivity in frontotemporal dementia, with implications for the development of novel physiological biomarkers of socio-emotional dysfunction in these diseases.

  18. Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts

    NASA Astrophysics Data System (ADS)

    Goswami, Subhra Sundar

    One of the important reasons for a poor recognition rate in optical character recognition (OCR) systems is error in character segmentation. In the case of Bangla scripts, the errors occur for several reasons, which include incorrect detection of the matra (headline), over-segmentation, and under-segmentation. We have proposed a robust method for detecting the headline region. The existence of overlapping characters (in under-segmented parts) in scanned printed documents is a major problem in designing an effective character segmentation procedure for OCR systems. In this paper, a predictive algorithm is developed for effectively identifying overlapping characters and then selecting the cut-borders for segmentation. Our method can be successfully used to achieve high recognition results.
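
    Matra detection is often introduced via the textbook horizontal projection profile: the headline is the densest ink row in the upper part of the word image. The sketch below shows only that baseline idea, which the paper's robust method refines; the toy image is fabricated.

        import numpy as np

        def find_matra_row(binary_img):
            """binary_img: 2D array, 1 = ink. Returns the headline row index."""
            row_ink = binary_img.sum(axis=1)          # horizontal projection profile
            upper = row_ink[: binary_img.shape[0] // 2]
            return int(np.argmax(upper))              # densest row in the upper half

        word = np.zeros((10, 20), dtype=int)
        word[2, :] = 1                                # a solid headline across the word
        word[3:8, 5] = 1                              # a character stroke below it
        print(find_matra_row(word))                   # -> 2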

  19. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etmektzoglou, A; Mishra, P; Svatos, M

    Purpose: To automate creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment plus additional functions programmed into the tool insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and finally automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment also allows for parameterization of trajectories, thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline, all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended SAD "virtual isocenter" trajectories of various shapes such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource to experiment with families of complex geometric trajectories for a TrueBeam linac. It makes Developer Mode more accessible as a vehicle to quickly translate research ideas into machine-readable scripts without programming knowledge. As an open source initiative, it also enables researcher collaboration on future developments. I am a full-time employee at Varian Medical Systems, Palo Alto, California.

  20. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  1. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  2. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  3. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  4. 21 CFR 892.1900 - Automatic radiographic film processor.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  5. 47 CFR 80.275 - Technical Requirements for Class A Automatic Identification System (AIS) equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Technical Requirements for Class A Automatic Identification System (AIS) equipment. 80.275 Section 80.275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Equipment Authorization for Compulsory Ships § 80.275...

  6. RFID: A Revolution in Automatic Data Recognition

    ERIC Educational Resources Information Center

    Deal, Walter F., III

    2004-01-01

    Radio frequency identification, or RFID, is a generic term for technologies that use radio waves to automatically identify people or objects. There are several methods of identification, but the most common is to store a serial number that identifies a person or object, and perhaps other information, on a microchip that is attached to an antenna…

  7. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (AISSE) system consisting of a: (1) Twelve-channel all-in-view Differential Global Positioning System (d... to indicate to shipboard personnel that the U.S. Coast Guard dGPS system cannot provide the required... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Automatic Identification System...

  8. ESAP plus: a web-based server for EST-SSR marker development.

    PubMed

    Ponyared, Piyarat; Ponsawat, Jiradej; Tongsima, Sissades; Seresangtakul, Pusadee; Akkasaeng, Chutipong; Tantisuwichwong, Nathpapat

    2016-12-22

    Simple sequence repeats (SSRs) have become widely used as molecular markers in plant genetic studies due to their abundance, high allelic variation at each locus and simplicity to analyze using conventional PCR amplification. To study plants with unknown genome sequences, SSR markers from Expressed Sequence Tags (ESTs), which can be obtained from the plant mRNA (converted to cDNA), must be utilized. With the advent of high-throughput sequencing technology, huge amounts of EST sequence data have been generated and are now accessible from many public databases. However, SSR marker identification from a large in-house or public EST collection requires a computational pipeline that makes use of several standard bioinformatic tools to design high quality EST-SSR primers. Some of these computational tools are not user-friendly and must be tightly integrated with reference genomic databases. A web-based bioinformatic pipeline, called EST Analysis Pipeline Plus (ESAP Plus), was constructed to assist researchers in developing SSR markers from a large EST collection. ESAP Plus incorporates several bioinformatic scripts and some useful standard software tools necessary for the four main procedures of EST-SSR marker development, namely 1) pre-processing, 2) clustering and assembly, 3) SSR mining and 4) SSR primer design. The proposed pipeline also provides two alternative steps for reducing EST redundancy and identifying SSR loci. Using public sugarcane ESTs, ESAP Plus automatically executed the aforementioned computational pipeline via a simple web user interface, which was implemented using standard PHP, HTML, CSS and JavaScript. With ESAP Plus, users can upload raw EST data and choose various filtering options and parameters to analyze each of the four main procedures through this web interface. All input EST data and their predicted SSR results are stored in the ESAP Plus MySQL database. Users are notified via e-mail when the automatic process is completed, and they can download all the results through the web interface. ESAP Plus is a comprehensive and convenient web-based bioinformatic tool for SSR marker development. ESAP Plus offers all necessary EST-SSR development processes with various adjustable options that users can easily use to identify SSR markers from a large EST collection. Through its familiar web interface, users can upload raw ESTs using the data submission page and visualize or download the corresponding EST-SSR information from within ESAP Plus. ESAP Plus can handle considerably large EST datasets. This EST-SSR discovery tool can be accessed directly at: http://gbp.kku.ac.th/esap_plus/ .

  9. Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems.

    PubMed

    Zerrouki, Taha; Balla, Amar

    2017-04-01

    Arabic diacritics are often omitted in written Arabic. This is a handicap for new learners reading Arabic, for text-to-speech conversion systems, and for reading and semantic analysis of Arabic texts. Automatic diacritization systems are the best solution to handle this issue, but such automation needs resources, such as diacritized texts, to train and evaluate the systems. In this paper, we describe our corpus of Arabic diacritized texts, called Tashkeela. It can be used as a linguistic resource for natural language processing tasks such as automatic diacritization systems, disambiguation mechanisms, and feature and data extraction. The corpus is freely available; it contains 75 million fully vocalized words, drawn mainly from 97 books of classical and modern Arabic. The corpus was collected from manually vocalized texts using a web crawling process.

  10. Proteomic Cinderella: Customized analysis of bulky MS/MS data in one night.

    PubMed

    Kiseleva, Olga; Poverennaya, Ekaterina; Shargunov, Alexander; Lisitsa, Andrey

    2018-02-01

    Proteomic challenges, stirred up by the advent of high-throughput technologies, produce large amounts of MS data. Nowadays, routine manual searching no longer satisfies the "speed" of modern science. In our work, the necessity of single-thread analysis of bulky data emerged during interpretation of HepG2 proteome profiling results for proteoform searching. We compared the contribution of each of the eight search engines (X!Tandem, MS-GF+, MS Amanda, MyriMatch, Comet, Tide, Andromeda, and OMSSA) integrated in the open-source graphical user interface SearchGUI ( http://searchgui.googlecode.com ) to the total result of proteoform identification, and optimized the set of engines working simultaneously. We also compared the results of our search combination with Mascot results using the protein kit UPS2, containing 48 human proteins. We selected the combination of X!Tandem, MS-GF+ and OMSSA as the most time-efficient and productive search combination. We added a homemade Java script to automate the pipeline from file picking to report generation. These settings raised the efficiency of our customized pipeline to a level unobtainable by manual scouting: the analysis of 192 files searched against the human proteome (42153 entries) downloaded from UniProt took 11 h.

  11. Global optimization framework for solar building design

    NASA Astrophysics Data System (ADS)

    Silva, N.; Alves, N.; Pascoal-Faria, P.

    2017-07-01

    The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized with respect to its parameters; in this way, the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function, reflecting a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The proposed approach will help in the construction of real buildings that consume less energy and contribute to a more sustainable world.
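
    The coupling described above, a geometric scripting engine generating parameter sets that an energy calculation tool evaluates, can be sketched generically. In the illustrative Python sketch below, evaluate_energy is a hypothetical stand-in for the external energy tool, and a local optimizer stands in for whatever global search the framework actually uses:

        from scipy.optimize import minimize

        def optimize_building(evaluate_energy, x0, bounds):
            # Search the generative model's parameters (e.g. width, height,
            # roof angle, orientation) for minimal predicted energy use.
            result = minimize(evaluate_energy, x0, bounds=bounds, method="L-BFGS-B")
            return result.x, result.fun

        # Example with hypothetical bounds: width 5-20 m, height 2.5-6 m,
        # roof angle 0-60 deg, orientation 0-360 deg.
        # params, energy = optimize_building(my_energy_model,
        #     x0=[10.0, 3.0, 30.0, 180.0],
        #     bounds=[(5, 20), (2.5, 6), (0, 60), (0, 360)])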

  12. Kekule.js: An Open Source JavaScript Chemoinformatics Toolkit.

    PubMed

    Jiang, Chen; Jin, Xi; Dong, Ying; Chen, Ming

    2016-06-27

    Kekule.js is an open-source, object-oriented JavaScript toolkit for chemoinformatics. It provides methods for many common tasks in molecular informatics, including chemical data input/output (I/O), two- and three-dimensional (2D/3D) rendering of chemical structure, stereo identification, ring perception, structure comparison, and substructure search. Encapsulated widgets to display and edit chemical structures directly in web context are also supplied. Developed with web standards, the toolkit is ideal for building chemoinformatics applications over the Internet. Moreover, it is highly platform-independent and can also be used in desktop or mobile environments. Some initial applications, such as plugins for inputting chemical structures on the web and uses in chemistry education, have been developed based on the toolkit.

  13. From Provenance Standards and Tools to Queries and Actionable Provenance

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.

    2017-12-01

    The W3C PROV standard provides a minimal core for sharing retrospective provenance information for scientific workflows and scripts. PROV extensions such as DataONE's ProvONE model are necessary for linking runtime observables in retrospective provenance records with conceptual-level prospective provenance information, i.e., workflow (or dataflow) graphs. Runtime provenance recorders, such as DataONE's RunManager for R or noWorkflow for Python, capture retrospective provenance automatically. YesWorkflow (YW) is a toolkit that allows researchers to declare high-level prospective provenance models of scripts via simple inline comments (YW-annotations), revealing the computational modules and dataflow dependencies in the script. By combining and linking both forms of provenance, important queries and use cases can be supported that neither provenance model can afford on its own. We present existing and emerging provenance tools developed for the DataONE and SKOPE (Synthesizing Knowledge of Past Environments) projects. We show how the different tools can be used individually and in combination to model, capture, share, query, and visualize provenance information. We also present challenges and opportunities for making provenance information more immediately actionable for the researchers who create it in the first place. We argue that such a shift towards "provenance-for-self" is necessary to accelerate the creation, sharing, and use of provenance in support of transparent, reproducible computational and data science.
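
    As a concrete illustration of the YW-annotations mentioned above, a script can declare its prospective provenance in ordinary comments. The toy example below is ours, with tag syntax as commonly shown in YesWorkflow documentation; treat the details as illustrative rather than authoritative:

        # A runnable toy script carrying YesWorkflow-style annotations.
        # @begin estimate_demand @desc Compute total demand from per-capita use
        # @in population @in per_capita_use
        # @out total_demand
        population = 1000
        per_capita_use = 0.15
        total_demand = population * per_capita_use
        # @end estimate_demand
        print(total_demand)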

  14. Framework for Automation of Hazard Log Management on Large Critical Projects

    NASA Astrophysics Data System (ADS)

    Vinerbi, Lorenzo; Babu, Arun P.

    2016-08-01

    A hazard log is a database of all risk management activities in a project. Maintaining its correctness and consistency on large safety- or mission-critical projects involving multiple vendors, suppliers, and partners is critical and challenging. IBM DOORS is one of the popular tools used for hazard management in space applications. However, not all stakeholders are familiar with it, and it is not always feasible to expect all stakeholders to provide correct and consistent hazard data. The current work describes the process and tools to simplify hazard data collection on large projects. It demonstrates how the collected data from all stakeholders are merged to form the hazard log while ensuring data consistency and correctness. The data provided by all parties are collected using a template containing scripts. The scripts check for mistakes based on the internal standards of the company in charge of hazard management. The collected data are then merged in DOORS, which also contains scripts to check and import data to form the hazard log. The proposed tool has been applied to a mission-critical project and has been found to save time and reduce the number of mistakes while creating the hazard log. The use of automatic checks paves the way for correct tracking of risk and hazard analysis activities on large critical projects.

  15. Automatic classification of canine PRG neuronal discharge patterns using K-means clustering.

    PubMed

    Zuperku, Edward J; Prkic, Ivana; Stucke, Astrid G; Miller, Justin R; Hopp, Francis A; Stuth, Eckehard A

    2015-02-01

    Respiratory-related neurons in the parabrachial-Kölliker-Fuse (PB-KF) region of the pons play a key role in the control of breathing. The neuronal activities of these pontine respiratory group (PRG) neurons exhibit a variety of inspiratory (I), expiratory (E), phase-spanning and non-respiratory-related (NRM) discharge patterns. Due to this variety of patterns, it can be difficult to classify them into distinct subgroups according to their discharge contours. This report presents a method that automatically classifies neurons according to their discharge patterns and derives an average subgroup contour for each class. It is based on the K-means clustering technique and is implemented via SigmaPlot User-Defined transform scripts. The discharge patterns of 135 canine PRG neurons were classified into seven distinct subgroups. Additional methods for choosing the optimal number of clusters are described. Analysis of the results suggests that the K-means clustering method offers a robust, objective means of both automatically categorizing neuron patterns and establishing the underlying archetypical contours of subtypes based on the discharge patterns of groups of neurons. Published by Elsevier B.V.
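
    The paper's implementation lives in SigmaPlot transform scripts; purely as an illustration of the underlying technique, a minimal Python sketch, assuming each neuron's discharge pattern has already been resampled to a fixed-length contour, might look like this:

        import numpy as np
        from sklearn.cluster import KMeans

        def classify_discharge_patterns(contours, n_clusters=7, seed=0):
            # Normalize each fixed-length contour to unit peak so that
            # clustering compares discharge shape, not absolute firing rate.
            X = np.asarray(contours, dtype=float)
            X = X / X.max(axis=1, keepdims=True)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
            # The average contour of each cluster is the archetypal subtype.
            archetypes = np.array([X[km.labels_ == k].mean(axis=0)
                                   for k in range(n_clusters)])
            return km.labels_, archetypes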

  16. High cancer-specific expression of mesothelin (MSLN) is attributable to an upstream enhancer containing a transcription enhancer factor dependent MCAT motif.

    PubMed

    Hucl, Tomas; Brody, Jonathan R; Gallmeier, Eike; Iacobuzio-Donahue, Christine A; Farrance, Iain K; Kern, Scott E

    2007-10-01

    Identification of genes with cancer-specific overexpression offers the potential to efficiently discover cancer-specific activities in an unbiased manner. We apply this paradigm to study mesothelin (MSLN) overexpression, a nearly ubiquitous, diagnostically and therapeutically useful characteristic of pancreatic cancer. We identified an 18-bp upstream enhancer, termed CanScript, strongly activating transcription from an otherwise weak tissue-nonspecific promoter and operating selectively in cells having aberrantly elevated cancer-specific MSLN transcription. Introducing mutations into CanScript showed two functionally distinct sites: an Sp1-like site and an MCAT element. Gel retardation and chromatin immunoprecipitation assays showed the MCAT element to be bound by transcription enhancer factor (TEF)-1 (TEAD1) in vitro and in vivo. The presence of TEF-1 was required for MSLN protein overexpression as determined by TEF-1 knockdown experiments. The cancer specificity seemed to be provided by a putative limiting cofactor of TEF-1 that could be outcompeted by exogenous TEF-1 only in a MSLN-overexpressing cell line. A CanScript concatemer offered enhanced activity. These results identify a TEF family member as a major regulator of MSLN overexpression, a fundamental characteristic of pancreatic and other cancers, perhaps due to an upstream and highly frequent aberrant cellular activity. The CanScript sequence represents a modular element for cancer-specific targeting, potentially suitable for nearly a third of human malignancies.

  17. Automated segmentation of myocardial scar in late enhancement MRI using combined intensity and spatial information.

    PubMed

    Tao, Qian; Milles, Julien; Zeppenfeld, Katja; Lamb, Hildo J; Bax, Jeroen J; Reiber, Johan H C; van der Geest, Rob J

    2010-08-01

    Accurate assessment of the size and distribution of a myocardial infarction (MI) from late gadolinium enhancement (LGE) MRI is of significant prognostic value for postinfarction patients. In this paper, an automatic MI identification method combining both intensity and spatial information is presented within a clear framework of (i) initialization, (ii) false-acceptance removal, and (iii) false-rejection removal. The method was validated on LGE MR images of 20 chronic postinfarction patients, using manually traced MI contours from two independent observers as reference. Good agreement was observed between automatic and manual MI identification. Validation results showed that the average Dice indices, which describe the percentage of overlap between two regions, were 0.83 +/- 0.07 and 0.79 +/- 0.08 between the automatic identification and the manual tracing from observer 1 and observer 2, and the errors in estimated infarct percentage were 0.0 +/- 1.9% and 3.8 +/- 4.7% compared with observer 1 and observer 2. The difference between the automatic method and manual tracing is on the order of the interobserver variation. In conclusion, the developed automatic method is accurate and robust in MI delineation, providing an objective tool for quantitative assessment of MI in LGE MR imaging.
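
    The Dice index used in this validation is simple to compute from two binary masks; the following is a minimal illustrative sketch, not the authors' code:

        import numpy as np

        def dice_index(mask_a, mask_b):
            # Dice similarity: 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap.
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0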

  18. An L1-Script-Transfer-Effect Fallacy: A Rejoinder to Wang et al

    ERIC Educational Resources Information Center

    Yamada, Jun

    2004-01-01

    Do different L1 (first language) writing systems differentially affect word identification in English as a second language (ESL)? Wang, Koda, and Perfetti [Cognition 87 (2003) 129] answered yes by examining Chinese students with a logographic L1 background and Korean students with an alphabetic L1 background for their phonological and orthographic…

  19. Development and Testing of Geo-Processing Models for the Automatic Generation of Remediation Plan and Navigation Data to Use in Industrial Disaster Remediation

    NASA Astrophysics Data System (ADS)

    Lucas, G.; Lénárt, C.; Solymosi, J.

    2015-08-01

    This paper introduces research on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan with the fewest clean-up parcels, considering it the optimal spatial configuration. The second model shifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern, with a sampling distance of a fifth of a parcel width, and keeps the most optimal shifted version, again with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model, we demonstrated that, depending on the size and geometry of the features of the contaminated-area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such significant variation in the resulting feature counts shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area have a pronounced orientation (the features are long), and (2) the size of the pollution extent features approaches the size of the parcels (scale effect). The second model shows only a 1% difference in feature number, so it is less interesting for planning optimization. The last model simply fulfils the task it was designed for by drawing navigation lines.
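
    The orientation test of the first model can be sketched with shapely; this is our illustration with hypothetical parcel dimensions, not the authors' scripts:

        from shapely.affinity import rotate
        from shapely.geometry import box

        def count_cleanup_parcels(polluted, width, height, angle_deg):
            # Rotating the pollution polygon (instead of the grid) and tiling
            # its bounding box is equivalent to testing a rotated parcel grid.
            rot = rotate(polluted, -angle_deg, origin="centroid")
            minx, miny, maxx, maxy = rot.bounds
            count, y = 0, miny
            while y < maxy:
                x = minx
                while x < maxx:
                    if box(x, y, x + width, y + height).intersects(rot):
                        count += 1
                    x += width
                y += height
            return count

        # Keep the orientation needing the fewest parcels, as in the paper.
        # best = min((0, 90, 45, 135),
        #            key=lambda a: count_cleanup_parcels(poly, 50.0, 20.0, a))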

  20. OPTICAL correlation identification technology applied in underwater laser imaging target identification

    NASA Astrophysics Data System (ADS)

    Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long

    2012-01-01

    Underwater laser imaging detection is an effective method of detecting short-range targets underwater and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, underwater automatic target identification has attracted more and more attention, and it remains a research challenge in underwater optical imaging information processing. Today, underwater automatic target identification based on optical imaging is usually realized in software on digital hardware; the algorithm realization and control of this method are very flexible. However, the optical imaging information is a 2D or even 3D image, so the amount of information to process is large; purely digital electronic hardware therefore needs a long identification time and can hardly meet the demands of real-time identification. Parallel computer processing can improve identification speed, but it increases complexity, size and power consumption. This paper attempts to apply optical correlation identification technology to realize underwater automatic target identification. Optical correlation identification exploits the Fourier-transform property of a Fourier lens, which can perform the Fourier transform of image information at the nanosecond level, and optical spatial interconnection computation is parallel, fast, high-capacity and high-resolution; combined with the computational and control flexibility of digital circuits, this realizes a hybrid optoelectronic identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. We identify targets in single-frame images obtained by underwater range-gated laser imaging; by identifying and locating targets at different positions, we improve the speed and orientation efficiency of target identification effectively and preliminarily validate the feasibility of the method.
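
    At its core, the simulated correlation is a frequency-domain matched filter; below is a minimal NumPy sketch of that operation, our illustration rather than the paper's MATLAB program:

        import numpy as np

        def correlation_peak(scene, template):
            # Matched-filter correlation via FFT: a sharp peak marks the
            # template's most likely location in the scene.
            F = np.fft.fft2(scene)
            H = np.conj(np.fft.fft2(template, s=scene.shape))  # zero-padded
            corr = np.abs(np.fft.ifft2(F * H))
            return np.unravel_index(np.argmax(corr), corr.shape), corr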

  1. The simulation of automatic ladar sensor control during flight operations using USU LadarSIM software

    NASA Astrophysics Data System (ADS)

    Pack, Robert T.; Saunders, David; Fullmer, Rees; Budge, Scott

    2006-05-01

    USU LadarSIM Release 2.0 is a ladar simulator that has the ability to feed high-level mission scripts into a processor that automatically generates scan commands during flight simulations. The scan generation depends on specified flight trajectories and scenes consisting of terrain and targets. The scenes and trajectories can consist of either simulated or actual data. The first modeling step produces an outline of scan footprints in xyz space. Once mission goals have been analyzed and it is determined that the scan footprints are appropriately distributed or placed, specific scans can then be chosen for the generation of complete radiometry-based range images and point clouds. The simulation is capable of quickly modeling ray-trace geometry associated with (1) various focal plane arrays and scanner configurations and (2) various scenes and trajectories associated with particular maneuvers or missions.

  2. Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.

    PubMed

    Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J

    2017-01-01

    House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
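
    Error counts of this kind are typically obtained by matching detected intervals against the manually segmented reference. The sketch below is our illustration of such scoring (the overlap criterion is an assumption), not the A-MUD code:

        def score_detections(detected, reference, min_overlap=0.5):
            # Intervals are (start, end) pairs in seconds. A detection counts
            # as correct if it overlaps a reference USV by at least
            # min_overlap of the shorter interval (an assumed criterion).
            def overlaps(a, b):
                inter = min(a[1], b[1]) - max(a[0], b[0])
                return inter > min_overlap * min(a[1] - a[0], b[1] - b[0])
            hit_refs = {i for d in detected
                        for i, r in enumerate(reference) if overlaps(d, r)}
            correct = len(hit_refs)
            false_neg = len(reference) - correct
            false_pos = sum(not any(overlaps(d, r) for r in reference)
                            for d in detected)
            return correct, false_pos, false_neg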

  3. [Wearable Automatic External Defibrillators].

    PubMed

    Luo, Huajie; Luo, Zhangyuan; Jin, Xun; Zhang, Leilei; Wang, Changjin; Zhang, Wenzan; Tu, Quan

    2015-11-01

    Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system that includes ECG measurement, bioelectrical impedance measurement and a defibrillation discharge module, and that can automatically identify the VF signal and deliver a biphasic exponential defibrillation waveform. As verified by animal tests, the device can acquire the ECG and perform automatic identification; after identifying the ventricular fibrillation signal, it automatically defibrillates to abort the ventricular fibrillation and achieve electrical cardioversion.

  4. Data Provenance as a Tool for Debugging Hydrological Models based on Python

    NASA Astrophysics Data System (ADS)

    Wombacher, A.; Huq, M.; Wada, Y.; Van Beek, R.

    2012-12-01

    There is an increase in the data volume used in hydrological modeling. The increasing data volume requires additional effort in debugging models, since a single output value is influenced by a multitude of input values; thus, it is difficult to keep an overview of the data dependencies. Further, even knowing these dependencies, it is a tedious job to infer all the relevant data values. The aforementioned data dependencies are also known as data provenance, i.e. the determination of how a particular value has been created and processed. The proposed tool infers the data provenance automatically from a Python script and visualizes the dependencies as a graph without executing the script. To debug the model, the user specifies the value of interest in space and time. The tool infers all related data values and displays them in the graph. The tool has been evaluated by hydrologists developing a model for estimating the global water demand [1]. The model uses multiple different data sources. The script we analysed has 120 lines of code and used more than 3000 individual files, each of them representing a raster map of 360*720 cells. After importing the data of the files into a SQLite database, the data consume around 40 GB of storage. Using the proposed tool, a modeler is able to select individual values and infer which values have been used to calculate them. Especially in cases of outliers or missing values, it is a beneficial tool that provides the modeler with efficient information to investigate the unexpected behavior of the model. The proposed tool can be applied to many Python scripts and has been tested with other scripts in different contexts. In case a Python script contains an unknown function or class, the tool requests additional information about the used function or class to enable the inference. This information has to be entered only once and can be shared with colleagues or in the community. Reference [1] Y. Wada, L. P. H. van Beek, D. Viviroli, H. H. Dürr, R. Weingartner, and M. F. P. Bierkens, "Global monthly water stress: II. Water demand and severity of water stress," Water Resources Research, vol. 47, 2011.
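
    Inferring dependencies without executing a script can be done by walking its abstract syntax tree; the toy Python sketch below illustrates the idea (the actual tool is far more elaborate):

        import ast

        def variable_dependencies(source):
            # Map each assigned variable to the names its expression reads,
            # without executing the script.
            deps = {}
            for node in ast.walk(ast.parse(source)):
                if isinstance(node, ast.Assign):
                    reads = {n.id for n in ast.walk(node.value)
                             if isinstance(n, ast.Name)}
                    for target in node.targets:
                        if isinstance(target, ast.Name):
                            deps.setdefault(target.id, set()).update(reads)
            return deps

        # variable_dependencies("demand = population * use\ntotal = demand + loss")
        # -> {'demand': {'population', 'use'}, 'total': {'demand', 'loss'}}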

  5. System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator

    DTIC Science & Technology

    2006-08-01

    ... commanded torque to move away from these singularity points. The introduction of this error may not degrade the performance for large slew angle ... trajectory has been generated and quaternion feedback control has been implemented for reference trajectory tracking. The testbed was reasonably well ... (Jae-Jun Kim and Brij N. Agrawal, Department of ...)

  6. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.

    PubMed

    Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid

    2018-04-05

    The aim of this study was to develop a new method for the automatic detection of reference points in 3D cephalometry, to overcome the limits of 2D cephalometric analyses. A specific application was designed using the C++ language for automatic and manual identification of 21 (reference) points on the craniofacial structures. Our algorithm is based on the implementation of an anatomical and geometrical network adapted to the craniofacial structure. This network was constructed based on anatomical knowledge of the 3D cephalometric (reference) points. The proposed algorithm was tested on five CBCT images. The proposed approach for automatic 3D cephalometric identification was able to detect the 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for the identification of the 3D cephalometric (reference) points. A larger sample will be used in the future to assess the method's validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  7. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses (including competing risk analyses and the use of time-dependent covariates), by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.

  8. A unified approach for development of Urdu Corpus for OCR and demographic purpose

    NASA Astrophysics Data System (ADS)

    Choudhary, Prakash; Nain, Neeta; Ahmed, Mushtaq

    2015-02-01

    This paper presents a methodology for the development of an Urdu handwritten text image corpus and applications of corpus linguistics in the fields of OCR and information retrieval from handwritten documents. Compared to other scripts, Urdu is somewhat complicated for data entry: entering a single character requires a combination of multiple keystrokes. Here, a mixed approach is proposed and demonstrated for building an Urdu corpus for OCR and demographic data collection. The demographic part of the database could be used to train a system to fetch data automatically, which would help simplify the manual data processing currently involved in collecting data from input forms such as Passport, Ration Card, Voting Card, AADHAR, Driving licence, Indian Railway Reservation and Census forms. This would increase the participation of the Urdu language community in understanding and taking benefit of government schemes. To make the database available and applicable across the broad field of corpus linguistics, we propose a methodology for data collection, mark-up, digital transcription, and XML metadata for benchmarking.

  9. Validation of automatic landmark identification for atlas-based segmentation for radiation treatment planning of the head-and-neck region

    NASA Astrophysics Data System (ADS)

    Leavens, Claudia; Vik, Torbjørn; Schulz, Heinrich; Allaire, Stéphane; Kim, John; Dawson, Laura; O'Sullivan, Brian; Breen, Stephen; Jaffray, David; Pekar, Vladimir

    2008-03-01

    Manual contouring of target volumes and organs at risk in radiation therapy is extremely time-consuming, in particular for treating the head-and-neck area, where a single patient treatment plan can take several hours to contour. As radiation treatment delivery moves towards adaptive treatment, the need for more efficient segmentation techniques will increase. We are developing a method for automatic model-based segmentation of the head and neck. This process can be broken down into three main steps: i) automatic landmark identification in the image dataset of interest, ii) automatic landmark-based initialization of deformable surface models to the patient image dataset, and iii) adaptation of the deformable models to the patient-specific anatomical boundaries of interest. In this paper, we focus on the validation of the first step of this method, quantifying the results of our automatic landmark identification method. We use an image atlas formed by applying thin-plate spline (TPS) interpolation to ten atlas datasets, using 27 manually identified landmarks in each atlas/training dataset. The principal variation modes returned by principal component analysis (PCA) of the landmark positions were used by an automatic registration algorithm, which sought the corresponding landmarks in the clinical dataset of interest using a controlled random search algorithm. Applying a run time of 60 seconds to the random search, a root mean square (rms) distance to the ground-truth landmark position of 9.5 +/- 0.6 mm was calculated for the identified landmarks. Automatic segmentation of the brain, mandible and brain stem, using the detected landmarks, is demonstrated.

  10. Interaction Quality during Partner Reading

    PubMed Central

    Meisinger, Elizabeth B.; Schwanenflugel, Paula J.; Bradley, Barbara A.; Stahl, Steven A.

    2009-01-01

    The influence of social relationships, positive interdependence, and teacher structure on the quality of partner reading interactions was examined. Partner reading, a scripted cooperative learning strategy, is often used in classrooms to promote the development of fluent and automatic reading skills. Forty-three pairs of second grade children were observed during partner reading sessions taking place in 12 classrooms. The degree to which the partners displayed social cooperation (instrumental support, emotional support, and conflict management) and on/off task behavior was evaluated. Children who chose their own partners showed greater social cooperation than those children whose teacher selected their partner. However, when the positive interdependence requirements of the task were not met within the pair (neither child had the skills to provide reading support or no one needed support), lower levels of on-task behavior were observed. Providing basic partner reading script instruction at the beginning of the year was associated with better social cooperation during partner reading, but providing elaborated instruction or no instruction was associated with poorer social cooperation. It is recommended that teachers provide basic script instruction and allow children to choose their own partners. Additionally, pairings of low ability children with other low ability children and high ability children with other high ability children should be avoided. Teachers may want to suggest alternate partners for children who inadvertently choose such pairings or adjust the text difficulty to the pair. Overall, partner reading seems to be an enjoyable pedagogical strategy for teaching reading fluency. PMID:19830259

  11. Aviation Careers Series: Airline Non-Flying Careers

    DOT National Transportation Integrated Search

    1996-01-01

    TRAVLINK demonstrated the use of Automatic Vehicle Location (AVL), Computer-Aided Dispatch (CAD), and Automatic Vehicle Identification (AVI) systems on Metropolitan Council Transit Operations (MCTO) buses in Minneapolis, Minnesota and western suburbs,...

  12. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as elderly, bedridden and diabetic. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the experienced stress by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
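
    As a much-simplified illustration of the first step (the paper itself uses graph-based clustering followed by signature-based labeling), connected high-pressure regions can be extracted from a pressure map as follows:

        import numpy as np
        from scipy import ndimage

        def extract_pressure_regions(pressure_map, threshold):
            # Label connected regions of above-threshold pressure; each
            # region is a candidate limb for signature-based labeling.
            arr = np.asarray(pressure_map, dtype=float)
            labels, n = ndimage.label(arr > threshold)
            # Mean pressure per region as a crude per-limb stress indicator.
            means = ndimage.mean(arr, labels=labels, index=range(1, n + 1))
            return labels, list(means)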

  13. Multi-font printed Mongolian document recognition system

    NASA Astrophysics Data System (ADS)

    Peng, Liangrui; Liu, Changsong; Ding, Xiaoqing; Wang, Hua; Jin, Jianming

    2009-01-01

    Mongolian is one of the major ethnic languages in China. Large numbers of printed Mongolian documents need to be digitized for digital libraries and various applications. Traditional Mongolian script has a unique writing style and multi-font variations, which bring challenges to Mongolian OCR research. Because traditional Mongolian script has particular characteristics, for example one character may be part of another character, we define the character set for recognition according to the segmented components, and the components are combined into characters by a rule-based post-processing module. For character recognition, a method based on visual directional features and multi-level classifiers is presented. For character segmentation, a scheme is used to find the segmentation points by analyzing the properties of projections and connected components. As Mongolian font types are categorized into two major groups, the segmentation parameters are adjusted for each group. A font-type classification method for the two groups is introduced. For recognition of Mongolian text mixed with Chinese and English, language identification and the relevant character recognition kernels are integrated. Experiments show that the presented methods are effective. The text recognition rate is 96.9% on test samples from practical documents with multiple font types and mixed scripts.

  14. Fast and automatic thermographic material identification for the recycling process

    NASA Astrophysics Data System (ADS)

    Haferkamp, Heinz; Burmester, Ingo

    1998-03-01

    Within the framework of a future closed-loop recycling process, the automatic and economical sorting of plastics is a decisive element. The identification and sorting systems available at present are not yet suitable for sorting technical plastics, since essential demands, such as high recognition reliability and high identification rates across the variety of technical plastics, cannot be guaranteed. Therefore the Laser Zentrum Hannover e.V., in cooperation with Hoerotron GmbH and Preussag Noell GmbH, has carried out investigations on a rapid thermographic, laser-supported material-identification system for automatic material-sorting systems. The automatic identification of different engineering plastics coming from electronic or automotive waste is possible, and identification rates of up to 10 parts per second are enabled by fast IR line scanners. The procedure is based on the following principle: within a few milliseconds, a spot on the sample is heated by a CO2 laser. The sample's specific chemical and physical material properties cause a characteristic temperature distribution on its surface, which is measured by a fast IR line-scan system. This 'thermal impulse response' is then analyzed by a computer system. Investigations have shown that it is possible to distinguish more than 18 different sorts of plastics at a rate of 10 Hz. Crucial for the development of such a system are the rapid processing of imaging data, the minimization of interference caused by varying sample geometries, and coping with the wide range of possible additives in the plastics in question. One possible application area is the sorting of plastics from automotive and electronic waste recycling.

  15. Automatic tracking of wake vortices using ground-wind sensor data

    DOT National Transportation Integrated Search

    1977-01-03

    Algorithms for automatic tracking of wake vortices using ground-wind anemometer : data are developed. Methods of bad-data suppression, track initiation, and : track termination are included. An effective sensor-failure detection-and identification : ...

  16. Crescent Evaluation : appendix D : crescent computer system components evaluation report

    DOT National Transportation Integrated Search

    1994-02-01

    In 1990, Lockheed Integrated Systems Company (LISC) was awarded a contract, under the Crescent Demonstration Project, to demonstrate the integration of Weigh In Motion (WIM), Automatic Vehicle Classification (AVC) and Automatic Vehicle Identification...

  17. Automatic Molar Extraction from Dental Panoramic Radiographs for Forensic Personal Identification

    NASA Astrophysics Data System (ADS)

    Samopa, Febriliyan; Asano, Akira; Taguchi, Akira

    Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region of interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all the user has to do is click three points on the boundary between the maxilla and the mandible.

  18. Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations

    PubMed Central

    Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J.

    2017-01-01

    House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4–12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a ‘gold standard’ reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community. PMID:28727808

  19. Wide-Field Imaging Telescope-0 (WIT0) with automatic observing system

    NASA Astrophysics Data System (ADS)

    Ji, Tae-Geun; Byeon, Seoyeon; Lee, Hye-In; Park, Woojin; Lee, Sang-Yun; Hwang, Sungyong; Choi, Changsu; Gibson, Coyne Andrew; Kuehne, John W.; Prochaska, Travis; Marshall, Jennifer L.; Im, Myungshin; Pak, Soojong

    2018-01-01

    We introduce Wide-Field Imaging Telescope-0 (WIT0), with an automatic observing system. It was developed for monitoring the variability of many sources at a time, e.g. young stellar objects and active galactic nuclei. It can also find the locations of transient sources such as supernovae or gamma-ray bursts. In 2017 February, we installed the wide-field 10-inch telescope (Takahashi CCA-250) as a piggyback system on the 30-inch telescope at the McDonald Observatory in Texas, US. The 10-inch telescope has a 2.35 × 2.35 deg field-of-view with a 4k × 4k CCD camera (FLI ML16803). To improve the observational efficiency of the system, we developed new automatic observing software, KAOS30 (KHU Automatic Observing Software for the McDonald 30-inch telescope), written in Visual C++ for the Windows operating system. The software consists of four control packages: the Telescope Control Package (TCP), the Data Acquisition Package (DAP), the Auto Focus Package (AFP), and the Script Mode Package (SMP). Since it also supports instruments that use the ASCOM driver, additional hardware installation is quite simple. We commissioned KAOS30 in 2017 August and are in the process of testing it. Based on the WIT0 experience, we will extend KAOS30 to control multiple telescopes in future projects.

  20. AGUIA: autonomous graphical user interface assembly for clinical trials semantic data services.

    PubMed

    Correa, Miria C; Deus, Helena F; Vasconcelos, Ana T; Hayashi, Yuki; Ajani, Jaffer A; Patnana, Srikrishna V; Almeida, Jonas S

    2010-10-26

    AGUIA is a front-end web application originally developed to manage clinical, demographic and biomolecular patient data collected during clinical trials at MD Anderson Cancer Center. The diversity of methods involved in patient screening and sample processing generates a variety of data types that require a resource-oriented architecture to capture the associations between the heterogeneous data elements. AGUIA uses a semantic web formalism, resource description framework (RDF), and a bottom-up design of knowledge bases that employ the S3DB tool as the starting point for the client's interface assembly. The data web service, S3DB, meets the necessary requirements of generating the RDF and of explicitly distinguishing the description of the domain from its instantiation, while allowing for continuous editing of both. Furthermore, it uses an HTTP-REST protocol, has a SPARQL endpoint, and has open source availability in the public domain, which facilitates the development and dissemination of this application. However, S3DB alone does not address the issue of representing content in a form that makes sense for domain experts. We identified an autonomous set of descriptors, the GBox, that provides user and domain specifications for the graphical user interface. This was achieved by identifying a formalism that makes use of an RDF schema to enable the automatic assembly of graphical user interfaces in a meaningful manner while using only resources native to the client web browser (JavaScript interpreter, document object model). We defined a generalized RDF model such that changes in the graphic descriptors are automatically and immediately (locally) reflected into the configuration of the client's interface application. The design patterns identified for the GBox benefit from and reflect the specific requirements of interacting with data generated by clinical trials, and they contain clues for a general purpose solution to the challenge of having interfaces automatically assembled for multiple and volatile views of a domain. By coding AGUIA in JavaScript, for which all browsers include a native interpreter, a solution was found that assembles interfaces that are meaningful to the particular user, and which are also ubiquitous and lightweight, allowing the computational load to be carried by the client's machine.

  1. Port-of-entry advanced sorting system (PASS) operational test

    DOT National Transportation Integrated Search

    1998-12-01

    In 1992 the Oregon Department of Transportation undertook an operational test of the Port-of-Entry Advanced Sorting System (PASS), which uses a two-way communication automatic vehicle identification system, integrated with weigh-in-motion, automatic ...

  2. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons

    PubMed Central

    2012-01-01

    Background Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. Results We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. Conclusions LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons up to the stage of preparing full-length reference sequence libraries. The LTRsift software is freely available at http://www.zbh.uni-hamburg.de/LTRsift under an open-source license. PMID:23131050

  3. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons.

    PubMed

    Steinbiss, Sascha; Kastens, Sascha; Kurtz, Stefan

    2012-11-07

    Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons up to the stage of preparing full-length reference sequence libraries. The LTRsift software is freely available at http://www.zbh.uni-hamburg.de/LTRsift under an open-source license.

  4. Assigning unique identification numbers to new user accounts and groups in a computing environment with multiple registries

    DOEpatents

    DeRobertis, Christopher V.; Lu, Yantian T.

    2010-02-23

    A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
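
    The check-then-assign logic described by the patent can be sketched in a few lines; the registry interface below is hypothetical, our illustration of the claim rather than the patented implementation:

        def assign_unique_id(registries, target_registry, name, candidates):
            # Assign the first candidate ID not already used in ANY registry.
            for uid in candidates:
                if not any(reg.has_id(uid) for reg in registries):
                    target_registry.create_account(name, uid)  # hypothetical API
                    return uid
            raise RuntimeError("no free identification number among candidates")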

  5. The Crescent Project : an evaluation of an element of the HELP Program : executive summary

    DOT National Transportation Integrated Search

    1994-02-01

    The HELP/Crescent Project on the West Coast evaluated the applicability of four technologies for screening transponder-equipped vehicles. The technologies included automatic vehicle identification, weigh-in-motion, automatic vehicle classification, a...

  6. Port-of-entry Advanced Sorting System (PASS) operational test : final report

    DOT National Transportation Integrated Search

    1998-12-01

    In 1992 the Oregon Department of Transportation undertook an operational test of the Port-of-Entry Advanced Sorting System (PASS), which uses a two-way communication automatic vehicle identification system, integrated with weigh-in-motion, automatic ...

  7. Understanding ITS/CVO Technology Applications, Student Manual, Course 3

    DOT National Transportation Integrated Search

    1999-01-01

    WEIGHT-IN-MOTION OR WIM, COMMERCIAL VEHICLE INFORMATION SYSTEMS AND NETWORK OR CVISN, AUTOMATIC VEHICLE IDENTIFICATION OR AVI, AUTOMATIC LOCATION OR AVL, ELECTRONIC DATA INTERCHANGE OR EDI, GLOBAL POSITIONING SYSTEM OR GPS, INTERNET OR WORLD WIDE WEB...

  8. Automatic 1H-NMR Screening of Fatty Acid Composition in Edible Oils

    PubMed Central

    Castejón, David; Fricke, Pascal; Cambero, María Isabel; Herrera, Antonio

    2016-01-01

    In this work, we introduce an NMR-based screening method for fatty acid composition analysis of edible oils. We describe the evaluation and optimization needed for the automated analysis of vegetable oils by low-field NMR to obtain the fatty acid composition (FAC). To achieve this, two scripts, which automatically analyze and interpret the spectral data, were developed. The objective of this work was to drive forward the automated analysis of the FAC by NMR. Because this protocol can be carried out at low field and the complete process from sample preparation to printing the report takes only about 3 min, this approach is promising as a fundamental technique for high-throughput screening. To demonstrate the applicability of this method, the fatty acid composition of extra virgin olive oils from various Spanish olive varieties (arbequina, cornicabra, hojiblanca, manzanilla, and picual) was determined by 1H-NMR spectroscopy according to this protocol. PMID:26891323
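
    Scripts of this kind ultimately reduce to arithmetic on peak integrals. The sketch below is illustrative, assuming the per-acid integrals (already corrected for the number of contributing protons) have been extracted upstream; the example values are hypothetical:

        def fatty_acid_composition(integrals):
            # Convert per-acid signal integrals into percentage composition.
            total = sum(integrals.values())
            return {acid: 100.0 * v / total for acid, v in integrals.items()}

        # Hypothetical integrals; output percentages sum to 100.
        # fatty_acid_composition({"saturated": 14.2, "oleic": 71.5,
        #                         "linoleic": 9.8, "linolenic": 0.7})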

  9. Development of a web-based epidemiological surveillance system with health system response for improving maternal and newborn health: Field-testing in Thailand.

    PubMed

    Liabsuetrakul, Tippawan; Prappre, Tagoon; Pairot, Pakamas; Oumudee, Nurlisa; Islam, Monir

    2017-06-01

    Surveillance systems are yet to be integrated with health information systems for improving the health of pregnant mothers and their newborns, particularly in developing countries. This study aimed to develop a web-based epidemiological surveillance system for maternal and newborn health that integrates action-oriented responses and automatic data analysis with presentation of results, and to assess acceptance of the system by the nurses and doctors involved in various hospitals in southern Thailand. Free software and scripting languages were used. The system can be run on different platforms, and it is accessible via various electronic devices. Automatic data analysis, with results presented in the form of graphs, tables and maps, was part of the system. A multi-level security system was incorporated into the program. Most doctors and nurses involved in the study felt the system was easy to use and useful. This system can be integrated into a country's routine reporting system for monitoring maternal and newborn health and survival.

  10. JANIS: NEA JAva-based Nuclear Data Information System

    NASA Astrophysics Data System (ADS)

    Soppera, Nicolas; Bossant, Manuel; Cabellos, Oscar; Dupont, Emmeric; Díez, Carlos J.

    2017-09-01

    JANIS (JAva-based Nuclear Data Information System) software is developed by the OECD Nuclear Energy Agency (NEA) Data Bank to facilitate the visualization and manipulation of nuclear data, giving access to evaluated nuclear data libraries, such as ENDF, JEFF, JENDL, TENDL etc., and also to experimental nuclear data (EXFOR) and bibliographical references (CINDA). It is available as a standalone Java program, downloadable and distributed on DVD, and also as a web application available on the NEA website. One of the main new features in JANIS is the scripting capability via the command line, which notably automates plot generation and permits automatic extraction of data from the JANIS database. Recent NEA software developments rely on these JANIS features to access nuclear data; for example, the Nuclear Data Sensitivity Tool (NDaST) makes use of covariance data in BOXER and COVERX formats, which are retrieved from the JANIS database. New features added in this version of the JANIS software are described in this paper, along with some examples.


  11. Print-specific N170 involves multiple subcomponents for Japanese Hiragana.

    PubMed

    Uno, Tomoki; Okumura, Yasuko; Kasai, Tetsuko

    2017-05-22

    Print-specific N170 in event-related potentials is generally considered to reflect relatively automatic processing of letter strings, which is crucial for fluent reading. However, our previous studies demonstrated that print-specific N170 for the transparent Japanese Hiragana script consists of at least two subcomponents under rapid stimulus presentation: an attention-related left-lateralized N170 and a bilateral N170 associated with more automatic orthographic processes (Okumura, Kasai & Murohashi, 2014, 2015). The present study aimed to confirm the latter component by controlling the presentation frequency of letters and of nonlinguistic visual controls (i.e., symbols), but found a quite different pattern of results: an enhanced occipito-temporal positivity for words (80-120 ms post-stimulus) followed by the typical left-lateralized N170, and an enhanced parietal negativity for nonwords (150-200 ms). These results should provide further insights into the interaction between attention and the early stages of print processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. TVB-EduPack—An Interactive Learning and Scripting Platform for The Virtual Brain

    PubMed Central

    Matzke, Henrik; Schirner, Michael; Vollbrecht, Daniel; Rothmeier, Simon; Llarena, Adalberto; Rojas, Raúl; Triebkorn, Paul; Domide, Lia; Mersmann, Jochen; Solodkin, Ana; Jirsa, Viktor K.; McIntosh, Anthony Randal; Ritter, Petra

    2015-01-01

    The Virtual Brain (TVB; thevirtualbrain.org) is a neuroinformatics platform for full brain network simulation based on individual anatomical connectivity data. The framework addresses clinical and neuroscientific questions by simulating multi-scale neural dynamics that range from local population activity to large-scale brain function and related macroscopic signals like electroencephalography and functional magnetic resonance imaging. TVB is equipped with a graphical and a command-line interface to create models that capture the characteristic biological variability to predict the brain activity of individual subjects. To enable researchers from various backgrounds a quick start into TVB and brain network modeling in general, we developed an educational module: TVB-EduPack. EduPack offers two educational functionalities that seamlessly integrate into TVB's graphical user interface (GUI): (i) interactive tutorials introduce GUI elements, guide through the basic mechanics of software usage and develop complex use-case scenarios; animations, videos and textual descriptions transport essential principles of computational neuroscience and brain modeling; (ii) an automatic script generator records model parameters and produces input files for TVB's Python programming interface; thereby, simulation configurations can be exported as scripts that allow flexible customization of the modeling process and self-defined batch- and post-processing applications while benefitting from the full power of the Python language and its toolboxes. This article covers the implementation of TVB-EduPack and its integration into TVB architecture. Like TVB, EduPack is an open source community project that lives from the participation and contribution of its users. TVB-EduPack can be obtained as part of TVB from thevirtualbrain.org. PMID:26635597
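
    The script-generator idea (ii) amounts to serializing recorded parameters into a runnable Python file. A minimal sketch follows; the template contents and parameter names are hypothetical placeholders, not EduPack's actual output:

        # Sketch of an EduPack-style generator: write GUI-recorded parameters
        # into a runnable script for TVB's Python interface (names hypothetical).
        TEMPLATE = (
            "from tvb.simulator.lab import *  # placeholder import\n\n"
            "sim = simulator.Simulator(\n"
            "    conduction_speed={conduction_speed},\n"
            "    simulation_length={simulation_length},\n"
            ")\n"
            "sim.configure()\n"
            "results = sim.run()\n"
        )

        def generate_script(params, path):
            with open(path, "w") as fh:
                fh.write(TEMPLATE.format(**params))

        generate_script({"conduction_speed": 3.0, "simulation_length": 1000.0},
                        "run_simulation.py")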

  13. CLIPS, AppleEvents, and AppleScript: Integrating CLIPS with commercial software

    NASA Technical Reports Server (NTRS)

    Compton, Michael M.; Wolfe, Shawn R.

    1994-01-01

    Many of today's intelligent systems are composed of several modules, perhaps written in different tools and languages, that together help solve the user's problem. These systems often employ a knowledge-based component that is not accessed directly by the user, but instead operates 'in the background' offering assistance to the user as necessary. In these types of modular systems, an efficient, flexible, and easy-to-use mechanism for sharing data between programs is crucial. To help permit transparent integration of CLIPS with other Macintosh applications, the AI Research Branch at NASA Ames Research Center has extended CLIPS to allow it to communicate transparently with other applications through two popular data-sharing mechanisms provided by the Macintosh operating system: Apple Events (a 'high-level' event mechanism for program-to-program communication), and AppleScript, a recently-released scripting language for the Macintosh. This capability permits other applications (running on either the same or a remote machine) to send a command to CLIPS, which then responds as if the command were typed into the CLIPS dialog window. Any result returned by the command is then automatically returned to the program that sent it. Likewise, CLIPS can send several types of Apple Events directly to other local or remote applications. This CLIPS system has been successfully integrated with a variety of commercial applications, including data collection programs, electronic forms packages, DBMSs, and email programs. These mechanisms can permit transparent user access to the knowledge base from within a commercial application, and allow a single copy of the knowledge base to service multiple users in a networked environment.

  14. Astrometrica: Astrometric data reduction of CCD images

    NASA Astrophysics Data System (ADS)

    Raab, Herbert

    2012-03-01

    Astrometrica is an interactive software tool for scientific-grade astrometric data reduction of CCD images. The current version of the software is for the Windows 32-bit operating system family. Astrometrica reads FITS (8-, 16- and 32-bit integer) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (dark frame and flat field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for use free of charge for a limited period of time (100 days); special arrangements can be made for educational projects.

  15. Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.

    PubMed

    Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V

    2013-09-10

    The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although there have been questions concerning its observer subjectivity and limited statistical support. In this paper, based on signal processing and extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of CMS number provides a basis for ballistics correlations with large databases and further statistical and probability analysis. Experimental results in this report using bullets fired from ten consecutively manufactured barrels support this developed model. Published by Elsevier Ireland Ltd.
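
    Once the feature-profile comparison has produced a per-stria match/no-match decision, counting CMS reduces to finding the longest run of consecutive matches. A minimal Python sketch of that final counting step (the boolean input is assumed here, not derived from profiles):

        def longest_cms(matches):
            """Longest run of consecutive True values (consecutively matching striae)."""
            best = run = 0
            for matched in matches:
                run = run + 1 if matched else 0
                best = max(best, run)
            return best

        print(longest_cms([True, True, True, False, True, True]))  # -> 3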

  16. Developing a Complete and Effective ACT-R Architecture

    DTIC Science & Technology

    2008-01-01

    of computational primitives, as contrasted with the predominant "one-off" and "grab-bag" cognitive models in the field. These architectures have...transport/semaphore protocols connected via a glue script. Both protocols rely on the fact that file rename and file remove operations are atomic...the Trial Log file until just prior to processing the next input request. Thus, to perform synchronous identifications it is necessary to run an

  17. Elementary software for the hand lens identification of some common iranian woods

    Treesearch

    Vahidreza Safdari; Margaret S. Devall

    2009-01-01

    A computer program, “Hyrcania”, has been developed for identifying some common woods (26 hardwoods and 6 softwoods) from the Hyrcanian forest type of Iran. The program has been written in JavaScript and is usable with computers as well as mobile phones. The databases use anatomical characteristics (visible with a hand lens) and wood colour, and can be searched in...

  18. Experiments in automatic word class and word sense identification for information retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauch, S.; Futrelle, R.P.

    Automatic identification of related words and automatic detection of word senses are two long-standing goals of researchers in natural language processing. Word class information and word sense identification may enhance the performance of information retrieval systems. Large online corpora and increased computational capabilities make new techniques based on corpus linguistics feasible. Corpus-based analysis is especially needed for corpora from specialized fields for which no electronic dictionaries or thesauri exist. The methods described here use a combination of mutual information and word context to establish word similarities. Then, unsupervised classification is done using clustering in the word space, identifying word classes without pretagging. We also describe an extension of the method to handle the difficult problems of disambiguation and of determining part-of-speech and semantic information for low-frequency words. The method is powerful enough to produce high-quality results on a small corpus of 200,000 words from abstracts in a field of molecular biology.
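
    The mutual-information step can be illustrated compactly. The following Python sketch scores word pairs by pointwise mutual information over a co-occurrence window; it is a toy version of the general approach, not the authors' code:

        import math
        from collections import Counter

        def pmi_scores(tokens, window=5):
            """Pointwise mutual information for word pairs co-occurring in a window."""
            n = len(tokens)
            word_counts = Counter(tokens)
            pair_counts = Counter()
            for i, w in enumerate(tokens):
                for v in tokens[i + 1:i + window]:
                    pair_counts[tuple(sorted((w, v)))] += 1
            total_pairs = sum(pair_counts.values())
            return {
                (a, b): math.log((c / total_pairs) /
                                 ((word_counts[a] / n) * (word_counts[b] / n)))
                for (a, b), c in pair_counts.items()
            }

        text = "the gene encodes a protein the protein binds the gene promoter".split()
        scores = pmi_scores(text)
        print(max(scores, key=scores.get))  # most strongly associated pair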

  19. Automatic Reconstruction of 3D Building Models from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    El Meouche, R.; Rezoug, M.; Hijazi, I.; Maes, D.

    2013-11-01

    With modern 3D laser scanners we can acquire a large amount of 3D data in only a few minutes. This technology results in a growing number of applications ranging from the digitalization of historical artifacts to facial authentication. The modeling process demands a lot of time and work (Tim Volodine, 2007). In comparison with the other two stages, acquisition and registration, the degree of automation of the modeling stage is almost zero. In this paper, we propose a new surface reconstruction technique for buildings to process the data obtained by a 3D laser scanner. These data are called a point cloud, which is a collection of points sampled from the surface of a 3D object. Such a point cloud can consist of millions of points. In order to work more efficiently, we worked with simplified models which contain fewer points and so less detail than a point cloud obtained in situ. The goal of this study was to facilitate the modeling process of a building starting from 3D laser scanner data. To this end, we wrote two scripts for Rhinoceros 5.0 based on intelligent algorithms. The first script finds the exterior outline of a building. With a minimum of human interaction, a thin box is drawn around the surface of a wall. This box is able to rotate 360° around an axis in a corner of the wall in search of the points of other walls. In this way we can eliminate noise points, which are unwanted or irrelevant points. If there is an angled roof, the box can also turn around the edge between the wall and the roof. From the different positions of the box we can calculate the exterior outline. The second script draws the interior outline in a surface of a building. By interior outline we mean the outline of openings such as windows or doors. This script is based on the distances between the points and on vector characteristics. Two consecutive points separated by a relatively large distance will form the outline of an opening. Once those points are found, the interior outline can be drawn. For simple point clouds, the designed scripts are able to eliminate almost all noise points and to reconstruct a CAD model.
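
    The core of the second script, detecting an opening from the gap between consecutive points, is easy to sketch. A minimal Python illustration on a single scan line (the threshold and coordinates are invented):

        def opening_edges(xs, gap_threshold):
            """Pairs of consecutive points whose gap exceeds the threshold."""
            xs = sorted(xs)
            return [(a, b) for a, b in zip(xs, xs[1:]) if b - a > gap_threshold]

        # Points sampled on a wall with a window between x = 2.0 and x = 3.5:
        scan = [0.0, 0.5, 1.0, 1.5, 2.0, 3.5, 4.0, 4.5]
        print(opening_edges(scan, 0.6))  # -> [(2.0, 3.5)]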

  20. Full-body gestures and movements recognition: user descriptive and unsupervised learning approaches in GDL classifier

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Ogiela, Marek R.

    2014-09-01

    Gesture Description Language (GDL) is a classifier that enables syntactic description and real-time recognition of full-body gestures and movements. Gestures are described in a dedicated computer language named Gesture Description Language script (GDLs). In this paper we introduce new GDLs formalisms that enable recognition of selected classes of movement trajectories. The second novelty is a new unsupervised learning method with which it is possible to automatically generate GDLs descriptions. We have initially evaluated both proposed extensions of GDL and have obtained very promising results. Both the novel methodology and the evaluation results are described in this paper.

  1. Preparing a collection of radiology examinations for distribution and retrieval.

    PubMed

    Demner-Fushman, Dina; Kohli, Marc D; Rosenman, Marc B; Shooshan, Sonya E; Rodriguez, Laritza; Antani, Sameer; Thoma, George R; McDonald, Clement J

    2016-03-01

    Clinical documents made available for secondary use play an increasingly important role in discovery of clinical knowledge, development of research methods, and education. An important step in facilitating secondary use of clinical document collections is easy access to descriptions and samples that represent the content of the collections. This paper presents an approach to developing a collection of radiology examinations, including both the images and radiologist narrative reports, and making them publicly available in a searchable database. The authors collected 3996 radiology reports from the Indiana Network for Patient Care and 8121 associated images from the hospitals' picture archiving systems. The images and reports were de-identified automatically and then the automatic de-identification was manually verified. The authors coded the key findings of the reports and empirically assessed the benefits of manual coding on retrieval. The automatic de-identification of the narrative was aggressive and achieved 100% precision at the cost of rendering a few findings uninterpretable. Automatic de-identification of images was not as reliable: images for two of 3996 patients (0.05%) showed protected health information. Manual encoding of findings improved retrieval precision. Stringent de-identification methods can remove all identifiers from text radiology reports. DICOM de-identification of images does not remove all identifying information and needs special attention to images scanned from film. Adding manual coding to the radiologist narrative reports significantly improved relevancy of the retrieved clinical documents. The de-identified Indiana chest X-ray collection is available for searching and downloading from the National Library of Medicine (http://openi.nlm.nih.gov/). Published by Oxford University Press on behalf of the American Medical Informatics Association 2015. This work is written by US Government employees and is in the public domain in the US.
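
    As an illustration of aggressive rule-based text de-identification (the actual rules used by the authors are not given in the abstract), a Python sketch with example patterns:

        import re

        # Example patterns only, not the authors' rules: scrub header names,
        # dates, and record numbers from a report.
        PATTERNS = [
            (re.compile(r"(?im)^(patient|name|physician):.*$"), r"\1: XXXX"),
            (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "XXXX"),   # dates
            (re.compile(r"\b(MRN|ID)[:# ]?\d+\b"), "XXXX"),         # record numbers
        ]

        def deidentify(report):
            for pattern, repl in PATTERNS:
                report = pattern.sub(repl, report)
            return report

        print(deidentify("Patient: John Doe\nSeen 04/12/2013, MRN:12345."))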

  2. Eroticizing inequality in the United States: the consequences and determinants of traditional gender role adherence in intimate relationships.

    PubMed

    Sanchez, Diana T; Fetterolf, Janell C; Rudman, Laurie A

    2012-01-01

    This article reviews the research on traditional gender-role adherence and sexuality for heterosexual men and women. Specifically, the consequences and predictors of following traditional gender roles of female submissiveness and male dominance in sexual relationships are examined. Despite evidence that men's and women's sexual roles are becoming more egalitarian over time, empirical evidence suggests that traditional sexual roles continue to dominate heterosexual relations. This article explores whether the sexual context is one in which both men and women feel particularly compelled to engage in gender stereotypic behavior, and why. In addition, this article reports on research that finds that men and women have automatic associations between sexuality and power that reinforce their gender stereotypic behavior in sexual contexts. The negative effects of traditional gender-role adherence on women's sexual problems and satisfaction are demonstrated. This article concludes that traditional sexual scripts are harmful for both women's and men's ability to engage in authentic, rewarding sexual expression, although the female submissive role may be particularly debilitating. Future directions of research are suggested, including interventions to reduce women's adherence to the sexually submissive female script.

  3. ESDAPT - APT PROGRAMMING EDITOR AND INTERPRETER

    NASA Technical Reports Server (NTRS)

    Premack, T.

    1994-01-01

    ESDAPT is a graphical programming environment for developing APT (Automatically Programmed Tool) programs for controlling numerically controlled machine tools. ESDAPT has a graphical user interface that provides the user with an APT syntax-sensitive text editor and windows for displaying geometry and tool paths. APT geometry statements can also be created using menus and screen picks. ESDAPT interprets APT geometry statements and displays the results in its view windows. Tool paths are generated by batching the APT source to an APT processor (COSMIC P-APT recommended). The tool paths are then displayed in the view windows. Hardcopy output of the view windows is in color PostScript format. ESDAPT is written in C, yacc, lex, and XView for use on Sun4 series computers running SunOS. ESDAPT requires 4Mb of disk space, 7Mb of RAM, and MIT's X Window System, Version 11 Release 4, or OpenWindows version 3 for execution. Program documentation in PostScript format and an executable for OpenWindows version 3 are provided on the distribution media. The standard distribution medium for ESDAPT is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992.

  4. Telemetry-Enhancing Scripts

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.

    2009-01-01

    Scripts Providing a Cool Kit of Telemetry Enhancing Tools (SPACKLE) is a set of software tools that fill gaps in the capabilities of other software used in processing downlinked data in the Mars Exploration Rovers (MER) flight and test-bed operations. SPACKLE tools have helped to accelerate the automatic processing and interpretation of MER mission data, enabling non-experts to understand and/or use MER query and data-product command simulation software tools more effectively. SPACKLE has greatly accelerated some operations and provides new capabilities. The tools of SPACKLE are written, variously, in Perl or the C or C++ language. They perform a variety of search and shortcut functions that include the following: Generating text-only, Event Report-annotated, and Web-enhanced views of command sequences; Labeling integer enumerations with their symbolic meanings in text messages and engineering channels; Systematically detecting corruption within data products; Generating text-only displays of data-product catalogs, including downlink status; Validating and labeling commands related to data products; Performing convenient searches of detailed engineering data spanning multiple Martian solar days; Generating tables of initial conditions pertaining to engineering, health, and accountability data; Simplifying the construction and simulation of command sequences; and Performing fast time-format conversions and sorting.
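
    The enumeration-labeling convenience, for example, is a small text transformation. A Python sketch of the idea (the channel name and value mapping are invented; SPACKLE itself is written in Perl/C/C++):

        import re

        MOTOR_STATE = {0: "IDLE", 1: "MOVING", 2: "STALLED"}  # invented mapping

        def label_enums(line, field, mapping):
            """Append the symbolic meaning after each raw enumeration value."""
            def repl(match):
                value = int(match.group(1))
                return f"{field}={value} ({mapping.get(value, 'UNKNOWN')})"
            return re.sub(rf"{re.escape(field)}=(\d+)", repl, line)

        print(label_enums("t=1024 MOTOR_STATE=2", "MOTOR_STATE", MOTOR_STATE))
        # -> t=1024 MOTOR_STATE=2 (STALLED)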

  5. Human Activity Recognition in AAL Environments Using Random Projections.

    PubMed

    Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin

    2016-01-01

    Automatic human activity recognition systems aim to capture the state of the user and their environment by exploiting heterogeneous sensors attached to the subject's body, and permit continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL), for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, where the classes are separated using the Jaccard distance between the probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. Experimental results using the proposed method with Human Activity Dataset (HAD) data are presented.
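
    A toy Python sketch of the projection-plus-Jaccard idea on synthetic data (dimensions, class separation and bin count are arbitrary choices, not the paper's settings):

        import numpy as np

        rng = np.random.default_rng(0)

        def jaccard_distance(p, q):
            """1 - sum(min)/sum(max) over histogram bins (a Jaccard-style distance)."""
            return 1.0 - np.minimum(p, q).sum() / np.maximum(p, q).sum()

        # Synthetic high-dimensional features for two activity classes:
        X_a = rng.normal(0.0, 1.0, size=(500, 60))
        X_b = rng.normal(0.8, 1.0, size=(500, 60))

        R = rng.normal(size=(60, 1))                 # one shared random projection
        za, zb = (X_a @ R).ravel(), (X_b @ R).ravel()

        # Histogram-based density estimates on a common binning:
        bins = np.histogram_bin_edges(np.concatenate([za, zb]), bins=30)
        pa, _ = np.histogram(za, bins=bins, density=True)
        pb, _ = np.histogram(zb, bins=bins, density=True)
        print(jaccard_distance(pa, pb))              # larger for better-separated classes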

  6. Human Activity Recognition in AAL Environments Using Random Projections

    PubMed Central

    Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin

    2016-01-01

    Automatic human activity recognition systems aim to capture the state of the user and their environment by exploiting heterogeneous sensors attached to the subject's body, and permit continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL), for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, where the classes are separated using the Jaccard distance between the probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. Experimental results using the proposed method with Human Activity Dataset (HAD) data are presented. PMID:27413392

  7. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    NASA Astrophysics Data System (ADS)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.

    2012-04-01

    Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments and is especially challenging for the modeling of peri-urban catchments. The Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types and geology, and flow networks, allow the representation of these elements in an explicit way, preserving natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topology-oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, the geometrical parameters, such as the distance between the centroid of the HRUs and the river network, and the length of the perimeter, can impact the realism of the calculated overland, sub-surface and groundwater fluxes. Therefore, it is necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing implemented in the open source GRASS-GIS software, for which several Python scripts or some algorithms already available were used, such as the Triangle software. First, some scripts were developed to improve the topology of the various elements, such as snapping of the river network to the closest contours. When data are derived from remote sensing, such as vegetation areas, their perimeters have many right angles, which were smoothed. Second, the algorithms more particularly address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours and/or a centroid outside the polygon. To identify these elements we used shape descriptors. The convexity index was considered the best descriptor to identify them, with a threshold of 0.75. Segmentation procedures were implemented and applied with criteria of homogeneous slope, convexity of the elements and maximum area of the HRUs. These tasks were implemented using a triangulation approach, applying the Triangle software, in order to dissolve the polygons according to the convexity index criteria. The automatic pre-processing was applied to two peri-urban French catchments, the Mercier and the Chaudanne, of 7.3 km2 and 4.1 km2 respectively. We show that the optimized mesh allows a substantial improvement of the overland flow pathways, because the segmentation procedure gives a more realistic representation of the drainage network. KEYWORDS: GRASS-GIS, Hydrological Response Units, Automatic processing, Peri-urban catchments, Geometrical Algorithms
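
    The convexity index used here is the ratio of a polygon's area to the area of its convex hull. A self-contained Python sketch (shoelace area plus a monotone-chain hull) applied to an L-shaped unit:

        def polygon_area(pts):
            """Shoelace formula for a simple polygon given as (x, y) tuples."""
            area = 0.0
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
                area += x1 * y2 - x2 * y1
            return abs(area) / 2.0

        def convex_hull(pts):
            """Andrew's monotone chain convex hull."""
            pts = sorted(set(pts))
            if len(pts) <= 2:
                return pts
            def half(points):
                chain = []
                for p in points:
                    while len(chain) >= 2 and (
                        (chain[-1][0] - chain[-2][0]) * (p[1] - chain[-2][1])
                        - (chain[-1][1] - chain[-2][1]) * (p[0] - chain[-2][0])
                    ) <= 0:
                        chain.pop()
                    chain.append(p)
                return chain[:-1]
            return half(pts) + half(pts[::-1])

        def convexity_index(polygon):
            return polygon_area(polygon) / polygon_area(convex_hull(polygon))

        # An L-shaped (concave) unit scores well below a rectangle's 1.0:
        L_shape = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)]
        print(convexity_index(L_shape))  # ~0.67 < 0.75 -> candidate for segmentation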

  8. Sexual scripts and sexual risk behaviors among Black heterosexual men: development of the Sexual Scripts Scale.

    PubMed

    Bowleg, Lisa; Burkholder, Gary J; Noar, Seth M; Teti, Michelle; Malebranche, David J; Tschann, Jeanne M

    2015-04-01

    Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men's HIV risk.

  9. Approaching the taxonomic affiliation of unidentified sequences in public databases--an example from the mycorrhizal fungi.

    PubMed

    Nilsson, R Henrik; Kristiansson, Erik; Ryberg, Martin; Larsson, Karl-Henrik

    2005-07-18

    During the last few years, DNA sequence analysis has become one of the primary means of taxonomic identification of species, particularly so for species that are minute or otherwise lack distinct, readily obtainable morphological characters. Although the number of sequences available for comparison in public databases such as GenBank increases exponentially, only a minuscule fraction of all organisms have been sequenced, leaving taxon sampling a momentous problem for sequence-based taxonomic identification. When querying GenBank with a set of unidentified sequences, a considerable proportion typically lack fully identified matches, forming an ever-mounting pile of sequences that the researcher will have to monitor manually in the hope that new, clarifying sequences have been submitted by other researchers. To alleviate these concerns, a project to automatically monitor select unidentified sequences in GenBank for taxonomic progress through repeated local BLAST searches was initiated. Mycorrhizal fungi--a field where species identification often is prohibitively complex--and the much used ITS locus were chosen as test bed. A Perl script package called emerencia is presented. On a regular basis, it downloads select sequences from GenBank, separates the identified sequences from those insufficiently identified, and performs BLAST searches between these two datasets, storing all results in an SQL database. On the accompanying web-service http://emerencia.math.chalmers.se, users can monitor the taxonomic progress of insufficiently identified sequences over time, either through active searches or by signing up for e-mail notification upon disclosure of better matches. Other search categories, such as listing all insufficiently identified sequences (and their present best fully identified matches) publication-wise, are also available. The ever-increasing use of DNA sequences for identification purposes largely falls back on the assumption that public sequence databases contain a thorough sampling of taxonomically well-annotated sequences. Taxonomy, held by some to be an old-fashioned trade, has accordingly never been more important. emerencia does not automate the taxonomic process, but it does allow researchers to focus their efforts elsewhere than countless manual BLAST runs and arduous sieving of BLAST hit lists. The emerencia system is available on an open source basis for local installation with any organism and gene group as targets.
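
    The monitoring cycle can be sketched as follows. File names, the similarity threshold and the notification step are hypothetical, and blastn with tabular output (-outfmt 6) is the modern BLAST+ equivalent of the searches described, not the 2005-era toolchain used by emerencia:

        import subprocess
        import time

        def blast_round():
            """One BLAST pass: unidentified queries against the identified database."""
            out = subprocess.run(
                ["blastn", "-query", "unidentified.fasta",
                 "-db", "identified_db", "-outfmt", "6"],
                capture_output=True, text=True, check=True).stdout
            hits = {}
            for line in out.splitlines():          # tabular: qseqid sseqid pident ...
                qseq, sseq, pident = line.split("\t")[:3]
                if float(pident) > hits.get(qseq, (None, 0.0))[1]:
                    hits[qseq] = (sseq, float(pident))
            return hits

        while True:
            for query, (best, pident) in blast_round().items():
                if pident >= 97.0:                 # threshold: an assumption
                    print(f"notify: {query} now matches {best} ({pident}%)")
            time.sleep(7 * 24 * 3600)              # re-check on a regular basis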

  10. Observing control and data reduction at the UKIRT

    NASA Astrophysics Data System (ADS)

    Bridger, Alan; Economou, Frossie; Wright, Gillian S.; Currie, Malcolm J.

    1998-07-01

    For the past seven years observing with the major instruments at the United Kingdom IR Telescope (UKIRT) has been semi-automated, using ASCII files to configure the instruments and then sequence a series of exposures and telescope movements to acquire the data. For one instrument automatic data reduction completes the cycle. The emergence of recent software technologies has suggested an evolution of this successful system to provide a friendlier and more powerful interface to observing at UKIRT. The Observatory Reduction and Acquisition Control (ORAC) project is now underway to construct this system. A key aim of ORAC is to allow a more complete description of the observing program, including the target sources and the recipe that will be used to provide on-line data reduction. Remote observation preparation and submission will also be supported. In parallel, the observatory control system will be upgraded to use these descriptions for more automatic observing, while retaining the 'classical' interactive observing mode. The final component of the project is an improved automatic data reduction system, allowing on-line reduction of data at the telescope while retaining the flexibility to cope with changing observing techniques and instruments. The user will also automatically be provided with the scripts used for the real-time reduction to help provide post-observing data reduction support. The overall project goal is to improve the scientific productivity of the telescope, but it should also reduce the overall ongoing support requirements, and has the eventual goal of supporting the use of queue-scheduled observing.

  11. Automatic digital image analysis for identification of mitotic cells in synchronous mammalian cell cultures.

    PubMed

    Eccles, B A; Klevecz, R R

    1986-06-01

    Mitotic frequency in a synchronous culture of mammalian cells was determined fully automatically and in real time using low-intensity phase-contrast microscopy and a Newvicon video camera connected to an EyeCom III image processor. Image samples, at a frequency of one per minute for 50 hours, were analyzed by first extracting the high-frequency picture components, then thresholding and probing for annular objects indicative of putative mitotic cells. Both the extraction of high-frequency components and the recognition of rings of varying radii and discontinuities employed novel algorithms. Spatial and temporal relationships between annuli were examined to discern the occurrences of mitoses, and such events were recorded in a computer data file. At present, the automatic analysis is suited to measurements of random cell proliferation rates or to cell cycle studies. The automatic identification of mitotic cells as described here provides a measure of the average proliferative activity of the cell population as a whole and eliminates more than eight hours of manual review per time-lapse video recording.
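
    A much-simplified Python sketch of the ring test: high-pass filtering, thresholding, then accepting blobs with a hollow centre. Parameters and the synthetic frame are illustrative; the original system used different, purpose-built algorithms:

        import numpy as np
        from scipy import ndimage

        def find_annuli(frame, sigma=3.0, thresh=0.5):
            """Return bounding slices of bright blobs whose centre is dark."""
            highpass = frame - ndimage.gaussian_filter(frame, sigma)
            binary = highpass > thresh * highpass.max()
            labels, _ = ndimage.label(binary)
            annuli = []
            for region in ndimage.find_objects(labels):
                patch = binary[region]
                cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
                if patch.any() and not patch[cy, cx]:   # bright rim, hollow centre
                    annuli.append(region)
            return annuli

        # Synthetic frame containing one bright ring of radius 10:
        y, x = np.mgrid[0:64, 0:64]
        r = np.hypot(y - 32, x - 32)
        frame = np.exp(-((r - 10) ** 2) / 4.0)
        print(len(find_annuli(frame)))  # -> 1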

  12. Tests with VHR images for the identification of olive trees and other fruit trees in the European Union

    NASA Astrophysics Data System (ADS)

    Masson, Josiane; Soille, Pierre; Mueller, Rick

    2004-10-01

    In the context of the Common Agricultural Policy (CAP), the European Commission has a strong interest in counting and individually locating fruit trees. An automatic counting algorithm developed by the JRC (OLICOUNT) was used in the past for olive trees only, on 1 m black-and-white orthophotos, but with limits in the case of young trees or irregular groves. This study investigates the improvement of fruit tree identification using VHR images on a large set of data in three test sites: one in Crete (Greece); one in the south-east of France, with a majority of olive trees and associated fruit trees; and the last in Florida, on citrus trees. OLICOUNT was compared with two other automatic tree-counting applications, one using the CRISP software on citrus trees and the other completely automatic, based on regional minima (morphological image analysis). Additional investigation was undertaken to refine the methods. This paper describes the automatic methods and presents the results derived from the tests.

  13. Research into automatic recognition of joints in human symmetrical movements

    NASA Astrophysics Data System (ADS)

    Fan, Yifang; Li, Zhiyu

    2008-03-01

    High-speed photography is a major means of collecting data on human body movement. It enables the automatic identification of joints, which is of great significance for the research, treatment and recovery of injuries, for the analysis and diagnosis of sport techniques, and for ergonomics. Based on the fact that the distance between adjacent joints of the human body remains constant when they are in planar motion, and on the laws of human joint movement (such as the range of articular anatomy and the kinematic features), a new approach is introduced to threshold the images of joints filmed by the high-speed camera, to automatically identify the joints and to automatically trace the joint points (by labeling markers at the joints). Based on the closure of the marker points, automatic identification can be achieved through thresholding. Given the sampling frequency and the laws of human segment movement, once the marker points have been initialized, they can be tracked automatically through the successive images. The test results, the data from a three-dimensional force platform, and the kinematic characteristics of segment rotation together confirm the validity of the approach after kinematic analysis.
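
    The constant-distance constraint suggests a simple tracking rule: among candidate marker positions in the next frame, keep those whose distance to the adjacent joint matches the segment length, then take the one closest to the previous position. A Python sketch with invented coordinates (it assumes the neighboring joint has already been located):

        import math

        def track_joint(prev_joint, neighbor, candidates, segment_len, tol=0.1):
            """Pick the candidate satisfying the constant-segment-length constraint."""
            def dist(p, q):
                return math.hypot(p[0] - q[0], p[1] - q[1])
            feasible = [c for c in candidates
                        if abs(dist(c, neighbor) - segment_len) <= tol * segment_len]
            return min(feasible, key=lambda c: dist(c, prev_joint)) if feasible else None

        # Knee previously at (0, 0), hip at (0, 4), thigh length 4:
        print(track_joint((0, 0), (0, 4), [(0.3, 0.1), (2.0, 2.0)], 4.0))
        # -> (0.3, 0.1); the other candidate violates the distance constraint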

  14. Face recognition for criminal identification: An implementation of principal component analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.

    2017-10-01

    In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is constrained, as most criminals nowadays are clever enough not to leave their thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. The footage from CCTV can be used to identify suspects at the scene. However, because little software has been developed to automatically detect the similarity between a photo in the footage and recorded photos of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. This system is able to detect and recognize faces automatically. This will help law enforcement to detect or recognize a suspect in a case when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
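
    A minimal eigenfaces-style sketch of the PCA matching step, on synthetic vectors rather than real face images (gallery size, image size and the number of components are arbitrary):

        import numpy as np

        def fit_pca(gallery, k=8):                 # gallery: (n_images, n_pixels)
            """Mean, top-k principal components, and gallery projection weights."""
            mean = gallery.mean(axis=0)
            centered = gallery - mean
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            components = vt[:k]
            return mean, components, centered @ components.T

        def identify(probe, mean, components, gallery_weights):
            """Nearest neighbour in the low-dimensional eigenspace."""
            w = (probe - mean) @ components.T
            return int(np.argmin(np.linalg.norm(gallery_weights - w, axis=1)))

        rng = np.random.default_rng(1)
        gallery = rng.random((10, 32 * 32))        # ten synthetic "face" images
        probe = gallery[3] + 0.05 * rng.random(32 * 32)   # noisy copy of face 3
        mean, comps, weights = fit_pca(gallery)
        print(identify(probe, mean, comps, weights))      # -> 3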

  15. Sexual Scripts and Sexual Risk Behaviors among Black Heterosexual Men: Development of the Sexual Scripts Scale

    PubMed Central

    Bowleg, Lisa; Burkholder, Gary J.; Noar, Seth M.; Teti, Michelle; Malebranche, David J.; Tschann, Jeanne M.

    2014-01-01

    Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men’s HIV risk. PMID:24311105

  16. RFID applications in transportation operation and intelligent transportation systems (ITS).

    DOT National Transportation Integrated Search

    2009-06-01

    Radio frequency identification (RFID) transmits the identity of an object or a person wirelessly. It is grouped under : the broad category of automatic identification technologies with corresponding standards and established protocols. : RFID is suit...

  17. MAC, A System for Automatically IPR Identification, Collection and Distribution

    NASA Astrophysics Data System (ADS)

    Serrão, Carlos

    Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The ease of creating multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. One of the industries most affected by this is the music industry, which has suffered huge losses during the last few years as a result. Moreover, this is also affecting the way that music rights collecting and distributing societies operate to assure correct music IPR identification, collection and distribution. In this article a system for automating this IPR identification, collection and distribution is presented and described. The system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. This paper presents the details of the system and a use-case scenario in which it is being used.

  18. Utilizing automatic identification tracking systems to compile operational field and structure data : [research summary].

    DOT National Transportation Integrated Search

    2014-05-01

    The federally mandated materials clearance process requires state transportation agencies to subject all construction field samples to quality control/assurance testing in order to pass standardized state inspections....

  19. Adolescents' sexual scripts: schematic representations of consensual and nonconsensual heterosexual interactions.

    PubMed

    Krahé, Barbara; Bieneck, Steffen; Scheinberger-Olwig, Renate

    2007-11-01

    The characteristic features of adolescents' sexual scripts were explored in 400 tenth and eleventh graders from Berlin, Germany. Participants rated the prototypical elements of three scripts for heterosexual interactions: (1) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to adolescents in general (general script); (2) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to themselves personally (individual script); and (3) the script for a nonconsensual sexual intercourse (rape script). Compared with the general script for the age group as a whole, the individual script contained fewer risk elements related to sexual aggression and portrayed more positive consequences of the sexual interaction. Few gender differences were found, and coital experience did not affect sexual scripts. The rape script was found to be close to the "real rape stereotype." The findings are discussed with respect to the role of sexual scripts as guidelines for behavior, particularly in terms of their significance for the prediction of sexual aggression.

  20. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there is an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified, i.e. stripped of protected health information (PHI), before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written about algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end-users, it should be effective, accurate and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that the text has a remarkable contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system was implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.
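
    The region-variance test can be sketched directly: slide a window over the image and flag high-variance blocks, since burned-in text contrasts strongly with its background. A toy Python illustration (window size and threshold are invented):

        import numpy as np

        def text_blocks(image, win=16, var_thresh=0.05):
            """Top-left corners of windows whose intensity variance is high."""
            blocks = []
            for y in range(0, image.shape[0] - win + 1, win):
                for x in range(0, image.shape[1] - win + 1, win):
                    if image[y:y + win, x:x + win].var() > var_thresh:
                        blocks.append((y, x))
            return blocks

        img = np.zeros((64, 64))
        img[4:12, 4:28] = 1.0         # stand-in for a burned-in text label
        print(text_blocks(img))        # -> blocks overlapping the "text" region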

  1. SLIM: an alternative Web interface for MEDLINE/PubMed searches – a preliminary study

    PubMed Central

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-01-01

    Background With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Results Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. Conclusion SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine. PMID:16321145
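
    The kind of call that SLIM's server-side scripts wrap can be reproduced with the public esearch endpoint of E-Utilities. A minimal Python sketch (the query string is just an example; SLIM itself is written in PHP and JavaScript):

        import json
        from urllib.parse import urlencode
        from urllib.request import urlopen

        def esearch(term, retmax=20):
            """Return PubMed IDs for a query via the E-Utilities esearch endpoint."""
            params = urlencode({"db": "pubmed", "term": term,
                                "retmax": retmax, "retmode": "json"})
            url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
                   + params)
            with urlopen(url) as response:
                return json.load(response)["esearchresult"]["idlist"]

        print(esearch("asthma AND randomized controlled trial[pt]"))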

  2. Automated multi-slice extracellular and patch-clamp experiments using the WinLTP data acquisition system with automated perfusion control

    PubMed Central

    Anderson, William W.; Fitzjohn, Stephen M.; Collingridge, Graham L.

    2012-01-01

    WinLTP is a data acquisition program for studying long-term potentiation (LTP) and other aspects of synaptic function. Earlier versions of WinLTP (J. Neurosci. Methods, 162:346–356, 2007) provided automated electrical stimulation and data acquisition capable of running nearly an entire synaptic plasticity experiment, with the primary exception that perfusion solutions had to be changed manually. This automated stimulation and acquisition was done by using ‘Sweep’, ‘Loop’ and ‘Delay’ events to build scripts using the ‘Protocol Builder’. However, this did not allow automatic changing of many solutions while running multiple slice experiments, or solution changing when this had to be performed rapidly and with accurate timing during patch-clamp experiments. We report here the addition of automated perfusion control to WinLTP. First, perfusion change between sweeps is enabled by adding the ‘Perfuse’ event to Protocol Builder scripting and is used in slice experiments. Second, fast perfusion changes during as well as between sweeps is enabled by using the Perfuse event in the protocol scripts to control changes between sweeps, and also by changing digital or analog output during a sweep and is used for single cell single-line perfusion patch-clamp experiments. The addition of stepper control of tube placement allows dual- or triple-line perfusion patch-clamp experiments for up to 48 solutions. The ability to automate perfusion changes and fully integrate them with the already automated stimulation and data acquisition goes a long way toward complete automation of multi-slice extracellularly recorded and single cell patch-clamp experiments. PMID:22524994

  3. Dynamo: a flexible, user-friendly development tool for subtomogram averaging of cryo-EM data in high-performance computing environments.

    PubMed

    Castaño-Díez, Daniel; Kudryashev, Mikhail; Arheit, Marcel; Stahlberg, Henning

    2012-05-01

    Dynamo is a new software package for subtomogram averaging of cryo-electron tomography (cryo-ET) data with three main goals: first, Dynamo allows user-transparent adaptation to a variety of high-performance computing platforms such as GPUs or CPU clusters. Second, Dynamo implements user-friendliness through GUI interfaces and scripting resources. Third, Dynamo offers user-flexibility through a plugin API. Besides the alignment and averaging procedures, Dynamo includes native tools for visualization and analysis of results and data, as well as support for third-party visualization software, such as UCSF Chimera or EMAN2. As a demonstration of these functionalities, we studied bacterial flagellar motors and showed automatically detected classes with absent and present C-rings. Subtomogram averaging is a common task in current cryo-ET pipelines, which requires extensive computational resources and follows a well-established workflow. However, due to the diversity of the data, many existing packages offer slight variations of the same algorithm to improve results. One of the main purposes behind Dynamo is to provide explicit tools to allow the user the insertion of custom designed procedures - or plugins - to replace or complement the native algorithms in the different steps of the processing pipeline for subtomogram averaging without the burden of handling parallelization. Custom scripts that implement new approaches devised by the user are integrated into the Dynamo data management system, so that they can be controlled by the GUI or the scripting capacities. Dynamo executables do not require licenses for third-party commercial software. Sources, executables and documentation are freely distributed on http://www.dynamo-em.org. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit that intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.

  5. SLIM: an alternative Web interface for MEDLINE/PubMed searches - a preliminary study.

    PubMed

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-12-01

    With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine.

  6. COMP Superscalar, an interoperable programming framework

    NASA Astrophysics Data System (ADS)

    Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul

    2015-12-01

    COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) annotating them with annotations (in Java) or standard decorators (in Python). A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning these tasks onto the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
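
    The task-annotation idea can be sketched with the Python binding (PyCOMPSs). The decorator and synchronization calls below follow the names used in the PyCOMPSs documentation; treat them as assumptions if your installed version differs, and note that such a script is meant to be launched through the COMPSs runtime (runcompss).

      # Sketch of the COMPSs Python programming model: functions marked as
      # tasks run asynchronously; the runtime resolves data dependencies.
      from pycompss.api.task import task
      from pycompss.api.api import compss_wait_on

      @task(returns=int)
      def square(x):
          return x * x

      @task(returns=int)
      def add(a, b):
          return a + b

      # These calls return future objects; the runtime schedules the tasks
      # and enforces the square -> add dependency chain automatically.
      partials = [square(i) for i in range(4)]
      total = partials[0]
      for p in partials[1:]:
          total = add(total, p)
      total = compss_wait_on(total)  # synchronize and fetch the value
      print(total)                   # 0 + 1 + 4 + 9 = 14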

  7. SOCIB Glider toolbox: from sensor to data repository

    NASA Astrophysics Data System (ADS)

    Pau Beltran, Joan; Heslop, Emma; Ruiz, Simón; Troupin, Charles; Tintoré, Joaquín

    2015-04-01

    Nowadays in oceanography, gliders constitute a mature, cost-effective technology for acquiring measurements independently of the sea state (unlike ships), providing subsurface data over sustained periods, including during extreme weather events. The SOCIB glider toolbox is a set of MATLAB/Octave scripts and functions developed to manage the data collected by a glider fleet. They cover the main stages of the data management process, in both real-time and delayed-time modes: metadata aggregation, downloading, processing, and automatic generation of data products and figures. The toolbox is distributed under the GNU GPL (http://www.gnu.org/copyleft/gpl.html) and is available at http://www.socib.es/users/glider/glider_toolbox.

  8. Automatic building identification under bomb damage conditions

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II

    2009-05-01

    Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly exhaust the available bandwidth (BW), creating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully distinguishes targets from non-targets in a virtual test bed environment.
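
    The clustering step can be illustrated with a heavily simplified ART-1-style loop over binary feature vectors: an input joins the first cluster whose prototype it matches above a vigilance threshold, otherwise it founds a new cluster. The vigilance value and the toy patterns are invented for illustration, and the sketch omits ART's choice function and search order.

      # Simplified ART-1-style clustering of binary template vectors.
      import numpy as np

      def art1_cluster(patterns, vigilance=0.6):
          prototypes = []          # one binary prototype per cluster
          labels = []
          for p in patterns:
              for k, proto in enumerate(prototypes):
                  overlap = np.logical_and(p, proto)
                  # resonance test: enough of the input survives the match
                  if overlap.sum() / p.sum() >= vigilance:
                      prototypes[k] = overlap   # learn: shrink prototype
                      labels.append(k)
                      break
              else:
                  prototypes.append(p.copy())   # no resonance: new cluster
                  labels.append(len(prototypes) - 1)
          return labels

      data = np.array([[1, 1, 0, 0], [1, 1, 1, 0],
                       [0, 0, 1, 1], [0, 1, 1, 1]], dtype=bool)
      print(art1_cluster(data))  # [0, 0, 1, 1]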

  9. Projective identification and consciousness alteration: a bridge between psychoanalysis and neuroscience?

    PubMed

    Cimino, Cristiana; Correale, Antonello

    2005-02-01

    The authors claim that projective identification in the process of analysis should be considered in a circumscribed manner and seen as a very specific type of communication between the patient and the analyst, characterised by a modality that is simultaneously active, unconscious and discrete. In other words, the patient actively, though unconsciously and discretely--that is, in specific moments of the analysis--brings about particular changes in the analyst's state. From the analyst's side, the effect of this type of communication is a sudden change in his general state--a sense of passivity and coercion and a change in the state of consciousness. This altered consciousness can range from an almost automatic repetition of a relational script, to a moderate or serious contraction of the field of attention, to full-fledged changes in the analyst's sense of self. The authors propose the theory that this type of communication is, in fact, the expression of traumatic contents of experiences emerging from non-declarative memory. These contents belong to a pre-symbolic and pre-representative area of the mind. They are made of inert fragments of psychic material that are felt rather than thought, and can thus be viewed as a kind of writing to be completed. These pieces of psychic material are the expression of traumatic experiences that in turn exercise a traumatic effect on the analyst, inducing an altered state of consciousness in him as well. Such material should be understood as belonging to an unrepressed unconscious. Restitution of these fragments to the patient in representable forms must take place gradually and without trying to accelerate the timing, in order to avoid the possibility that the restitution itself constitutes an acting on the part of the analyst, which would thus be a traumatic response to the traumatic action of the analytic material.

  10. Investigation of an automatic trim algorithm for restructurable aircraft control

    NASA Technical Reports Server (NTRS)

    Weiss, J.; Eterno, J.; Grunberg, D.; Looze, D.; Ostroff, A.

    1986-01-01

    This paper develops and solves an automatic trim problem for restructurable aircraft control. The trim solution is applied as a feed-forward control to reject measurable disturbances following control element failures. Disturbance rejection and command-following performance is then recovered through the automatic feedback control redesign procedure described by Looze et al. (1985). For this project the existence of a failure detection mechanism is assumed, and methods to cope with potential detection and identification inaccuracies are addressed.

  11. Automatic Command Sequence Generation

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladded, Roy; Khanampompan, Teerapat

    2007-01-01

    Automatic Sequence Generator (Autogen) Version 3.0 software automatically generates command sequences for the Mars Reconnaissance Orbiter (MRO) and several other JPL spacecraft operated by the multi-mission support team. Autogen uses standard JPL sequencing tools like APGEN, ASP, SEQGEN, and the DOM database to automate the generation of uplink command products, Spacecraft Command Message Format (SCMF) files, and the corresponding ground command products, DSN Keywords Files (DKF). Autogen supports all the major multi-mission phases, including cruise, aerobraking, mapping/science, and relay. Autogen is a Perl script, which functions within the mission operations UNIX environment. It consists of two parts: a set of model files and the autogen Perl script. Autogen encodes the behaviors of the system into a model and encodes algorithms for context-sensitive customizations of the modeled behaviors. The model includes knowledge of different mission phases and how the resultant command products must differ for these phases. The executable software portion of Autogen automates the setup and use of APGEN for constructing a spacecraft activity sequence file (SASF). The setup includes file retrieval through the DOM (Distributed Object Manager), an object database used to store project files. This step retrieves all the needed input files for generating the command products. Depending on the mission phase, Autogen also uses the ASP (Automated Sequence Processor) and SEQGEN to generate the command product sent to the spacecraft. Autogen also provides the means for customizing sequences through the use of configuration files. By automating the majority of the sequence generation process, Autogen eliminates many sequence generation errors commonly introduced by manually constructing spacecraft command sequences. Through the layering of commands into the sequence by a series of scheduling algorithms, users are able to rapidly and reliably construct the desired uplink command products. With the aid of Autogen, sequences may be produced in a matter of hours instead of weeks, with a significant reduction in the number of people on the sequence team. As a result, the uplink product generation process is significantly streamlined and mission risk is significantly reduced. Autogen is used for operations of MRO, Mars Global Surveyor (MGS), Mars Exploration Rover (MER), and Mars Odyssey, and will be used for operations of Phoenix. Autogen Version 3.0 is the operational version of Autogen including the MRO adaptation for the cruise mission phase, and was also used for development of the aerobraking and mapping mission phases for MRO.

  12. AGUIA: autonomous graphical user interface assembly for clinical trials semantic data services

    PubMed Central

    2010-01-01

    Background AGUIA is a front-end web application originally developed to manage clinical, demographic and biomolecular patient data collected during clinical trials at MD Anderson Cancer Center. The diversity of methods involved in patient screening and sample processing generates a variety of data types that require a resource-oriented architecture to capture the associations between the heterogeneous data elements. AGUIA uses a semantic web formalism, resource description framework (RDF), and a bottom-up design of knowledge bases that employ the S3DB tool as the starting point for the client's interface assembly. Methods The data web service, S3DB, meets the necessary requirements of generating the RDF and of explicitly distinguishing the description of the domain from its instantiation, while allowing for continuous editing of both. Furthermore, it uses an HTTP-REST protocol, has a SPARQL endpoint, and has open source availability in the public domain, which facilitates the development and dissemination of this application. However, S3DB alone does not address the issue of representing content in a form that makes sense for domain experts. Results We identified an autonomous set of descriptors, the GBox, that provides user and domain specifications for the graphical user interface. This was achieved by identifying a formalism that makes use of an RDF schema to enable the automatic assembly of graphical user interfaces in a meaningful manner while using only resources native to the client web browser (JavaScript interpreter, document object model). We defined a generalized RDF model such that changes in the graphic descriptors are automatically and immediately (locally) reflected into the configuration of the client's interface application. Conclusions The design patterns identified for the GBox benefit from and reflect the specific requirements of interacting with data generated by clinical trials, and they contain clues for a general purpose solution to the challenge of having interfaces automatically assembled for multiple and volatile views of a domain. By coding AGUIA in JavaScript, for which all browsers include a native interpreter, a solution was found that assembles interfaces that are meaningful to the particular user, and which are also ubiquitous and lightweight, allowing the computational load to be carried by the client's machine. PMID:20977768
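
    The core idea, widget descriptions read from RDF rather than hard-coded in the client, can be sketched outside the browser with rdflib. The vocabulary URIs and triples below are hypothetical stand-ins, not the actual GBox schema, and Python replaces AGUIA's JavaScript purely for illustration.

      # Assemble widget specs from RDF descriptors (hypothetical vocabulary).
      from rdflib import Graph, Namespace

      UI = Namespace("http://example.org/ui#")

      g = Graph()
      g.parse(data="""
      @prefix ui: <http://example.org/ui#> .
      ui:patientAge ui:widget "slider" ; ui:label "Age" ; ui:max 120 .
      ui:consent    ui:widget "checkbox" ; ui:label "Consent given" .
      """, format="turtle")

      # One widget description per subject; a client would render these.
      widgets = {
          str(s): {"widget": str(g.value(s, UI.widget)),
                   "label": str(g.value(s, UI.label)),
                   "max": g.value(s, UI.max)}       # None when absent
          for s in set(g.subjects(UI.widget, None))
      }
      print(widgets)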

  13. ActionMap: A web-based software that automates loci assignments to framework maps.

    PubMed

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-07-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).

  14. ActionMap: a web-based software that automates loci assignments to framework maps

    PubMed Central

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-01-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/). PMID:12824426

  15. Preliminary Tests of a New Low-Cost Photogrammetric System

    NASA Astrophysics Data System (ADS)

    Santise, M.; Thoeni, K.; Roncella, R.; Sloan, S. W.; Giacomini, A.

    2017-11-01

    This paper presents preliminary tests of a new low-cost photogrammetric system for 4D modelling of large-scale areas for civil engineering applications. The system consists of five stand-alone units. Each unit is composed of a Raspberry Pi 2 Model B (RPi2B) single-board computer connected to a PiCamera Module V2 (8 MP) and is powered by a 10 W solar panel. The acquisition of the images is performed automatically using Python scripts and the OpenCV library. Images are recorded at different times during the day and automatically uploaded onto an FTP server from where they can be accessed for processing. Preliminary tests and outcomes of the system are discussed in detail. The focus is on the performance assessment of the low-cost sensor and the quality evaluation of the digital surface models generated by the low-cost photogrammetric systems in the field under real test conditions. Two different test cases were set up in order to calibrate the low-cost photogrammetric system and to assess its performance. First comparisons with a TLS model show good agreement.
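
    One acquisition cycle of the kind described, grab a frame and push it to the FTP server, can be sketched as below. The camera index, host name and credentials are placeholders; the authors' units may drive the PiCamera module directly rather than through OpenCV's generic capture interface.

      # Capture one image with OpenCV and upload it for later processing.
      import time
      from ftplib import FTP

      import cv2

      def capture_and_upload(host, user, password, remote_dir="incoming"):
          cam = cv2.VideoCapture(0)            # first attached camera
          ok, frame = cam.read()
          cam.release()
          if not ok:
              raise RuntimeError("camera read failed")
          name = time.strftime("img_%Y%m%d_%H%M%S.jpg")
          cv2.imwrite(name, frame)             # JPEG on local storage
          with FTP(host) as ftp:               # push to the processing server
              ftp.login(user, password)
              ftp.cwd(remote_dir)
              with open(name, "rb") as f:
                  ftp.storbinary(f"STOR {name}", f)

      capture_and_upload("ftp.example.org", "unit01", "secret")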

  16. Power Plant Model Validation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables the automation of the process of power plant model validation using disturbance recordings. The tool uses PMU and SCADA measurements as input information. The tool automatically adjusts all required EPCL scripts and interacts with GE PSLF in batch mode. The main features include: interaction with GE PSLF; use of the GE PSLF Play-In function for generator model validation; a database of projects (model validation studies); a database of historic events; a database of power plants; advanced visualization capabilities; and automatic report generation.

  17. An Automatic Medium to High Fidelity Low-Thrust Global Trajectory Toolchain; EMTG-GMAT

    NASA Technical Reports Server (NTRS)

    Beeson, Ryne T.; Englander, Jacob A.; Hughes, Steven P.; Schadegg, Maximillian

    2015-01-01

    Solving the global optimization, low-thrust, multiple-flyby interplanetary trajectory problem with high-fidelity dynamical models requires an unreasonable amount of computational resources. A better approach, and one that is demonstrated in this paper, is a multi-step process whereby the aforementioned problem is solved at a lower fidelity and this solution is used as an initial guess for a higher-fidelity solver. The framework presented in this work uses two tools developed by NASA Goddard Space Flight Center: the Evolutionary Mission Trajectory Generator (EMTG) and the General Mission Analysis Tool (GMAT). EMTG is a medium to medium-high fidelity low-thrust interplanetary global optimization solver, which now has the capability to automatically generate GMAT script files for seeding a high-fidelity solution using GMAT's local optimization capabilities. A discussion of the dynamical models as well as thruster and power modeling for both EMTG and GMAT is given in this paper. Current capabilities are demonstrated with examples that highlight the toolchain's ability to efficiently solve the difficult low-thrust global optimization problem with little human intervention.

  18. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  19. OGS improvements in the year 2011 in running the Northeastern Italy Seismic Network

    NASA Astrophysics Data System (ADS)

    Bragato, P. L.; Pesaresi, D.; Saraò, A.; Di Bartolomeo, P.; Durı, G.

    2012-04-01

    The Centro di Ricerche Sismologiche (CRS, Seismological Research Center) of the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS, Italian National Institute for Oceanography and Experimental Geophysics) in Udine (Italy) started to operate the Northeastern Italy Seismic Network after the strong magnitude M=6.4 earthquake that struck the Italian Friuli-Venezia Giulia region in 1976. The network currently consists of 15 very sensitive broadband and 21 simpler short-period seismic stations, all telemetered to and acquired in real time at the OGS-CRS data center in Udine. Real-time data exchange agreements with other Italian, Slovenian, Austrian and Swiss seismological institutes lead to a total of about 100 seismic stations acquired in real time, which makes OGS the reference institute for seismic monitoring of Northeastern Italy. Since 2002 OGS-CRS has been using the Antelope software suite on several workstations plus a SUN Cluster as the main tool for collecting, analyzing, archiving and exchanging seismic data, initially in the framework of the EU Interreg IIIA project "Trans-national seismological networks in the South-Eastern Alps". SeisComP is also used as a real-time data exchange server tool. In order to improve the seismological monitoring of Northeastern Italy, at OGS-CRS we tuned existing programs and created ad hoc ones: a customized web server named PickServer to manually relocate earthquakes, a script for automatic moment tensor determination, scripts for web publishing of earthquake parametric data, waveforms, state-of-health parameters and shaking maps, noise characterization by means of automatic spectral analysis, and last but not least scripts for email/SMS/fax alerting. The OGS-CRS Real Time Seismological website (RTS, http://rts.crs.inogs.it/), operative for several years, was initially developed in the framework of the Italian DPC-INGV S3 Project: the RTS website shows classic earthquake location parametric data plus ShakeMap and moment tensor information. At OGS-CRS we have also spent considerable effort improving the long-period performance of broadband seismic stations, either by carrying out full re-installations or by applying thermal insulation to the seismometers: examples of PSD plots of the PRED broadband seismic station installed in the cave tunnel of Cave del Predil, using a Quanterra Q330HR high-resolution digitizer and a Streckeisen STS-2 broadband seismometer, will be illustrated. Efforts in strengthening the reliability of data links, exploring the use of redundant satellite/radio/GPRS links, will also be shown.

  20. Automatic Identification and Organization of Index Terms for Interactive Browsing.

    ERIC Educational Resources Information Center

    Wacholder, Nina; Evans, David K.; Klavans, Judith L.

    The potential of automatically generated indexes for information access has been recognized for several decades, but the quantity of text and the ambiguity of natural language processing have made progress at this task more difficult than was originally foreseen. Recently, a body of work on development of interactive systems to support phrase…

  1. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting.

    PubMed

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework, GEARS (GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, so intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.

  2. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision.

    PubMed

    Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf

    2018-06-18

    In forensic odontology, the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate identification of unknown people by comparison between antemortem and postmortem PRs using computer vision. The study includes 43,467 PRs from 24,545 patients (46% female/54% male). All PRs were filtered and evaluated with Matlab R2014b, including the image processing and computer vision system toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) out of the whole database. From 40 randomly selected persons, 34 (85%) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person, and a maximum of 12 corresponding matching points for other, non-identical persons in the database; 12 matching points are therefore the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by virtue of its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past, and the system seems to be robust for large amounts of data. Key points: computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification; the method is able to find identical matching partners among huge datasets (big data) in a short computing time; the identification method is suitable even if dental characteristics were removed or added. Citation: Heinrich A, Güttler F, Wendt S et al. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744. © Georg Thieme Verlag KG Stuttgart · New York.
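
    The matching stage can be approximated with freely available tools: detect local features in both radiographs, keep only unambiguous correspondences, and accept identity when the count clears the paper's threshold. The study used SURF in MATLAB; ORB stands in here because SURF requires a non-free OpenCV build, so the actual match counts (though not the workflow) will differ.

      # Count feature correspondences between two panoramic radiographs.
      import cv2

      def count_matches(path_a, path_b, ratio=0.75):
          a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
          b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
          orb = cv2.ORB_create(nfeatures=2000)
          _, des_a = orb.detectAndCompute(a, None)
          _, des_b = orb.detectAndCompute(b, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
          pairs = matcher.knnMatch(des_a, des_b, k=2)
          # Lowe ratio test: keep matches clearly better than the runner-up.
          good = [p[0] for p in pairs
                  if len(p) == 2 and p[0].distance < ratio * p[1].distance]
          return len(good)

      # The paper found at most 12 spurious matches between different people.
      print(count_matches("unknown_pr.png", "database_pr.png") > 12)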

  3. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
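
    The error-correcting output codes component can be sketched with scikit-learn, whose OutputCodeClassifier assigns each class a binary code word and trains one binary classifier per bit. The toy documents and labels below merely stand in for the i2b2 discharge summaries and the five smoking-status classes.

      # ECOC over TF-IDF text features (toy stand-in for the i2b2 task).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      docs = ["patient quit smoking five years ago",
              "denies any tobacco use",
              "smokes one pack per day",
              "former smoker, quit in 2001",
              "current every day smoker",
              "no history of tobacco use"]
      labels = ["past", "non", "current", "past", "current", "non"]

      clf = make_pipeline(
          TfidfVectorizer(ngram_range=(1, 2)),
          # Redundant code bits let single binary-classifier errors be
          # corrected when decoding the predicted code word.
          OutputCodeClassifier(LinearSVC(), code_size=2.0, random_state=0),
      )
      clf.fit(docs, labels)
      print(clf.predict(["stopped smoking last year"]))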

  4. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics

    PubMed Central

    Poeschl, Yvonne; Plötner, Romina

    2017-01-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626

  5. Natural language processing of spoken diet records (SDRs).

    PubMed

    Lacson, Ronilda; Long, William

    2006-01-01

    Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records provides potential advantages for accessibility, ease of use and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps - identification of food items, identification of food quantifiers, classification of food quantifiers and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.
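
    A toy version of the first steps, spotting food items and their quantifiers in a transcript, can be written with a small lexicon and one pattern. The word lists and the regular expression are invented stand-ins for the authors' actual method, which maps its results onto a Diet History Questionnaire dietary database.

      # Rule-based toy pass over a spoken-diet-record transcript.
      import re

      FOODS = {"coffee", "toast", "eggs", "rice", "chicken"}
      UNITS = {"cup": "volume", "cups": "volume",
               "slice": "count", "slices": "count", "bowl": "volume"}
      TIMES = {"breakfast", "lunch", "dinner", "snack"}

      def parse_sdr(utterance):
          tokens = utterance.lower().split()
          meal = next((t for t in tokens if t in TIMES), None)
          items = []
          # pattern: <number> <unit> (of)? <food>, e.g. "two slices of toast"
          for m in re.finditer(r"(\d+|one|two|three)\s+(\w+)\s+(?:of\s+)?(\w+)",
                               utterance.lower()):
              qty, unit, food = m.groups()
              if food in FOODS:
                  items.append({"food": food, "qty": qty, "meal": meal,
                                "unit_class": UNITS.get(unit, "unknown")})
          return items

      print(parse_sdr("For breakfast I had two slices of toast and one cup of coffee"))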

  6. Simple, Scalable, Script-based, Science Processor for Measurements - Data Mining Edition (S4PM-DME)

    NASA Astrophysics Data System (ADS)

    Pham, L. B.; Eng, E. K.; Lynnes, C. S.; Berrick, S. W.; Vollmer, B. E.

    2005-12-01

    The S4PM-DME is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web-based data mining environment. The S4PM-DME replaces the Near-line Archive Data Mining (NADM) system with a better web environment and a richer set of production rules. S4PM-DME enables registered users to submit and execute custom data mining algorithms. The S4PM-DME system uses the GES DAAC-developed Simple Scalable Script-based Science Processor for Measurements (S4PM) to automate tasks and perform the actual data processing. A web interface allows the user to access the S4PM-DME system. The user first develops personalized data mining algorithms on his/her home platform and then uploads them to the S4PM-DME system. Algorithms in the C and FORTRAN languages are currently supported. The user-developed algorithm is automatically audited for any potential security problems before it is installed within the S4PM-DME system and made available to the user. Once the algorithm has been installed the user can promote the algorithm to the "operational" environment. From here the user can search and order the data available in the GES DAAC archive for his/her science algorithm. The user can also set up a processing subscription. The subscription will automatically process new data as it becomes available in the GES DAAC archive. The generated mined data products are then made available for FTP pickup. The benefits of using S4PM-DME are: 1) to decrease the time it typically takes a user to transfer GES DAAC data to his/her system, thus off-loading heavy network traffic; 2) to free up the load on the user's system; and 3) to utilize the rich and abundant ocean and atmosphere data from the MODIS and AIRS instruments available from the GES DAAC.

  7. An algebra for spatio-temporal information generation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer; Scheider, Simon; Gräler, Benedikt; Stasch, Christoph; Hinz, Matthias

    2016-04-01

    When we accept the premises of James Frew's laws of metadata (Frew's first law: scientists don't write metadata; Frew's second law: any scientist can be forced to write bad metadata), but also assume that scientists try to maximise the impact of their research findings, can we develop our information infrastructures such that useful metadata is generated automatically? Currently, sharing of data and software to completely reproduce research findings is becoming standard, e.g. in the Journal of Statistical Software [1]. The reproduction scripts (e.g. in R), however, convey correct syntax but only limited semantics. We propose [2] a new, platform-neutral way to algebraically describe how data is generated, e.g. by observation, and how data is derived, e.g. by processing observations. It starts with forming functions composed of four reference system types (space, time, quality, entity), which express for instance continuity of objects over time, and continuity of fields over space and time. Data, which is discrete by definition, is generated by evaluating such functions at discrete space and time instances, or by evaluating a convolution (aggregation) over them. Derived data is obtained by inputting data to data derivation functions, which for instance interpolate, estimate, aggregate, or convert fields into objects and vice versa. As opposed to the traditional when, where and what semantics of data sets, our algebra focuses on describing how a data set was generated. We argue that it can be used to discover data sets that were derived from a particular source x, or derived by a particular procedure y. It may also form the basis for inferring meaningfulness of derivation procedures [3]. Current research focuses on automatically generating provenance documentation from R scripts. [1] http://www.jstatsoft.org/ (open access) [2] http://www.meaningfulspatialstatistics.org has the full paper (in review) [3] Stasch, C., S. Scheider, E. Pebesma, W. Kuhn, 2014. Meaningful Spatial Prediction and Aggregation. Environmental Modelling & Software, 51, 149-165 (open access)
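
    The generation/derivation distinction can be made concrete in a few lines of Python under the paper's reading: a field is a function over space and time, observation is evaluation at discrete instances, and derivation (here, an aggregate) is a function applied to those observations. The field itself is a synthetic stand-in.

      # Fields as functions; data as discrete evaluations; derivation as
      # aggregation over the generated observations.
      import math

      def temperature_field(x, y, t):
          """A continuous field over space and time (synthetic values)."""
          return 15 + 5 * math.sin(x + 0.1 * t) + 2 * math.cos(y)

      # Generation: evaluate at discrete space-time instances.
      points = [(x, y, t) for x in range(3) for y in range(3) for t in (0, 12)]
      obs = [((x, y, t), temperature_field(x, y, t)) for x, y, t in points]

      # Derivation: aggregate the observations taken at t = 0.
      mean_t0 = sum(v for (_, _, t), v in obs if t == 0) / 9
      print(f"mean field value at t=0: {mean_t0:.2f}")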

  8. Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management

    NASA Astrophysics Data System (ADS)

    Hendrix, Val; Benjamin, Doug; Yao, Yushu

    2012-12-01

    Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. This current work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate deployment scripts used by puppet to configure each machine to act as a designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment in a cloud environment. Our future cloud efforts will further build on this work.
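
    The cluster-manager step, turning input parameters into role assignments that puppet can apply, might look like the sketch below. The module and class names, parameter names and manifest layout are illustrative assumptions; in the actual CDRP the reusable service modules are written by the domain experts.

      # Emit a puppet site manifest mapping each machine to its role.
      cluster = {
          "head01":   {"role": "head",   "ip": "10.0.0.10"},
          "worker01": {"role": "worker", "ip": "10.0.0.11"},
          "worker02": {"role": "worker", "ip": "10.0.0.12"},
      }

      def generate_site_pp(cluster):
          blocks = []
          for host, cfg in sorted(cluster.items()):
              blocks.append(
                  f"node '{host}' {{\n"
                  f"  class {{ 'dac::role::{cfg['role']}':\n"
                  f"    listen_ip => '{cfg['ip']}',\n"
                  f"  }}\n"
                  f"}}\n"
              )
          return "\n".join(blocks)

      # puppet would then configure each machine according to its role.
      with open("site.pp", "w") as f:
          f.write(generate_site_pp(cluster))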

  9. Cross-Media Evaluation of Color T.V., Black and White T.V. and Color Photography in the Teaching of Endoscopy. Appendix A, Sample Schedule; Appendix B, Testing; Appendix C, Scripts; Appendix D, Analyses of Covariance.

    ERIC Educational Resources Information Center

    Balin, Howard; And Others

    Based on the premise that in situations where the subject requires visual identification, where students cannot see the subject physically from the standpoint of the instructor, and where there is a high dramatic impact, color and television might be significant factors in learning, a comparative evaluation was made of: color television, black and…

  10. Automatic identification of artifacts in electrodermal activity data.

    PubMed

    Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind

    2015-01-01

    Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.
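
    The approach can be sketched as window-level supervised classification: slice the recording into short windows, compute a few features per window, and train a classifier to flag artifact windows. The window length, the three features and the synthetic traces below are stand-ins for the authors' actual feature set and expert-labeled data.

      # Window features + SVM for flagging EDA artifact segments.
      import numpy as np
      from sklearn.svm import SVC

      def window_features(signal, fs=8, win_s=5):
          """Per-window mean, std and max absolute first difference."""
          step = fs * win_s
          feats = []
          for i in range(0, len(signal) - step + 1, step):
              w = signal[i:i + step]
              feats.append([w.mean(), w.std(), np.abs(np.diff(w)).max()])
          return np.array(feats)

      rng = np.random.default_rng(0)
      clean = np.cumsum(rng.normal(0, 0.01, 400))     # slow EDA-like drift
      noisy = clean + (rng.random(400) < 0.05) * 2.0  # sparse spike artifacts
      X = np.vstack([window_features(clean), window_features(noisy)])
      y = np.array([0] * 10 + [1] * 10)               # 0 = clean, 1 = artifact
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict(window_features(noisy)[:3]))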

  11. Management of natural resources through automatic cartographic inventory

    NASA Technical Reports Server (NTRS)

    Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Significant results of the ARNICA program from August 1972 - January 1973 have been: (1) establishment of image to object correspondence codes for all types of soil use and forestry in northern Spain; (2) establishment of a transfer procedure between qualitative (remote identification and remote interpretation) and quantitative (numerization, storage, automatic statistical cartography) use of images; (3) organization of microdensitometric data processing and automatic cartography software; and (4) development of a system for measuring reflectance simultaneous with imagery.

  12. Label-free sensor for automatic identification of erythrocytes using digital in-line holographic microscopy and machine learning.

    PubMed

    Go, Taesik; Byeon, Hyeokjun; Lee, Sang Joon

    2018-04-30

    Cell types of erythrocytes should be identified because they are closely related to their functionality and viability. Conventional methods for classifying erythrocytes are time consuming and labor intensive. Therefore, an automatic and accurate erythrocyte classification system is indispensable in healthcare and biomedical fields. In this study, we propose a new label-free sensor for automatic identification of erythrocyte cell types using digital in-line holographic microscopy (DIHM) combined with machine learning algorithms. A total of 12 features, including information on intensity distributions, morphological descriptors, and optical focusing characteristics, is quantitatively obtained from numerically reconstructed holographic images. All individual features for discocytes, echinocytes, and spherocytes are statistically different. To improve the performance of cell type identification, we adopted several machine learning algorithms, such as a decision tree model, support vector machine, linear discriminant classification, and k-nearest neighbor classification. With the aid of these machine learning algorithms, the extracted features are effectively utilized to distinguish erythrocytes. Among the four tested algorithms, the decision tree model exhibits the best identification performance for the training sets (n = 440, 98.18%) and test sets (n = 190, 97.37%). This proposed methodology, which smartly combines DIHM and machine learning, would be helpful for sensing abnormal erythrocytes and for computer-aided diagnosis of hematological diseases in the clinic. Copyright © 2017 Elsevier B.V. All rights reserved.
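
    The final classification stage can be sketched with scikit-learn's decision tree; only the workflow (features in, cell type out) and the paper's 440/190 train/test split are mirrored, while the feature values and labels below are random placeholders rather than holographic measurements.

      # Decision tree over hologram-derived features (placeholder data).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      FEATURES = ["mean_intensity", "projected_area",
                  "circularity", "focus_metric"]       # illustrative names
      CLASSES = ["discocyte", "echinocyte", "spherocyte"]

      rng = np.random.default_rng(1)
      X = rng.normal(size=(630, len(FEATURES)))        # placeholder features
      y = rng.integers(0, len(CLASSES), size=630)      # placeholder labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=190,
                                                random_state=1)
      tree = DecisionTreeClassifier(max_depth=5, random_state=1)
      tree.fit(X_tr, y_tr)
      print(f"test accuracy: {tree.score(X_te, y_te):.2f}")  # ~chance here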

  13. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  14. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  15. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  16. An Open-source Meteorological Operational System and its Installation in Portuguese- speaking Countries

    NASA Astrophysics Data System (ADS)

    Almeida, W. G.; Ferreira, A. L.; Mendes, M. V.; Ribeiro, A.; Yoksas, T.

    2007-05-01

    CPTEC, a division of Brazil's INPE, has been using several open-source software packages for a variety of tasks in its Data Division. Among these tools are ones traditionally used in research and educational communities, such as GrADS (Grid Analysis and Display System, from the Center for Ocean-Land-Atmosphere Studies (COLA)), the Local Data Manager (LDM) and GEMPAK (from Unidata), and operational tools such as the Automatic File Distributor (AFD) that are popular among National Meteorological Services. In addition, some tools developed locally at CPTEC are also being made available as open-source packages. One package is being used to manage the data from the Automatic Weather Stations that INPE operates. This system uses only open-source tools such as the MySQL database, PERL scripts and Java programs for web access, and Unidata's Internet Data Distribution (IDD) system and AFD for data delivery. All of these packages are bundled into a low-cost, easy-to-install package called the Meteorological Data Operational System. Recently, in cooperation with the SICLIMAD project, this system has been modified for use by Portuguese-speaking countries in Africa to manage data from the many Automatic Weather Stations being installed in these countries under SICLIMAD sponsorship. In this presentation we describe the tools included in, and the architecture of, the Meteorological Data Operational System.

  17. Automatic Adviser on Mobile Objects Status Identification and Classification

    NASA Astrophysics Data System (ADS)

    Shabelnikov, A. N.; Liabakh, N. N.; Gibner, Ya M.; Saryan, A. S.

    2018-05-01

    A mobile object status identification task is defined within the image discrimination theory. It is proposed to classify objects into three classes: the object is operational; maintenance is required; and the object should be removed from the production process. Two methods were developed to construct the separating boundaries between the designated classes: a) using statistical information on the executed movement of the research objects, and b) based on regulatory documents and expert commentary. A complex for simulating Automatic Adviser operation and analyzing the operation results was synthesized. Research results are illustrated using a specific example of cuts rolling from the hump yard. The work was supported by the Russian Fundamental Research Fund, project No. 17-20-01040.

  18. Automatic measurement of images on astrometric plates

    NASA Astrophysics Data System (ADS)

    Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.

    1994-04-01

    We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge-coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; and image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization and selection. Problems related to faint images and crowded fields will be approached by special techniques (morphological filters, histogram properties and fitting models).

  19. Simplifying and enhancing the use of PyMOL with horizontal scripts

    PubMed Central

    2016-01-01

    Scripts are used in PyMOL to exert precise control over the appearance of the output and to ease remaking similar images at a later time. We developed horizontal scripts to ease script development. A horizontal script makes a complete scene in PyMOL like a traditional vertical script, but its commands are separated by semicolons. These scripts are edited interactively on the command line with no need for an external text editor. This simpler workflow accelerates script development. In PyMOL, the illustration of a molecular scene requires an 18-element matrix of viewport settings. The default format spans several lines and is laborious to reformat manually onto one line. This default format prevents the fast assembly of horizontal scripts that can reproduce a molecular scene. We solved this problem by writing a function that displays the settings on one line in a compact format suitable for horizontal scripts. We also demonstrate the mapping of aliases to horizontal scripts. Many aliases can be defined in a single script file, which can be useful for applying custom molecular representations to any structure. We also redefined horizontal scripts as Python functions to enable the use of the help function to print documentation about an alias to the command history window. We discuss how these methods of using horizontal scripts both simplify and enhance the use of PyMOL in research and education. PMID:27488983
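
    A sketch of the kind of helper the paper describes is shown below: it asks PyMOL for the 18-element view matrix and prints it on a single, semicolon-ready line. The function name is our own invention; run it inside PyMOL, where cmd.extend registers it as a command.

      # Print PyMOL's view matrix on one line for horizontal scripts.
      from pymol import cmd

      def hview():
          view = cmd.get_view()                 # 18 floats
          body = ", ".join(f"{v:.3f}" for v in view)
          print(f"set_view ({body});")          # paste-ready, one line

      cmd.extend("hview", hview)
      # Usage inside PyMOL: fetch 1ubq; orient; hview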

  20. Automatic vasculature identification in coronary angiograms by adaptive geometrical tracking.

    PubMed

    Xiao, Ruoxiu; Yang, Jian; Goyal, Mahima; Liu, Yue; Wang, Yongtian

    2013-01-01

    Owing to the uneven distribution of contrast agent and the perspective projection of X-ray imaging, the vasculature in angiographic images has low contrast and is generally superimposed on other organic tissues; it is therefore very difficult to identify the vasculature and quantitatively estimate blood flow directly from angiographic images. In this paper, we propose a fully automatic algorithm named adaptive geometrical vessel tracking (AGVT) for coronary artery identification in X-ray angiograms. Initially, a ridge enhancement (RE) image is obtained utilizing multiscale Hessian information. Then, automatic initialization procedures, including seed point detection and initial direction determination, are performed on the RE image. The extracted ridge points can be adjusted to the geometrical centerline points adaptively through diameter estimation. Bifurcations are identified by discriminating the connecting relationships of the tracked ridge points. Finally, all the tracked centerlines are merged and smoothed by classifying the connecting components on the vascular structures. Synthetic angiographic images and clinical angiograms are used to evaluate the performance of the proposed algorithm. The proposed algorithm is compared with two other vascular tracking techniques in terms of efficiency and accuracy, demonstrating successful application of the proposed segmentation and extraction scheme in vasculature identification.
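
    The first stage, ridge enhancement from multiscale Hessian information, can be approximated with scikit-image's Frangi vesselness filter, used here as a freely available stand-in for the paper's RE image; the input file name and the seed-selection rule are illustrative.

      # Hessian-based ridge enhancement of an angiogram, plus naive seeds.
      import numpy as np
      from skimage.filters import frangi
      from skimage.io import imread

      angio = imread("angiogram.png", as_gray=True)    # placeholder file
      re_image = frangi(angio, sigmas=range(1, 8, 2), black_ridges=True)

      # Strong ridge responses would seed the geometrical tracking stage.
      seeds = np.argwhere(re_image > 0.8 * re_image.max())
      print(f"{len(seeds)} candidate seed points")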

  1. AUTOMOTIVE DIESEL MAINTENANCE 2. UNIT VII, AUTOMATIC TRANSMISSIONS--ALLISON, TORQUMATIC SERIES 5960 AND 6060 (PART I).

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 25-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF THE OPERATION AND MAINTENANCE OF SPECIFIC MODELS OF AUTOMATIC TRANSMISSIONS USED ON DIESEL POWERED VEHICLES. TOPICS ARE (1) GENERAL SPECIFICATION DATA, (2) OPTIONS FOR VARIOUS APPLICATIONS, (3) ROAD TEST INSTRUCTIONS, (4) IDENTIFICATION AND SPECIFICATION DATA, (5) ALLISON…

  2. Automatic Method of Pause Measurement for Normal and Dysarthric Speech

    ERIC Educational Resources Information Center

    Rosen, Kristin; Murdoch, Bruce; Folker, Joanne; Vogel, Adam; Cahill, Louise; Delatycki, Martin; Corben, Louise

    2010-01-01

    This study proposes an automatic method for the detection of pauses and identification of pause types in conversational speech for the purpose of measuring the effects of Friedreich's Ataxia (FRDA) on speech. Speech samples of [approximately] 3 minutes were recorded from 13 speakers with FRDA and 18 healthy controls. Pauses were measured from the…

  3. Crossing boundaries in interprofessional education: A call for instructional integration of two script concepts.

    PubMed

    Kiesewetter, Jan; Kollar, Ingo; Fernandez, Nicolas; Lubarsky, Stuart; Kiessling, Claudia; Fischer, Martin R; Charlin, Bernard

    2016-09-01

    Clinical work occurs in a context which is heavily influenced by social interactions. The absence of theoretical frameworks underpinning the design of collaborative learning has become a roadblock for interprofessional education (IPE). This article proposes a script-based framework for the design of IPE. This framework provides suggestions for designing learning environments intended to foster competences we feel are fundamental to successful interprofessional care. The current literature describes two script concepts: "illness scripts" and "internal/external collaboration scripts". Illness scripts are specific knowledge structures that link general disease categories and specific examples of diseases. "Internal collaboration scripts" refer to an individual's knowledge about how to interact with others in a social situation. "External collaboration scripts" are instructional scaffolds designed to help groups collaborate. Instructional research relating to illness scripts and internal collaboration scripts supports (a) putting learners in authentic situations in which they need to engage in clinical reasoning, and (b) scaffolding their interaction with others with "external collaboration scripts". Thus, well-established experiential instructional approaches should be combined with more fine-grained script-based scaffolding approaches. The resulting script-based framework offers instructional designers insights into how students can be supported to develop the necessary skills to master complex interprofessional clinical situations.

  4. Sexual scripts among young heterosexually active men and women: continuity and change.

    PubMed

    Masters, N Tatiana; Casey, Erin; Wells, Elizabeth A; Morrison, Diane M

    2013-01-01

    Whereas gendered sexual scripts are hegemonic at the cultural level, research suggests they may be less so at dyadic and individual levels. Understanding "disjunctures" between sexual scripts at different levels holds promise for illuminating mechanisms through which sexual scripts can change. Through interviews with 44 heterosexually active men and women aged 18 to 25, the ways young people grappled with culture-level scripts for sexuality and relationships were delineated. Findings suggest that, although most participants' culture-level gender scripts for behavior in sexual relationships were congruent with descriptions of traditional masculine and feminine sexuality, there was heterogeneity in how or whether these scripts were incorporated into individual relationships. Specifically, three styles of working with sexual scripts were found: conforming, in which personal gender scripts for sexual behavior overlapped with traditional scripts; exception-finding, in which interviewees accepted culture-level gender scripts as a reality, but created exceptions to gender rules for themselves; and transforming, in which participants either attempted to remake culture-level gender scripts or interpreted their own nontraditional styles as equally normative. Changing sexual scripts can potentially contribute to decreased gender inequity in the sexual realm and to increased opportunities for sexual satisfaction, safety, and well-being, particularly for women, but for men as well.

  5. 21 CFR 820.200 - Servicing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... reports with appropriate statistical methodology in accordance with § 820.100. (c) Each manufacturer who... chapter shall automatically consider the report a complaint and shall process it in accordance with the... device serviced; (2) Any device identification(s) and control number(s) used; (3) The date of service; (4...

  6. 48 CFR 252.211-7003 - Item unique identification and valuation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... reader or interrogator, used to retrieve data encoded on machine-readable media. Concatenated unique item... identifier. Item means a single hardware article or a single unit formed by a grouping of subassemblies... manufactured under identical conditions. Machine-readable means an automatic identification technology media...

  7. Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript

    DTIC Science & Technology

    2014-09-01

    scripting that lets users change or interact with web content depending on user input, in contrast with server-side scripts such as PHP and Java... transfer, DIS usually broadcasts or multicasts its PDUs based on UDP sockets. 3. JavaScript: JavaScript is the scripting language of the web, and all... IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. b. Framework: The DIS implementation of

  8. Sparks Will Fly: engineering creative script conflicts

    NASA Astrophysics Data System (ADS)

    Veale, Tony; Valitutti, Alessandro

    2017-10-01

    Scripts are often dismissed as the stuff of good movies and bad politics. They codify cultural experience so rigidly that they remove our freedom of choice and become the very antithesis of creativity. Yet, mental scripts have an important role to play in our understanding of creative behaviour, since a deliberate departure from an established script can produce results that are simultaneously novel and familiar, especially when others stick to the conventional script. Indeed, creative opportunities often arise at the overlapping boundaries of two scripts that antagonistically compete to mentally organise the same situation. This work explores the computational integration of competing scripts to generate creative friction in short texts that are surprising but meaningful. Our exploration considers conventional macro-scripts - ordered sequences of actions - and the less obvious micro-scripts that operate at even the lowest levels of language. For the former, we generate plots that squeeze two scripts into a single mini-narrative; for the latter, we generate ironic descriptions that use conflicting scripts to highlight the speaker's pragmatic insincerity. We show experimentally that verbal irony requires both kinds of scripts - macro and micro - to work together to reliably generate creative sparks from a speaker's subversive intent.

  9. Using script theory to cultivate illness script formation and clinical reasoning in health professions education.

    PubMed

    Lubarsky, Stuart; Dory, Valérie; Audétat, Marie-Claude; Custers, Eugène; Charlin, Bernard

    2015-01-01

    Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals' interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called 'illness scripts' that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks become updated and refined through experience and learning. The implications of script theory for medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.

  10. CCP4i2: the new graphical user interface to the CCP4 program suite.

    PubMed

    Potterton, Liz; Agirre, Jon; Ballard, Charles; Cowtan, Kevin; Dodson, Eleanor; Evans, Phil R; Jenkins, Huw T; Keegan, Ronan; Krissinel, Eugene; Stevenson, Kyle; Lebedev, Andrey; McNicholas, Stuart J; Nicholls, Robert A; Noble, Martin; Pannu, Navraj S; Roth, Christian; Sheldrick, George; Skubak, Pavol; Turkenburg, Johan; Uski, Ville; von Delft, Frank; Waterman, David; Wilson, Keith; Winn, Martyn; Wojdyr, Marcin

    2018-02-01

    The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography brings together many programs and libraries that, by means of well-established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists to navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures.

  11. ROSSTEP v1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allevato, Adam

    2016-07-21

    ROSSTEP is a system for sequentially running roslaunch, rosnode, and bash scripts automatically, for use in Robot Operating System (ROS) applications. The system consists of YAML files which define actions and conditions. A python file parses the code and runs actions sequentially using the sys and subprocess python modules. Between actions, it uses various ROS-based code to check conditions required to proceed, and only moves on to the next action when all the necessary conditions have been met. Included is rosstep-creator, a QT application designed to create the YAML files required for ROSSTEP. It has a nearly one-to-one mapping from interface elements to YAML output, and serves as a convenient GUI for working with the ROSSTEP system.
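    The description suggests a simple run loop: parse a YAML action list, launch each action as a subprocess, and poll its conditions before advancing. A minimal sketch under that reading (the YAML schema and the condition check are hypothetical, not ROSSTEP's actual format):

        import subprocess
        import time
        import yaml  # PyYAML

        def condition_holds(cond):
            # Stand-in for ROSSTEP's ROS-based checks (e.g. that a node or
            # topic is alive); always true in this sketch.
            return True

        with open("sequence.yaml") as f:
            # Hypothetical schema: [{"action": "roslaunch pkg file.launch",
            #                        "conditions": [...]}, ...]
            steps = yaml.safe_load(f)

        for step in steps:
            subprocess.Popen(step["action"].split())
            # Move on to the next action only once every condition holds.
            while not all(condition_holds(c) for c in step.get("conditions", [])):
                time.sleep(0.5)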

  12. BaHaMAS A Bash Handler to Monitor and Administrate Simulations

    NASA Astrophysics Data System (ADS)

    Sciarra, Alessandro

    2018-03-01

    Numerical QCD is often extremely resource demanding, and it is not rare to run hundreds of simulations at the same time. Each of these can last for days or even months, and each typically requires a job-script file as well as an input file with the physical parameters for the application to be run. Moreover, some monitoring operations (e.g. copying, moving, deleting or modifying files, resuming crashed jobs) are often required to guarantee that the final statistics are correctly accumulated. Handling simulations manually is probably the most error-prone approach, and it is deeply uncomfortable and inefficient! BaHaMAS was developed and has been successfully used in recent years as a tool to automatically monitor and administrate simulations.

  13. Complete scanpaths analysis toolbox.

    PubMed

    Augustyniak, Piotr; Mikrut, Zbigniew

    2006-01-01

    This paper presents a complete open software environment for the control, data processing and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to various visual information-based man-machine interfaces, human-emulated automatic visual systems and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infra-red reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: a lower layer communicating with the eyetracker output file, a middle layer detecting scanpath events on a physiological basis, and an upper layer consisting of experiment schedule scripts, statistics and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.

  14. 78 FR 63159 - Amendment to Certification of Nebraska's Central Filing System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... system for Nebraska to permit the conversion of all debtor social security and taxpayer identification... automatically convert social security numbers and taxpayer identification numbers into ten number unique... certified central filing systems is available through the Internet on the GIPSA Web site ( http://www.gipsa...

  15. Semi-automated identification of leopard frogs

    USGS Publications Warehouse

    Petrovska-Delacrétaz, Dijana; Edwards, Aaron; Chiasson, John; Chollet, Gérard; Pilliod, David S.

    2014-01-01

    Principal component analysis is used to implement a semi-automatic recognition system to identify recaptured northern leopard frogs (Lithobates pipiens). Results of both open set and closed set experiments are given. The presented algorithm is shown to provide accurate identification of 209 individual leopard frogs from a total set of 1386 images.
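    A sketch of the PCA-based matching idea, assuming captures are already aligned and flattened to fixed-length vectors (the data shapes and rejection threshold are illustrative, not taken from the paper):

        import numpy as np
        from sklearn.decomposition import PCA

        gallery = np.random.rand(200, 4096)   # one row per known frog image (stand-in data)
        probe = np.random.rand(4096)          # a new capture photo

        pca = PCA(n_components=50).fit(gallery)
        gallery_proj = pca.transform(gallery)
        probe_proj = pca.transform(probe.reshape(1, -1))

        # Closed set: the nearest neighbour in eigenspace gives the identity.
        dists = np.linalg.norm(gallery_proj - probe_proj, axis=1)
        best = int(np.argmin(dists))

        # Open set: additionally reject matches beyond a distance threshold.
        THRESHOLD = 10.0  # illustrative; tuned on validation data in practice
        identity = best if dists[best] < THRESHOLD else None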

  16. 33 CFR 169.235 - What exemptions are there from reporting?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... SECURITY (CONTINUED) PORTS AND WATERWAYS SAFETY SHIP REPORTING SYSTEMS Transmission of Long Range Identification and Tracking Information § 169.235 What exemptions are there from reporting? A ship is exempt from this subpart if it is— (a) Fitted with an operating automatic identification system (AIS), under 33 CFR...

  17. 33 CFR 164.46 - Automatic Identification System (AIS).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (AIS). 164.46 Section 164.46 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Identification System (AIS). (a) The following vessels must have a properly installed, operational, type approved AIS as of the date specified: (1) Self-propelled vessels of 65 feet or more in length, other than...

  18. 33 CFR 169.235 - What exemptions are there from reporting?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... SECURITY (CONTINUED) PORTS AND WATERWAYS SAFETY SHIP REPORTING SYSTEMS Transmission of Long Range Identification and Tracking Information § 169.235 What exemptions are there from reporting? A ship is exempt from this subpart if it is— (a) Fitted with an operating automatic identification system (AIS), under 33 CFR...

  19. 33 CFR 164.46 - Automatic Identification System (AIS).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (AIS). 164.46 Section 164.46 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Identification System (AIS). (a) The following vessels must have a properly installed, operational, type approved AIS as of the date specified: (1) Self-propelled vessels of 65 feet or more in length, other than...

  20. 33 CFR 164.46 - Automatic Identification System (AIS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (AIS). 164.46 Section 164.46 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Identification System (AIS). (a) The following vessels must have a properly installed, operational, type approved AIS as of the date specified: (1) Self-propelled vessels of 65 feet or more in length, other than...

  1. 33 CFR 169.235 - What exemptions are there from reporting?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... SECURITY (CONTINUED) PORTS AND WATERWAYS SAFETY SHIP REPORTING SYSTEMS Transmission of Long Range Identification and Tracking Information § 169.235 What exemptions are there from reporting? A ship is exempt from this subpart if it is— (a) Fitted with an operating automatic identification system (AIS), under 33 CFR...

  2. 33 CFR 164.46 - Automatic Identification System (AIS).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (AIS). 164.46 Section 164.46 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Identification System (AIS). (a) The following vessels must have a properly installed, operational, type approved AIS as of the date specified: (1) Self-propelled vessels of 65 feet or more in length, other than...

  3. 33 CFR 169.235 - What exemptions are there from reporting?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... SECURITY (CONTINUED) PORTS AND WATERWAYS SAFETY SHIP REPORTING SYSTEMS Transmission of Long Range Identification and Tracking Information § 169.235 What exemptions are there from reporting? A ship is exempt from this subpart if it is— (a) Fitted with an operating automatic identification system (AIS), under 33 CFR...

  4. 33 CFR 164.46 - Automatic Identification System (AIS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (AIS). 164.46 Section 164.46 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Identification System (AIS). (a) The following vessels must have a properly installed, operational, type approved AIS as of the date specified: (1) Self-propelled vessels of 65 feet or more in length, other than...

  5. The role of scripts in personal consistency and individual differences.

    PubMed

    Demorest, Amy; Popovska, Ana; Dabova, Milena

    2012-02-01

    This article examines the role of scripts in personal consistency and individual differences. Scripts are personally distinctive rules for understanding emotionally significant experiences. In 2 studies, scripts were identified from autobiographical memories of college students (Ns = 47 and 50) using standard categories of events and emotions to derive event-emotion compounds (e.g., Affiliation-Joy). In Study 1, scripts predicted responses to a reaction-time task 1 month later, such that participants responded more quickly to the event from their script when asked to indicate what emotion would be evoked by a series of events. In Study 2, individual differences in 5 common scripts were found to be systematically related to individual differences in traits of the Five-Factor Model. Distinct patterns of correlation revealed the importance of studying events and emotions in compound units, that is, in script form (e.g., Agreeableness was correlated with the script Affiliation-Joy but not with the scripts Fun-Joy or Affiliation-Love). © 2012 The Authors. Journal of Personality © 2012, Wiley Periodicals, Inc.

  6. [Effects of planning and executive functions on young children's script change strategy: A developmental perspective].

    PubMed

    Yanaoka, Kaichi

    2016-02-01

    This research examined the effects of planning and executive functions on young children's (ages 3 to 5 years) strategies for changing scripts. Young children (N = 77) performed a script task (doll task), three executive function tasks (DCCS, red/blue task, and nine box task), a planning task, and a receptive vocabulary task. In the doll task, young children first enacted a "changing clothes" script, and then faced a situation in which some elements of the script were inappropriate. They needed to enact a script either by substituting items from the other script for the inappropriate items or by changing to the other script in advance. The results showed that shifting, a factor of executive function, had a positive influence on whether young children could compensate for inappropriate items. In addition, planning was also an important factor that helped children to change to the other script in advance. These findings suggest that shifting and planning play different roles in using the two strategies appropriately when young children enact scripts in unexpected situations.

  7. Report: Unsupervised identification of malaria parasites using computer vision.

    PubMed

    Khan, Najeed Ahmed; Pervaz, Hassan; Latif, Arsalan; Musharaff, Ayesha

    2017-01-01

    Malaria in humans is a serious and potentially fatal tropical disease caused by Plasmodium species transmitted by infected Anopheles mosquitoes. The clinical diagnosis of malaria based on history, symptoms and clinical findings must always be confirmed by laboratory diagnosis. Laboratory diagnosis of malaria involves identification of the malaria parasite or its antigens/products in the blood of the patient. Manual diagnosis of the malaria parasite by pathologists has proven cumbersome; therefore, there is a need for automatic, efficient and accurate identification of the malaria parasite. In this paper, we propose a computer vision based approach to identify the malaria parasite from light microscopy images. This research deals with the challenges involved in the automatic detection of malaria parasite tissues. Our proposed method is pixel-based: we used K-means clustering (an unsupervised approach) for the segmentation that identifies malaria parasite tissues.
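    A minimal sketch of the pixel-based K-means segmentation step described above, using scikit-learn; the cluster count and the rule for picking the parasite cluster are assumptions:

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        img = cv2.imread("blood_smear.png")          # BGR light-microscopy image
        pixels = img.reshape(-1, 3).astype(float)    # one colour sample per pixel

        km = KMeans(n_clusters=3, n_init=10).fit(pixels)
        labels = km.labels_.reshape(img.shape[:2])

        # Assumption: stained parasite tissue forms the darkest cluster, so we
        # pick the cluster whose centre has the lowest total intensity.
        parasite_cluster = int(np.argmin(km.cluster_centers_.sum(axis=1)))
        mask = (labels == parasite_cluster).astype(np.uint8) * 255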

  8. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
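    The core numerical step, counting the (locally) active dynamical modes on a time interval by singular value decomposition of a sensitivity matrix, can be sketched as follows (the error tolerance is an assumed user parameter):

        import numpy as np

        def active_modes(S, tol=1e-6):
            # S: sensitivity matrix (n_species x n_parameters) on one interval.
            # Modes whose singular values fall below tol times the largest one
            # are treated as locally inactive; the count of the remaining modes
            # gives the minimal model dimension on this interval.
            singular_values = np.linalg.svd(S, compute_uv=False)
            return int(np.sum(singular_values > tol * singular_values[0]))

        S = np.random.rand(12, 8)   # stand-in for a computed sensitivity matrix
        print(active_modes(S))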

  9. Space-Based Identification of Archaeological Illegal Excavations and a New Automatic Method for Looting Feature Extraction in Desert Areas

    NASA Astrophysics Data System (ADS)

    Lasaponara, Rosa; Masini, Nicola

    2018-06-01

    The identification and quantification of disturbance at archaeological sites has generally been approached by visual inspection of optical aerial or satellite pictures. In this paper, we briefly summarize the state of the art of traditional satellite-based approaches to looting identification and propose a new automatic archaeological looting feature extraction approach (ALFEA). It is based on three steps: enhancement using spatial autocorrelation, unsupervised classification, and segmentation. ALFEA has been applied to Google Earth images of two test areas, selected in desert environs in Syria (Dura Europos) and in Peru (Cahuachi-Nasca). The reliability of ALFEA was assessed through field surveys in Peru and visual inspection for the Syrian case study. Results from the evaluation showed satisfactory performance in both test cases, with a success rate higher than 90%.

  10. Types and Characteristics of Fish and Seafood Provisioning Scripts Used by Rural Midlife Adults.

    PubMed

    Bostic, Stephanie M; Sobal, Jeffery; Bisogni, Carole A; Monclova, Juliet M

    To examine rural New York State consumers' cognitive scripts for fish and seafood provisioning. A cross-sectional design with in-depth, semistructured interviews. Three rural New York State counties. Adults (n = 31) with diverse fish-related experiences were purposefully recruited. Scripts describing fish and seafood acquisition, preparation, and eating out. Interview transcripts were coded for emergent themes using Atlas.ti. Diagrams of scripts for each participant were constructed. Five types of acquisition scripts included quality-oriented, price-oriented, routine, special occasion, and fresh catch. Frequently used preparation scripts included everyday cooking, fast meal, entertaining, and grilling. Scripts for eating out included fish as first choice, Friday outing, convenient meals, special event, and travel meals. Personal values and resources influenced script development. Individuals drew on a repertoire of scripts based on their goals and resources at that time and in that place. Script characteristics of scope, flexibility, and complexity varied widely. Scripts incorporated goals, values, and resources into routine food behaviors. Understanding the characteristics of scripts provided insights about fish provisioning and opportunities to reduce the gap between current intake and dietary guidelines in this rural setting. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer.

    PubMed

    Vogel, Sven C; Biwer, Chris M; Rogers, David H; Ahrens, James P; Hackenberg, Robert E; Onken, Drew; Zhang, Jianzhong

    2018-06-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.

  12. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer

    PubMed Central

    Biwer, Chris M.; Rogers, David H.; Ahrens, James P.; Hackenberg, Robert E.; Onken, Drew; Zhang, Jianzhong

    2018-01-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U–Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download. PMID:29896062

  13. Automatic photointerpretation for plant species and stress identification (ERTS-A1)

    NASA Technical Reports Server (NTRS)

    Swanlund, G. D. (Principal Investigator); Kirvida, L.; Johnson, G. R.

    1973-01-01

    The author has identified the following significant results. Automatic stratification of forested land from ERTS-1 data provides a valuable tool for resource management. The results are useful for wood product yield estimates, recreation and wildlife management, forest inventory, and forest condition monitoring. Automatic procedures based on both multispectral and spatial features are evaluated. With five classes, training and testing on the same samples, classification accuracy of 74 percent was achieved using the MSS multispectral features. When adding texture computed from 8 x 8 arrays, classification accuracy of 90 percent was obtained.

  14. Automatic identification of watercourses in flat and engineered landscapes by computing the skeleton of a LiDAR point cloud

    NASA Astrophysics Data System (ADS)

    Broersen, Tom; Peters, Ravi; Ledoux, Hugo

    2017-09-01

    Drainage networks play a crucial role in protecting land against floods. It is therefore important to have an accurate map of the watercourses that form the drainage network. Previous work on the automatic identification of watercourses was typically based on grids, focused on natural landscapes, and used mostly the slope and curvature of the terrain. We focus in this paper on areas characterised by low-lying, flat, and engineered landscapes, such as are characteristic of the Netherlands. We propose a new methodology to identify watercourses automatically from elevation data; it uses solely a raw classified LiDAR point cloud as input. We show that by computing a skeleton of the point cloud twice, once in 2D and once in 3D, and by using the properties of the skeletons, we can identify most of the watercourses. We have implemented our methodology and tested it for three different soil types around Utrecht, the Netherlands. We were able to detect 98% of the watercourses for one soil type, and around 75% in the worst case, when compared to a reference dataset that was obtained semi-automatically.

  15. Automatic contact in DYNA3D for vehicle crashworthiness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whirley, R.G.; Engelmann, B.E.

    1993-07-15

    This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational cost. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  16. Research on the Additional Secondary Phase Factor for Automatic Identification System Signals Transmitted over a Rough Sea Surface

    PubMed Central

    Zhang, Shufang; Sun, Xiaowen

    2018-01-01

    This paper investigates the Additional Secondary Phase Factor (ASF) characteristics of Automatic Identification System (AIS) signals spreading over a rough sea surface. Based on the changes in the ASFs for AIS signals of different forms, the influence of different propagation conditions on the ASFs is analyzed. The expression, numerical calculation, and simulation analysis of the ASFs of the AIS signal are performed for the rough sea surface. The results contribute to high-accuracy propagation delay measurement of AIS signals spreading over a rough sea surface, as well as providing a reference for reliable communication link design in marine engineering for Very High Frequency (VHF) signals. PMID:29462995

  17. Identification of mycobacterium tuberculosis in sputum smear slide using automatic scanning microscope

    NASA Astrophysics Data System (ADS)

    Rulaningtyas, Riries; Suksmono, Andriyan B.; Mengko, Tati L. R.; Saptawati, Putri

    2015-04-01

    Sputum smear observation plays an important role in tuberculosis (TB) diagnosis, and identification must be accurate to avoid diagnostic errors. In developing countries, sputum smear slides are commonly observed with a conventional light microscope on Ziehl-Neelsen stained tissue, which is inexpensive to maintain. Clinicians manually screen each sputum smear slide, which is time consuming and requires extensive training to detect the presence of TB bacilli (Mycobacterium tuberculosis) accurately, especially on negative slides and slides with few TB bacilli. To help clinicians, we propose an automatic scanning microscope with automatic identification of TB bacilli. The designed system drives the field movement of a light microscope with a stepper motor controlled by a microcontroller, and every sputum smear field is captured by a camera. Several image processing techniques are then applied to the sputum smear images: a colour threshold on the hue channel in HSV colour space for background subtraction, the Sobel edge detection algorithm for TB bacilli image segmentation, and shape-based feature extraction feeding a neural network that classifies each object as a TB bacillus or not. The results indicate that the identification works well and detects TB bacilli accurately in sputum smear slides with normal staining, but not in over-stained or under-stained slides. Overall, however, the designed system can make sputum smear observation easier for clinicians.
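    A compact sketch of the processing chain described, hue thresholding in HSV space for background subtraction followed by Sobel edges for segmentation, using OpenCV; the hue range is an assumption, chosen because Ziehl-Neelsen bacilli stain red:

        import cv2

        img = cv2.imread("sputum_field.png")              # captured microscope field
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

        # Keep reddish hues typical of Ziehl-Neelsen-stained bacilli (range
        # chosen for illustration; tuned on real slides in practice).
        mask = cv2.inRange(hsv, (160, 60, 60), (179, 255, 255))
        foreground = cv2.bitwise_and(img, img, mask=mask)

        gray = cv2.cvtColor(foreground, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        edges = cv2.magnitude(gx, gy)                     # input to the shape analysis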

  18. Automatic poisson peak harvesting for high throughput protein identification.

    PubMed

    Breen, E J; Hopwood, F G; Williams, K L; Wilkins, M R

    2000-06-01

    High throughput identification of proteins by peptide mass fingerprinting requires an efficient means of picking peaks from mass spectra. Here, we report the development of a peak harvester to automatically pick monoisotopic peaks from spectra generated on matrix-assisted laser desorption/ionisation time of flight (MALDI-TOF) mass spectrometers. The peak harvester uses advanced mathematical morphology and watershed algorithms to first process spectra to stick representations. Subsequently, Poisson modelling is applied to determine which peak in an isotopically resolved group represents the monoisotopic mass of a peptide. We illustrate the features of the peak harvester with mass spectra of standard peptides, digests of gel-separated bovine serum albumin, and Escherichia coli proteins prepared by two-dimensional polyacrylamide gel electrophoresis. In all cases, the peak harvester proved effective in its ability to pick the same monoisotopic peaks as an experienced human operator, and also proved effective in the identification of monoisotopic masses in cases where the isotopic distributions of peptides overlapped. The peak harvester can be operated in an interactive mode, or can be completely automated and linked through to peptide mass fingerprinting protein identification tools to achieve high throughput automated protein identification.
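    The Poisson-modelling step can be illustrated as follows: for a peptide of mass m, the isotope intensity pattern is roughly Poisson with a mean proportional to m, and the monoisotopic peak is the k = 0 term, so testing each candidate starting peak against the expected pattern picks the monoisotopic mass. A sketch (the averagine-style constant is an assumption):

        import numpy as np
        from scipy.stats import poisson

        def monoisotopic_index(intensities, mass):
            # intensities: observed peak heights of one isotopic group, low to
            # high m/z; mass: approximate peptide mass in Da.
            lam = mass / 1800.0   # rough mean isotope count; illustrative constant
            n = len(intensities)
            best, best_err = 0, np.inf
            for start in range(n - 1):   # compare at least two isotope peaks
                # If peak `start` is monoisotopic, peaks start..n-1 should
                # follow Poisson(k; lam) for k = 0, 1, 2, ...
                model = poisson.pmf(np.arange(n - start), lam)
                obs = np.asarray(intensities[start:], dtype=float)
                obs, model = obs / obs.sum(), model / model.sum()
                err = float(np.sum((obs - model) ** 2))
                if err < best_err:
                    best, best_err = start, err
            return best

        print(monoisotopic_index([5.0, 100.0, 80.0, 35.0], mass=1800.0))  # -> 1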

  19. The Mechanics of CSCL Macro Scripts

    ERIC Educational Resources Information Center

    Dillenbourg, Pierre; Hong, Fabrice

    2008-01-01

    Macro scripts structure collaborative learning and foster the emergence of knowledge-productive interactions such as argumentation, explanations and mutual regulation. We propose a pedagogical model for the designing of scripts and illustrate this model using three scripts. In brief, a script disturbs the natural convergence of a team and in doing…

  20. Script Reforms--Are They Necessary?

    ERIC Educational Resources Information Center

    James, Gregory

    Script reform, the modification of an existing writing system, is often confused with script replacement, the substitution of one writing system for another. Turkish underwent the replacement of Arabic script by an adaptation of Roman script under Kemal Ataturk, but a similar replacement in Persian was rejected because of the high rate of existing literacy in…

  1. Gender differences in performance of script analysis by older adults.

    PubMed

    Helmes, E; Bush, J D; Pike, D L; Drake, D G

    2006-12-01

    Script analysis as a test of executive functions is presumed sensitive to the cognitive changes seen with increasing age. Two studies evaluated whether gender differences exist in performance on scripts for familiar and unfamiliar tasks in groups of cognitively intact older adults. In Study 1, 26 older adults completed male- and female-stereotypical scripts. Results were not significant, but a tendency was present, with each gender making fewer impossible errors on its gender-typical script. Such an interaction was also noted in Study 2, which contrasted 50 older with 50 younger adults on three scripts, including a script of neutral familiarity. The pattern of significant interactions for errors suggested the need to use scripts that are based upon tasks equally familiar to both genders.

  2. The Departmental Script as an Ongoing Conversation into the Phronesis of Teaching Science as Inquiry

    NASA Astrophysics Data System (ADS)

    Melville, Wayne; Campbell, Todd; Fazio, Xavier; Bartley, Anthony

    2012-12-01

    This article investigates the extent to which a science department script supports the teaching and learning of science as inquiry and how this script is translated into individual teachers' classrooms. This study was completed at one school in Canada which, since 2000, has developed a departmental script supportive of the teaching and learning of science as inquiry. Through a mixed-method strategy, multiple data sources were drawn together to inform a cohesive narrative about scripts, science departments, and individual classrooms. Results of the study reveal three important findings: (1) the departmental script is not an artefact, but instead is an ongoing conversation into the episteme, techne and phronesis of science teaching; (2) the consistently reformed teaching practices that were observed led us to believe that a departmental script has the capacity to enhance the teaching of science as inquiry; and (3) the existence of a departmental script does not mean that teaching will be 'standardized' in the bureaucratic sense of the word. Our findings indicate that a departmental script can be considered to concurrently operate as an epistemic script that is translated consistently across the classes, and a social script that was more open to interpretation within individual teachers' classrooms.

  3. An automatic method to detect and track the glottal gap from high speed videoendoscopic images.

    PubMed

    Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés

    2015-10-29

    The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. However, such analysis depends on a prior, accurate identification of the glottal gap, which is the most challenging step for further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest (ROI) that is adapted over time and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms (VKGs) is also proposed. Thanks to the ROI implementation, the technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when there is inappropriate closure of the vocal folds. The novelties of the proposed algorithm lie in the use of temporal information to identify an adaptive ROI and in the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure for synthesizing multiline VKGs through identification of the glottal main axis is developed.

  4. Developing a standard for de-identifying electronic patient records written in Swedish: precision, recall and F-measure in a manual and computerized annotation trial.

    PubMed

    Velupillai, Sumithra; Dalianis, Hercules; Hassel, Martin; Nilsson, Gunnar H

    2009-12-01

    Electronic patient records (EPRs) contain a large amount of information written in free text. This information is considered very valuable for research but is also very sensitive, since the free-text parts may contain information that could reveal the identity of a patient. Therefore, methods for de-identifying EPRs are needed. The work presented here aims to perform a manual and automatic Protected Health Information (PHI) annotation trial for EPRs written in Swedish. This study consists of two main parts: the initial creation of a manually PHI-annotated gold standard, and the porting and evaluation of existing de-identification software written for American English to Swedish in a preliminary automatic de-identification trial. Results are measured with precision, recall and F-measure. This study reports fairly high Inter-Annotator Agreement (IAA) results on the manually created gold standard, especially for specific tags such as names. The average IAA over all tags was 0.65 F-measure (0.84 F-measure highest pairwise agreement). For name tags the average IAA was 0.80 F-measure (0.91 F-measure highest pairwise agreement). Porting de-identification software written for American English to Swedish directly was unfortunately non-trivial, yielding poor results. Developing gold standard sets as well as automatic systems for de-identification tasks in Swedish is feasible. However, discussion and definitions of identifiable information are needed, as well as further development of both the tag sets and the annotation guidelines, in order to obtain a reliable gold standard. Completely new de-identification software needs to be developed.
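    For reference, the reported metrics reduce to simple counts over annotated spans; a minimal sketch of precision, recall and F-measure, with annotations abstracted to hashable tuples:

        def prf(gold, predicted):
            # gold, predicted: sets of annotations, e.g. (start, end, tag).
            tp = len(gold & predicted)
            fp = len(predicted - gold)
            fn = len(gold - predicted)
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
            return precision, recall, f

        gold = {(0, 4, "NAME"), (10, 16, "DATE")}
        pred = {(0, 4, "NAME"), (20, 25, "LOCATION")}
        print(prf(gold, pred))   # (0.5, 0.5, 0.5)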

  5. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.

    PubMed

    Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina

    2017-11-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for the classification and analysis of lobes at two-cell and three-cell junctions. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.

  6. [Script crossing in scanning electron microscopy].

    PubMed

    Oehmichen, M; von Kortzfleisch, D; Hegner, B

    1989-01-01

    A case of mixed script in which ball-point pen ink was contaminated with typewriting prompted a survey of the literature and a systematic SEM study of mixed scripts produced with various writing instruments and inks. Mixed scripts produced with the following instruments or inks were investigated: pencil, ink/India ink, ball-point pen, felt-tip pen, copied script and typewriter. This investigation showed SEM to be the method of choice for visualizing overlying scripts produced by different writing instruments or inks.

  7. Automatic topics segmentation for TV news video

    NASA Astrophysics Data System (ADS)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying the programs in a TV stream in two main steps. First, a reference catalogue of visual jingles is built from video features. We exploit the features that characterize instances of the same program type to identify the different types of programs in the television stream; the role of the video features is to represent the visual invariants of each jingle using appropriate automatic descriptors for each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams, extracted from different channels and composed of several programs.

  8. An automatic identification procedure to promote the use of FES-cycling training for hemiparetic patients.

    PubMed

    Ambrosini, Emilia; Ferrante, Simona; Schauer, Thomas; Ferrigno, Giancarlo; Molteni, Franco; Pedrocchi, Alessandra

    2014-01-01

    Cycling induced by Functional Electrical Stimulation (FES) training currently requires manual setting of different parameters, which is a time-consuming and scarcely repeatable procedure. We propose an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consists of identifying the stimulation strategy as the angular ranges during which FES drives the motion, comparing the identified strategy with the physiological muscular activation strategy, and setting the pulse amplitude and duration for each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.

  9. Evaluation of the automatic optical authentication technologies for control systems of objects

    NASA Astrophysics Data System (ADS)

    Averkin, Vladimir V.; Volegov, Peter L.; Podgornov, Vladimir A.

    2000-03-01

    The report considers the evaluation of automatic optical authentication technologies for the automated integrated system of physical protection, control and accounting of nuclear materials at RFNC-VNIITF, and for supporting the nuclear materials nonproliferation regime. The report presents the nuclear object authentication objectives and strategies, the methodology of automatic optical authentication, and the results of the development of pattern recognition techniques carried out under ISTC project #772 to identify unique features of the surface structure of a controlled object and the effects of its random treatment. The report describes current solutions for the following functional control tasks: confirmation of item authenticity (proof of the absence of substitution by an item of similar shape), control over unforeseen changes of item state, and control over unauthorized access to the item. The most important distinctive feature of all the techniques is that they do not aim at a comprehensive description of the properties of the controlled item, but at unique identification of the item using the minimum necessary set of parameters that properly constitute its identification attribute. The main emphasis of the technical approach is on the development of rather simple technological methods intended, for the first time, for use in systems of physical protection, control and accounting of nuclear materials. The developed authentication devices and system are described.

  10. MetMSLine: an automated and fully integrated pipeline for rapid processing of high-resolution LC-MS metabolomic datasets.

    PubMed

    Edmands, William M B; Barupal, Dinesh K; Scalbert, Augustin

    2015-03-01

    MetMSLine is a complete collection of functions in the R programming language, wrapped in an accessible GUI, for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets, covering acquisition through to final metabolite identification and forming a backend to the output of any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker-MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC-MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. © The Author 2014. Published by Oxford University Press.

  11. MetMSLine: an automated and fully integrated pipeline for rapid processing of high-resolution LC–MS metabolomic datasets

    PubMed Central

    Edmands, William M. B.; Barupal, Dinesh K.; Scalbert, Augustin

    2015-01-01

    Summary: MetMSLine represents a complete collection of functions in the R programming language as an accessible GUI for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets from acquisition through to final metabolite identification forming a backend to output from any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker—MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). Availability and implementation: All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC–MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. Contact: ScalbertA@iarc.fr PMID:25348215

  12. Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2017-01-01

    The paper dwells on an adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in its automatic control system (ACS). The mathematical model is based on throttle performance curves and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a linearized low-level state-space simulation. The engine health is identified through the influence-coefficient matrix, which is determined by the high-level GTE mathematical model based on measurements of gas-dynamic parameters. The automatic control algorithm minimizes the sum of squared deviations between the parameters of the mathematical model and those of the real GTE. The proposed mathematical model is used effectively for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve identification accuracy and ensure robustness, the Monte Carlo numerical method was used: a parametric diagnostic algorithm based on the LPτ sequence was developed and tested. Analysis of the results suggests that the developed algorithms achieve higher identification accuracy and reliability than similar models used in practice.
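    The identification step described, minimizing the squared deviation between on-board model outputs and measured engine parameters through the influence-coefficient matrix, is essentially a linear least-squares problem. A sketch with assumed shapes and stand-in values (the paper's LPτ sampling is replaced here by plain Gaussian noise):

        import numpy as np

        # H: influence-coefficient matrix (d measured parameters x k health
        # states), supplied by the high-level GTE model; dy: deviations between
        # measured engine outputs and the on-board model's predictions.
        H = np.random.rand(6, 3)
        dy = np.random.rand(6)

        # Health-state estimate minimizing ||H dx - dy||^2.
        dx = np.linalg.lstsq(H, dy, rcond=None)[0]

        # Monte Carlo spread of the estimate under measurement noise, in the
        # spirit of the robustness analysis above (noise level assumed).
        samples = [np.linalg.lstsq(H, dy + 0.01 * np.random.randn(6), rcond=None)[0]
                   for _ in range(1000)]
        print(dx, np.std(samples, axis=0))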

  13. [Preliminary application of scripting in RayStation TPS system].

    PubMed

    Zhang, Jianying; Sun, Jing; Wang, Yun

    2013-07-01

    To discuss the basic application of scripting in the RayStation TPS. On the RayStation 3.0 platform, the programming methods, and the points to be considered during basic scripting, were explored with the help of utility scripts written in the IronPython language, using typical planning problems in beam arrangement and plan output as examples. The necessary properties and functions of the patient object for script writing can be extracted from the RayStation system. With the help of .NET controls, planning functions such as interactive parameter input, treatment planning control and plan export have been realized by scripts. With the help of the demo scripts, scripts can be developed in RayStation and the system's capabilities can be extended.

  14. Automated detection of retinal landmarks for the identification of clinically relevant regions in fundus photography

    NASA Astrophysics Data System (ADS)

    Ometto, Giovanni; Calivá, Francesco; Al-Diri, Bashir; Bek, Toke; Hunter, Andrew

    2016-03-01

    Automatic, quick and reliable identification of retinal landmarks in fundus photography is key for measurements used in research, diagnosis, screening and treatment of common diseases affecting the eyes. This study presents a fast method for detecting the centre of mass of the vascular arcades, the optic nerve head (ONH) and the fovea, which are used to define five clinically relevant areas employed in screening programmes for diabetic retinopathy (DR). Thirty-eight fundus photographs showing 7203 DR lesions were analysed to find the landmarks, manually by two retina experts and automatically by the proposed method. The automatic identification of the ONH and fovea was performed using template matching based on normalised cross-correlation. The centre of mass of the arcades was obtained by fitting an ellipse to sample coordinates of the main vessels; the coordinates were obtained by processing the image with Hessian filtering, followed by shape analysis, and finally sampling the results. The regions obtained manually and automatically were used to count the retinal lesions falling within them and thus to evaluate the method. 92.7% of the lesions fell within the same regions based on the landmarks selected by the two experts; 91.7% and 89.0% were counted in the same areas identified by the method and the first and second expert, respectively. The inter-repeatability of the proposed method and the experts is comparable, while its 100% intra-repeatability makes the algorithm a valuable tool for tasks such as real-time analyses, analyses of large datasets, and studies of intra-patient variability.
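    The ONH/fovea localisation step, template matching by normalised cross-correlation, maps directly onto OpenCV; a minimal sketch (the template source is an assumption, e.g. an averaged ONH patch):

        import cv2

        fundus = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("onh_template.png", cv2.IMREAD_GRAYSCALE)

        # Normalised cross-correlation surface; its peak is the best match.
        ncc = cv2.matchTemplate(fundus, template, cv2.TM_CCORR_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(ncc)

        h, w = template.shape
        onh_centre = (max_loc[0] + w // 2, max_loc[1] + h // 2)
        print(onh_centre, max_val)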

  15. Automatically visualise and analyse data on pathways using PathVisioRPC from any programming environment.

    PubMed

    Bohler, Anwesha; Eijssen, Lars M T; van Iersel, Martijn P; Leemans, Christ; Willighagen, Egon L; Kutmon, Martina; Jaillard, Magali; Evelo, Chris T

    2015-08-23

    Biological pathways are descriptive diagrams of biological processes widely used for functional analysis of differentially expressed genes or proteins. Primary data analysis, such as quality control, normalisation, and statistical analysis, is often performed in scripting languages like R, Perl, and Python. Subsequent pathway analysis is usually performed using dedicated external applications. Workflows involving manual use of multiple environments are time consuming and error prone. Therefore, tools are needed that enable pathway analysis directly within the same scripting languages used for primary data analyses. Existing tools have limited capability in terms of available pathway content, pathway editing and visualisation options, and export file formats. Consequently, making the full-fledged pathway analysis tool PathVisio available from various scripting languages will benefit researchers. We developed PathVisioRPC, an XMLRPC interface for the pathway analysis software PathVisio. PathVisioRPC enables creating and editing biological pathways, visualising data on pathways, performing pathway statistics, and exporting results in several image formats in multiple programming environments. We demonstrate PathVisioRPC functionalities using examples in Python. Subsequently, we analyse a publicly available NCBI GEO gene expression dataset studying tumour bearing mice treated with cyclophosphamide in R. The R scripts demonstrate how calls to existing R packages for data processing and calls to PathVisioRPC can directly work together. To further support R users, we have created RPathVisio simplifying the use of PathVisioRPC in this environment. We have also created a pathway module for the microarray data analysis portal ArrayAnalysis.org that calls the PathVisioRPC interface to perform pathway analysis. This module allows users to use PathVisio functionality online without having to download and install the software and exemplifies how the PathVisioRPC interface can be used by data analysis pipelines for functional analysis of processed genomics data. PathVisioRPC enables data visualisation and pathway analysis directly from within various analytical environments used for preliminary analyses. It supports the use of existing pathways from WikiPathways or pathways created using the RPC itself. It also enables automation of tasks performed using PathVisio, making it useful to PathVisio users performing repeated visualisation and analysis tasks. PathVisioRPC is freely available for academic and commercial use at http://projects.bigcat.unimaas.nl/pathvisiorpc.
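    Because PathVisioRPC speaks XML-RPC, any language with an XML-RPC client can drive it, including Python's standard library. The endpoint URL and method names below are hypothetical placeholders, not the documented PathVisioRPC API; consult the project page for the real interface:

        import xmlrpc.client

        # Hypothetical endpoint; the actual host, port and method names must be
        # taken from the PathVisioRPC documentation.
        server = xmlrpc.client.ServerProxy("http://localhost:7777")

        # Illustrative calls mirroring the capabilities described above:
        # server.openPathway("WP254.gpml")          # load a WikiPathways pathway
        # server.visualizeData("expression.txt")    # paint data onto the pathway
        # server.exportPathway("WP254.png")         # export the annotated image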

  16. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data... 6 Domestic Security 1 2011-01-01 2011-01-01 false Machine readable technology on the driver's..., Verification, and Card Issuance Requirements § 37.19 Machine readable technology on the driver's license or...

  17. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data... 6 Domestic Security 1 2010-01-01 2010-01-01 false Machine readable technology on the driver's..., Verification, and Card Issuance Requirements § 37.19 Machine readable technology on the driver's license or...

  18. Caller I.D. and ANI: The Technology and the Controversy.

    ERIC Educational Resources Information Center

    Bertot, John C.

    1992-01-01

    Examines telephone caller identification (caller-I.D.) and Automatic Number Identification (ANI) technology and discusses policy and privacy issues at the state and federal levels of government. A comparative analysis of state caller-I.D. adoption policies is presented, caller-I.D. blocking is discussed, costs are reported, and legal aspects of…

  19. Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish.

    PubMed

    Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C

    2015-12-01

    The study of acoustic communication in animals often requires not only the recognition of species-specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented, inspired by the successful results obtained with the most widely known and complex acoustic communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover, this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed to recognize other sound types.
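    The recognition scheme, one hidden Markov model per class (sound type or individual male), scored against a stream of acoustic features, can be sketched with the hmmlearn package; the feature extraction and model sizes are assumptions:

        import numpy as np
        from hmmlearn import hmm

        def train_model(feature_sequences, n_states=4):
            # Fit one Gaussian HMM per class from a list of
            # (n_frames, n_features) arrays of acoustic features.
            X = np.vstack(feature_sequences)
            lengths = [len(s) for s in feature_sequences]
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
            return model.fit(X, lengths)

        def identify(models, features):
            # models: dict mapping individual id -> trained HMM; the claimed
            # identity is the model with the highest log-likelihood.
            return max(models, key=lambda k: models[k].score(features))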

  20. Automated Dispersion and Orientation Analysis for Carbon Nanotube Reinforced Polymer Composites

    PubMed Central

    Gao, Yi; Li, Zhuo; Lin, Ziyin; Zhu, Liangjia; Tannenbaum, Allen; Bouix, Sylvain; Wong, C.P.

    2012-01-01

    The properties of carbon nanotube (CNT)/polymer composites are strongly dependent on the dispersion and orientation of CNTs in the host matrix. Quantification of the dispersion and orientation of CNTs by microstructure observation and image analysis has been demonstrated to be a useful way to understand the structure-property relationship of CNT/polymer composites. However, due to the varied morphologies and the large number of CNTs in one image, automatic and accurate identification of CNTs has become the bottleneck for dispersion/orientation analysis. To solve this problem, shape identification is performed for each pixel in the filler identification step, so that individual CNTs can be extracted from images automatically. The improved filler identification enables more accurate analysis of CNT dispersion and orientation. The obtained dispersion and orientation indices of both synthetic and real images from model compounds correspond well with the observations. Moreover, these indices help to explain the electrical properties of a CNT/silicone composite, which is used as a model compound. This method can also be extended to other polymer composites with high-aspect-ratio fillers. PMID:23060008

  1. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
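    The abstract names PCA, ICA, and "some metrics" without defining them, so the following is only a minimal sketch of the final library-matching step: each candidate pigment is scored by Pearson correlation against the unknown spectrum, and the margin between the two best scores stands in for a reliability factor. The published method's PCA/ICA decomposition and its actual reliability definition are not reproduced here.

```python
# Hedged sketch of library matching with a margin-based reliability score;
# assumes spectra are sampled on a common wavenumber grid.
import numpy as np

def identify(spectrum, library):
    """library: dict mapping pigment name -> reference spectrum.

    Returns (best_match, reliability), where reliability is the gap
    between the top two correlation scores.
    """
    s = (spectrum - spectrum.mean()) / spectrum.std()
    scores = {}
    for name, ref in library.items():
        r = (ref - ref.mean()) / ref.std()
        scores[name] = float(np.dot(s, r)) / len(s)  # Pearson correlation
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_score), (_, second_score) = ranked[0], ranked[1]
    return best, best_score - second_score
```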

  2. Hybrid neuro-fuzzy approach for automatic vehicle license plate recognition

    NASA Astrophysics Data System (ADS)

    Lee, Hsi-Chieh; Jong, Chung-Shi

    1998-03-01

    Most currently available vehicle identification systems use techniques such as RF, microwave, or infrared to help identify the vehicle. Transponders are usually installed in the vehicle in order to transmit the corresponding information to the sensory system. It is expensive to install a transponder in each vehicle, and a malfunctioning transponder results in failure of the vehicle identification system. In this study, a novel hybrid approach is proposed for automatic vehicle license plate recognition. A system prototype was built which can be used independently, or in cooperation with a current vehicle identification system, to identify a vehicle. The prototype consists of four major modules: license plate region identification, character extraction from the license plate, character recognition, and the SimNet neuro-fuzzy system. To test the performance of the proposed system, three hundred and eighty vehicle images were taken with a digital camera. The license plate recognition success rate of the prototype is approximately 91%, while the character recognition success rate is approximately 97%.

  3. CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions

    EPA Pesticide Factsheets

    Scripts for computing nonparametric regression analyses: an overview of using scripts to infer environmental conditions from biological observations and to statistically estimate species-environment relationships.

  4. TREE2FASTA: a flexible Perl script for batch extraction of FASTA sequences from exploratory phylogenetic trees.

    PubMed

    Sauvage, Thomas; Plouviez, Sophie; Schmidt, William E; Fredericq, Suzanne

    2018-03-05

    The body of DNA sequence data lacking taxonomically informative sequence headers is rapidly growing in user and public databases (e.g. sequences lacking identification and contaminants). In the context of systematics studies, sorting such sequence data for taxonomic curation and/or molecular diversity characterization (e.g. crypticism) often requires the building of exploratory phylogenetic trees with reference taxa. The subsequent step of segregating DNA sequences of interest based on observed topological relationships can represent a challenging task, especially for large datasets. We have written TREE2FASTA, a Perl script that enables and expedites the sorting of FASTA-formatted sequence data from exploratory phylogenetic trees. TREE2FASTA takes advantage of the interactive, rapid point-and-click color selection and/or annotations of tree leaves in the popular Java tree-viewer FigTree to segregate groups of FASTA sequences of interest to separate files. TREE2FASTA allows for both simple and nested segregation designs to facilitate the simultaneous preparation of multiple data sets that may overlap in sequence content.
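    A minimal Python re-implementation sketch of the idea (not the published Perl script) is shown below: it scans a FigTree-saved NEXUS file for taxa carrying colour annotations of the usual `[&!color=#rrggbb]` form and writes one FASTA file per colour. The file layout, output naming, and regular expression are simplifying assumptions.

```python
# Sketch: split a FASTA file by FigTree colour tags saved in a NEXUS tree.
import re
from collections import defaultdict

def read_fasta(path):
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

def split_by_color(nexus_path, fasta_path):
    text = open(nexus_path).read()
    # Taxon labels, optionally quoted, followed by a colour annotation.
    pattern = re.compile(r"'?([\w.|-]+)'?\[&!color=(#[0-9a-fA-F]{6})\]")
    groups = defaultdict(list)
    for taxon, color in pattern.findall(text):
        groups[color].append(taxon)
    seqs = read_fasta(fasta_path)
    for color, taxa in groups.items():
        with open(f"group_{color.lstrip('#')}.fasta", "w") as out:
            for t in taxa:
                if t in seqs:
                    out.write(f">{t}\n{seqs[t]}\n")
```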

  5. Asymmetric bias in perception of facial affect among Roman and Arabic script readers.

    PubMed

    Heath, Robin L; Rouhana, Aida; Ghanem, Dana Abi

    2005-01-01

    The asymmetric chimeric faces test is used frequently as an indicator of right hemisphere involvement in the perception of facial affect, as the test is considered free of linguistic elements. Much of the original research with the asymmetric chimeric faces test was conducted with subjects reading left-to-right Roman script, i.e., English. As readers of right-to-left scripts, such as Arabic, demonstrated a mixed or weak rightward bias in judgements of facial affect, the influence of habitual scanning direction was thought to intersect with laterality. We administered the asymmetric chimeric faces test to 1239 adults who represented a range of script experience, i.e., Roman script readers (English and French), Arabic readers, bidirectional readers of Roman and Arabic scripts, and illiterates. Our findings supported the hypothesis that the bias in facial affect judgement is rooted in laterality, but can be influenced by script direction. Specifically, right-handed readers of Roman script demonstrated the greatest mean leftward score, and mixed-handed Arabic script readers demonstrated the greatest mean rightward score. Biliterates showed a gradual shift in asymmetric perception, as their scores fell between those of Roman and Arabic script readers, basically distributed in the order expected by their handedness and most often used script. Illiterates, whose only directional influence was laterality, showed a slight leftward bias.

  6. Automatic identification of inertial sensor placement on human body segments during walking

    PubMed Central

    2013-01-01

    Background Current inertial motion capture systems are rarely used in biomedical applications. The attachment and connection of the sensors with cables is often a complex and time consuming task. Moreover, it is prone to errors, because each sensor has to be attached to a predefined body segment. By using wireless inertial sensors and automatic identification of their positions on the human body, the complexity of the set-up can be reduced and incorrect attachments are avoided. We present a novel method for the automatic identification of inertial sensors on human body segments during walking. This method allows the user to place (wireless) inertial sensors on arbitrary body segments. Next, the user walks for just a few seconds and the segment to which each sensor is attached is identified automatically. Methods Walking data was recorded from ten healthy subjects using an Xsens MVN Biomech system with full-body configuration (17 inertial sensors). Subjects were asked to walk for about 6 seconds at normal walking speed (about 5 km/h). After rotating the sensor data to a global coordinate frame with x-axis in walking direction, y-axis pointing left and z-axis vertical, RMS, mean, and correlation coefficient features were extracted from x-, y- and z-components and magnitudes of the accelerations, angular velocities and angular accelerations. As a classifier, a decision tree based on the C4.5 algorithm was developed using Weka (Waikato Environment for Knowledge Analysis). Results and conclusions After testing the algorithm with 10-fold cross-validation using 31 walking trials (involving 527 sensors), 514 sensors were correctly classified (97.5%). When a decision tree for a lower body plus trunk configuration (8 inertial sensors) was trained and tested using 10-fold cross-validation, 100% of the sensors were correctly identified. This decision tree was also tested on walking trials of 7 patients (17 walking trials) after anterior cruciate ligament reconstruction, which also resulted in 100% correct identification, thus illustrating the robustness of the method. PMID:23517757

  7. Automatic identification of inertial sensor placement on human body segments during walking.

    PubMed

    Weenk, Dirk; van Beijnum, Bert-Jan F; Baten, Chris T M; Hermens, Hermie J; Veltink, Peter H

    2013-03-21

    Current inertial motion capture systems are rarely used in biomedical applications. The attachment and connection of the sensors with cables is often a complex and time consuming task. Moreover, it is prone to errors, because each sensor has to be attached to a predefined body segment. By using wireless inertial sensors and automatic identification of their positions on the human body, the complexity of the set-up can be reduced and incorrect attachments are avoided. We present a novel method for the automatic identification of inertial sensors on human body segments during walking. This method allows the user to place (wireless) inertial sensors on arbitrary body segments. Next, the user walks for just a few seconds and the segment to which each sensor is attached is identified automatically. Walking data was recorded from ten healthy subjects using an Xsens MVN Biomech system with full-body configuration (17 inertial sensors). Subjects were asked to walk for about 6 seconds at normal walking speed (about 5 km/h). After rotating the sensor data to a global coordinate frame with x-axis in walking direction, y-axis pointing left and z-axis vertical, RMS, mean, and correlation coefficient features were extracted from x-, y- and z-components and magnitudes of the accelerations, angular velocities and angular accelerations. As a classifier, a decision tree based on the C4.5 algorithm was developed using Weka (Waikato Environment for Knowledge Analysis). After testing the algorithm with 10-fold cross-validation using 31 walking trials (involving 527 sensors), 514 sensors were correctly classified (97.5%). When a decision tree for a lower body plus trunk configuration (8 inertial sensors) was trained and tested using 10-fold cross-validation, 100% of the sensors were correctly identified. This decision tree was also tested on walking trials of 7 patients (17 walking trials) after anterior cruciate ligament reconstruction, which also resulted in 100% correct identification, thus illustrating the robustness of the method.
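    As a rough sketch of the classification step, the snippet below computes the RMS, mean, and correlation-coefficient features described above for one sensor's acceleration signal and feeds them to a decision tree. scikit-learn's CART tree stands in for Weka's C4.5, and the full feature set (which also covers angular velocities and angular accelerations) is truncated for brevity.

```python
# Sketch: per-sensor features -> decision tree segment classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def segment_features(acc):
    """Features from one sensor's acceleration, shape (n_samples, 3),
    already rotated to the walking frame (x forward, y left, z up).
    The study also used angular velocities/accelerations (omitted here)."""
    mag = np.linalg.norm(acc, axis=1)
    sig = np.column_stack([acc, mag])
    rms = np.sqrt(np.mean(sig ** 2, axis=0))
    mean = np.mean(sig, axis=0)
    corr = np.corrcoef(acc.T)[np.triu_indices(3, k=1)]  # xy, xz, yz pairs
    return np.concatenate([rms, mean, corr])

# X: one feature row per sensor recording; y: body-segment labels.
# clf = DecisionTreeClassifier().fit(X_train, y_train)
# predicted_segments = clf.predict(X_test)
```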

  8. autokonf - A Configuration Script Generator Implemented in Perl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reus, J F

    This paper discusses configuration scripts in general and the scripting language issues involved. A brief description of GNU autoconf is provided along with a contrasting overview of autokonf, a configuration script generator implemented in Perl, whose macros are implemented in Perl, generating a configuration script in Perl. It is very portable, easily extensible, and readily mastered.

  9. Flow Pattern Identification of Horizontal Two-Phase Refrigerant Flow Using Neural Networks

    DTIC Science & Technology

    2015-12-31

    AFRL-RQ-WP-TP-2016-0079, journal article postprint (01 October 2013 – 22 June 2015). Neural networks were used to automatically identify two-phase flow patterns for refrigerant R-134a flowing in a horizontal tube. In laboratory experiments…

  10. Robotic Spectroscopy at the Dark Sky Observatory

    NASA Astrophysics Data System (ADS)

    Rosenberg, Daniel E.; Gray, Richard O.; Mashburn, Jonathan; Swenson, Aaron W.; McGahee, Courtney E.; Briley, Michael M.

    2018-06-01

    Spectroscopic observations using the classification-resolution Gray-Miller spectrograph attached to the Dark Sky Observatory 32-inch telescope (Appalachian State University, North Carolina) have been automated with a robotic script called the “Robotic Spectroscopist” (RS). RS runs autonomously during the night and controls all operations related to spectroscopic observing. At the heart of RS are a number of algorithms that first select and center the target star in the field of an imaging camera and then on the spectrograph slit. RS monitors the observatory weather station, suspends operations and closes the dome when weather conditions warrant, and reopens and resumes observations when the weather improves. RS selects targets from a list using a queue-observing protocol based on observer-assigned priorities, but also uses target-selection criteria based on weather conditions, especially seeing. At the end of the night RS transfers the data files to the main campus, where they are reduced with an automatic pipeline. Our experience has shown that RS is more efficient and consistent than a human observer, and produces data sets that are ideal for automatic reduction. RS should be adaptable for use at other similar observatories, and so we are making the code freely available to the astronomical community.

  11. Software development for a gamma-ray burst rapid-response observatory in the US Virgin Islands.

    NASA Astrophysics Data System (ADS)

    Davis, K. A.; Giblin, T. W.; Neff, J. E.; Hakkila, J.; Hartmann, D.

    2004-12-01

    The site is situated near the crest of Crown Mountain on the island of St. Thomas in the US Virgin Islands. The observing site is strategically located at 65° W longitude, making it the easternmost GRB-dedicated observing site in the western hemisphere. The observatory has a 0.5 m robotic telescope and a Marconi 4240 2048 by 2048 CCD with BVRI filters. The field of view is identical to that of the XRT onboard Swift, 19 by 19 arc minutes. The telescope is operated through the Talon telescope control software. The observatory is notified of a burst trigger through the GRB Coordinates Network (GCN). This GCN notification is received through a socket connection to the control computer on site. A Perl script passes this information to the Talon software, which automatically interrupts concurrent observations and inserts a new GRB observing schedule. Once the observations are made, the resulting images are analyzed in IRAF. Source extraction is necessary to identify known sources and the optical transient. The system is being calibrated for automatic GRB response and is expected to be ready to follow up Swift observations. This work has been supported by NSF and NASA-EPSCoR.
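    The trigger path described above (a GCN notice arriving over a socket, then handed to the scheduler) can be sketched generically as below. The port number, one-notice-per-connection framing, and the handler are illustrative assumptions, not the site's actual protocol or its Perl implementation.

```python
# Generic sketch of a socket listener that hands trigger notices to a
# scheduler callback (packet format and framing are assumed).
import socket

def listen_for_triggers(host="0.0.0.0", port=5269, handler=print):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(4096)  # one notice per connection (assumed)
                if data:
                    # e.g., parse coordinates and insert a new observing
                    # schedule entry for the telescope control software
                    handler(data.decode(errors="replace"))
```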

  12. [OISO, automatic treatment of patients management in oncogenetics].

    PubMed

    Guien, Céline; Fabre, Aurélie; Lagarde, Arnaud; Salgado, David; Gensollen-Thiriez, Catherine; Zattara, Hélène; Beroud, Christophe; Olschwang, Sylviane

    Oncogenetics is a long-term process that requires a close relationship between patients and medical teams, and good familial links that allow lifetime follow-up. Numerous documents are exchanged within the medical team, which has to interact frequently. We present here a new tool designed specifically for this management. The tool was developed according to a model-view-controller approach on the relational database system PostgreSQL 9.3. The web site uses the PHP 5.3, HTML5 and CSS3 languages, complemented with JavaScript and jQuery-AJAX functions and two additional modules, FPDF and PHPMailer. The tool supports multiple interactions, clinical data management, mailing and emailing, and follow-up planning. Queries can track all patients and their schedules automatically, send information to a large number of patients or physicians, and report activity. The tool was designed for oncogenetics and adapted to its different aspects. The CNIL delivered an authorization for its use. Secured web access allows management at a regional level. Its simple design lets it evolve with the constant updates in the genetic and clinical management of patients. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  13. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting

    PubMed Central

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS, GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations. PMID:29326579

  14. A GIS-based automated procedure for landslide susceptibility mapping by the Conditional Analysis method: the Baganza valley case study (Italian Northern Apennines)

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2006-08-01

    Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called “Conditional Analysis method” holds a special place for its conceptual simplicity. In fact, in this method landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected with the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure allows the user to assess the reliability of the resulting model, while the simple mean deviation of the density values in the factor-class combinations helps to evaluate the goodness of the landslide density distribution. The procedure was applied to a relatively small basin (167 km2) in the Italian Northern Apennines, considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation, and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
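    Since the Conditional Analysis method reduces to computing landslide density for each combination of instability-factor classes, a compact pandas sketch can convey the core step. The column names and the quantile-based assignment into five susceptibility classes are assumptions for illustration; the original tool is a GRASS GIS shell script.

```python
# Sketch of the Conditional Analysis step on rasters flattened to one
# row per grid cell (column names hypothetical).
import pandas as pd

def susceptibility_by_density(cells, factors, n_classes=5):
    """cells: DataFrame with one row per cell, a boolean 'landslide'
    column, and one categorical column per instability factor."""
    # Landslide density for each factor-class combination.
    dens = (cells.groupby(factors)["landslide"].mean()
                 .rename("density").reset_index())
    # Rank the combinations into n_classes susceptibility classes.
    dens["susceptibility"] = pd.qcut(dens["density"].rank(method="first"),
                                     n_classes,
                                     labels=range(1, n_classes + 1))
    return cells.merge(dens, on=factors, how="left")
```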

  15. A generic template for automated bioanalytical ligand-binding assays using modular robotic scripts in support of discovery biotherapeutic programs.

    PubMed

    Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J

    2013-07-01

    Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed on Tecan Freedom EVO liquid handling software to facilitate the automated sample preparation and LBA procedure: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of automation scripts allowed the users to assemble an automated assay with minimal script modification. The application of the template was demonstrated in three LBAs to support discovery biotherapeutic programs. The results demonstrated that the modular scripts provided the flexibility in adapting to various LBA formats and the significant time saving in script writing and scientist training. Data generated by the automated process were comparable to those by manual process while the bioanalytical productivity was significantly improved using the modular robotic scripts.

  16. Toward a Script Theory of Guidance in Computer-Supported Collaborative Learning

    PubMed Central

    Fischer, Frank; Kollar, Ingo; Stegmann, Karsten; Wecker, Christof

    2013-01-01

    This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its 4 types of components of internal and external scripts (play, scene, role, and scriptlet) and 7 principles, this theory addresses the question of how CSCL practices are shaped by dynamically reconfigured internal collaboration scripts of the participating learners. Furthermore, it explains how internal collaboration scripts develop through participation in CSCL practices. It emphasizes the importance of active application of subject matter knowledge in CSCL practices, and it prioritizes transactive over nontransactive forms of knowledge application in order to facilitate learning. Further, the theory explains how external collaboration scripts modify CSCL practices and how they influence the development of internal collaboration scripts. The principles specify an optimal scaffolding level for external collaboration scripts and allow for the formulation of hypotheses about the fading of external collaboration scripts. Finally, the article points toward conceptual challenges and future research questions. PMID:23378679

  17. SU-F-T-458: Tracking Trends of TG-142 Parameters Via Analysis of Data Recorded by 2D Chamber Array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrian, A; Kabat, C; Defoor, D

    Purpose: With the increasing QA demands on medical physicists in clinical radiation oncology, the need for an effective method of tracking clinical data has become paramount. A tool was produced which scans through data automatically recorded by a 2D chamber array and extracts relevant information recommended by TG-142. Using this extracted information, a timely and comprehensive analysis of QA parameters can be easily performed, enabling efficient monthly checks on multiple linear accelerators simultaneously. Methods: A PTW STARCHECK chamber array was used to record several months of beam outputs from two Varian 2100 series linear accelerators and a Varian Novalis Tx. In conjunction with the chamber array, a beam quality phantom was used simultaneously to determine beam quality. A minimalist GUI was created in MatLab that allows a user to set the file path of the data for each modality to be analyzed. These file paths are recorded to a MatLab structure and then subsequently accessed by a script written in Python (version 3.5.1), which extracts the values required to perform monthly checks as outlined by recommendations from TG-142. The script incorporates calculations to determine whether the values recorded by the chamber array fall within an acceptable threshold. Results: Values obtained by the script are written to a spreadsheet where results can be easily viewed, annotated with a “pass” or “fail”, and saved for further analysis. In addition to creating a new scheme for reviewing monthly checks, this application allows data to be stored succinctly for follow-up analysis. Conclusion: By utilizing this tool, parameters recommended by TG-142 for multiple linear accelerators can be rapidly obtained and analyzed for evaluation of monthly checks.
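    A hedged sketch of the analysis step is given below: measured outputs are compared against a TG-142-style constancy tolerance and pass/fail rows are written to a spreadsheet. The file layout, the dictionary inputs, and the 2% tolerance are illustrative assumptions rather than the script's actual implementation.

```python
# Sketch: tolerance check on recorded outputs, written to a CSV file.
import csv

TOLERANCE = 0.02  # e.g., a 2% output-constancy tolerance (assumed)

def check_outputs(measurements, baselines, out_path="monthly_qa.csv"):
    """measurements/baselines: dicts mapping (linac, energy) -> output."""
    with open(out_path, "w", newline="") as fh:
        w = csv.writer(fh)
        w.writerow(["linac", "energy", "measured", "baseline",
                    "deviation", "result"])
        for key, measured in measurements.items():
            base = baselines[key]
            dev = (measured - base) / base
            result = "pass" if abs(dev) <= TOLERANCE else "fail"
            w.writerow([*key, measured, base, f"{dev:+.3%}", result])
```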

  18. The Next Generation of Ground Operations Command and Control; Scripting in C Sharp and Visual Basic

    NASA Technical Reports Server (NTRS)

    Ritter, George; Pedoto, Ramon

    2010-01-01

    This slide presentation reviews the use of scripting languages in Ground Operations Command and Control. It describes the use of scripting languages in a historical context and the advantages and disadvantages of scripts. It then describes the Enhanced and Redesigned Scripting (ERS) language, which was designed to combine the graphical and IDE richness of a programming environment with the utility of a scripting language. ERS uses the Microsoft Visual Studio programming environment and offers custom controls that enable an ERS developer to extend the Visual Basic and C sharp language interface with the Payload Operations Integration Center (POIC) telemetry and command system.

  19. Violent Attacks and Damaged Victims: An Exploration of the Rape Scripts of European American and African American U.S. College Women.

    PubMed

    Littleton, Heather L; Dodd, Julia C

    2016-02-25

    Scripts are influential in shaping sexual behaviors. Prior studies have examined the influence of individuals' rape scripts. However, these scripts have not been evaluated among diverse groups. The current study examined the rape scripts of African American (n = 72) and European American (n = 99) college women. Results supported three rape scripts, the "real rape," the "party rape," and the mismatched-intentions rape, which were equally common. However, there were some differences, with African Americans' narratives more often including active victim resistance and less often containing victim vulnerability themes. Societal and cultural influences on rape scripts are discussed. © The Author(s) 2016.

  20. Novel Technology for Treating Individuals with Aphasia and Concomitant Cognitive Deficits

    PubMed Central

    Cherney, Leora R.; Halper, Anita S.

    2009-01-01

    Purpose This article describes three individuals with aphasia and concomitant cognitive deficits who used state-of-the-art computer software for training conversational scripts. Method Participants were assessed before and after 9 weeks of a computer script training program. For each participant, three individualized scripts were developed, recorded on the software, and practiced sequentially at home. Weekly meetings with the speech-language pathologist occurred to monitor practice and assess progress. Baseline and posttreatment scripts were audiotaped, transcribed, and compared to the target scripts for content, grammatical productivity, and rate of production of script-related words. Interviews were conducted at the conclusion of treatment. Results There was great variability in improvements across scripts, with two participants improving on two of their three scripts in measures of content, grammatical productivity, and rate of production of script-related words. One participant gained more than 5 points on the Aphasia Quotient of the Western Aphasia Battery. Five positive themes were consistently identified from exit interviews: increased verbal communication, improvements in other modalities and situations, communication changes noticed by others, increased confidence, and satisfaction with the software. Conclusion Computer-based script training potentially may be an effective intervention for persons with chronic aphasia and concomitant cognitive deficits. PMID:19158062

  1. Dominant heterosexual sexual scripts in emerging adulthood: conceptualization and measurement.

    PubMed

    Sakaluk, John K; Todd, Leah M; Milhausen, Robin; Lachowsky, Nathan J

    2014-01-01

    Sexual script research burgeoned following Simon and Gagnon's (1969, 1986) groundbreaking work. Empirical measurement of sexual script adherence has been limited, however, as no measures exist that have undergone rigorous development and validation. We conducted three studies to examine current dominant sexual scripts of heterosexual adults and to develop a measure of endorsement of these scripts. In Study 1, we conducted three focus groups of men (n = 19) and four of women (n = 20) to discuss the current scripts governing sexual behavior. Results supported scripts for sex drive, physical and emotional sex, sexual performance, initiation and gatekeeping, and evaluation of sexual others. In Study 2, we used these qualitative findings to develop a measure of script endorsement, the Sexual Script Scale. Factor analysis of data from 721 participants revealed six interrelated factors demonstrating initial construct validity. In Study 3, confirmatory factor analysis of a separate sample of 289 participants supported the model from Study 2, and evidence of factorial invariance and test-retest reliability was obtained. This article presents the results of these studies, documenting the process of scale development from formative research through confirmatory testing, and suggests future directions for the continued development of sexual scripting theory.

  2. Gold Digger or Video Girl: the salience of an emerging hip-hop sexual script.

    PubMed

    Ross, Jasmine N; Coleman, Nicole M

    2011-02-01

    Concerns have been expressed in common discourse and the scholarly literature about the negative influence of Hip-Hop on its young listeners' ideas about sex and sexuality. Most of the scholarly literature has focused on the impact of this urban, Black media on young African American girls' sexual self-concept and behaviours. In response to this discourse, Stephens and Phillips (2003) proposed a Hip-Hop sexual scripting model that theorises about specific sexual scripts for young African American women. Their model includes eight different sexual scripts, including the Gold Digger script. The present study proposes a ninth emerging script, the Video Girl. Participants were 18 female African American college students, aged 18 to 30 years, from a large urban public university in the Southwest USA. Using Q-methodology, the present study found support for the existence of a Video Girl script. In addition, the data indicate that this script is distinct from but closely related to Stephens and Phillips' Gold Digger script. These findings support their theory by suggesting that Hip-Hop sexual scripts are salient and hold real meaning for this sample.

  3. Shallow and deep orthographies in Hebrew: the role of vowelization in reading development for unvowelized scripts.

    PubMed

    Schiff, Rachel

    2012-12-01

    The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts, whereas among fourth and sixth graders reading of unvowelized scripts developed to a greater degree than the reading of vowelized scripts. An analysis of the mediation effect for children's mastery of vowelized reading speed and accuracy on their mastery of unvowelized reading speed and comprehension revealed that in second grade, reading accuracy of vowelized words mediated the reading speed and comprehension of unvowelized scripts. In the fourth grade, accuracy in reading both vowelized and unvowelized words mediated the reading speed and comprehension of unvowelized scripts. By sixth grade, accuracy in reading vowelized words offered no mediating effect, either on reading speed or comprehension of unvowelized scripts. The current outcomes thus suggest that young Hebrew readers undergo a scaffolding process, where vowelization serves as the foundation for building initial reading abilities and is essential for successful and meaningful decoding of unvowelized scripts.

  4. Internet-based support for the production of holographic stereograms

    NASA Astrophysics Data System (ADS)

    Gustafsson, Jonny

    1998-03-01

    Holographic hard-copy techniques suffer from a lack of availability for ordinary users of computer graphics. The production of holograms usually requires special skills as well as expensive equipment, which means that the direct production cost will be high for an ordinary user with little or no knowledge of holography. Here it is shown how a system may be created in which users of computer graphics can carry out all communication with a holography studio through a Java-based web browser. This system helps the user understand the technique of holographic stereograms, make decisions about angles, views, lighting, etc., previsualize the end result, and automatically submit the 3D data to the producer of the hologram. A prototype system has been built which uses internal scripting in VRML.

  5. Automated crystallographic system for high-throughput protein structure determination.

    PubMed

    Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F

    2003-07-01

    High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.

  6. CCP4i2: the new graphical user interface to the CCP4 program suite

    PubMed Central

    Potterton, Liz; Ballard, Charles; Dodson, Eleanor; Evans, Phil R.; Keegan, Ronan; Krissinel, Eugene; Stevenson, Kyle; Lebedev, Andrey; McNicholas, Stuart J.; Noble, Martin; Pannu, Navraj S.; Roth, Christian; Sheldrick, George; Skubak, Pavol; Uski, Ville

    2018-01-01

    The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography groups together many programs and libraries that, by means of well established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists to navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures. PMID:29533233

  7. [The maintenance of automatic analysers and associated documentation].

    PubMed

    Adjidé, V; Fournier, P; Vassault, A

    2010-12-01

    The maintenance of automatic analysers and the associated documentation, which form part of the requirements of the ISO 15189 standard as well as of French regulations, have to be defined in the laboratory's policy. The management of periodic maintenance and its documentation shall be implemented and fulfilled. The organisation of corrective maintenance has to be managed so as to avoid interrupting the work of the laboratory. The recommendations concern the identification of materials, including automatic analysers, the environmental conditions to take into account, the documentation provided by the manufacturer, and the documents prepared by the laboratory, including maintenance procedures.

  8. Automatic integration of data from dissimilar sensors

    NASA Astrophysics Data System (ADS)

    Citrin, W. I.; Proue, R. W.; Thomas, J. W.

    The present investigation is concerned with the automatic integration of radar and electronic support measures (ESM) sensor data, and with the development of a method for the automatic integration of identification friend or foe (IFF) and radar sensor data. On the basis of the two projects considered, significant advances have been made in the area of sensor data integration. It is pointed out that the log-likelihood approach to sensor data correlation is appropriate for both similar and dissimilar sensor data. Attention is given to the real-time integration of radar and ESM sensor data, and to a radar/ESM correlation simulation program.

  9. An automatic and efficient pipeline for disease gene identification through utilizing family-based sequencing data.

    PubMed

    Song, Dandan; Li, Ning; Liao, Lejian

    2015-01-01

    Due to the generation of enormous amounts of data at lower costs and in shorter times, whole-exome sequencing technologies provide dramatic opportunities for identifying disease genes implicated in Mendelian disorders. Since thousands of genomic variants can be sequenced in each exome, it is challenging to filter pathogenic variants in protein-coding regions while keeping the number of missed true variants low. Therefore, an automatic and efficient pipeline for finding disease variants in Mendelian disorders was designed by exploiting a combination of variant-filtering steps to analyze family-based exome sequencing data. Recent studies on the Freeman-Sheldon disease are revisited, showing that the proposed method outperforms other existing candidate-gene identification methods.
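    The abstract does not enumerate the filters, so the snippet below sketches one representative step under stated assumptions: screening trio genotypes for rare, protein-coding variants consistent with an autosomal recessive model. The field names and thresholds are hypothetical and the published pipeline's exact filters are not reproduced.

```python
# Illustrative family-based filtering step for a recessive model in a trio,
# assuming genotypes were already parsed from a VCF into simple records.
def candidate_recessive(variants, max_pop_freq=0.01):
    """variants: iterable of dicts with 'child', 'mother', 'father'
    genotypes ('0/0', '0/1', '1/1'), plus 'pop_freq' and 'coding' fields."""
    hits = []
    for v in variants:
        if not v["coding"] or v["pop_freq"] > max_pop_freq:
            continue  # keep rare, protein-coding variants only
        if (v["child"] == "1/1" and v["mother"] == "0/1"
                and v["father"] == "0/1"):
            hits.append(v)  # homozygous in proband, heterozygous parents
    return hits
```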

  10. Contribution to a Theory of CSCL Scripts: Taking into Account the Appropriation of Scripts by Learners

    ERIC Educational Resources Information Center

    Tchounikine, Pierre

    2016-01-01

    This paper presents a contribution to the development of a theory of CSCL scripts, i.e., an understanding of what happens when learners engage in such scripts. It builds on the Script Theory of Guidance (SToG) recently proposed by (Fischer et al. in "Educational Psychologist," 48(1), 56-66, 2013). We argue that, when engaged in a…

  11. 47 CFR 15.251 - Operation within the bands 2.9-3.26 GHz, 3.267-3.332 GHz, 3.339-3.3458 GHz, and 3.358-3.6 GHz.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., and 3.358-3.6 GHz. (a) Operation under the provisions of this section is limited to automatic vehicle identification systems (AVIS) which use swept frequency techniques for the purpose of automatically identifying transportation vehicles. (b) The field strength anywhere within the frequency range swept by the signal shall not...

  12. 47 CFR 15.251 - Operation within the bands 2.9-3.26 GHz, 3.267-3.332 GHz, 3.339-3.3458 GHz, and 3.358-3.6 GHz.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., and 3.358-3.6 GHz. (a) Operation under the provisions of this section is limited to automatic vehicle identification systems (AVIS) which use swept frequency techniques for the purpose of automatically identifying transportation vehicles. (b) The field strength anywhere within the frequency range swept by the signal shall not...

  13. 47 CFR 15.251 - Operation within the bands 2.9-3.26 GHz, 3.267-3.332 GHz, 3.339-3.3458 GHz, and 3.358-3.6 GHz.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., and 3.358-3.6 GHz. (a) Operation under the provisions of this section is limited to automatic vehicle identification systems (AVIS) which use swept frequency techniques for the purpose of automatically identifying transportation vehicles. (b) The field strength anywhere within the frequency range swept by the signal shall not...

  14. 47 CFR 15.251 - Operation within the bands 2.9-3.26 GHz, 3.267-3.332 GHz, 3.339-3.3458 GHz, and 3.358-3.6 GHz.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., and 3.358-3.6 GHz. (a) Operation under the provisions of this section is limited to automatic vehicle identification systems (AVIS) which use swept frequency techniques for the purpose of automatically identifying transportation vehicles. (b) The field strength anywhere within the frequency range swept by the signal shall not...

  15. Director, Operational Test and Evaluation FY 2004 Annual Report

    DTIC Science & Technology

    2004-01-01

    …detection, identification, and sampling capability for both fixed-site and mobile operations. The system must automatically detect and identify up to ten… SYSTEM DESCRIPTION AND MISSION: The Services envision JCAD as a hand-held device that automatically detects, identifies, and…

  16. Evaluating current automatic de-identification methods with Veteran's health administration clinical documents.

    PubMed

    Ferrández, Oscar; South, Brett R; Shen, Shuying; Friedlin, F Jeffrey; Samore, Matthew H; Meystre, Stéphane M

    2012-07-27

    The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act "Safe Harbor" method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes, and where new methods are needed to improve performance. We installed and evaluated five text de-identification systems "out-of-the-box" using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora and evaluated with our VHA corpus, and also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique 'PHI' category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F(2)-measure. Overall, systems based on rules and pattern matching achieved better recall, while precision was always better with systems based on machine learning approaches. The highest "out-of-the-box" F(2)-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross-validation experiment allowed for an increase of the F(2)-measure to 79% with partial matches. The "out-of-the-box" evaluation of text de-identification systems provided us with compelling insight about the best methods for de-identification of VHA clinical documents. The error analysis demonstrated an important need for customization to PHI formats specific to VHA documents. This study informed the planning and development of a "best-of-breed" automatic de-identification application for VHA clinical text.
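    For reference, the F(2)-measure reported above is the beta = 2 case of the F-beta score, which weights recall more heavily than precision; a minimal implementation from match counts:

```python
# F_beta from true positives, false positives, and false negatives;
# beta = 2 favors recall, matching the F(2)-measure used above.
def f_beta(tp, fp, fn, beta=2.0):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative counts only (not the study's data):
# f_beta(tp=780, fp=40, fn=220) -> about 0.81
```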

  17. Increasing play-based commenting in children with autism spectrum disorder using a novel script-frame procedure.

    PubMed

    Groskreutz, Mark P; Peters, Amy; Groskreutz, Nicole C; Higbee, Thomas S

    2015-01-01

    Children with developmental disabilities may engage in less frequent and more repetitious language than peers with typical development. Scripts have been used to increase communication by teaching one or more specific statements and then fading the scripts. In the current study, preschoolers with developmental disabilities experienced a novel script-frame protocol and learned to make play-related comments about toys. After the script-frame protocol, commenting occurred in the absence of scripts, with untrained play activities, and included untrained comments. © Society for the Experimental Analysis of Behavior.

  18. TH-D-BRB-04: Pinnacle Scripting: Improving Efficiency While Maintaining Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, J.

    2016-06-15

    Scripting capabilities and application programming interfaces (APIs) are becoming commonly available in modern treatment planning systems. These links to the treatment planning system (TPS) allow users to read data from the TPS, and in some cases use TPS functionality and write data back to the TPS. Such tools are powerful extensions, allowing automation of routine clinical tasks and supporting research, particularly research involving repetitive tasks on large patient populations. The data and functionality exposed by scripting/API capabilities is vendor dependent, as are the languages used by script/API engines, such as the Microsoft .NET framework or Python. Scripts deployed in amore » clinical environment must be commissioned and validated like any other software tool. This session will provide an overview of scripting applications and a discussion of best practices, followed by a practical introduction to the scripting capabilities of three commercial treatment planning systems. Learning Objectives: Understand the scripting capabilities available in several treatment planning systems Learn how to get started using scripting capabilities Understand the best practices for safe script deployment in a clinical environment R. Popple, Varian Medical Systems has provided research support unrelated to the topic of this session.R. Cardan, Varian Medical Systems for grant research, product evaluation, and teaching honorarium.« less

  19. Methods for inducing alcohol craving in individuals with comorbid alcohol dependence and posttraumatic stress disorder: Behavioral and physiological outcomes

    PubMed Central

    Kwako, L. E.; Schwandt, M. L.; Sells, J. R.; Ramchandani, V. A.; George, D. T.; Sinha, R.; Heilig, M.

    2014-01-01

    Rationale: Alcohol addiction is a chronic relapsing disorder that presents a substantial public health problem and is frequently comorbid with posttraumatic stress disorder (PTSD). Craving for alcohol is a predictor of relapse to alcohol use and is triggered by cues associated with alcohol and trauma. Identification of reliable and valid laboratory methods for craving induction is an important objective for alcoholism and PTSD research. Objectives: The present study compares two methods for the induction of craving via stress and alcohol cues in individuals with comorbid alcohol dependence (AD) and PTSD: the combined Trier Social Stress Test and cue reactivity paradigm (Trier/CR), and a guided imagery (Scripts) paradigm. Outcomes include self-reported measures of craving, stress, and anxiety as well as endocrine measures. Methods: Subjects were 52 individuals diagnosed with comorbid AD and PTSD seeking treatment at the NIAAA inpatient research facility. They participated in a four-week inpatient study of the efficacy of an NK1 antagonist for treating comorbid AD and PTSD, which included the two challenge procedures. Results: Both the Trier/CR and Scripts induced craving for alcohol, as well as elevated levels of subjective distress and anxiety. The Trier/CR yielded significant increases in ACTH and cortisol, while the Scripts did not. Conclusions: Both paradigms are effective laboratory means of inducing craving for alcohol. Further research is warranted to better understand the mechanisms behind craving induced by stress vs. alcohol cues, as well as to understand the impact of comorbid PTSD and AD on craving. PMID:24806358

  20. Identification of the chemical constituents of Chinese medicine Yi-Xin-Shu capsule by molecular feature orientated precursor ion selection and tandem mass spectrometry structure elucidation.

    PubMed

    Wang, Hong-ping; Chen, Chang; Liu, Yan; Yang, Hong-Jun; Wu, Hong-Wei; Xiao, Hong-Bin

    2015-11-01

    The incomplete identification of the chemical components of traditional Chinese medicinal formulas has been one of the bottlenecks in the modernization of traditional Chinese medicine. Tandem mass spectrometry has been widely used for the identification of chemical substances. Current automatic tandem mass spectrometry acquisition, in which precursor ions are selected according to their signal intensity, encounters a drawback when samples contain many overlapping signals: compounds present in minor or trace amounts cannot be identified because most of the tandem mass spectrometry information is lost. Herein, a molecular-feature-orientated precursor ion selection and tandem mass spectrometry structure elucidation method for the analysis of complex Chinese medicine chemical constituents was developed. The precursor ions were selected according to the two-dimensional characteristics of retention time and mass-to-charge ratio ranges of herbal compounds, so that all precursor ions from herbal compounds were included and more minor chemical constituents in Chinese medicine were identified. Compared to conventional automatic tandem mass spectrometry setups, the approach is novel and overcomes this drawback in chemical substance identification. As an example, 276 compounds from the Chinese medicine Yi-Xin-Shu capsule were identified. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Specifying Computer-Supported Collaboration Scripts

    ERIC Educational Resources Information Center

    Kobbe, Lars; Weinberger, Armin; Dillenbourg, Pierre; Harrer, Andreas; Hamalainen, Raija; Hakkinen, Paivi; Fischer, Frank

    2007-01-01

    Collaboration scripts facilitate social and cognitive processes of collaborative learning by shaping the way learners interact with each other. Computer-supported collaboration scripts generally suffer from the problem of being restrained to a specific learning platform. A standardization of collaboration scripts first requires a specification of…

  2. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP method is validated, on the one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross-correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
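    A generic analogue of the fitting step can be sketched as below: a parametric bump (here a Gaussian, for illustration) is fitted to the ABR wave with scipy, yielding the amplitude, latency, and width parameters mentioned above. The published FPP method uses its own synthesized peak shapes, which are not reproduced here.

```python
# Sketch: fit a parametric peak near a latency guess and report its
# amplitude, latency, and width (Gaussian shape is an assumption).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, lat, width, base):
    return base + amp * np.exp(-((t - lat) ** 2) / (2 * width ** 2))

def fit_peak(t_ms, signal, lat_guess):
    """t_ms: time axis in ms; signal: averaged ABR waveform."""
    p0 = [signal.max() - signal.mean(), lat_guess, 0.5, signal.mean()]
    popt, _ = curve_fit(gaussian, t_ms, signal, p0=p0)
    amp, lat, width, _ = popt
    return {"amplitude": amp, "latency_ms": lat, "width_ms": abs(width)}
```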

  3. Crop Identification Technology Assessment for Remote Sensing (CITARS). Volume 1: Task design plan

    NASA Technical Reports Server (NTRS)

    Hall, F. G.; Bizzell, R. M.

    1975-01-01

    A plan for quantifying the crop identification performances resulting from the remote identification of corn, soybeans, and wheat is described. Steps for the conversion of multispectral data tapes to classification results are specified. The crop identification performances resulting from the use of several basic types of automatic data processing techniques are compared and examined for significant differences. The techniques are evaluated also for changes in geographic location, time of the year, management practices, and other physical factors. The results of the Crop Identification Technology Assessment for Remote Sensing task will be applied extensively in the Large Area Crop Inventory Experiment.

  4. Collaboration Scripts--A Conceptual Analysis

    ERIC Educational Resources Information Center

    Kollar, Ingo; Fischer, Frank; Hesse, Friedrich W.

    2006-01-01

    This article presents a conceptual analysis of collaboration scripts used in face-to-face and computer-mediated collaborative learning. Collaboration scripts are scaffolds that aim to improve collaboration through structuring the interactive processes between two or more learning partners. Collaboration scripts consist of at least five components:…

  5. The role of scripts in psychological maladjustment and psychotherapy.

    PubMed

    Demorest, Amy P

    2013-12-01

    This article considers the value of script theory for understanding psychological maladjustment and psychotherapy. Scripts are implicit expectations that individuals develop to understand and deal with emotionally significant life experiences. Script theory provides a way to understand the complex patterns of thinking, feeling, and behavior that characterize personal consistency, as well as a way to address personality development and change. As such it is a vital model for understanding both personality and clinical phenomena. The article begins by describing script theory and noting similar models in personality and clinical psychology. It then outlines both idiographic and nomothetic methods for assessing scripts and discusses the strengths and weaknesses of each. A survey of the author's program of research follows, using a nomothetic method to examine the role of interpersonal scripts in psychological maladjustment and psychotherapy. The article concludes by presenting a promising method for future research synthesizing idiographic and nomothetic approaches and raising important questions for future research on the role of scripts in psychological maladjustment and psychotherapy. © 2012 Wiley Periodicals, Inc.

  6. ScriptingRT: A Software Library for Collecting Response Latencies in Online Studies of Cognition

    PubMed Central

    Schubert, Thomas W.; Murteira, Carla; Collins, Elizabeth C.; Lopes, Diniz

    2013-01-01

    ScriptingRT is a new open source tool to collect response latencies in online studies of human cognition. ScriptingRT studies run as Flash applets in enabled browsers. ScriptingRT provides the building blocks of response latency studies, which are then combined with generic Apache Flex programming. Six studies evaluate the performance of ScriptingRT empirically. Studies 1–3 use specialized hardware to measure variance of response time measurement and stimulus presentation timing. Studies 4–6 implement a Stroop paradigm and run it both online and in the laboratory, comparing ScriptingRT to other response latency software. Altogether, the studies show that Flash programs developed in ScriptingRT show a small lag and an increased variance in response latencies. However, this did not significantly influence measured effects: The Stroop effect was reliably replicated in all studies, and the found effects did not depend on the software used. We conclude that ScriptingRT can be used to test response latency effects online. PMID:23805326

  7. Automatic topic identification of health-related messages in online health community using text classification.

    PubMed

    Lu, Yingjie

    2013-01-01

    To facilitate patient involvement in online health communities and help patients obtain the informative and emotional support they need, this paper proposes an approach for automatically identifying the topics of health-related messages in an online health community, thus assisting patients in efficiently reaching the messages most relevant to their queries. A feature-based classification framework was presented for automatic topic identification. We first collected messages related to a set of predefined topics in an online health community. We then combined three different types of features, n-gram-based features, domain-specific features and sentiment features, to build four feature sets for health-related text representation. Finally, three different text classification techniques, C4.5, Naïve Bayes and SVM, were adopted to evaluate our topic classification model. By comparing the different feature sets and classification techniques, we found that n-gram-based features, domain-specific features and sentiment features were all effective in distinguishing different types of health-related topics. In addition, feature reduction based on information gain further improved topic classification performance. Among the classification techniques, SVM significantly outperformed C4.5 and Naïve Bayes. The experimental results demonstrate that the proposed approach can identify the topics of online health-related messages efficiently.
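
    The pipeline described above maps naturally onto standard text-classification tooling. Below is a minimal sketch, assuming Python with scikit-learn and invented toy messages; mutual information is used as a stand-in for the information-gain feature selection (the two are closely related), and the message/topic examples are illustrative only.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy messages and topic labels standing in for real community posts.
        messages = ["what dose of metformin is typical for type 2 diabetes",
                    "feeling anxious and alone before my surgery next week",
                    "side effects after the second round of chemo",
                    "anyone else worried their symptoms might come back"]
        topics = ["informative", "emotional", "informative", "emotional"]

        clf = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2)),       # n-gram-based features
            SelectKBest(mutual_info_classif, k=20),    # information-gain-style reduction
            LinearSVC())                               # SVM classifier
        clf.fit(messages, topics)
        print(clf.predict(["so worried and anxious about my test results"]))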

  8. Semi-automatic forensic approach using mandibular midline lingual structures as fingerprint: a pilot study.

    PubMed

    Shaheen, E; Mowafy, B; Politis, C; Jacobs, R

    2017-12-01

    Previous research proposed the use of the mandibular midline neurovascular canal structures as a forensic fingerprint. In that observer study, an average correct identification rate of 95% was reached, which motivated the present work. The aim was to present a semi-automatic computer recognition approach to replace the observers and to validate the accuracy of this newly proposed method. Imaging data from Computed Tomography (CT) and Cone Beam Computed Tomography (CBCT) of mandibles scanned at two different moments were collected to simulate an ante-mortem (AM) and post-mortem (PM) situation, where the first scan represented the AM data and the second scan was used to simulate the PM data. Ten cases with 20 scans were used to build a classifier which relies on voxel-based matching and classifies each pair into one of two groups: "Unmatched" and "Matched". This protocol was then tested using five other scans from the database. Unpaired t-testing was applied and the accuracy of the computerized approach was determined. A significant difference was found between the "Unmatched" and "Matched" classes, with means of 0.41 and 0.86, respectively. Furthermore, the testing phase showed an accuracy of 100%. The validation of this method pushes the protocol further toward a fully automatic victim identification procedure based on the mandibular midline canal structures alone in cases with available AM and PM CBCT/CT data.
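
    As a rough illustration of the voxel-based matching idea, the sketch below scores two rigidly aligned volumes with normalized cross-correlation and thresholds the score midway between the two class means reported above (0.41 and 0.86). The similarity measure and threshold rule are assumptions for illustration, not the authors' actual classifier.

        import numpy as np

        def ncc(vol_a, vol_b):
            """Normalized cross-correlation between two aligned volumes."""
            a = (vol_a - vol_a.mean()) / vol_a.std()
            b = (vol_b - vol_b.mean()) / vol_b.std()
            return float((a * b).mean())

        def classify(score, unmatched_mean=0.41, matched_mean=0.86):
            # Split the decision boundary between the two reported class means.
            threshold = (unmatched_mean + matched_mean) / 2
            return "Matched" if score >= threshold else "Unmatched"

        rng = np.random.default_rng(0)
        am = rng.random((64, 64, 64))                 # stand-in for the AM scan
        pm = am + 0.05 * rng.random((64, 64, 64))     # simulated PM rescan of the same jaw
        print(classify(ncc(am, pm)))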

  9. Novel technology for treating individuals with aphasia and concomitant cognitive deficits.

    PubMed

    Cherney, Leora R; Halper, Anita S

    2008-01-01

    This article describes three individuals with aphasia and concomitant cognitive deficits who used state-of-the-art computer software for training conversational scripts. Participants were assessed before and after 9 weeks of a computer script training program. For each participant, three individualized scripts were developed, recorded on the software, and practiced sequentially at home. Weekly meetings with the speech-language pathologist occurred to monitor practice and assess progress. Baseline and posttreatment scripts were audiotaped, transcribed, and compared to the target scripts for content, grammatical productivity, and rate of production of script-related words. Interviews were conducted at the conclusion of treatment. There was great variability in improvements across scripts, with two participants improving on two of their three scripts in measures of content, grammatical productivity, and rate of production of script-related words. One participant gained more than 5 points on the Aphasia Quotient of the Western Aphasia Battery. Five positive themes were consistently identified from exit interviews: increased verbal communication, improvements in other modalities and situations, communication changes noticed by others, increased confidence, and satisfaction with the software. Computer-based script training potentially may be an effective intervention for persons with chronic aphasia and concomitant cognitive deficits.

  10. Gender and Ethnicity in Dating, Hanging Out, and Hooking Up: Sexual Scripts Among Hispanic and White Young Adults.

    PubMed

    Eaton, Asia A; Rose, Suzanna M; Interligi, Camille; Fernandez, Katherine; McHugh, Maureen

    2016-09-01

    We examined the scripts associated with heterosexual Hispanic and White young adults' most recent initial sexual or romantic encounter using two samples of heterosexual undergraduates: 224 Hispanic students (49% female) and 316 White students (51% female). Scripts were identified for three types of encounters: dating, hanging out, and hooking up. The three scripts had more than half of their actions in common. Items such as get to know one another, feel aroused, and engage in physical contact were present across all scripts for all participant groups. As expected, traditional gender roles were present within all scripts, but more so for dates than for hangouts and hookups. Men reported a higher presence of traditional gender roles than women across scripts and put a higher priority on the goal of physical intimacy across all scripts. Dating was the most prevalent script for all young adults, contradicting contemporary claims that "dating is dead." In terms of ethnicity, a higher proportion of Hispanic than White young adults went on dates, and a higher proportion of White students went on hookups, implying that social and contextual variables are important in understanding young adults' intimate relationships.

  11. Internal and External Scripts in Computer-Supported Collaborative Inquiry Learning

    ERIC Educational Resources Information Center

    Kollar, Ingo; Fischer, Frank; Slotta, James D.

    2007-01-01

    We investigated how differently structured external scripts interact with learners' internal scripts with respect to individual knowledge acquisition in a Web-based collaborative inquiry learning environment. Ninety students from two secondary schools participated. Two versions of an external collaboration script (high vs. low structured)…

  12. Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture

    DOEpatents

    Lassahn, Gordon D.; Lancaster, Gregory D.; Apel, William A.; Thompson, Vicki S.

    2013-01-08

    Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture are described. According to one embodiment, an image portion identification method includes accessing data regarding an image depicting a plurality of biological substrates corresponding to at least one biological sample and indicating presence of at least one biological indicator within the biological sample and, using processing circuitry, automatically identifying a portion of the image depicting one of the biological substrates but not others of the biological substrates.

  13. Associations between anxiety and love scripts.

    PubMed

    Gawda, Barbara

    2012-08-01

    Relations between trait anxiety and love scripts expressed in narratives were examined to assess how anxiety affects the perception of love. Stories about love (N = 160) written by 80 men and 80 women were analyzed. The content of the scripts was evaluated in terms of descriptions of actors, partners, expressed emotions of actor and of partner, importance of love, and the ending of the scenario. To test the differences between men and women on content of scripts and associations between trait anxiety level and frequencies of love script elements, a two-way analysis of variance was used. The main effect for sex was significant. There was an effect of trait anxiety on content of love scripts: high anxiety was associated with more frequent negative descriptions of the actor as well as more frequent negative descriptions of the partner's emotions, only in scripts written by women.

  14. Promoting interaction during sociodramatic play: teaching scripts to typical preschoolers and classmates with disabilities.

    PubMed

    Goldstein, H; Cisar, C L

    1992-01-01

    We investigated the effects of teaching sociodramatic scripts on subsequent interaction among three triads, each containing 2 typical children and 1 child with autistic characteristics. The same type and rate of teacher prompts were implemented throughout structured play observations to avoid the confounding effects of script training and teacher prompting. After learning the scripts, all children demonstrated more frequent theme-related social behavior. These improvements in social-communicative interaction were replicated with the training of three sociodramatic scripts (i.e., pet shop, carnival, magic show) according to a multiple baseline design. These effects were maintained during the training of successive scripts and when the triads were reconstituted to include new but similarly trained partners. Results provided support for the inclusion of systematic training of scripts to enhance interaction among children with and without disabilities during sociodramatic play.

  15. Automatic Keyword Identification by Artificial Neural Networks Compared to Manual Identification by Users of Filtering Systems.

    ERIC Educational Resources Information Center

    Boger, Zvi; Kuflik, Tsvi; Shoval, Peretz; Shapira, Bracha

    2001-01-01

    Discussion of information filtering (IF) and information retrieval focuses on the use of an artificial neural network (ANN) as an alternative method for both IF and term selection and compares its effectiveness to that of traditional methods. Results show that the ANN relevance prediction out-performs the prediction of an IF system. (Author/LRW)

  16. Automatic Processing and the Unitization of Two Features.

    DTIC Science & Technology

    1980-02-01

    experiment, LaBerge (1973) showed that with practice two features could be automatically unitized to form a novel character. We wish to address a...different from a search for a target which requires identification of one of the features alone. Indeed, LaBerge (1973) used a similar implicit...perception? Journal of Experimental Child Psychology, 1978, 26, 498-507. LaBerge, D. Attention and the measurement of perceptual learning. Memory and

  17. Mock Trials: Scripts for Wisconsin Lawyers and Teachers.

    ERIC Educational Resources Information Center

    Wisconsin State Dept. of Public Instruction, Madison. Div. of Instructional Services.

    The document presents scripts prepared by experienced lawyers for seven mock trials. Designed for high school or adult audiences as introductions to the American legal system, the scripts use community lawyers, judges, and law officers as well as actors. Script topics include cases concerning automobile accidents, drunken driving, homicide,…

  18. Performance Scripts Creation: Processes and Applications

    ERIC Educational Resources Information Center

    Lyons, Paul

    2006-01-01

    Purpose: Seeks to explain some of the dynamics of scripts creation as used in training, to offer some theoretical underpinning regarding the influence of script creation on behavior and performance, and to offer some examples of how script creation is applied in training activities. Design/methodology/approach: The paper explains in detail and…

  19. Script Templates: A Practical Approach to Script Training in Aphasia

    ERIC Educational Resources Information Center

    Kaye, Rosalind C.; Cherney, Leora R.

    2016-01-01

    Purpose: Script training for aphasia involves repeated practice of relevant phrases and sentences that, when mastered, can potentially be used in other communicative situations. Although an increasingly popular approach, script development can be time-consuming. We provide a detailed summary of the evidence supporting this approach. We then…

  20. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storybook and preproduction script can be prepared in visual terms; and it includes a…

  1. Entropic evidence for linguistic structure in the Indus script.

    PubMed

    Rao, Rajesh P N; Yadav, Nisha; Vahia, Mayank N; Joglekar, Hrishikesh; Adhikari, R; Mahadevan, Iravatham

    2009-05-29

    The script of the ancient Indus civilization remains undeciphered. The hypothesis that the script encodes language has recently been questioned. Here, we present evidence for the linguistic hypothesis by showing that the script's conditional entropy is closer to those of natural languages than various types of nonlinguistic systems.
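
    The measure at issue is the conditional entropy of successive sign pairs. As a minimal sketch, assuming Python and any symbol sequence as input (the English string below is arbitrary), the bigram conditional entropy H(next symbol | current symbol) can be estimated from pair counts:

        import math
        from collections import Counter

        def conditional_entropy(seq):
            """Estimate H(next symbol | current symbol) in bits from one sequence."""
            pairs = Counter(zip(seq, seq[1:]))     # bigram counts
            firsts = Counter(seq[:-1])             # counts of the conditioning symbol
            n = sum(pairs.values())
            h = 0.0
            for (a, b), count in pairs.items():
                h -= (count / n) * math.log2(count / firsts[a])
            return h

        print(conditional_entropy("the quick brown fox jumps over the lazy dog"))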

  2. MARATHI READER.

    ERIC Educational Resources Information Center

    APTE, MAHADEO L.

    The Marathi language, spoken in Bombay State, India, is written in the script traditionally known as the Devanagari script. The script is syllabic in nature; each character or letter represents a syllable rather than a consonant or a vowel alone. The Marathi alphabet is an adaptation of the Devanagari script with a few changes and innovations. A…

  3. The Latent Structure of Secure Base Script Knowledge

    ERIC Educational Resources Information Center

    Waters, Theodore E. A.; Fraley, R. Chris; Groh, Ashley M.; Steele, Ryan D.; Vaughn, Brian E.; Bost, Kelly K.; Veríssimo, Manuela; Coppola, Gabrielle; Roisman, Glenn I.

    2015-01-01

    There is increasing evidence that attachment representations abstracted from childhood experiences with primary caregivers are organized as a cognitive script describing secure base use and support (i.e., the "secure base script"). To date, however, the latent structure of secure base script knowledge has gone unexamined--this despite…

  4. Diseases of Landscape Ornamentals. Slide Script.

    ERIC Educational Resources Information Center

    Powell, Charles C.; Sydnor, T. Davis

    This slide script, part of a series of slide scripts designed for use in vocational agriculture classes, deals with recognizing and controlling diseases found on ornamental landscape plants. Included in the script are narrations for use with a total of 80 slides illustrating various foliar diseases (anthracnose, black spot, hawthorn leaf blight,…

  5. An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework.

    PubMed

    Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana

    2016-05-01

    The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
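
    Linearly weighted Cohen's kappa, the agreement statistic used above for patient risk categorization, discounts disagreements in proportion to their distance on the ordinal scale. A minimal sketch, assuming Python/NumPy, the four ordinal risk categories mentioned in the abstract, and invented rating vectors:

        import numpy as np

        def linear_weighted_kappa(r1, r2, n_cat=4):
            """Linearly weighted Cohen's kappa for ordinal categories 0..n_cat-1."""
            obs = np.zeros((n_cat, n_cat))
            for a, b in zip(r1, r2):
                obs[a, b] += 1
            obs /= obs.sum()
            idx = np.arange(n_cat)
            w = 1 - np.abs(np.subtract.outer(idx, idx)) / (n_cat - 1)  # agreement weights
            exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))           # chance agreement
            return (np.sum(w * obs) - np.sum(w * exp)) / (1 - np.sum(w * exp))

        reference = [0, 1, 2, 3, 1, 2, 0, 3]   # expert CVD risk categories (invented)
        method    = [0, 1, 2, 3, 2, 2, 0, 3]   # automatic method's categories (invented)
        print(round(linear_weighted_kappa(reference, method), 3))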

  6. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using features of the video itself to perform automatic identification and retrieval. This method involves a key technology, namely shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and improved adaptive dual threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual threshold comparison method is used to determine the abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
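
    A compact way to picture the two-stage scheme is the twin-threshold pass below. It is a simplified sketch assuming Python with NumPy/scikit-learn, per-frame feature vectors as input, and thresholds derived from the frame-difference statistics; the exact features and threshold adaptation in the paper may differ.

        import numpy as np
        from sklearn.cluster import KMeans

        def shot_boundaries(frame_features):
            """Return (abrupt_cuts, gradual_transitions) from per-frame feature vectors."""
            diffs = np.linalg.norm(np.diff(frame_features, axis=0), axis=1)
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(diffs.reshape(-1, 1))
            sig = max((0, 1), key=lambda k: diffs[labels == k].mean())  # "significant change"
            t_hi = diffs.mean() + 3 * diffs.std()    # adaptive dual thresholds
            t_lo = diffs.mean() + diffs.std()        # (one plausible choice)
            abrupt, gradual, acc, start = [], [], 0.0, None
            for i, d in enumerate(diffs):
                if labels[i] != sig or d < t_lo:     # no significant change
                    acc, start = 0.0, None
                elif d >= t_hi:                      # single large jump: abrupt cut
                    abrupt.append(i)
                    acc, start = 0.0, None
                else:                                # medium change: possible gradual transition
                    start = i if start is None else start
                    acc += d
                    if acc >= t_hi:                  # accumulated change large enough
                        gradual.append((start, i))
                        acc, start = 0.0, None
            return abrupt, gradual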

  7. An Analysis of Serial Number Tracking Automatic Identification Technology as Used in Naval Aviation Programs

    NASA Astrophysics Data System (ADS)

    Csorba, Robert

    2002-09-01

    The General Accounting Office found that the Navy, between 1996 and 1998, lost $3 billion in materiel in transit. This thesis explores the benefits and cost of automatic identification and serial number tracking technologies under consideration by the Naval Supply Systems Command and the Naval Air Systems Command. Detailed cost-savings estimates are made for each aircraft type in the Navy inventory. Project and item managers of repairable components using Serial Number Tracking were surveyed as to the value of this system. The thesis concludes that two-thirds of the in-transit losses can be avoided with implementation of effective information technology-based logistics and maintenance tracking systems. Recommendations are made for specific steps and components of such an implementation. Suggestions are made for further research.

  8. A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling.

    PubMed

    Verberne, Anthony J M

    2016-10-19

    Juxtacellular neuronal labelling is a method which allows neurophysiologists to fill physiologically-identified neurons with small positively-charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters or biosynthetic enzymes. A microcontroller-based pulser circuit and associated BASIC software script is described for incorporation into the design of a commercially-available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure.

  9. RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application.

    PubMed

    D'Antonio, Mattia; D'Onorio De Meo, Paolo; Pallocca, Matteo; Picardi, Ernesto; D'Erchia, Anna Maria; Calogero, Raffaele A; Castrignanò, Tiziana; Pesole, Graziano

    2015-01-01

    The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). The pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed in our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data according to the user's needs.

  10. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    NASA Astrophysics Data System (ADS)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While metagenomic studies benefit from the continuously increasing throughput of the Illumina (Solexa) technology, the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing remains fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and ITS (Internal Transcribed Spacer) is a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best-known and most cited ones. The entire process from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.

  11. 77 FR 28923 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... new symbols for Automatic Identification System (AIS) aids to navigation --Casualty analysis..., parking in the vicinity of the building is extremely limited. Additional information regarding this and...

  12. Promoting interaction during sociodramatic play: teaching scripts to typical preschoolers and classmates with disabilities.

    PubMed Central

    Goldstein, H; Cisar, C L

    1992-01-01

    We investigated the effects of teaching sociodramatic scripts on subsequent interaction among three triads, each containing 2 typical children and 1 child with autistic characteristics. The same type and rate of teacher prompts were implemented throughout structured play observations to avoid the confounding effects of script training and teacher prompting. After learning the scripts, all children demonstrated more frequent theme-related social behavior. These improvements in social-communicative interaction were replicated with the training of three sociodramatic scripts (i.e., pet shop, carnival, magic show) according to a multiple baseline design. These effects were maintained during the training of successive scripts and when the triads were reconstituted to include new but similarly trained partners. Results provided support for the inclusion of systematic training of scripts to enhance interaction among children with and without disabilities during sociodramatic play. PMID:1386068

  13. Selected Landscape Plants. Slide Script.

    ERIC Educational Resources Information Center

    McCann, Kevin

    This slide script, part of a series of slide scripts designed for use in vocational agriculture classes, deals with commercially important woody ornamental landscape plants. Included in the script are narrations for use with a total of 253 slides illustrating 92 different plants. Several slides are used to illustrate each plant: besides a view of…

  14. Plasma Interactions with Spacecraft. Volume 1

    DTIC Science & Technology

    2011-04-15

    64-bit MacOS X environments. N2kScriptRunner, a C++ code that runs a Nascap-2k script outside of the Java user interface, was created. Recoverable figure captions from the record: "Using Default Script and Original INIVEL Velocity Initialization"; "Potentials at 25 µs ... Current (Right Scale) Using Default Script and Modified INIVEL Velocity Initialization".

  15. Variability in Written Japanese: Towards a Sociolinguistics of Script Choice.

    ERIC Educational Resources Information Center

    Smith, Janet S.; Schmidt, David L.

    1996-01-01

    Tests widely-held associations among script types, genres, writers, and target readers via statistical analysis in popular Japanese fiction. Subjects texts to lexical analysis to see whether choice of vocabulary can account for variability in script selection. Finds that Japanese writers fashion their script type choices to specific contexts, as…

  16. Appropriation from a Script Theory of Guidance Perspective: A Response to Pierre Tchounikine

    ERIC Educational Resources Information Center

    Stegmann, Karsten; Kollar, Ingo; Weinberger, Armin; Fischer, Frank

    2016-01-01

    In a recent paper, Pierre Tchounikine has suggested to advance the Script Theory of Guidance (SToG) by addressing the question how learners appropriate collaboration scripts presented to them in learning environments. Tchounikine's main criticism addresses SToG's "internal script configuration principle." This principle states that in…

  17. Multilingual Education Policy in Practice: Classroom Literacy Instruction in Different Scripts in Eritrea

    ERIC Educational Resources Information Center

    Asfaha, Yonas Mesfun; Kroon, Sjaak

    2011-01-01

    This contribution compares literacy instruction in three different scripts in Eritrea. It uses data stemming from classroom observations of beginning readers of Tigrinya (Ge'ez script), Arabic (Arabic script) and Saho (Roman alphabet), the examination of teaching materials, and teacher interviews. Our analysis focuses on literacy events. We…

  18. Effects of Peer-Mediated Implementation of Visual Scripts in Middle School

    ERIC Educational Resources Information Center

    Ganz, Jennifer B.; Heath, Amy K.; Lund, Emily M.; Camargo, Siglia P. H.; Rispoli, Mandy J.; Boles, Margot; Plaisance, Lauren

    2012-01-01

    Although research has investigated the impact of peer-mediated interventions and visual scripts on social and communication skills in children with autism spectrum disorders, no studies to date have investigated peer-mediated implementation of scripts. This study investigated the effects of peer-implemented scripts on a middle school student with…

  19. Who Benefits from a Low versus High Guidance CSCL Script and Why?

    ERIC Educational Resources Information Center

    Mende, Stephan; Proske, Antje; Körndle, Hermann; Narciss, Susanne

    2017-01-01

    Computer-supported collaborative learning (CSCL) scripts can foster learners' deep text comprehension. However, this depends on (a) the extent to which the learning activities targeted by a script promote deep text comprehension and (b) whether the guidance level provided by the script is adequate to induce the targeted learning activities…

  20. Shallow and Deep Orthographies in Hebrew: The Role of Vowelization in Reading Development for Unvowelized Scripts

    ERIC Educational Resources Information Center

    Schiff, Rachel

    2012-01-01

    The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts,…

  1. The transmission and stability of cultural life scripts: a cross-cultural study.

    PubMed

    Janssen, Steve M J; Haque, Shamsul

    2018-01-01

    Cultural life scripts are shared knowledge about the timing of important life events. In the present study, we examined whether cultural life scripts are transmitted through traditions and whether there are additional ways through which they can be attained, by asking Australian and Malaysian participants which information sources they had used to generate the life script of their culture. Participants rarely reported having used cultural and religious traditions. They more often reported having used their own experiences and the experiences of relatives and friends. They also reported the use of comments from relatives and friends and the use of newspapers, books, movies and television programmes. Furthermore, we examined the stability of life scripts and similarities and differences across cultures. We found that life scripts are stable cognitive structures and that, besides cross-cultural differences in content, there are small cross-cultural differences in the valence and distribution of life script events, with the Australian life script containing more positive events and more events expected to occur before the age of 16.

  2. Secure base representations in middle childhood across two Western cultures: Associations with parental attachment representations and maternal reports of behavior problems.

    PubMed

    Waters, Theodore E A; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S

    2015-08-01

    Recent work examining the content and organization of attachment representations suggests that one way in which we represent the attachment relationship is in the form of a cognitive script. This work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in the middle childhood period. We present 2 studies and provide 3 critical pieces of evidence regarding the presence of a script-like representation of the attachment relationship in middle childhood. We present evidence that a middle childhood attachment script assessment tapped a stable underlying script using samples drawn from 2 western cultures, the United States (Study 1) and Belgium (Study 2). We also found evidence suggestive of the intergenerational transmission of secure base script knowledge (Study 1) and relations between secure base script knowledge and symptoms of psychopathology in middle childhood (Study 2). The results from this investigation represent an important downward extension of the secure base script construct.

  3. The Normative and the Personal Life: Individual Differences in Life Scripts and Life Story Events among U.S.A. and Danish Undergraduates

    PubMed Central

    Rubin, David C.; Berntsen, Dorthe; Hutson, Michael

    2011-01-01

    Life scripts are culturally shared expectations about the order and timing of life events in a prototypical life course. American and Danish undergraduates produced life story events and life scripts by listing the seven most important events in their own lives and in the lives of hypothetical people living ordinary lives. They also rated their events on several scales and completed measures of depression, PTSD symptoms, and centrality of a negative event to their lives. The Danish life script replicated earlier work; the American life script showed minor differences from the Danish life script, apparently reflecting genuine differences in shared events as well as less homogeneity in the American sample. Both consisted of mostly positive events that came disproportionately from ages 15 to 30. Valence of life story events correlated with life script valence, depression, PTSD symptoms, and identity. In the Danish undergraduates, measures of life story deviation from the life script correlated with measures of depression and PTSD symptoms. PMID:19105087

  4. A video depicting resuscitation did not impact upon patients' decision-making.

    PubMed

    Richardson-Royer, Caitlin; Naqvi, Imran; Riffel, Christopher; Harvey, Lawrence; Smith, Domonique; Ayalew, Dagmawe; Motayar, Nasim; Amoateng-Adjepong, Yaw; Manthous, Constantine A

    2018-01-01

    Previous studies have demonstrated that video of and scripted information about cardiopulmonary resuscitation (CPR) can be deployed during clinician-patient end-of-life discussions. Few studies, however, examine whether video adds to verbal information-sharing. We hypothesized that video augments script-only decision-making. Patients aged >65 years admitted to hospital wards were randomized to receive evidence-based information ("script") vs. script plus video of simulated CPR and intubation. Patients' decisions registered in the hospital record by hospital discharge were compared for the two groups. Fifty script-only intervention patients averaging 77.7 years were compared to 50 script+video patients with a mean age of 74.7 years. Eleven of 50 (22%) in each group declined CPR, and an additional three (script) vs. four (script+video) refused intubation for respiratory failure. There were no differences in sex, self-reported health trajectory, functional limitations, length of stay, or mortality associated with decisions. The rate at which verbally informed hospitalized elders opted out of resuscitation was not impacted by adding a video depiction of CPR.

  5. Secure Base Representations in Middle Childhood Across Two Western Cultures: Associations with Parental Attachment Representations and Maternal Reports of Behavior Problems

    PubMed Central

    Waters, Theodore E. A.; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S.

    2015-01-01

    Recent work examining the content and organization of attachment representations suggests that one way in which we represent the attachment relationship is in the form of a cognitive script. That said, this work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in the middle childhood period. We present two studies and provide three critical pieces of evidence regarding the presence of a script-like representation of the attachment relationship in middle childhood. We present evidence that a middle childhood attachment script assessment tapped a stable underlying script using samples drawn from two western cultures, the United States (Study 1) and Belgium (Study 2). We also found evidence suggestive of the intergenerational transmission of secure base script knowledge (Study 1) and relations between secure base script knowledge and symptoms of psychopathology in middle childhood (Study 2). The results from this investigation represent an important downward extension of the secure base script construct. PMID:26147774

  6. Parents, peers and pornography: the influence of formative sexual scripts on adult HIV sexual risk behaviour among Black men in the USA.

    PubMed

    Hussen, Sophia A; Bowleg, Lisa; Sangaramoorthy, Thurka; Malebranche, David J

    2012-01-01

    Black men in the USA experience disproportionately high rates of HIV infection, particularly in the Southeastern part of the country. We conducted 90 qualitative in-depth interviews with Black men living in the state of Georgia and analysed the transcripts using Sexual Script Theory to: (1) characterise the sources and content of sexual scripts that Black men were exposed to during their childhood and adolescence and (2) describe the potential influence of formative scripts on adult HIV sexual risk behaviour. Our analyses highlighted salient sources of cultural scenarios (parents, peers, pornography, sexual education and television), interpersonal scripts (early sex- play, older female partners, experiences of child abuse) and intrapsychic scripts that participants described. Stratification of participant responses based on sexual-risk behaviour revealed that lower- and higher-risk men described exposure to similar scripts during their formative years; however, lower-risk men reported an ability to cognitively process and challenge the validity of risk-promoting scripts that they encountered. Implications for future research are discussed.

  7. An ERP investigation of visual word recognition in syllabary scripts.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2013-06-01

    The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 (within-script priming), in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.

  8. Automatic identification of alpine mass movements based on seismic and infrasound signals

    NASA Astrophysics Data System (ADS)

    Schimmel, Andreas; Hübl, Johannes

    2017-04-01

    The automatic detection and identification of alpine mass movements like debris flows, debris floods or landslides is gaining importance for mitigation measures in the densely populated and intensively used alpine regions. Since these mass movement processes emit characteristic seismic and acoustic waves in the low frequency range, such events can be detected and identified on the basis of these signals. Several approaches for detection and warning systems based on seismic or infrasound signals have already been developed. However, a combination of both methods, which can increase detection probability and reduce false alarms, is currently used very rarely and is a promising basis for an automatic detection and identification system. This work therefore presents an approach for a detection and identification system based on a combination of seismic and infrasound sensors, which can detect sediment-related mass movements from a remote location unaffected by the process. The system consists of one infrasound sensor and one geophone, placed co-located, and a microcontroller on which a specially designed detection algorithm is executed that can detect mass movements in real time directly at the sensor site. Furthermore, this work tries to extract more information from the seismic and infrasound spectra produced by different sediment-related mass movements in order to identify the process type and estimate the magnitude of the event. The system is currently installed and tested at five test sites in Austria, two in Italy, one in Switzerland and one in Germany. This large number of test sites is used to build a database of very different events, which will be the basis for a new identification method for alpine mass movements. The tests show promising results, and the system provides an easy-to-install and inexpensive approach for a detection and warning system.
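
    One common way to realize such a two-sensor scheme is a coincidence trigger: run a short-term/long-term average (STA/LTA) detector on each channel and fire only when both exceed a threshold at the same time. The sketch below is a simplified illustration under that assumption (Python/NumPy, with invented window lengths, threshold and sampling rate); the deployed algorithm is likely more elaborate.

        import numpy as np

        def sta_lta(x, fs, sta_s=1.0, lta_s=30.0):
            """Short-term / long-term average ratio of signal power for one channel."""
            power = x.astype(float) ** 2
            sta = np.convolve(power, np.ones(int(sta_s * fs)) / int(sta_s * fs), mode="same")
            lta = np.convolve(power, np.ones(int(lta_s * fs)) / int(lta_s * fs), mode="same")
            return sta / np.maximum(lta, 1e-12)

        def coincident_trigger(seismic, infrasound, fs, threshold=3.0):
            """Sample indices where both channels trigger simultaneously."""
            both = (sta_lta(seismic, fs) > threshold) & (sta_lta(infrasound, fs) > threshold)
            return np.where(both)[0]

        fs = 100                                  # Hz, invented sampling rate
        t = np.arange(0, 120, 1 / fs)
        rng = np.random.default_rng(0)
        seis = rng.normal(0, 1, t.size)
        infra = rng.normal(0, 1, t.size)
        seis[6000:7000] += 10 * np.sin(2 * np.pi * 5 * t[6000:7000])    # simulated event
        infra[6000:7000] += 10 * np.sin(2 * np.pi * 8 * t[6000:7000])
        print(coincident_trigger(seis, infra, fs)[:5])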

  9. The role of semantic processing in reading Japanese orthographies: an investigation using a script-switch paradigm.

    PubMed

    Dylman, Alexandra S; Kikutani, Mariko

    2018-01-01

    Research on Japanese reading has generally indicated that processing of the logographic script Kanji primarily involves whole-word lexical processing and follows a semantics-to-phonology route, while the two phonological scripts Hiragana and Katakana (collectively called Kana) are processed via a sub-lexical route, and more in a phonology-to-semantics manner. Therefore, switching between the two scripts often involves switching between two reading processes, which results in a delayed response for the second script (a script switch cost). In the present study, participants responded to pairs of words that were written either in the same orthography (within-script), or in two different Japanese orthographies (cross-script), switching either between Kanji and Hiragana, or between Katakana and Hiragana. They were asked to read the words aloud (Experiments 1 and 3) and to make a semantic decision about them (Experiments 2 and 4). In contrast to initial predictions, a clear switch cost was observed when participants switched between the two Kana scripts, while script switch costs were less consistent when participants switched between Kanji and Hiragana. This indicates that there are distinct processes involved in reading of the two types of Kana, where Hiragana reading appears to bear some similarities to Kanji processing. This suggests that the role of semantic processing in Hiragana (but not Katakana) reading is more prominent than previously thought and thus, Hiragana is not likely to be processed strictly phonologically.

  10. PyCMSXiO: an external interface to script treatment plans for the Elekta® CMS XiO treatment planning system

    NASA Astrophysics Data System (ADS)

    Xing, Aitang; Arumugam, Sankar; Holloway, Lois; Goozee, Gary

    2014-03-01

    Scripting in radiotherapy treatment planning systems not only simplifies routine planning tasks but can also be used for clinical research. Treatment planning scripting can only be utilized in a system that has a built-in scripting interface. Among the commercially available treatment planning systems, Pinnacle (Philips) and Raystation (Raysearch Lab.) have inherent scripting functionality. CMS XiO (Elekta) is a widely used treatment planning system in radiotherapy centres around the world, but it does not have an interface that allows the user to script radiotherapy plans. In this study an external scripting interface, PyCMSXiO, was developed for XiO using the Python programming language. The interface was implemented as a Python package/library using a modern object-oriented programming methodology. The package is organized as a hierarchy of classes (objects), each of which corresponds to a plan object such as a beam of a clinical radiotherapy plan; the interface of each class is exposed as object methods. Scripting in XiO using PyCMSXiO is comparable with Pinnacle scripting. The package has been used in several research projects, including commissioning of a beam model, independent three-dimensional dose verification for IMRT plans and a setup-uncertainty study. Its ease of use and high-level functions make it a useful research tool. It was released as an open-source tool that may benefit the medical physics community.
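
    To make the "hierarchy of plan objects" idea concrete, here is a minimal Python sketch of that style of interface. The class and method names (Plan, Beam, scale_mu) are invented for illustration and are not PyCMSXiO's actual API.

        # Illustrative only: invented names, not the PyCMSXiO API.
        class Beam:
            """One treatment beam of a plan, exposed as an object with methods."""
            def __init__(self, name, gantry_angle, monitor_units):
                self.name = name
                self.gantry_angle = gantry_angle
                self.monitor_units = monitor_units

            def scale_mu(self, factor):
                """High-level operation: rescale this beam's monitor units."""
                self.monitor_units *= factor

        class Plan:
            """A radiotherapy plan owning a collection of beams."""
            def __init__(self, patient_id):
                self.patient_id = patient_id
                self.beams = []

            def add_beam(self, beam):
                self.beams.append(beam)

        plan = Plan("research-case-01")
        plan.add_beam(Beam("AP", gantry_angle=0.0, monitor_units=120.0))
        plan.add_beam(Beam("LAT", gantry_angle=90.0, monitor_units=95.0))
        for beam in plan.beams:
            beam.scale_mu(1.05)   # e.g. batch-edit every beam of a plan from a script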

  11. Identification of Cichlid Fishes from Lake Malawi Using Computer Vision

    PubMed Central

    Joo, Deokjin; Kwan, Ye-seul; Song, Jongwoo; Pinho, Catarina; Hey, Jody; Won, Yong-Jin

    2013-01-01

    Background The explosively radiating evolution of cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at as many as 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in the shape of jaw and body among them. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automation of species identification of cichlids. Methodology/Principal Finding Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated by the statistical classifiers Support Vector Machine and Random Forests. Both classifiers performed better when body shape information was added to the color and stripe features. Besides the coloration and stripe pattern, body shape variables boosted the accuracy of classification by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, in contrast to a mere 42% success rate by human observers. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. Conclusions Computer vision showed a notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids although the information was not enough for errorless species identification. Our results indicate that there appears to be an unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species. PMID:24204918
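
    The reported boost of roughly 10% from adding body-shape variables to color/stripe features is easy to probe with standard tooling. Below is a minimal sketch, assuming Python/scikit-learn and purely synthetic feature matrices (the real study's features came from an image-processing pipeline); it compares cross-validated accuracy with and without the shape block.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 594                                  # individuals, as in the study
        labels = rng.integers(0, 12, size=n)     # 12 classes (species and sexes)
        # Synthetic stand-ins: make both feature blocks weakly informative.
        color_stripe = rng.normal(size=(n, 20)) + labels[:, None] * 0.1
        body_shape = rng.normal(size=(n, 8)) + labels[:, None] * 0.2

        for name, X in [("color+stripe", color_stripe),
                        ("color+stripe+shape", np.hstack([color_stripe, body_shape]))]:
            clf = RandomForestClassifier(n_estimators=300, random_state=0)
            print(name, "accuracy = %.2f" % cross_val_score(clf, X, labels, cv=5).mean())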

  12. Towards a measurement of internalization of collaboration scripts in the medical context - results of a pilot study.

    PubMed

    Kiesewetter, Jan; Gluza, Martin; Holzer, Matthias; Saravo, Barbara; Hammitzsch, Laura; Fischer, Martin R

    2015-01-01

    Collaboration as a key qualification in medical education and everyday routine in clinical care can substantially contribute to improving patient safety. Internal collaboration scripts are conceptualized as organized - yet adaptive - knowledge that can be used in specific situations in professional everyday life. This study examines the level of internalization of collaboration scripts in medicine. Internalization is understood as fast retrieval of script information. The goals of the current study were the assessment of collaborative information, which is part of collaboration scripts, and the development of a methodology for measuring the level of internalization of collaboration scripts in medicine. For the contrastive comparison of internal collaboration scripts, 20 collaborative novices (medical students in their final year) and 20 collaborative experts (physicians with specialist degrees in internal medicine or anesthesiology) were included in the study. Eight typical medical collaborative situations, each shown as a photo or video, were presented to the participants for five seconds each. Afterwards, the participants were asked to describe what they saw on the photo or video. Based on the answers, the amount of information belonging to a collaboration script (script-information) was determined and the time each participant needed for answering was measured. In order to measure the level of internalization, script-information per recall time was calculated. As expected, collaborative experts stated significantly more script-information than collaborative novices. Collaborative experts also showed a significantly higher level of internalization. Based on the findings of this research, we conclude that our instrument can discriminate between collaboration novices and experts. It therefore can be used to analyze measures to foster subject-specific competency in medical education.

  13. Conjunctive programming: An interactive approach to software system synthesis

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1992-01-01

    This report introduces a technique of software documentation called conjunctive programming and discusses its role in the development and maintenance of software systems. The report also describes the conjoin tool, an adjunct to assist practitioners. Aimed at supporting software reuse while conforming with conventional development practices, conjunctive programming is defined as the extraction, integration, and embellishment of pertinent information obtained directly from an existing database of software artifacts, such as specifications, source code, configuration data, link-edit scripts, utility files, and other relevant information, into a product that achieves desired levels of detail, content, and production quality. Conjunctive programs typically include automatically generated tables of contents, indexes, cross references, bibliographic citations, tables, and figures (including graphics and illustrations). This report presents an example of conjunctive programming by documenting the use and implementation of the conjoin program.

  14. CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.

    PubMed

    Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J

    2015-01-01

    CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb

  15. WebBio, a web-based management and analysis system for patient data of biological products in hospital.

    PubMed

    Lu, Ying-Hao; Kuo, Chen-Chun; Huang, Yaw-Bin

    2011-08-01

    We selected HTML, PHP and JavaScript as the programming languages to build "WebBio", a web-based system for patient data of biological products, and used MySQL as the database. WebBio is based on the PHP-MySQL suite and runs on an Apache server on a Linux machine. WebBio provides data management, searching and data analysis functions for 20 kinds of biological products (plasma expanders, human immunoglobulin and hematological products). There are two particular features in WebBio: (1) pharmacists can rapidly find out which patients used contaminated products, supporting medication safety, and (2) the statistics charts for a specific product can be automatically generated, reducing pharmacists' workload. WebBio has successfully turned traditional paper work into web-based data management.

  16. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    PubMed Central

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-01-01

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier’s subtechniques in course conditions. PMID:27049388

  17. Extraction, integration and analysis of alternative splicing and protein structure distributed information

    PubMed Central

    D'Antonio, Matteo; Masseroli, Marco

    2009-01-01

    Background Alternative splicing has been demonstrated to affect most human genes; different isoforms from the same gene encode proteins which differ by a limited number of residues, thus yielding similar structures. This suggests possible correlations between alternative splicing and protein structure. In order to support the investigation of such relationships, we have developed the Alternative Splicing and Protein Structure Scrutinizer (PASS), a Web application to automatically extract, integrate and analyze human alternative splicing and protein structure data sparsely available in the Alternative Splicing Database, Ensembl databank and Protein Data Bank. Primary data from these databases have been integrated and analyzed using the Protein Identifier Cross-Reference, BLAST, CLUSTALW and FeatureMap3D software tools. Results A database has been developed to store the considered primary data and the results from their analysis; a system of Perl scripts has been implemented to automatically create and update the database and analyze the integrated data; a Web interface has been implemented to make the analyses easily accessible; a database has been created to manage user accesses to the PASS Web application and store user's data and searches. Conclusion PASS automatically integrates data from the Alternative Splicing Database with protein structure data from the Protein Data Bank. Additionally, it comprehensively analyzes the integrated data with publicly available well-known bioinformatics tools in order to generate structural information of isoform pairs. Further analysis of such valuable information might reveal interesting relationships between alternative splicing and protein structure differences, which may be significantly associated with different functions. PMID:19828075

  18. Transmit: An Advanced Traffic Management System

    DOT National Transportation Integrated Search

    1995-11-27

    TRANSCOM's system for managing incidents and traffic, known as TRANSMIT, was initiated to establish the feasibility of using automatic vehicle identification (AVI) equipment for traffic management and surveillance applications. AVI technology systems...

  19. The aware toolbox for the detection of law infringements on web pages

    NASA Astrophysics Data System (ADS)

    Shahab, Asif; Kieninger, Thomas; Dengel, Andreas

    2010-01-01

    In the project Aware we aim to develop an automatic assistant for the detection of law infringements on web pages. The motivation for this project is that many authors of web pages are at some point infringing copyright or other laws, mostly without being aware of that fact, and are more and more often confronted with costly legal warnings. As the legal environment is constantly changing, an important requirement of Aware is that the domain knowledge can be maintained (and initially defined) by numerous legal experts working remotely without further assistance from computer scientists. Consequently, the software platform was chosen to be a web-based generic toolbox that can be configured to suit individual analysis experts, definitions of analysis flow, information gathering and report generation. The report generated by the system summarizes all critical elements of a given web page and provides case-specific hints to the page author, and thus forms a new type of service. Regarding the analysis subsystems, Aware mainly builds on existing state-of-the-art technologies. Their usability has been evaluated for each intended task. In order to control the heterogeneous analysis components and to gather the information, a lightweight scripting shell has been developed. This paper describes the analysis technologies, ranging from text-based information extraction, through optical character recognition and phonetic fuzzy string matching, to a set of image analysis and retrieval tools, as well as the scripting language used to define the analysis flow.

  20. Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Ma, X; Singh, K

    2008-10-09

    With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques, first to generate performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
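
    As a rough illustration of the approach (in Python rather than the pR/R setting of the paper), the sketch below trains a regression model on past task executions and then schedules new tasks greedily, longest predicted cost first, onto the least-loaded worker. The two features and the cost law are synthetic stand-ins.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      # Train a task-cost model on past executions. The two features
      # (e.g., input rows and columns) and the cost law are synthetic.
      rng = np.random.default_rng(0)
      X_hist = rng.uniform(1, 100, size=(200, 2))
      y_hist = 0.01 * X_hist[:, 0] * X_hist[:, 1] + rng.normal(0, 1, 200)
      model = RandomForestRegressor(n_estimators=50, random_state=0)
      model.fit(X_hist, y_hist)

      def schedule(tasks, n_workers):
          """Greedy LPT: longest predicted task first, least-loaded worker."""
          costs = model.predict(np.asarray(tasks))
          loads = [0.0] * n_workers
          plan = [[] for _ in range(n_workers)]
          for i in np.argsort(costs)[::-1]:   # descending predicted cost
              w = int(np.argmin(loads))
              plan[w].append(int(i))
              loads[w] += costs[i]
          return plan, loads

      plan, loads = schedule(rng.uniform(1, 100, size=(10, 2)), n_workers=3)
      print(plan, [round(l, 1) for l in loads])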

  1. Effects of age of acquisition on brain activation during Chinese character recognition.

    PubMed

    Weekes, Brendan Stuart; Chan, Alice H D; Tan, Li Hai

    2008-01-01

    The age of acquisition of a word (AoA) has a specific effect on brain activation during word identification in English and German. However, the neural locus of AoA effects differs across studies. According to Hernandez and Fiebach [Hernandez, A., & Fiebach, C. (2006). The brain bases of reading late-learned words: Evidence from functional MRI. Visual Cognition, 13(8), 1027-1043], the effects of AoA on brain activation depend on the predictability of the connections between input (orthography) and output (phonology) in a lexical network. We tested this hypothesis by examining AoA effects in a non-alphabetic script with relatively arbitrary mappings between orthography and phonology--Chinese. Our results showed that the effects of AoA in Chinese speakers are located in brain regions that are spatially distinctive including the bilateral middle temporal gyrus and the left inferior parietal cortex. An additional finding was that word frequency had an independent effect on brain activation in the right middle occipital gyrus only. We conclude that spatially distinctive effects of AoA on neural activity depend on the predictability of the mappings between orthography and phonology and reflect a division of labour towards greater lexical-semantic retrieval in non-alphabetic scripts.

  2. Using Audio Script Fading and Multiple-Exemplar Training to Increase Vocal Interactions in Children with Autism

    ERIC Educational Resources Information Center

    Garcia-Albea, Elena; Reeve, Sharon A.; Brothers, Kevin J.; Reeve, Kenneth F.

    2014-01-01

    Script-fading procedures have been shown to be effective for teaching children with autism to initiate and participate in social interactions without vocal prompts from adults. In previous script and script-fading research, however, there has been no demonstration of a generalized repertoire of vocal interactions under the control of naturally…

  3. Arabic Script and the Rise of Arabic Calligraphy

    ERIC Educational Resources Information Center

    Alshahrani, Ali A.

    2008-01-01

    The aim of this paper is to present a concise coherent literature review of the Arabic Language script system as one of the oldest living Semitic languages in the world. The article discusses in depth firstly, Arabic script as a phonemic sound-based writing system of twenty eight, right to left cursive script where letterforms shaped by their…

  4. Scripted Collaboration and Group-Based Variations in a Higher Education CSCL Context

    ERIC Educational Resources Information Center

    Hamalainen, Raija; Arvaja, Maarit

    2009-01-01

    Scripting student activities is one way to make Computer-Supported Collaborative Learning more efficient. This case study examines how scripting guided student group activities and also how different groups interpreted the script; what kinds of roles students adopted and what kinds of differences there were between the groups in terms of their…

  5. Exploring the Presence of a Deaf American Cultural Life Script

    ERIC Educational Resources Information Center

    Clark, M. Diane; Daggett, Dorri J.

    2015-01-01

    Cultural life scripts are defined as culturally shared expectations that focus on a series of events that are ordered in time. In these scripts, generalized expectations for what to expect through the life course are outlined. This study examined the possibility of a Deaf American Life Script developed in relationship to the use of a visual…

  6. Cross-script and within-script priming in alcoholic Korsakoff patients.

    PubMed

    Komatsu, Shin-Ichi; Mimura, Masaru; Kato, Motoichiro; Kashima, Haruo

    2003-04-01

    In two experiments, alcoholic Korsakoff patients and control subjects studied a list of Japanese nouns written in either Hiragana or Kanji script. Word-fragment completion and recognition tests were then administered in Hiragana. When the writing script was changed between study and test phases, repetition priming in word-fragment completion was significantly attenuated but was still reliable against baseline performance. This was confirmed for both Korsakoff patients and control subjects. In contrast, the script change had little effect on recognition memory, which was severely impaired in Korsakoff patients. The results suggest that repetition priming is mediated by two different implicit processes, one that is script-specific and the other that is assumed to operate at a more abstract level.

  7. Investigations into the Properties, Conditions, and Effects of the Ionosphere

    DTIC Science & Technology

    1990-01-15

    ionogram database to be used in testing trace-identification algorithms; d. Development of automatic trace-identification algorithms and autoscaling ...Scaler (ARTIST) and improvement of the ARTIST software; g. Maintenance and upgrade of the digital ionosondes at Argentia, Newfoundland, and Goose Bay...provided by the contractor; j. Upgrade of the ARTIST computer at the Danish Meteorological Institute/GL Qaanaaq site to provide digisonde tape-playback

  8. Automatic identification and location technology of glass insulator self-shattering

    NASA Astrophysics Data System (ADS)

    Huang, Xinbo; Zhang, Huiying; Zhang, Ye

    2017-11-01

    Insulators are among the most important transmission line components and are vital to ensuring safe operation under complex and harsh operating conditions. Glass insulators often self-shatter, but the available identification methods are inefficient and unreliable. Therefore, an automatic identification and localization technology for self-shattered glass insulators is proposed, which consists of cameras installed on tower video monitoring devices or unmanned aerial vehicles, the 4G/OPGW network, and the monitoring center, where the identification and localization algorithm is embedded into the expert software. First, the images of insulators are captured by cameras, which are processed to identify the region of the insulator string by the presented identification algorithm. Second, according to the characteristics of the insulator string image, a mathematical model of the insulator string is established to estimate the direction and the length of the sliding blocks. Third, local binary pattern histograms of the template and the sliding block are extracted, by which the self-shattered insulator can be recognized and located. Finally, a series of experiments was conducted to verify the effectiveness of the algorithm. For single insulator images, the accuracy (Ac), precision (Pr), and recall (Rc) of the algorithm are 94.5%, 92.38%, and 96.78%, respectively. For double insulator images, Ac, Pr, and Rc are 90.00%, 86.36%, and 93.23%, respectively.
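
    The core matching step (the third step above) can be sketched as follows: compute a local binary pattern (LBP) histogram for a healthy-disc template, slide a block along the string image, and flag blocks whose histogram distance exceeds a threshold. Block geometry and the threshold are illustrative, not the authors' values; scikit-image provides the LBP operator.

      import numpy as np
      from skimage.feature import local_binary_pattern

      P, R = 8, 1          # LBP neighbourhood (illustrative)
      N_BINS = P + 2       # bin count for the 'uniform' LBP variant

      def lbp_hist(gray):
          lbp = local_binary_pattern(gray, P, R, method="uniform")
          h, _ = np.histogram(lbp, bins=N_BINS, range=(0, N_BINS), density=True)
          return h

      def chi2(h1, h2, eps=1e-10):
          return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

      def find_shattered(string_img, template, block_h, thr=0.25):
          """Flag blocks whose LBP histogram differs strongly from the
          healthy-disc template (thr is an illustrative threshold)."""
          t_hist = lbp_hist(template)
          hits = []
          for y in range(0, string_img.shape[0] - block_h + 1, block_h):
              d = chi2(lbp_hist(string_img[y:y + block_h, :]), t_hist)
              if d > thr:
                  hits.append((y, round(float(d), 3)))
          return hits

      # Toy images; a flat region stands in for a shattered (missing) disc.
      rng = np.random.default_rng(1)
      template = rng.integers(0, 255, (40, 60)).astype(np.uint8)
      string_img = rng.integers(0, 255, (400, 60)).astype(np.uint8)
      string_img[160:200, :] = 0
      print(find_shattered(string_img, template, block_h=40))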

  9. 78 FR 32699 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-31

    ...)) --Revision of the Guidelines for the onboard operational use of shipborne automatic identification systems... transportation is not generally available). However, parking in the vicinity of the building is limited...

  10. Simulating the human body's microclimate using automatic coupling of CFD and an advanced thermoregulation model.

    PubMed

    Voelker, C; Alsaad, H

    2018-05-01

    This study aims to develop an approach to couple a computational fluid dynamics (CFD) solver to the University of California, Berkeley (UCB) thermal comfort model to accurately evaluate thermal comfort. The coupling was implemented as an iterative JavaScript routine that automatically transfers data for each individual segment of the human body back and forth between the CFD solver and the UCB model until convergence, defined by a stopping criterion, is reached. The location from which data are transferred to the UCB model was determined using a new approach based on the temperature difference between subsequent points on the temperature profile curve in the vicinity of the body surface. This approach was used because the microclimate surrounding the human body differs in thickness depending on the body segment and the surrounding environment. To accurately simulate the thermal environment, the numerical model was validated beforehand using experimental data collected in a climate chamber equipped with a thermal manikin. Furthermore, an example of the practical implementations of this coupling is reported in this paper through radiant floor cooling simulation cases, in which overall and local thermal sensation and comfort were investigated using the coupled UCB model.
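
    A minimal sketch of the iterative coupling loop follows, in Python rather than the JavaScript used by the authors. Both solvers are mocked with simple linear stand-ins, and the segment count and tolerance are assumptions.

      N_SEGMENTS = 16   # body segments exchanged per iteration (assumed)
      TOL = 0.05        # stopping criterion on skin-temperature change [K]

      def run_cfd(skin_temps):
          """Mock CFD step: near-body air temperature per segment."""
          return [0.7 * t + 8.0 for t in skin_temps]

      def run_ucb(air_temps):
          """Mock UCB-model step: updated skin temperature per segment."""
          return [0.5 * a + 17.0 for a in air_temps]

      skin = [34.0] * N_SEGMENTS
      for iteration in range(50):
          air = run_cfd(skin)                  # CFD -> boundary microclimate
          new_skin = run_ucb(air)              # UCB -> physiological response
          change = max(abs(n - o) for n, o in zip(new_skin, skin))
          skin = new_skin
          if change < TOL:                     # converged microclimate
              print(f"converged after {iteration + 1} iterations")
              break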

  11. Project Report: Automatic Sequence Processor Software Analysis

    NASA Technical Reports Server (NTRS)

    Benjamin, Brandon

    2011-01-01

    The Mission Planning and Sequencing (MPS) element of Multi-Mission Ground System and Services (MGSS) provides space missions with multi-purpose software to plan spacecraft activities, sequence spacecraft commands, and then integrate these products and execute them on spacecraft. The Jet Propulsion Laboratory (JPL) is currently flying many missions. The processes for building, integrating, and testing the multi-mission uplink software need to be improved to meet the needs of the missions and the operations teams that command the spacecraft. The Multi-Mission Sequencing Team is responsible for collecting and processing the observations, experiments and engineering activities that are to be performed on a selected spacecraft. The collection of these activities is called a sequence, and ultimately a sequence becomes a sequence of spacecraft commands. The operations teams check the sequence to make sure that no constraints are violated. The workflow process involves sending a program start command, which activates the Automatic Sequence Processor (ASP). The ASP is currently a file-based system composed of scripts written in Perl, C shell and awk. Once this start process is complete, the system checks for errors and aborts if there are any; otherwise the system converts the commands to binary and then sends the resultant information to be radiated to the spacecraft.

  12. Application of the Golden Software Surfer mapping software for automation of visualisation of meteorological and oceanographic data in IMGW Maritime Branch.

    NASA Astrophysics Data System (ADS)

    Piliczewski, B.

    2003-04-01

    The Golden Software Surfer has been used in the IMGW Maritime Branch for more than ten years. This tool provides ActiveX Automation objects, which allow scripts to control practically every feature of Surfer. These objects can be accessed from any Automation-enabled environment, such as Visual Basic or Excel. Several applications based on Surfer have been developed in IMGW. The first example is an on-line oceanographic service, which presents forecasts of the water temperature, sea level and currents originating from the HIROMB model and is automatically updated every day. Surfer was also utilised in MERMAID, an international project supported by the EC under the 5th Framework Programme. The main aim of this project was to create a prototype of an Internet-based data brokerage system, which would enable users to search for, extract, buy and download datasets containing meteorological or oceanographic data. During the project, IMGW developed an online application, called Mermaid Viewer, which enables communication with the data broker and automatic visualisation of the downloaded data using Surfer. Both of the above-mentioned applications were developed in Visual Basic. Adoption of Surfer is currently being considered for the monitoring service, which provides access to the data collected in the monitoring of the Baltic Sea environment.
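
    For readers who prefer Python over Visual Basic, the same Automation objects can be driven via pywin32, as in the hedged sketch below. The ProgID and method names (GridData, Documents.Add, Shapes.AddContourMap, Export) follow Surfer's Automation model but should be verified against the documentation for your Surfer version; file paths are placeholders.

      import win32com.client

      app = win32com.client.Dispatch("Surfer.Application")
      app.Visible = False

      # Grid scattered observations (e.g., a daily temperature forecast).
      app.GridData(DataFile=r"C:\data\sst_today.dat",
                   OutGrid=r"C:\data\sst_today.grd")

      # Build a contour map and export an image for the on-line service.
      doc = app.Documents.Add(1)   # 1 = srfDocPlot
      doc.Shapes.AddContourMap(GridFileName=r"C:\data\sst_today.grd")
      doc.Export(r"C:\out\sst_today.png")
      app.Quit()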

  13. The experiences of undergraduate nursing students with bots in Second LifeRTM

    NASA Astrophysics Data System (ADS)

    Rose, Lesele H.

    As technology continues to transform education from the status quo of traditional lecture-style instruction to an interactive, engaging learning experience, students' experiences within the learning environment continue to change as well. This dissertation addressed the need for continuing research in advancing the implementation of technology in higher education. The purpose of this phenomenological study was to discover more about the experiences of undergraduate nursing students using standardized geriatric evaluation tools when interacting with scripted geriatric patient bots in a simulated instructional intake setting. Data were collected through a Demographics questionnaire, an Experiential questionnaire, and a Reflection questionnaire. Triangulation of data collection occurred through an automatically created log of the interactions with the two bots and an automatically recorded log of the participants' movements during the simulated geriatric intake interview. The data analysis consisted of an iterative review of the questionnaires and the participants' logs in an effort to identify common themes, recurring comments, and issues which would benefit from further exploration. Findings revealed that the interactions with the bots were perceived as a valuable experience by the participants from the perspective of interacting with the Geriatric Evaluation Tools in the role of an intake nurse. Further research is indicated to explore instructional interactions with bots in effectively mastering the use of established Geriatric Evaluation Tools.

  14. Accessing world knowledge: evidence from N400 and reaction time priming.

    PubMed

    Chwilla, Dorothee J; Kolk, Herman H J

    2005-12-01

    How fast are we in accessing world knowledge? In two experiments, we tested for priming for word triplets that described a conceptual script (e.g., DIRECTOR-BRIBE-DISMISSAL) but were not associatively related and did not share a category relationship. Event-related brain potentials were used to track the time course at which script information becomes available. In Experiment 1, in which participants made lexical decisions, we found a facilitation for script-related relative to unrelated triplets, as indicated by (i) a decrease in both reaction time and errors, and (ii) an N400-like priming effect. In Experiment 2, we further explored the locus of script priming by increasing the contribution of meaning integration processes. The participants' task was to indicate whether the three words presented a plausible scenario. Again, an N400 script priming effect was obtained. Directing attention to script relations was effective in enhancing the N400 effect. The time course of the N400 effect was similar to that of the standard N400 effect to semantic relations. The present results show that script priming can be obtained in the visual modality, and that script information is immediately accessed and integrated with context. This supports the view that script information forms a central aspect of word meaning. The RT and N400 script priming effects reported in this article are problematic for most current semantic priming models, like spreading activation models, expectancy models, and task-specific semantic matching/integration models. They support a view in which there is no clear cutoff point between semantic knowledge and world knowledge.

  15. Retrieval, automaticity, vocabulary elaboration, orthography (RAVE-O): a comprehensive, fluency-based reading intervention program.

    PubMed

    Wolf, M; Miller, L; Donnelly, K

    2000-01-01

    The most important implication of the double-deficit hypothesis (Wolf & Bowers, in this issue) concerns a new emphasis on fluency and automaticity in intervention for children with developmental reading disabilities. The RAVE-O (Retrieval, Automaticity, Vocabulary Elaboration, Orthography) program is an experimental, fluency-based approach to reading intervention that is designed to accompany a phonological analysis program. In an effort to address multiple possible sources of dysfluency in readers with disabilities, the program involves comprehensive emphases both on fluency in word attack, word identification, and comprehension and on automaticity in underlying componential processes (e.g., phonological, orthographic, semantic, and lexical retrieval skills). The goals, theoretical principles, and applied activities of the RAVE-O curriculum are described with particular stress on facilitating the development of rapid orthographic pattern recognition and on changing children's attitudes toward language.

  16. Informed consent for cardiac procedures: deficiencies in patient comprehension with current methods.

    PubMed

    Dathatri, Shubha; Gruberg, Luis; Anand, Jatin; Romeiser, Jamie; Sharma, Shephali; Finnin, Eileen; Shroyer, A Laurie W; Rosengart, Todd K

    2014-05-01

    Patients who undergo cardiac catheterization or percutaneous coronary intervention (PCI) often have a poor understanding of their disease and of related therapeutic risks, benefits, and alternatives. This pilot study was undertaken to compare the effectiveness of 2 preprocedural educational approaches to enhance patients' knowledge of standard consent elements. Patients undergoing first-time elective, outpatient cardiac catheterization and possible PCI were randomly assigned to a scripted verbal or written consent process (group I) or a web-based, audiovisual presentation (group II). Preconsent and postconsent questionnaires were administered to evaluate changes in patients' self-reported understanding of standard consent elements. One hundred and two patients enrolled at a single institution completed the pre- and postconsent surveys (group I=48; group II=54). Changes in patient comprehension rates were similar between groups for risk and benefit consent elements, but group II had significantly greater improvement in the identification of treatment alternatives than group I (p=0.028). Independent of intervention, correct identification of all risks and alternatives increased significantly after consent (p<0.05); 4 of 5 queried risks were correctly identified by greater than 90% of respondents. However, misperceptions of benefits persisted after consent; increased survival and prevention of future myocardial infarction were identified as PCI-related benefits by 83% and 46% of respondents, respectively. Although both scripted verbal and audiovisual informed consent improved patient comprehension, important patient misperceptions regarding PCI-related outcomes and alternatives persist, independent of informed consent approach, and considerable challenges still exist in educating patients about contemplated medical procedures. Future research appears warranted to improve patient comprehension.

  17. An Exploration of the Examination Script Features that Most Influence Expert Judgements in Three Methods of Evaluating Script Quality

    ERIC Educational Resources Information Center

    Suto, Irenka; Novakovic, Nadezda

    2012-01-01

    Some methods of determining grade boundaries within examinations, such as awarding, paired comparisons, and rank ordering, entail expert judgements of script quality. We aimed to identify the features of examinees' scripts that most influence judgements in the three methods. For contrasting examinations in biology and English, a Latin square…

  18. "Can There Be Such a Delightful Feeling as This?" Variations of Sexual Scripts in Finnish Girls' Narratives

    ERIC Educational Resources Information Center

    Suvivuo, Pia; Tossavainen, Kerttu; Kontula, Osmo

    2010-01-01

    This study examined what kinds of sexual scripts were found in Finnish girls' narratives, what elements those scripts included and how different scripts were associated with sexually risky behavior. The data were comprised of the narratives of 173 14-15-year-old girls regarding their experiences in sexually motivating situations. The narratives…

  19. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

    Army Research Laboratory technical report ARL-TR-6761, "Accurate Arabic Script Language/Dialect Classification," by Stephen C. Tratz (Computational and Information Sciences Directorate), January 2014; approved for public release. Only report cover and SF-298 form text is available for this record.

  20. "Writing It in English": Script Choices among Young Multilingual Muslims in the UK

    ERIC Educational Resources Information Center

    Rosowsky, Andrey

    2010-01-01

    Much attention has been paid in the literature to matters of script choice vis-a-vis languages. This attention, however, has focused on script choice in a national and political context. By contrast, there has not been any significant attention paid to more local and idiosyncratic instances of script choice operating on an individual and community…

  1. The Influence of Cross-Language Similarity on within- and between-Language Stroop Effects in Trilinguals

    PubMed Central

    van Heuven, Walter J. B.; Conklin, Kathy; Coderre, Emily L.; Guo, Taomei; Dijkstra, Ton

    2011-01-01

    This study investigated effects of cross-language similarity on within- and between-language Stroop interference and facilitation in three groups of trilinguals. Trilinguals were either proficient in three languages that use the same-script (alphabetic in German–English–Dutch trilinguals), two similar scripts and one different script (Chinese and alphabetic scripts in Chinese–English–Malay trilinguals), or three completely different scripts (Arabic, Chinese, and alphabetic in Uyghur–Chinese–English trilinguals). The results revealed a similar magnitude of within-language Stroop interference for the three groups, whereas between-language interference was modulated by cross-language similarity. For the same-script trilinguals, the within- and between-language interference was similar, whereas the between-language Stroop interference was reduced for trilinguals with languages written in different scripts. The magnitude of within-language Stroop facilitation was similar across the three groups of trilinguals, but smaller than within-language Stroop interference. Between-language Stroop facilitation was also modulated by cross-language similarity such that these effects became negative for trilinguals with languages written in different scripts. The overall pattern of Stroop interference and facilitation effects can be explained in terms of diverging and converging color and word information across languages. PMID:22180749

  2. PsyScript: a Macintosh application for scripting experiments.

    PubMed

    Bates, Timothy C; D'Oliveiro, Lawrence

    2003-11-01

    PsyScript is a scriptable application allowing users to describe experiments in Apple's compiled high-level object-oriented AppleScript language, while still supporting millisecond or better within-trial event timing (delays can be in milliseconds or refresh-based, and PsyScript can wait on external I/O, such as eye movement fixations). Because AppleScript is object oriented and system-wide, PsyScript experiments support complex branching, code reuse, and integration with other applications. Included AppleScript-based libraries support file handling and stimulus randomization and sampling, as well as more specialized tasks, such as adaptive testing. Advanced features include support for the BBox serial port button box, as well as a low-cost USB-based digital I/O card for millisecond timing, recording of any number and types of responses within a trial, novel responses, such as graphics tablet drawing, and use of the Macintosh sound facilities to provide an accurate voice key, saving voice responses to disk, scriptable image creation, support for flicker-free animation, and gaze-dependent masking. The application is open source, allowing researchers to enhance the feature set and verify internal functions. Both the application and the source are available for free download at www.maccs.mq.edu.au/-tim/psyscript/.

  3. The development of videos in culturally grounded drug prevention for rural native Hawaiian youth.

    PubMed

    Okamoto, Scott K; Helm, Susana; McClain, Latoya L; Dinson, Ay-Laina

    2012-12-01

    The purpose of this study was to adapt and validate narrative scripts to be used for the video components of a culturally grounded drug prevention program for rural Native Hawaiian youth. Scripts to be used to film short video vignettes of drug-related problem situations were developed based on a foundation of pre-prevention research funded by the National Institute on Drug Abuse. Seventy-four middle- and high-school-aged youth in 15 focus groups adapted and validated the details of the scripts to make them more realistic. Specifically, youth participants affirmed the situations described in the scripts and suggested changes to details of the scripts to make them more culturally specific. Suggested changes to the scripts also reflected preferred drug resistance strategies described in prior research, and varied based on the type of drug offerer described in each script (i.e., peer/friend, parent, or cousin/sibling). Implications for culturally grounded drug prevention are discussed.

  4. Using audio script fading and multiple-exemplar training to increase vocal interactions in children with autism.

    PubMed

    Garcia-Albea, Elena; Reeve, Sharon A; Brothers, Kevin J; Reeve, Kenneth F

    2014-01-01

    Script-fading procedures have been shown to be effective for teaching children with autism to initiate and participate in social interactions without vocal prompts from adults. In previous script and script-fading research, however, there has been no demonstration of a generalized repertoire of vocal interactions under the control of naturally occurring relevant stimuli. In this study, 4 boys with autism were taught to initiate a conversation in the presence of toys through the use of a script and script-fading procedure. Training with multiple categories and exemplars of toys was used to increase the likelihood of generalization of vocal interactions across novel toys. A multiple-probe design across participants was used to assess the effects of these procedures. The intervention successfully brought interactions by children with autism under the control of relevant stimuli in the environment. Future research pertaining to the specific implementation of these procedures (e.g., fading, script placement, participant characteristics) is discussed.

  5. 76 FR 19176 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... (SOLAS) regulation V/22 --Development of policy and new symbols for Automatic Identification System (AIS... transportation is not generally available). However, parking in the vicinity of the building is extremely limited...

  6. Better service, greater efficiency : transit management for demand response systems

    DOT National Transportation Integrated Search

    1999-01-01

    This brochure briefly describes different technologies which can enhance demand response transit systems. It covers automated scheduling and dispatching, mobile data terminals, electronic identification cards, automatic vehicle location, and geograph...

  7. Maritime over the Horizon Sensor Integration: High Frequency Surface-Wave-Radar and Automatic Identification System Data Integration Algorithm.

    PubMed

    Nikolic, Dejan; Stojkovic, Nikola; Lekic, Nikola

    2018-04-09

    To obtain a complete operational picture of the maritime situation in the Exclusive Economic Zone (EEZ) which lies over the horizon (OTH) requires the integration of data obtained from various sensors. These sensors include high frequency surface-wave-radar (HFSWR), the satellite automatic identification system (SAIS) and the land automatic identification system (LAIS). The algorithm proposed in this paper utilizes radar tracks obtained from the network of HFSWRs, which are already processed by a multi-target tracking algorithm, and associates SAIS and LAIS data with the corresponding radar tracks, thus forming an integrated data pair. During the integration process, all HFSWR targets in the vicinity of AIS data are evaluated, and the one with the highest matching factor is used for data association. On the other hand, if there are multiple AIS data in the vicinity of a single HFSWR track, the algorithm still makes only one data pair, consisting of the AIS and HFSWR data with the highest mutual matching factor. During design and testing, special attention was given to the latency of AIS data, which can be very high in the EEZs of developing countries. The algorithm was designed, implemented and tested in a real working environment. The testing environment is located in the Gulf of Guinea and includes a network of two HFSWRs, several coastal sites with LAIS receivers, and SAIS data provided by a SAIS data provider.
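
    A simplified sketch of the association step: compute a matching factor that decays with distance and AIS latency, then greedily pair each AIS report with the best unclaimed radar track. The gating limits and weighting are illustrative assumptions; the paper's exact matching factor is not reproduced here.

      import math

      def matching_factor(track, ais, max_dist_km=10.0, max_age_s=600.0):
          """Decays linearly with distance and AIS latency; 0 outside gates.
          Gate sizes and weighting are illustrative assumptions."""
          d = math.hypot(track["x"] - ais["x"], track["y"] - ais["y"])
          if d > max_dist_km or ais["latency_s"] > max_age_s:
              return 0.0
          return (1 - d / max_dist_km) * (1 - ais["latency_s"] / max_age_s)

      def associate(tracks, ais_reports):
          """Greedy one-to-one pairing: each AIS report claims at most
          one radar track, freshest reports first."""
          pairs, used = [], set()
          for ais in sorted(ais_reports, key=lambda a: a["latency_s"]):
              best = max((t for t in tracks if t["id"] not in used),
                         key=lambda t: matching_factor(t, ais), default=None)
              if best and matching_factor(best, ais) > 0.0:
                  used.add(best["id"])
                  pairs.append((best["id"], ais["mmsi"]))
          return pairs

      tracks = [{"id": 1, "x": 0.0, "y": 0.0}, {"id": 2, "x": 8.0, "y": 1.0}]
      ais = [{"mmsi": 635012345, "x": 0.5, "y": 0.2, "latency_s": 120.0}]
      print(associate(tracks, ais))   # -> [(1, 635012345)]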

  8. Development of an optimal filter substrate for the identification of small microplastic particles in food by micro-Raman spectroscopy.

    PubMed

    Oßmann, Barbara E; Sarau, George; Schmitt, Sebastian W; Holtmannspötter, Heinrich; Christiansen, Silke H; Dicke, Wilhelm

    2017-06-01

    When analysing microplastics in food, it is important for toxicological reasons to achieve clear identification of particles down to a size of at least 1 μm. One reliable optical analytical technique allowing this is micro-Raman spectroscopy. After isolation of particles via filtration, analysis is typically performed directly on the filter surface. In order to obtain high-quality Raman spectra, the material of the membrane filters should not show any interference, in terms of background or Raman signals, during spectrum acquisition. To facilitate the use of automatic particle detection, membrane filters should also show specific optical properties. In this work, besides eight different commercially available membrane filters, three newly designed metal-coated polycarbonate membrane filters were tested against these requirements. We found that aluminium-coated polycarbonate membrane filters had ideal characteristics as a substrate for micro-Raman spectroscopy: their spectrum shows no or minimal interference with particle spectra, depending on the laser wavelength. Furthermore, automatic particle detection can be applied when analysing the filter surface under dark-field illumination. With this new membrane filter, interference-free analysis of microplastics down to a size of 1 μm becomes possible. Thus, an important size class of these contaminants can now be visualized and spectrally identified. Graphical abstract: A newly developed aluminium-coated polycarbonate membrane filter enables automatic particle detection and the generation of high-quality Raman spectra, allowing identification of small microplastics.

  9. Automated Drug Identification for Urban Hospitals

    NASA Technical Reports Server (NTRS)

    Shirley, Donna L.

    1971-01-01

    Many urban hospitals are becoming overloaded with drug abuse cases requiring chemical analysis for the identification of drugs. In this paper, the requirements for the chemical analysis of body fluids for drugs are determined and a system model for automated drug analysis is selected. The system, as modeled, would perform chemical preparation of samples, gas-liquid chromatographic separation of the drugs in the chemically prepared samples, and infrared spectrophotometric analysis of the drugs, and would utilize automatic data processing and control for drug identification. Requirements of cost, maintainability, reliability, flexibility, and operability are considered.

  10. Rapid and automatic chemical identification of the medicinal flower buds of Lonicera plants by the benchtop and hand-held Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Jianbo; Guo, Baolin; Yan, Rui; Sun, Suqin; Zhou, Qun

    2017-07-01

    With the utilization of hand-held equipment, Fourier transform infrared (FT-IR) spectroscopy is a promising analytical technique for minimizing the time cost of the chemical identification of herbal materials. This research examines the feasibility of the hand-held FT-IR spectrometer for the on-site testing of herbal materials, using Lonicerae Japonicae Flos (LJF) and Lonicerae Flos (LF) as examples. Correlation-based linear discriminant models for LJF and LF are established based on the benchtop and hand-held FT-IR instruments. The benchtop FT-IR models can exactly recognize all articles of LJF and LF. Although a few LF articles are misjudged at the sub-class level, the hand-held FT-IR models are able to exactly discriminate LJF and LF. As a direct and label-free analytical technique, FT-IR spectroscopy has great potential for the rapid and automatic chemical identification of herbal materials, either in laboratories or in the field. This helps to prevent the spread and use of adulterated herbal materials in a timely manner.
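
    A correlation-based discriminant of this kind reduces to comparing a query spectrum against class reference spectra and accepting the best match above a threshold. The sketch below uses synthetic spectra and an illustrative 0.95 acceptance threshold; the authors' actual reference libraries and threshold are not given in the abstract.

      import numpy as np

      def pearson(a, b):
          a, b = a - a.mean(), b - b.mean()
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      def identify(query, references, threshold=0.95):
          """Assign the class with the highest correlation, if above the
          (illustrative) acceptance threshold."""
          scores = {name: pearson(query, ref) for name, ref in references.items()}
          best = max(scores, key=scores.get)
          return (best if scores[best] >= threshold else "unidentified"), scores

      # Synthetic stand-ins for LJF/LF reference spectra on a common grid.
      grid = np.linspace(400, 4000, 500)
      ljf = np.exp(-((grid - 1600) / 80) ** 2) + 0.5 * np.exp(-((grid - 2900) / 60) ** 2)
      lf = np.exp(-((grid - 1700) / 80) ** 2) + 0.3 * np.exp(-((grid - 1050) / 50) ** 2)
      query = ljf + np.random.default_rng(0).normal(0, 0.02, grid.size)

      label, scores = identify(query, {"LJF": ljf, "LF": lf})
      print(label, {k: round(v, 3) for k, v in scores.items()})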

  11. The busy social brain: evidence for automaticity and control in the neural systems supporting social cognition and action understanding.

    PubMed

    Spunt, Robert P; Lieberman, Matthew D

    2013-01-01

    Much social-cognitive processing is believed to occur automatically; however, the relative automaticity of the brain systems underlying social cognition remains largely undetermined. We used functional MRI to test for automaticity in the functioning of two brain systems that research has indicated are important for understanding other people's behavior: the mirror neuron system and the mentalizing system. Participants remembered either easy phone numbers (low cognitive load) or difficult phone numbers (high cognitive load) while observing actions after adopting one of four comprehension goals. For all four goals, mirror neuron system activation showed relatively little evidence of modulation by load; in contrast, the association of mentalizing system activation with the goal of inferring the actor's mental state was extinguished by increased cognitive load. These results support a dual-process model of the brain systems underlying action understanding and social cognition; the mirror neuron system supports automatic behavior identification, and the mentalizing system supports controlled social causal attribution.

  12. The Influence of Media Violence on Youth.

    PubMed

    Anderson, Craig A; Berkowitz, Leonard; Donnerstein, Edward; Huesmann, L Rowell; Johnson, James D; Linz, Daniel; Malamuth, Neil M; Wartella, Ellen

    2003-12-01

    Research on violent television and films, video games, and music reveals unequivocal evidence that media violence increases the likelihood of aggressive and violent behavior in both immediate and long-term contexts. The effects appear larger for milder than for more severe forms of aggression, but the effects on severe forms of violence are also substantial (r = .13 to .32) when compared with effects of other violence risk factors or medical effects deemed important by the medical community (e.g., effect of aspirin on heart attacks). The research base is large; diverse in methods, samples, and media genres; and consistent in overall findings. The evidence is clearest within the most extensively researched domain, television and film violence. The growing body of video-game research yields essentially the same conclusions. Short-term exposure increases the likelihood of physically and verbally aggressive behavior, aggressive thoughts, and aggressive emotions. Recent large-scale longitudinal studies provide converging evidence linking frequent exposure to violent media in childhood with aggression later in life, including physical assaults and spouse abuse. Because extremely violent criminal behaviors (e.g., forcible rape, aggravated assault, homicide) are rare, new longitudinal studies with larger samples are needed to estimate accurately how much habitual childhood exposure to media violence increases the risk for extreme violence. Well-supported theory delineates why and when exposure to media violence increases aggression and violence. Media violence produces short-term increases by priming existing aggressive scripts and cognitions, increasing physiological arousal, and triggering an automatic tendency to imitate observed behaviors. Media violence produces long-term effects via several types of learning processes leading to the acquisition of lasting (and automatically accessible) aggressive scripts, interpretational schemas, and aggression-supporting beliefs about social behavior, and by reducing individuals' normal negative emotional responses to violence (i.e., desensitization). Certain characteristics of viewers (e.g., identification with aggressive characters), social environments (e.g., parental influences), and media content (e.g., attractiveness of the perpetrator) can influence the degree to which media violence affects aggression, but there are some inconsistencies in research results. This research also suggests some avenues for preventive intervention (e.g., parental supervision, interpretation, and control of children's media use). However, extant research on moderators suggests that no one is wholly immune to the effects of media violence. Recent surveys reveal an extensive presence of violence in modern media. Furthermore, many children and youth spend an inordinate amount of time consuming violent media. Although it is clear that reducing exposure to media violence will reduce aggression and violence, it is less clear what sorts of interventions will produce a reduction in exposure. The sparse research literature suggests that counterattitudinal and parental-mediation interventions are likely to yield beneficial effects, but that media literacy interventions by themselves are unsuccessful. Though the scientific debate over whether media violence increases aggression and violence is essentially over, several critical tasks remain. Additional laboratory and field studies are needed for a better understanding of underlying psychological processes, which eventually should lead to more effective interventions. Large-scale longitudinal studies would help specify the magnitude of media-violence effects on the most severe types of violence. Meeting the larger societal challenge of providing children and youth with a much healthier media diet may prove to be more difficult and costly, especially if the scientific, news, public policy, and entertainment communities fail to educate the general public about the real risks of media-violence exposure to children and youth.

  13. Finite Element Analysis of Osteosynthesis Screw Fixation in the Bone Stock: An Appropriate Method for Automatic Screw Modelling

    PubMed Central

    Wieding, Jan; Souffrant, Robert; Fritsche, Andreas; Mittelmeier, Wolfram; Bader, Rainer

    2012-01-01

    The use of finite element analysis (FEA) has grown into an increasingly important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery solid modelling of screws and drill holes represents a limitation on their use for individual cases and an increase in computational costs. To cope with these requirements, different methods for numerical screw modelling were therefore investigated to broaden its range of application. As an example, fixation was performed for stabilization of a large segmental femoral bone defect by an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study, i.e. no screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility of implementing automatically generated screws with variable geometry on arbitrary FE models. Structural screws were parametrically generated by a Python script for automatic generation in the FE software Abaqus/CAE on both a tetrahedral and a hexahedral meshed femur. Accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both the deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques, a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling, the computational time could be reduced by 85% using hexahedral elements instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of osteosynthesis fixation with screws in the adjacent bone stock and can be used for further investigations. PMID:22470474
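
    The parametric screw generation can be illustrated independently of Abaqus: each structural screw is a chain of two-node beam elements defined by an entry point, direction, length and element count. The sketch below mimics the idea with plain data structures; the authors' actual script drives the Abaqus/CAE Python API, whose calls are not reproduced here, and the screw positions and discretisation are assumptions.

      import math

      def make_screw(entry, direction, length, n_elems, start_node_id):
          """Return (nodes, elements) for one screw: a chain of 2-node
          beam elements along a unit direction from the entry point."""
          norm = math.sqrt(sum(c * c for c in direction))
          ux, uy, uz = (c / norm for c in direction)
          nodes, elements = [], []
          for i in range(n_elems + 1):
              s = length * i / n_elems
              nid = start_node_id + i
              nodes.append((nid, entry[0] + s * ux,
                                 entry[1] + s * uy,
                                 entry[2] + s * uz))
              if i > 0:
                  elements.append((nid - 1, nid))
          return nodes, elements

      # Four screws normal to the plate, 30 mm long, 6 elements each.
      next_id = 1000
      for k, x in enumerate([10.0, 25.0, 40.0, 55.0]):
          nodes, elems = make_screw((x, 0.0, 0.0), (0.0, 0.0, -1.0),
                                    30.0, 6, next_id)
          next_id += len(nodes)
          print(f"screw {k}: {len(nodes)} nodes, {len(elems)} elements")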

  14. [Developmental changes of rapid automatized naming and Hiragana reading of Japanese in elementary-school children].

    PubMed

    Kobayashi, Tomoka; Inagaki, Masumi; Gunji, Atsuko; Yatabe, Kiyomi; Kita, Yosuke; Kaga, Makiko; Gotoh, Takaaki; Koike, Toshihide

    2011-11-01

    Two hundred and seven Japanese elementary school children aged from 6 (Grade 1) to 12 (Grade 6) years were tested on their ability to name numbers and pictured objects and to read Hiragana characters and words. These children all showed typical development, and their classroom teachers judged that they did not have any problems with reading or writing. The children were randomly divided into two groups: the first group was assigned two rapid automatized naming (RAN) tasks using numbers and pictured objects; the second group was assigned two rapid alternating stimulus (RAS) naming tasks using numbers and pictured objects. All children were asked to perform two reading tasks written in Hiragana script: a single-mora reading task and a four-syllable word reading task. The total articulation time for naming and reading and the accuracy of performance were measured for each task. Developmental changes in these variables were evaluated. The articulation time was significantly longer for the first graders and gradually shortened as they moved through to the upper grades in all tasks. The articulation time reached a plateau in the 5th grade for number naming, while for pictured-object naming gradual change continued after a drastic change in the lower grades. The articulation times for single-mora reading and the RAN of numbers correlated strongly. The articulation time for RAS naming was significantly longer than that for RAN, though there were very few errors. RAS naming showed the highest correlation with four-syllable word reading. This study demonstrated that performance in the rapid automatized naming of numbers and pictures was closely related to performance on reading tasks. Thus, Japanese children with reading disorders such as developmental dyslexia should also be evaluated for rapid automatized naming.

  15. LAMDA programmer's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, T.P.; Clark, R.M.; Mostrom, M.A.

    This report discusses the following topics on the LAMDA program: General maintenance; CTSS FCL script; DOS batch files; Macintosh MPW scripts; UNICOS FCL script; VAX/MS command file; LINC calling tree; and LAMDA calling tree.

  16. Using Diacritics in the Arabic Script of Malay to Scaffold Arab Postgraduate Students in Reading Malay Words

    ERIC Educational Resources Information Center

    Salehuddin, Khazriyati; Winskel, Heather

    2015-01-01

    Purpose: This study aims to investigate the use of diacritics in the Arabic script of Malay to help Arab postgraduate students at UKM read Malay words accurately. It is hypothesised that the Arabic script could facilitate the reading of Malay words among the Arab students because of their earlier exposure to the Arabic script in…

  17. Automatic Adviser on stationary devices status identification and anticipated change

    NASA Astrophysics Data System (ADS)

    Shabelnikov, A. N.; Liabakh, N. N.; Gibner, Ya M.; Pushkarev, E. A.

    2018-05-01

    The task of synthesizing an Automatic Adviser for identifying the status of stationary automation system devices is defined, using an autoregressive model of changes in their key parameters. The choice of model type is justified, and an algorithm for monitoring the research objects is developed. A framework for simulating the operation status of mobile objects and analyzing the prediction results is proposed. The results are illustrated with the specific example of a hump yard compressor station. The work was supported by the Russian Fundamental Research Fund, project No. 17-20-01040.
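
    A minimal sketch of the autoregressive idea: fit AR(p) coefficients to a device parameter's history by least squares and raise an alarm when a new reading deviates from the one-step prediction by more than a residual-based band. The order, band width, and the compressor-pressure example are assumptions for illustration.

      import numpy as np

      def fit_ar(x, p):
          """Least-squares AR(p): x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
          X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
          coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
          return coeffs, (x[p:] - X @ coeffs).std()

      def monitor(history, new_value, coeffs, sigma, n_sigma=3.0):
          """Alarm if a reading strays beyond a residual-based band
          around the one-step AR prediction."""
          pred = float(coeffs @ history[::-1][: len(coeffs)])
          return abs(new_value - pred) > n_sigma * sigma, pred

      # Toy history: slowly drifting compressor pressure, then a jump.
      rng = np.random.default_rng(2)
      x = 7.0 + np.cumsum(rng.normal(0, 0.01, 500))
      coeffs, sigma = fit_ar(x, p=3)
      alarm, pred = monitor(x[-3:], x[-1] + 0.5, coeffs, sigma)
      print(bool(alarm), round(pred, 3))   # -> True, prediction near x[-1]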

  18. Patient identification error among prostate needle core biopsy specimens--are we ready for a DNA time-out?

    PubMed

    Suba, Eric J; Pfeifer, John D; Raab, Stephen S

    2007-10-01

    Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.

  19. Neural Network Design on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2006-12-01

    fingerprint identification. In this field, automatic identification methods are used to save time, especially for the purpose of fingerprint matching in...grid widths and lengths and therefore was useful in producing an accurate canvas with which to create sample training images. The added benefit of...tools available free of charge and readily accessible on the computer, it was simple to design bitmap data files visually on a canvas and then

  20. The Next Generation of Ground Operations Command and Control; Scripting in C# and Visual Basic

    NASA Technical Reports Server (NTRS)

    Ritter, George; Pedoto, Ramon

    2010-01-01

    Scripting languages have become a common method for implementing command and control solutions in space ground operations. The Systems Test and Operations Language (STOL), the Huntsville Operations Support Center (HOSC) Scripting Language Processor (SLP), and the Spacecraft Control Language (SCL) offer script-commands that wrap tedious operations tasks into single calls. Since script-commands are interpreted, they also offer a certain amount of hands-on control that is highly valued in space ground operations. Although compiled programs seem to be unsuited for interactive user control and are more complex to develop, Marshall Space Flight Center (MSFC) has developed a product called the Enhanced and Redesign Scripting (ERS) that makes use of the graphical and logical richness of a programming language while offering the hands-on control and ease of use of a scripting language. ERS is currently used by the International Space Station (ISS) Payload Operations Integration Center (POIC) Cadre team members. ERS integrates spacecraft command mnemonics, telemetry measurements, and command and telemetry control procedures into a standard programming language, while making use of Microsoft's Visual Studio for developing Visual Basic (VB) or C# ground operations procedures. ERS also allows for script-style user control during procedure execution using a robust graphical user input and output feature. The availability of VB and C# programmers, and the richness of the languages and their development environment, has allowed ERS to lower our "script" development time and maintenance costs at the Marshall POIC.

  1. MASGOMAS PROJECT, New automatic-tool for cluster search on IR photometric surveys

    NASA Astrophysics Data System (ADS)

    Rübke, K.; Herrero, A.; Borissova, J.; Ramirez-Alegria, S.; García, M.; Marin-Franch, A.

    2015-05-01

    The Milky Way is expected to contain a large number of young massive (a few × 1000 solar masses) stellar clusters, born in dense cores of gas and dust. Yet their known number remains small. We have started a programme to search for such clusters, MASGOMAS (MAssive Stars in Galactic Obscured MAssive clusterS). Initially, we selected promising candidates by means of visual inspection of infrared images. In a second phase of the project we presented a semi-automatic method to search for obscured massive clusters that resulted in the identification of new massive clusters, like MASGOMAS-1 (with more than 10,000 solar masses) and MASGOMAS-4 (a double-cored association of about 3,000 solar masses). We have now developed a new automatic tool for MASGOMAS that allows the identification of a large number of massive cluster candidates from the 2MASS and VVV catalogues. Cluster candidates fulfilling criteria appropriate for massive OB stars are thus selected in an efficient and objective way. We present the results from this tool and the observations of the first selected cluster, and discuss the implications for the Milky Way structure.
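
    The selection-plus-clustering idea behind such a tool can be sketched as a colour cut followed by a spatial overdensity search. Below, a synthetic catalogue stands in for 2MASS/VVV data, the J-K cut is illustrative rather than the MASGOMAS criterion, and DBSCAN (scikit-learn) finds the overdensity.

      import numpy as np
      from sklearn.cluster import DBSCAN

      # Synthetic catalogue: field stars plus one compact, reddened cluster.
      rng = np.random.default_rng(3)
      n = 2000
      ra, dec = rng.uniform(0, 1, n), rng.uniform(0, 1, n)   # degrees
      j_k = rng.normal(0.6, 0.4, n)                          # J-K colour
      ra[:60] = rng.normal(0.5, 0.01, 60)                    # injected cluster
      dec[:60] = rng.normal(0.5, 0.01, 60)
      j_k[:60] = rng.normal(1.5, 0.1, 60)

      sel = j_k > 1.0                 # illustrative reddened-candidate cut
      coords = np.column_stack([ra[sel], dec[sel]])
      labels = DBSCAN(eps=0.02, min_samples=10).fit_predict(coords)

      for lab in set(labels) - {-1}:  # -1 marks unclustered field stars
          m = coords[labels == lab]
          print(f"candidate: {len(m)} stars at RA={m[:, 0].mean():.3f}, "
                f"Dec={m[:, 1].mean():.3f}")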

  2. 49 CFR 599.303 - Agency disposition of dealer application for reimbursement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... correct a non-conforming submission. (d) Electronic rejection. An application is automatically rejected... transaction, or identifies the vehicle identification number of a new or trade-in vehicle that was involved in...

  3. The integrated manual and automatic control of complex flight systems

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1984-01-01

    A unified control synthesis methodology for complex and/or non-conventional flight vehicles is developed. Prediction techniques for the handling characteristics of such vehicles and pilot parameter identification from experimental data are also addressed.

  4. Subliminal convergence of Kanji and Kana words: further evidence for functional parcellation of the posterior temporal cortex in visual word perception.

    PubMed

    Nakamura, Kimihiro; Dehaene, Stanislas; Jobert, Antoinette; Le Bihan, Denis; Kouider, Sid

    2005-06-01

    Recent evidence has suggested that the human occipitotemporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, and in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also evidenced a shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.

  5. The cultural life script as cognitive schema: how the life script shapes memory for fictional life stories.

    PubMed

    Koppel, Jonathan; Berntsen, Dorthe

    2014-01-01

    We tested, across three studies, the effect of the cultural life script on memory and its phenomenological properties. We focused in particular on the mnemonic effects of both schema-consistency and frequency in the life script. In addition to testing recognition (in Study 1) and recall (in Studies 2 and 3), we also collected remember/know judgements for remembered events (in Studies 1 and 2) and memory for their emotional valence (in Study 2). Our primary finding was that, across all three studies, higher-frequency events were more memorable than lower-frequency events, as measured through either recognition or recall. We also attained three additional, complementary effects: First, schema-inconsistent events received remember ratings more often than schema-consistent events (in Study 2, with a trend to this effect in Study 1); second, where an event's emotional valence was inconsistent with the life script, memory for its valence was reconstructed to fit the script (in Study 2); and, third, intrusions in recall were disproportionately for life script events (in Study 3), although that was not the case in recognition (in Study 1). We conclude that the life script serves as a cognitive schema in how it shapes memory and its phenomenological properties.

  6. An ERP Investigation of Visual Word Recognition in Syllabary Scripts

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2013-01-01

    The bi-modal interactive-activation model has been successfully applied to understanding the neuro-cognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, the current study examined word recognition in a different writing system, the Japanese syllabary scripts Hiragana and Katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words where the prime and target words were both in the same script (within-script priming, Experiment 1) or were in the opposite script (cross-script priming, Experiment 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sub-lexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time-course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 where prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neuro-cognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bi-modal interactive activation framework for modeling such processes. PMID:23378278

  7. Critical Assessment of Small Molecule Identification 2016: automated methods.

    PubMed

    Schymanski, Emma L; Ruttkies, Christoph; Krauss, Martin; Brouard, Céline; Kind, Tobias; Dührkop, Kai; Allen, Felicity; Vaniya, Arpana; Verdegem, Dries; Böcker, Sebastian; Rousu, Juho; Shen, Huibin; Tsugawa, Hiroshi; Sajed, Tanvir; Fiehn, Oliver; Ghesquière, Bart; Neumann, Steffen

    2017-03-27

    The fourth round of the Critical Assessment of Small Molecule Identification (CASMI) Contest ( www.casmi-contest.org ) was held in 2016, with two new categories for automated methods. This article covers the 208 challenges in Categories 2 and 3, without and with metadata, from organization, participation, results and post-contest evaluation of CASMI 2016 through to perspectives for future contests and small molecule annotation/identification. The Input Output Kernel Regression (CSI:IOKR) machine learning approach performed best in "Category 2: Best Automatic Structural Identification-In Silico Fragmentation Only", won by Team Brouard with 41% challenge wins. The winner of "Category 3: Best Automatic Structural Identification-Full Information" was Team Kind (MS-FINDER), with 76% challenge wins. The best methods were able to achieve over 30% Top 1 ranks in Category 2, with all methods ranking the correct candidate in the Top 10 in around 50% of challenges. This success rate rose to 70% Top 1 ranks in Category 3, with candidates in the Top 10 in over 80% of the challenges. The machine learning and chemistry-based approaches are shown to perform in complementary ways. The improvement in (semi-)automated fragmentation methods for small molecule identification has been substantial. The achieved high rates of correct candidates in the Top 1 and Top 10, despite large candidate numbers, open up great possibilities for high-throughput annotation of untargeted analysis for "known unknowns". As more high quality training data becomes available, the improvements in machine learning methods will likely continue, but the alternative approaches still provide valuable complementary information. Improved integration of experimental context will also improve identification success further for "real life" annotations. The true "unknown unknowns" remain to be evaluated in future CASMI contests.
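
    The Top-k statistics quoted above reduce to a few lines of code. A minimal sketch (the data layout, one rank of the correct candidate per challenge with None for misses, is hypothetical):

    # Sketch: Top-k success rate as used in CASMI-style evaluations.
    def top_k_rate(ranks, k):
        """Fraction of challenges whose correct candidate ranked <= k."""
        solved = sum(1 for r in ranks if r is not None and r <= k)
        return solved / len(ranks)

    ranks = [1, 3, None, 1, 12, 2]   # toy example, not CASMI data
    print(top_k_rate(ranks, 1))      # Top 1 win rate
    print(top_k_rate(ranks, 10))     # Top 10 rate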

  8. Monitoring caustic injuries from emergency department databases using automatic keyword recognition software.

    PubMed

    Vignally, P; Fondi, G; Taggi, F; Pitidis, A

    2011-03-31

    In Italy the European Union Injury Database reports the involvement of chemical products in 0.9% of home and leisure accidents. The Emergency Department registry on domestic accidents in Italy and the Poison Control Centres record that 90% of cases of exposure to toxic substances occur in the home. It is not rare for the effects of chemical agents to be observed in hospitals, with a high potential risk of damage: the rate of this cause of hospital admission is double the domestic injury average. The aim of this study was to monitor the effects of injuries caused by caustic agents in Italy using automatic free-text recognition in Emergency Department medical databases. We created a Stata software program to automatically identify caustic or corrosive injury cases using an agent-specific list of keywords. We focused attention on the procedure's sensitivity and specificity. Ten hospitals in six regions of Italy participated in the study. The program identified 112 cases of injury by caustic or corrosive agents. Checking the cases by quality controls (based on manual reading of ED reports), we assessed 99 cases as true positive, i.e. 88.4% of the patients were automatically recognized by the software as being affected by caustic substances (99% CI: 80.6%-96.2%), that is to say 0.59% (99% CI: 0.45%-0.76%) of the whole sample of home injuries, a value almost three times as high as that expected (p < 0.0001) from European codified information. False positives were 11.6% of the recognized cases (99% CI: 5.1%-21.5%). Our automatic procedure for caustic agent identification proved to have excellent product recognition capacity with an acceptable level of excess sensitivity. Contrary to our a priori hypothesis, the automatic recognition system provided a level of identification of agents possessing caustic effects that was significantly much greater than was predictable on the basis of the values from current codifications reported in the European Database.
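
    The authors implemented their keyword recognition in Stata; the core idea translates directly to any language. A minimal Python sketch, with an invented keyword list and toy records rather than the study's actual agent-specific list:

    # Sketch: keyword-based flagging of caustic/corrosive injury cases
    # in free-text ED reports, with precision against a manual check.
    import re

    KEYWORDS = ["caustic", "corrosive", "lye", "bleach", "drain cleaner"]
    PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS),
                         re.IGNORECASE)

    def flag_caustic(report_text):
        return bool(PATTERN.search(report_text))

    reports = [
        "patient ingested drain cleaner, oral burns",  # true positive
        "fell from ladder, wrist fracture",            # true negative
    ]
    flags = [flag_caustic(r) for r in reports]

    # Precision = true positives / all flagged cases, mirroring the
    # manual quality control on ED reports described above.
    manual_truth = [True, False]
    tp = sum(1 for f, t in zip(flags, manual_truth) if f and t)
    precision = tp / max(sum(flags), 1)
    print(f"flagged={sum(flags)}, precision={precision:.2f}")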

  9. SU-F-T-476: Performance of the AS1200 EPID for Periodic Photon Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMarco, J; Fraass, B; Yang, W

    2016-06-15

    Purpose: To assess the dosimetric performance of a new amorphous silicon flat-panel electronic portal imaging device (EPID) suitable for high-intensity, flattening-filter-free delivery mode. Methods: An EPID-based QA suite was created with automation to periodically monitor photon central-axis output and two-dimensional beam profile constancy as a function of gantry angle and dose rate. A Varian TrueBeam linear accelerator with Developer Mode installed was used to customize and deliver XML script routines for the QA suite, using dosimetry-mode image acquisition on an aS1200 EPID. Automatic post-processing software was developed to analyze the resulting DICOM images. Results: The EPID was used to monitor photon beam output constancy (central-axis), flatness, and symmetry over a period of 10 months for four photon beam energies (6x, 15x, 6xFFF, and 10xFFF). EPID results were consistent with those measured with a standard daily QA check device. At the four cardinal gantry angles, the standard deviation of the EPID central-axis output was <0.5%. Likewise, EPID measurements were independent of dose rate over the wide range studied (including up to 2400 MU/min for 10xFFF), with a standard deviation of <0.8% relative to the nominal dose rate for each energy. Profile constancy and field size measurements also showed good agreement with the reference acquisition at 0° gantry angle and nominal dose rate. XML script files were also tested for MU linearity and picket-fence delivery. Using Developer Mode, the test suite was delivered in <60 minutes for all 4 photon energies with 4 dose rates per energy and 5 picket-fence acquisitions. Conclusion: Dosimetry image acquisition using the new EPID was found to be accurate for standard and high-intensity photon beams over a broad range of dose rates over 10 months. Developer Mode provided an efficient platform to customize the EPID acquisitions through custom script files, significantly reducing measurement time. This work was funded in part by Varian Medical Systems.
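
    The post-processing step can be sketched in a few lines. The fragment below is a rough sketch, not the authors' software: the file names, the assumption that the panel centre coincides with the central axis, and the ROI size are all illustrative. It reads an EPID dosimetry image with pydicom and tracks central-axis output against a baseline:

    # Sketch: central-axis output constancy from EPID dosimetry images.
    import numpy as np
    import pydicom

    def central_axis_mean(path, half_width=10):
        """Mean pixel value in a small ROI at the panel centre."""
        ds = pydicom.dcmread(path)
        img = ds.pixel_array.astype(float)
        cy, cx = img.shape[0] // 2, img.shape[1] // 2
        return img[cy - half_width:cy + half_width,
                   cx - half_width:cx + half_width].mean()

    # Constancy: today's reading relative to a baseline acquisition.
    # baseline = central_axis_mean("baseline_6x.dcm")   # hypothetical file
    # today = central_axis_mean("qa_6x_today.dcm")
    # print(f"output ratio: {today / baseline:.4f}")    # flag if > +/-2%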

  10. LAMDA programmer's manual. [Final report, Part 1]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, T.P.; Clark, R.M.; Mostrom, M.A.

    This report discusses the following topics on the LAMDA program: General maintenance; CTSS FCL script; DOS batch files; Macintosh MPW scripts; UNICOS FCL script; VAX/MS command file; LINC calling tree; and LAMDA calling tree.

  11. ANLPS. Graphics Driver for PostScript Output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1987-09-01

    ANLPS is a PostScript graphics device driver for use with the proprietary CA TELLAGRAF, CUECHART, and DISSPLA products. The driver allows the user to create and send text and graphics output in the Adobe Systems' PostScript page description language, which is accepted by many print devices. The PostScript output can be generated by TELLAGRAF 6.0 and DISSPLA 10.0. The files containing the PostScript output are sent to PostScript laser printers, such as the Apple LaserWriter. It is not necessary to initialize the printer, as the output for each plot is self-contained. All CA fonts are mapped to PostScript fonts, e.g. Swiss-Medium is mapped to Helvetica, and the mapping is easily changed. Hardware shading and hardware characters, area fill, and color are included. Auxiliary routines are provided which allow graphics files containing figures, logos, and diagrams to be merged with text files. The user can then position, scale, and rotate the figures on the output page in the reserved area specified.

  12. [Effect of spatial location on the generality of block-wise conflict adaptation between different types of scripts].

    PubMed

    Watanabe, Yurina; Yoshizaki, Kazuhito

    2014-10-01

    This study aimed to investigate the generality of conflict adaptation associated with block-wise conflict frequency between two types of stimulus scripts (Kanji and Hiragana). To this end, we examined whether the modulation of the compatibility effect with one type of script depending on block-wise conflict frequency (75% versus 25%) generalized to the other type of script, whose block-wise conflict frequency was kept constant (50%), using the Spatial Stroop task. In Experiment 1, 16 participants were required to identify the target orientation (up or down) presented in the upper or lower visual field. The results showed that block-wise conflict adaptation with one type of stimulus script generalized to the other. The procedure in Experiment 2 was the same as that in Experiment 1, except that the presentation location differed between the two types of stimulus scripts. We did not find a generalization from one script to the other. These results suggest that presentation location is a critical factor contributing to the generality of block-wise conflict adaptation.

  13. A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling †

    PubMed Central

    Verberne, Anthony J. M.

    2016-01-01

    Juxtacellular neuronal labelling is a method which allows neurophysiologists to fill physiologically-identified neurons with small positively-charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters or biosynthetic enzymes. A microcontroller-based pulser circuit and associated BASIC software script is described for incorporation into the design of a commercially-available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure. PMID:28952589

  14. Automatic Identification of Critical Data Items in a Database to Mitigate the Effects of Malicious Insiders

    NASA Astrophysics Data System (ADS)

    White, Jonathan; Panda, Brajendra

    A major concern for computer system security is the threat from malicious insiders who target and abuse critical data items in the system. In this paper, we propose a solution to enable automatic identification of critical data items in a database by way of data dependency relationships. This identification of critical data items is necessary because insider threats often target mission-critical data in order to accomplish malicious tasks. Unfortunately, currently available systems fail to address this problem in a comprehensive manner. It is especially difficult for non-experts to identify these critical data items, both for lack of familiarity with the system and because data systems are constantly changing. By identifying the critical data items automatically, security engineers will be better prepared to protect what is critical to the mission of the organization and will also be able to focus their security efforts on these critical data items. We have developed an algorithm that scans the database logs and forms a directed graph showing which items influence a large number of other items and at what frequency this influence occurs. This graph is traversed to reveal the data items which have a large influence throughout the database system, using a novel metric-based formula. These items are critical to the system because if they are maliciously altered or stolen, the malicious alterations will spread throughout the system, delaying recovery and causing a much more malignant effect. As these items have significant influence, they are deemed critical and worthy of extra security measures. Our proposal is not intended to replace existing intrusion detection systems, but rather to complement current and future technologies. To our knowledge this kind of analysis has not been performed before, and our experimental results show that it is very effective in revealing critical data items automatically.
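
    The influence analysis can be illustrated with a small sketch. The fragment below is illustrative only: the log format is invented and the score is a plain transitive count rather than the paper's frequency-weighted metric. It builds the dependency graph and ranks items by downstream reach:

    # Sketch: score data items by how many other items are transitively
    # derived from them, using write->read pairs mined from logs.
    from collections import defaultdict

    edges = defaultdict(set)   # edges[x] = items derived from item x
    log = [("salary", "tax"), ("tax", "net_pay"), ("salary", "bonus")]
    for src, dst in log:
        edges[src].add(dst)

    def influence(item, seen=None):
        """Count all items reachable from `item` (cycle-safe)."""
        seen = set() if seen is None else seen
        for child in edges[item]:
            if child not in seen:
                seen.add(child)
                influence(child, seen)
        return len(seen)

    scores = {item: influence(item) for item in list(edges)}
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(ranked[0], scores)   # 'salary' has the widest influence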

  15. ITEP: an integrated toolkit for exploration of microbial pan-genomes.

    PubMed

    Benedict, Matthew N; Henriksen, James R; Metcalf, William W; Whitaker, Rachel J; Price, Nathan D

    2014-01-03

    Comparative genomics is a powerful approach for studying variation in physiological traits as well as the evolution and ecology of microorganisms. Recent technological advances have enabled sequencing large numbers of related genomes in a single project, requiring computational tools for their integrated analysis. In particular, accurate annotations and identification of gene presence and absence are critical for understanding and modeling the cellular physiology of newly sequenced genomes. Although many tools are available to compare the gene contents of related genomes, new tools are necessary to enable close examination and curation of protein families from large numbers of closely related organisms, to integrate curation with the analysis of gain and loss, and to generate metabolic networks linking the annotations to observed phenotypes. We have developed ITEP, an Integrated Toolkit for Exploration of microbial Pan-genomes, to curate protein families, compute similarities to externally-defined domains, analyze gene gain and loss, and generate draft metabolic networks from one or more curated reference network reconstructions in groups of related microbial species among which the combination of core and variable genes constitutes their "pan-genomes". The ITEP toolkit consists of: (1) a series of modular command-line scripts for identification, comparison, curation, and analysis of protein families and their distribution across many genomes; (2) a set of Python libraries for programmatic access to the same data; and (3) pre-packaged scripts to perform common analysis workflows on a collection of genomes. ITEP's capabilities include de novo protein family prediction, ortholog detection, analysis of functional domains, identification of core and variable genes and gene regions, sequence alignments and tree generation, annotation curation, and the integration of cross-genome analysis and metabolic networks for study of metabolic network evolution. ITEP is a powerful, flexible toolkit for generation and curation of protein families. ITEP's modular design allows for straightforward extension as analysis methods and tools evolve. By integrating comparative genomics with the development of draft metabolic networks, ITEP harnesses the power of comparative genomics to build confidence in links between genotype and phenotype and helps disambiguate gene annotations when they are evaluated in both evolutionary and metabolic network contexts.

  16. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...

  17. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...

  18. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...

  19. Python Scripts for Automation of Current-Voltage Testing of Semiconductor Devices (FY17)

    DTIC Science & Technology

    2017-01-01

    ARL-TR-7923 ● JAN 2017 ● US Army Research Laboratory. Python Scripts for Automation of Current-Voltage Testing of Semiconductor... manual device-testing procedures is reduced or eliminated through automation. This technical report includes scripts written in Python, version 2.7, used... 3.1.9 Exit Program: the script exits the entire program. Line 505, sys.exit(), uses the sys package that comes with Python to exit.
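
    The report's scripts are not reproduced here, but the general shape of such automation is easy to sketch. The fragment below uses PyVISA with Keithley-2400-style SCPI commands; the instrument address and the command set are assumptions, not the report's actual setup:

    # Sketch: a minimal current-voltage sweep over a two-terminal device.
    import sys
    import pyvisa

    rm = pyvisa.ResourceManager()
    smu = rm.open_resource("GPIB0::24::INSTR")   # hypothetical address

    try:
        smu.write(":SOUR:FUNC VOLT")
        smu.write(":OUTP ON")
        for step in range(11):                   # 0.0 V to 1.0 V
            v = 0.1 * step
            smu.write(f":SOUR:VOLT {v:.3f}")
            current = float(smu.query(":MEAS:CURR?"))
            print(f"{v:.3f} V\t{current:.3e} A")
    finally:
        smu.write(":OUTP OFF")

    sys.exit()   # exit the whole program, as in Section 3.1.9 above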

  20. Word Spotting for Indic Documents to Facilitate Retrieval

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anurag; Setlur, Srirangaraj; Govindaraju, Venu

    With advances in the field of digitization of printed documents and several mass digitization projects underway, information retrieval and document search have emerged as key research areas. However, most of the current work in these areas is limited to English and a few oriental languages. The lack of efficient solutions for Indic scripts has hampered information extraction from a large body of documents of cultural and historical importance. This chapter presents two relevant topics in this area. First, we describe the use of a script-specific keyword spotting for Devanagari documents that makes use of domain knowledge of the script. Second, we address the needs of a digital library to provide access to a collection of documents from multiple scripts. This requires intelligent solutions which scale across different scripts. We present a script-independent keyword spotting approach for this purpose. Experimental results illustrate the efficacy of our methods.

  1. Development and Operations of the Astrophysics Data System

    NASA Technical Reports Server (NTRS)

    Murray, Stephen S.; Oliversen, Ronald (Technical Monitor)

    2003-01-01

    SAO TASKS ACCOMPLISHED: Abstract Service: (1) Continued regular updates of abstracts in the databases, both at SAO and at all mirror sites; (2) Established a new naming convention of QB books in preparation for adding physics books from Hollis or Library of Congress; (3) Modified handling of object tag so as not to interfere with XHTML definition; (4) Worked on moving 'what's new' announcements to a majordomo email list so as not to interfere with divisional mail handling; (5) Implemented and tested new first author feature following suggestions from users at the AAS meeting; (6) Added SSRv entries back to volume 1 in preparation for scanning of the journal; (7) Assisted in the re-configuration of the ADS mirror site at the CDS and sent a new set of tapes containing article data to allow re-creation of the ADS article data lost during the move; (8) Created scripts to automatically download Astrobiology.

  2. IntegratedMap: a Web interface for integrating genetic map data.

    PubMed

    Yang, Hongyu; Wang, Hongyu; Gingle, Alan R

    2005-05-01

    IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types with genetic locus details, such as numbers, types and names of mapped/anchored objects displayed in a compact scrollable list box that automatically updates based on selected map location and object type. Also, multilinkage group and localized map views are available along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp

  3. Scanning X-ray diffraction on cardiac tissue: automatized data analysis and processing.

    PubMed

    Nicolas, Jan David; Bernhardt, Marten; Markus, Andrea; Alves, Frauke; Burghammer, Manfred; Salditt, Tim

    2017-11-01

    A scanning X-ray diffraction study of cardiac tissue has been performed, covering the entire cross section of a mouse heart slice. To this end, moderate focusing by compound refractive lenses to micrometer spot size, continuous scanning, data acquisition by a fast single-photon-counting pixel detector, and fully automated analysis scripts have been combined. It was shown that a surprising amount of structural data can be harvested from such a scan, evaluating the local scattering intensity, interfilament spacing of the muscle tissue, the filament orientation, and the degree of anisotropy. The workflow of data analysis is described and a data analysis toolbox with example data for general use is provided. Since many cardiomyopathies rely on the structural integrity of the sarcomere, the contractile unit of cardiac muscle cells, the present study can be easily extended to characterize tissue from a diseased heart.
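
    One of the per-point quantities, the degree of anisotropy, can be sketched compactly. The fragment below is a simplified stand-in for the published analysis toolbox: beam-centre handling, masking and calibration are omitted, and it estimates anisotropy from the azimuthal intensity distribution of a single detector frame.

    # Sketch: anisotropy of a 2D diffraction frame from the second
    # Fourier component of the azimuthal intensity profile I(phi).
    import numpy as np

    def anisotropy(frame):
        ny, nx = frame.shape
        y, x = np.indices(frame.shape)
        phi = np.arctan2(y - ny / 2, x - nx / 2)
        bins = np.linspace(-np.pi, np.pi, 73)          # 72 azimuthal bins
        idx = np.clip(np.digitize(phi.ravel(), bins) - 1, 0, 71)
        i_phi = np.bincount(idx, weights=frame.ravel(), minlength=72)
        c2 = np.abs(np.sum(i_phi * np.exp(2j * bins[:72])))
        return float(c2 / max(i_phi.sum(), 1e-12))     # 0 = isotropic

    frame = np.random.poisson(5.0, size=(128, 128)).astype(float)
    print(anisotropy(frame))   # near 0 for this isotropic toy frame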

  4. Strain Library Imaging Protocol for high-throughput, automated single-cell microscopy of large bacterial collections arrayed on multiwell plates.

    PubMed

    Shi, Handuo; Colavin, Alexandre; Lee, Timothy K; Huang, Kerwyn Casey

    2017-02-01

    Single-cell microscopy is a powerful tool for studying gene functions using strain libraries, but it suffers from throughput limitations. Here we describe the Strain Library Imaging Protocol (SLIP), which is a high-throughput, automated microscopy workflow for large strain collections that requires minimal user involvement. SLIP involves transferring arrayed bacterial cultures from multiwell plates onto large agar pads using inexpensive replicator pins and automatically imaging the resulting single cells. The acquired images are subsequently reviewed and analyzed by custom MATLAB scripts that segment single-cell contours and extract quantitative metrics. SLIP yields rich data sets on cell morphology and gene expression that illustrate the function of certain genes and the connections among strains in a library. For a library arrayed on 96-well plates, image acquisition can be completed within 4 min per plate.
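
    The segmentation-and-metrics step has a compact analogue in Python (the authors' pipeline uses custom MATLAB scripts; this scikit-image version is only a rough illustration with arbitrary thresholds):

    # Sketch: label putative cells in a field image and extract
    # per-cell shape metrics.
    import numpy as np
    from skimage import filters, measure

    def cell_metrics(image):
        thresh = filters.threshold_otsu(image)
        labels = measure.label(image > thresh)   # assumes bright cells
        rows = []
        for region in measure.regionprops(labels):
            if region.area < 20:                 # drop debris (arbitrary)
                continue
            rows.append({"area": region.area,
                         "length": region.major_axis_length,
                         "width": region.minor_axis_length})
        return rows

    image = np.zeros((64, 64))
    image[20:40, 25:32] = 1.0                    # toy "cell"
    print(cell_metrics(image))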

  5. chimeraviz: a tool for visualizing chimeric RNA.

    PubMed

    Lågstad, Stian; Zhao, Sen; Hoff, Andreas M; Johannessen, Bjarne; Lingjærde, Ole Christian; Skotheim, Rolf I

    2017-09-15

    Advances in high-throughput RNA sequencing have enabled more efficient detection of fusion transcripts, but the technology and associated software used for fusion detection from sequencing data often yield a high false discovery rate. Good prioritization of the results is important, and this can be helped by a visualization framework that automatically integrates RNA data with known genomic features. Here we present chimeraviz, a Bioconductor package that automates the creation of chimeric RNA visualizations. The package supports input from nine different fusion-finder tools: deFuse, EricScript, InFusion, JAFFA, FusionCatcher, FusionMap, PRADA, SOAPfuse and STAR-FUSION. chimeraviz is an R package available via Bioconductor (https://bioconductor.org/packages/release/bioc/html/chimeraviz.html) under Artistic-2.0. Source code and support are available at GitHub (https://github.com/stianlagstad/chimeraviz). Contact: rolf.i.skotheim@rr-research.no. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  6. LinkEHR-Ed: a multi-reference model archetype editor based on formal semantics.

    PubMed

    Maldonado, José A; Moner, David; Boscá, Diego; Fernández-Breis, Jesualdo T; Angulo, Carlos; Robles, Montserrat

    2009-08-01

    To develop a powerful archetype editing framework capable of handling multiple reference models and oriented towards the semantic description and standardization of legacy data. The main prerequisite for implementing tools providing enhanced support for archetypes is the clear specification of archetype semantics. We propose a formalization of the definition section of archetypes based on types over tree-structured data. It covers the specialization of archetypes, the relationship between reference models and archetypes, and the conformance of data instances to archetypes. LinkEHR-Ed, a visual archetype editor with advanced processing capabilities based on this formalization, is developed; it supports multiple reference models, the editing and semantic validation of archetypes, the specification of mappings to data sources, and the automatic generation of data transformation scripts. LinkEHR-Ed is a useful tool for building, processing and validating archetypes based on any reference model.

  7. Automatic mechanisms for measuring subjective unit of discomfort.

    PubMed

    Hartanto, D W I; Kang, Ni; Brinkman, Willem-Paul; Kampmann, Isabel L; Morina, Nexhmedin; Emmelkamp, Paul G M; Neerincx, Mark A

    2012-01-01

    Current practice in Virtual Reality Exposure Therapy (VRET) is that therapists ask patients about their anxiety level by means of the Subjective Unit of Discomfort (SUD) scale. With the aim of developing a home-based VRET system, this measurement should ideally be done using speech technology. In a VRET system for social phobia with scripted avatar-patient dialogues, the timing of asking patients to give their SUD score becomes relevant. This study examined three timing mechanisms: (1) dialogue-dependent (i.e. naturally in the flow of the dialogue); (2) speech-dependent (i.e. when both patient and avatar are silent); and (3) context-independent (i.e. randomly). Results of an experiment with non-patients (n = 24) showed a significant effect of the timing mechanism on perceived dialogue flow, user preference, reported presence and user dialogue replies. Overall, the dialogue-dependent timing mechanism seems superior, followed by the speech-dependent and context-independent mechanisms.

  8. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for the parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled unprecedented applications on the web. The low performance of the web browser, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulation and image processing, compared with native applications. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides the fundamental functionality of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.

  9. Automatic analysis of quantitative NMR data of pharmaceutical compound libraries.

    PubMed

    Liu, Xuejun; Kolpak, Michael X; Wu, Jiejun; Leo, Gregory C

    2012-08-07

    In drug discovery, chemical library compounds are usually dissolved in DMSO at a certain concentration and then distributed to biologists for target screening. Quantitative (1)H NMR (qNMR) is the preferred method for determining the actual concentrations of compounds because the relative single-proton peak areas of two chemical species represent the relative molar concentrations of the two compounds, that is, the compound of interest and a calibrant. Thus, an analyte concentration can be determined using a calibration compound at a known concentration. One particularly time-consuming step in the qNMR analysis of compound libraries is the manual integration of peaks. This report presents an automated method for performing this task without prior knowledge of compound structures, using an external calibration spectrum. The script for automated integration is fast and adaptable to large-scale data sets, eliminating the need for manual integration in ~80% of the cases.
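
    The underlying arithmetic is simple: with per-proton peak areas, C_analyte = C_cal x (A_analyte/N_analyte) / (A_cal/N_cal). A minimal sketch with made-up numbers (the peak integration itself, the hard part automated by the authors, is assumed done):

    # Sketch: qNMR concentration from integrated peak areas against an
    # external calibrant of known concentration.
    def qnmr_concentration(area_analyte, n_h_analyte,
                           area_cal, n_h_cal, conc_cal_mM):
        """C_a = C_cal * (A_a / N_a) / (A_cal / N_cal)."""
        return (conc_cal_mM * (area_analyte / n_h_analyte)
                / (area_cal / n_h_cal))

    # A 3-proton singlet integrating to 4.2 units, against a 1-proton
    # calibrant peak of 1.5 units at 10 mM -> 9.33 mM.
    print(qnmr_concentration(4.2, 3, 1.5, 1, 10.0))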

  10. Object-based media and stream-based computing

    NASA Astrophysics Data System (ADS)

    Bove, V. Michael, Jr.

    1998-03-01

    Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. In this paper is outlined an example of object-based media algorithms and applications developed by my group, and present new hardware architectures and software methods that we have developed to enable meeting the computational requirements of object- based and other advanced media representations. In particular we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks even given heterogenous computational resources.

  11. HHsvm: fast and accurate classification of profile–profile matches identified by HHsearch

    PubMed Central

    Dlakić, Mensur

    2009-01-01

    Motivation: Recently developed profile–profile methods rival structural comparisons in their ability to detect homology between distantly related proteins. Despite this tremendous progress, many genuine relationships between protein families cannot be recognized as comparisons of their profiles result in scores that are statistically insignificant. Results: Using known evolutionary relationships among protein superfamilies in SCOP database, support vector machines were trained on four sets of discriminatory features derived from the output of HHsearch. Upon validation, it was shown that the automatic classification of all profile–profile matches was superior to fixed threshold-based annotation in terms of sensitivity and specificity. The effectiveness of this approach was demonstrated by annotating several domains of unknown function from the Pfam database. Availability: Programs and scripts implementing the methods described in this manuscript are freely available from http://hhsvm.dlakiclab.org/. Contact: mdlakic@montana.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19773335
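
    The training setup can be sketched with scikit-learn (this is not the HHsvm code; the four features below are placeholders for the discriminatory feature sets derived from HHsearch output, and the labels are synthetic):

    # Sketch: SVM classification of profile-profile matches with
    # cross-validation, in the spirit of the method described above.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # toy features per match: probability, score, aligned cols, identity
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy homolog labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")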

  12. Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.

    PubMed

    Brandt, C; Nadkarni, P

    2001-01-01

    The Web is increasingly the medium of choice for multi-user application program delivery. Yet selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience with the conversion of a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and active server pages (ASP) technology. Our results indicate that easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages.

  13. Automatic classification of 6-month-old infants at familial risk for language-based learning disorder using a support vector machine.

    PubMed

    Zare, Marzieh; Rezvani, Zahra; Benasich, April A

    2016-07-01

    This study assesses the ability of a novel, "automatic classification" approach to facilitate identification of infants at highest familial risk for language-learning disorders (LLD) and to provide converging assessments to enable earlier detection of developmental disorders that disrupt language acquisition. Network connectivity measures derived from 62-channel electroencephalogram (EEG) recording were used to identify selected features within two infant groups who differed on LLD risk: infants with a family history of LLD (FH+) and typically-developing infants without such a history (FH-). A support vector machine was deployed; global efficiency and global and local clustering coefficients were computed. A novel minimum spanning tree (MST) approach was also applied. Cross-validation was employed to assess the resultant classification. Infants were classified with about 80% accuracy into FH+ and FH- groups with 89% specificity and precision of 92%. Clustering patterns differed by risk group and MST network analysis suggests that FH+ infants' EEG complexity patterns were significantly different from FH- infants. The automatic classification techniques used here were shown to be both robust and reliable and should provide valuable information when applied to early identification of risk or clinical groups. The ability to identify infants at highest risk for LLD using "automatic classification" strategies is a novel convergent approach that may facilitate earlier diagnosis and remediation. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
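
    The network features named above are all standard graph quantities. A sketch with networkx (the connectivity matrix here is random, and the threshold and MST weighting are assumptions, not the study's pipeline):

    # Sketch: graph features from a 62-channel connectivity matrix.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    conn = np.abs(rng.normal(size=(62, 62)))   # toy connectivity
    conn = (conn + conn.T) / 2
    np.fill_diagonal(conn, 0.0)

    g = nx.from_numpy_array((conn > 1.2).astype(int))  # binary graph
    features = {
        "global_efficiency": nx.global_efficiency(g),
        "avg_clustering": nx.average_clustering(g),
    }

    # Minimum spanning tree of the weighted graph; strong connections
    # are mapped to short edges so the MST follows the backbone.
    mst = nx.minimum_spanning_tree(nx.from_numpy_array(1.0 / (conn + 1e-9)))
    features["mst_leaves"] = sum(1 for n in mst if mst.degree(n) == 1)
    print(features)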

  14. Travtek Evaluation Task C3: Camera Car Study

    DOT National Transportation Integrated Search

    1998-11-01

    A "biometric" technology is an automatic method for the identification, or identity verification, of an individual based on physiological or behavioral characteristics. The primary objective of the study summarized in this tech brief was to make reco...

  15. Identification and on-line monitoring of reduced sulphur species (RSS) by voltammetry in oxic waters.

    PubMed

    Superville, Pierre-Jean; Pižeta, Ivanka; Omanović, Dario; Billon, Gabriel

    2013-08-15

    Based on automatic on-line measurements on the Deûle River that showed daily variation of a peak around -0.56V (vs Ag|AgCl 3M), identification of Reduced Sulphur Species (RSS) in oxic waters was performed applying cathodic stripping voltammetry (CSV) with the hanging mercury drop electrode (HMDE). Pseudopolarographic studies accompanied with increasing concentrations of copper revealed the presence of elemental sulphur S(0), thioacetamide (TA) and reduced glutathione (GSH) as the main sulphur compounds in the Deûle River. In order to resolve these three species, a simple procedure was developed and integrated in an automatic on-line monitoring system. During one week monitoring with hourly measurements, GSH and S(0) exhibited daily cycles whereas no consequential pattern was observed for TA. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Automatic modal identification of cable-supported bridges instrumented with a long-term monitoring system

    NASA Astrophysics Data System (ADS)

    Ni, Y. Q.; Fan, K. Q.; Zheng, G.; Chan, T. H. T.; Ko, J. M.

    2003-08-01

    An automatic modal identification program is developed for continuous extraction of modal parameters of three cable-supported bridges in Hong Kong which are instrumented with a long-term monitoring system. The program employs the Complex Modal Indication Function (CMIF) algorithm to identify modal properties from continuous ambient vibration measurements in an on-line manner. By using the LabVIEW graphical programming language, the software realizes the algorithm in Virtual Instrument (VI) style. The applicability and implementation issues of the developed software are demonstrated by using one-year measurement data acquired from 67 channels of accelerometers deployed on the cable-stayed Ting Kau Bridge. With the continuously identified results, normal variability of modal vectors caused by varying environmental and operational conditions is observed. Such observation is very helpful for selection of appropriate measured modal vectors for structural health monitoring applications.
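
    The core of a CMIF-style computation is a singular value decomposition of the cross-spectral matrix at each frequency line. A synthetic-data sketch (not the monitoring software itself; one planted 5 Hz mode across four channels):

    # Sketch: CMIF from ambient vibration data. Peaks in the first
    # singular value of the cross-spectral matrix indicate modes.
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(2)
    fs, n_s = 100.0, 8192
    t = np.arange(n_s) / fs
    mode = np.sin(2 * np.pi * 5.0 * t)                 # 5 Hz mode
    acc = np.array([a * mode for a in (1.0, 0.8, -0.5, 0.3)])
    acc += 0.5 * rng.normal(size=acc.shape)

    # cross-spectral density matrix G(f), shape (n_freq, n_ch, n_ch)
    f, csd = signal.csd(acc[:, None, :], acc[None, :, :],
                        fs=fs, nperseg=1024)
    g = np.moveaxis(csd, -1, 0)

    cmif = np.linalg.svd(g, compute_uv=False)          # singular values
    print(f"peak near {f[np.argmax(cmif[:, 0])]:.2f} Hz")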

  17. Use of AFIS for linking scenes of crime.

    PubMed

    Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David

    2016-05-01

    Forensic intelligence can provide critical information in criminal investigations: the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has advanced the entire forensic identification field to strive for new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with more than 20 cases. Then, by forensic intelligence and profile analysis, the offender's behavior could be anticipated. He was caught, identified, and arrested. It is recommended that AFIS searches of LT/UL prints against current crimes be performed automatically as part of laboratory protocol, rather than at an examiner's discretion. This approach may link different crime scenes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. An intelligent identification algorithm for the monoclonal picking instrument

    NASA Astrophysics Data System (ADS)

    Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun

    2017-11-01

    Traditional colony selection is mainly performed manually, which suffers from low efficiency and strong subjectivity. Therefore, it is important to develop an automatic monoclonal-picking instrument. The critical stage of automatic monoclonal picking and intelligent optimal selection is the intelligent identification algorithm. An auto-screening algorithm based on a Support Vector Machine (SVM) is proposed in this paper; it uses supervised learning combined with colony morphological characteristics to classify colonies accurately. From the basic morphological features of a colony, the system computes a series of morphological parameters step by step. A maximal-margin classifier, together with an analysis of colony growth trends, is used to select monoclonal colonies. The experimental results showed that the auto-screening algorithm could distinguish regular colonies from the rest, meeting the requirements on the various parameters.

  19. Automatic identification and normalization of dosage forms in drug monographs

    PubMed Central

    2012-01-01

    Background: Each day, millions of health consumers seek drug-related information on the Web. Despite some efforts in linking related resources, drug information is largely scattered across a wide variety of websites of different quality and credibility. Methods: As a step toward providing users with integrated access to multiple trustworthy drug resources, we aim to develop a method capable of identifying a drug's dosage form in addition to recognizing the drug name. We developed rules and patterns for identifying dosage forms from different sections of full-text drug monographs, and subsequently normalized them to standardized RxNorm dosage forms. Results: Our method represents a significant improvement over a baseline lookup approach, achieving overall macro-averaged Precision of 80%, Recall of 98%, and F-Measure of 85%. Conclusions: We successfully developed an automatic approach for drug dosage form identification, which is critical for building links between different drug-related resources. PMID:22336431
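
    The rule-and-normalize idea can be sketched briefly. The pattern list and mapping below are tiny invented stand-ins for the paper's rules and the RxNorm dosage form vocabulary:

    # Sketch: spot dosage form mentions in monograph text and map them
    # to normalized form names.
    import re

    NORMALIZE = {
        "tablet": "Oral Tablet",
        "tab": "Oral Tablet",
        "capsule": "Oral Capsule",
        "oral solution": "Oral Solution",
        "injection": "Injectable Solution",
    }
    _terms = sorted(NORMALIZE, key=len, reverse=True)  # longest first
    PATTERN = re.compile(r"\b(" + "|".join(_terms) + r")s?\b",
                         re.IGNORECASE)

    def dosage_forms(text):
        return sorted({NORMALIZE[m.group(1).lower()]
                       for m in PATTERN.finditer(text)})

    print(dosage_forms("Supplied as 20 mg tablets or as an oral solution."))
    # -> ['Oral Solution', 'Oral Tablet']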

  20. Automated bow shock and radiation belt edge identification methods and their application for Cluster, THEMIS/ARTEMIS and Van Allen Probes data

    NASA Astrophysics Data System (ADS)

    Facsko, Gabor; Sibeck, David; Balogh, Tamas; Kis, Arpad; Wesztergom, Viktor

    2017-04-01

    The bow shock and the outer rim of the outer radiation belt are detected automatically by our algorithm, developed as part of the Boundary Layer Identification Code Cluster Active Archive project. The radiation belt positions are determined from energetic electron measurements available onboard all Cluster spacecraft. For bow shock identification we use magnetometer data and, when available, ion plasma instrument data. In addition, electrostatic wave instrument electron density, spacecraft potential measurements and wake indicator auxiliary data are also used, so the events can be identified by all Cluster probes in a highly redundant way, as the magnetometer and these instruments are still operational on all spacecraft. The capability and performance of the bow shock identification algorithm were tested using known bow shock crossings determined manually from January 29 to February 3, 2002. The verification enabled 70% of the bow shock crossings to be identified automatically. The method is highly flexible and can be applied to observations from various spacecraft. These tools have now been applied to Time History of Events and Macroscale Interactions during Substorms (THEMIS)/Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) magnetic field, plasma and spacecraft potential observations to identify bow shock crossings, and to Van Allen Probes supra-thermal electron observations to identify the edges of the radiation belt. The outcomes of the algorithms are checked manually and the parameters used for bow shock identification are refined.
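
    One ingredient of such an algorithm, flagging candidate shock crossings as abrupt jumps in field magnitude, can be sketched simply (the window length and jump ratio are illustrative thresholds, not the project's tuned parameters):

    # Sketch: candidate bow shock crossings as jumps in |B|.
    import numpy as np

    def shock_candidates(b_mag, window=60, ratio=2.0):
        """Indices where mean |B| changes by `ratio` across the point."""
        hits = []
        for i in range(window, len(b_mag) - window):
            before = b_mag[i - window:i].mean()
            after = b_mag[i:i + window].mean()
            if after > ratio * before or before > ratio * after:
                hits.append(i)
        return hits

    # toy series: ~5 nT solar wind jumping to ~15 nT magnetosheath
    rng = np.random.default_rng(3)
    b = np.concatenate([np.full(300, 5.0), np.full(300, 15.0)])
    b += 0.3 * rng.normal(size=b.size)
    print(shock_candidates(b)[:3])   # indices cluster around sample 300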
