Syntactic dependency parsers for biomedical-NLP.
Cohen, Raphael; Elhadad, Michael
2012-01-01
Syntactic parsers have made a leap in accuracy and speed in recent years. The high-order structural information provided by dependency parsers is useful for a variety of NLP applications. We present a biomedical model for the EasyFirst parser, a fast and accurate parser for creating Stanford Dependencies. We evaluate the biomedical-domain models of EasyFirst and Clear-Parser on a number of task-oriented metrics. Both parsers provide state-of-the-art speed and accuracy on the GENIA corpus, with over 89% accuracy. We show that Clear-Parser excels at tasks relating to negation identification while EasyFirst excels at tasks relating to Named Entities and is more robust to changes in domain.
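As a concrete illustration of the task-oriented evaluation such studies rely on (a sketch, not code from the paper), the snippet below computes the two standard dependency-parsing metrics, unlabeled and labeled attachment score, over invented toy parses:

```python
def attachment_scores(gold, predicted):
    """Compute unlabeled/labeled attachment scores (UAS/LAS) for one sentence.

    Each parse is a list of (head_index, relation_label) tuples, one per token.
    """
    assert len(gold) == len(predicted)
    uas_hits = sum(g[0] == p[0] for g, p in zip(gold, predicted))
    las_hits = sum(g == p for g, p in zip(gold, predicted))
    n = len(gold)
    return uas_hits / n, las_hits / n

# Example: a 3-token sentence where the last token gets the wrong head.
gold = [(2, "nsubj"), (0, "root"), (2, "dobj")]
pred = [(2, "nsubj"), (0, "root"), (1, "dobj")]
print(attachment_scores(gold, pred))  # (0.666..., 0.666...)
```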
Semantic Role Labeling of Clinical Text: Comparing Syntactic Parsers and Features
Zhang, Yaoyun; Jiang, Min; Wang, Jingqi; Xu, Hua
2016-01-01
Semantic role labeling (SRL), which extracts shallow semantic relation representations from different surface textual forms of free-text sentences, is important for understanding clinical narratives. Since semantic roles are formed by syntactic constituents in the sentence, an effective parser, as well as an effective syntactic feature set, is essential to build a practical SRL system. Our study initiates a formal evaluation and comparison of SRL performance on a clinical text corpus, MiPACQ, using three state-of-the-art parsers: the Stanford parser, the Berkeley parser, and the Charniak parser. First, the original parsers trained on the open-domain syntactic corpus Penn Treebank were employed. Next, those parsers were retrained on the clinical Treebank of MiPACQ for further comparison. Additionally, state-of-the-art syntactic features from open-domain SRL were also examined for clinical text. Experimental results showed that retraining the parsers on the clinical Treebank improved the performance significantly, with an optimal F1 measure of 71.41% achieved by the Berkeley parser. PMID:28269926
Ferraro, Jeffrey P; Ye, Ye; Gesteland, Per H; Haug, Peter J; Tsui, Fuchiang Rich; Cooper, Gregory F; Van Bree, Rudy; Ginter, Thomas; Nowalk, Andrew J; Wagner, Michael
2017-05-31
This study evaluates the accuracy and portability of a natural language processing (NLP) tool for extracting clinical findings of influenza from clinical notes across two large healthcare systems. Effectiveness is evaluated on how well NLP supports downstream influenza case-detection for disease surveillance. We independently developed two NLP parsers, one at Intermountain Healthcare (IH) in Utah and the other at the University of Pittsburgh Medical Center (UPMC), using local clinical notes from emergency department (ED) encounters for influenza. We measured NLP parser performance for the presence and absence of 70 clinical findings indicative of influenza. We then developed Bayesian network models from NLP-processed reports and tested their ability to discriminate among cases of (1) influenza, (2) non-influenza influenza-like illness (NI-ILI), and (3) 'other' diagnosis. On Intermountain Healthcare reports, recall and precision of the IH NLP parser were 0.71 and 0.75, respectively, and of the UPMC NLP parser, 0.67 and 0.79. On University of Pittsburgh Medical Center reports, recall and precision of the UPMC NLP parser were 0.73 and 0.80, respectively, and of the IH NLP parser, 0.53 and 0.80. Bayesian case-detection performance, measured by AUROC for influenza versus non-influenza on Intermountain Healthcare cases, was 0.93 (using the IH NLP parser) and 0.93 (using the UPMC NLP parser). Case-detection on University of Pittsburgh Medical Center cases was 0.95 (using the UPMC NLP parser) and 0.83 (using the IH NLP parser). For influenza versus NI-ILI on Intermountain Healthcare cases, performance was 0.70 (using the IH NLP parser) and 0.76 (using the UPMC NLP parser). On University of Pittsburgh Medical Center cases it was 0.76 (using the UPMC NLP parser) and 0.65 (using the IH NLP parser). In all but one instance (influenza versus NI-ILI using IH cases), local parsers were more effective at supporting case-detection, although the performance of non-local parsers was reasonable.
Yang, Chunguang G; Granite, Stephen J; Van Eyk, Jennifer E; Winslow, Raimond L
2006-11-01
Protein identification using MS is an important technique in proteomics as well as a major generator of proteomics data. We have designed the protein identification data object model (PDOM) and developed a parser based on this model to facilitate the analysis and storage of these data. The parser works with HTML or XML files saved or exported from MASCOT MS/MS ions search in peptide summary report or MASCOT PMF search in protein summary report. The program creates PDOM objects, eliminates redundancy in the input file, and has the capability to output any PDOM object to a relational database. This program facilitates additional analysis of MASCOT search results and aids the storage of protein identification information. The implementation is extensible and can serve as a template to develop parsers for other search engines. The parser can be used as a stand-alone application or can be driven by other Java programs. It is currently being used as the front end for a system that loads HTML and XML result files of MASCOT searches into a relational database. The source code is freely available at http://www.ccbm.jhu.edu and the program uses only free and open-source Java libraries.
Parsing clinical text: how good are the state-of-the-art parsers?
Jiang, Min; Huang, Yang; Fan, Jung-wei; Tang, Buzhou; Denny, Josh; Xu, Hua
2015-01-01
Parsing, which generates the syntactic structure of a sentence (a parse tree), is a critical component of natural language processing (NLP) research in any domain, including medicine. Although parsers developed in the general English domain, such as the Stanford parser, have been applied to clinical text, there are no formal evaluations and comparisons of their performance in the medical domain. In this study, we investigated the performance of three state-of-the-art parsers: the Stanford parser, the Bikel parser, and the Charniak parser, using the following two datasets: (1) a Treebank containing 1,100 sentences that were randomly selected from progress notes used in the 2010 i2b2 NLP challenge and manually annotated according to a Penn Treebank based guideline; and (2) the MiPACQ Treebank, which was developed from pathology notes and clinical notes and contains 13,091 sentences. We conducted three experiments on both datasets. First, we measured the performance of the three state-of-the-art parsers on the clinical Treebanks with their default settings. Then we re-trained the parsers using the clinical Treebanks and evaluated their performance using the 10-fold cross validation method. Finally we re-trained the parsers by combining the clinical Treebanks with the Penn Treebank. Our results showed that the original parsers achieved lower performance on clinical text (Bracketing F-measure in the range of 66.6%-70.3%) compared to general English text. After retraining on the clinical Treebanks, all parsers achieved better performance, with the best performance from the Stanford parser, which reached the highest Bracketing F-measure of 73.68% on progress notes and 83.72% on the MiPACQ corpus using 10-fold cross validation. When the combined clinical Treebanks and Penn Treebank were used, the Charniak parser achieved the highest Bracketing F-measure of 73.53% on progress notes and the Stanford parser reached the highest F-measure of 84.15% on the MiPACQ corpus. Our study demonstrates that re-training using clinical Treebanks is critical for improving general English parsers' performance on clinical text, and combining clinical and open domain corpora might achieve optimal performance for parsing clinical text. PMID:26045009
Towards automated processing of clinical Finnish: sublanguage analysis and a rule-based parser.
Laippala, Veronika; Ginter, Filip; Pyysalo, Sampo; Salakoski, Tapio
2009-12-01
In this paper, we present steps taken towards more efficient automated processing of clinical Finnish, focusing on daily nursing notes in a Finnish Intensive Care Unit (ICU). First, we analyze ICU Finnish as a sublanguage, identifying its specific features that facilitate, for example, the development of a specialized syntactic analyzer. The identified features include frequent omission of finite verbs, limitations in allowed syntactic structures, and domain-specific vocabulary. Second, we develop a formal grammar and a parser for ICU Finnish, thus providing better tools for the development of further applications in the clinical domain. The grammar is implemented in the LKB system in a typed feature structure formalism. The lexicon is automatically generated based on the output of the FinTWOL morphological analyzer adapted to the clinical domain. As an additional experiment, we study the effect of using Finnish constraint grammar to reduce the size of the lexicon. The parser construction thus makes efficient use of existing resources for Finnish. The grammar currently covers 76.6% of ICU Finnish sentences, producing highly accurate best-parse analyses with an F-score of 91.1%. We find that building a parser for the highly specialized domain sublanguage is not only feasible, but also surprisingly efficient, given an existing morphological analyzer with broad vocabulary coverage. The resulting parser enables a deeper analysis of the text than was previously possible.
COD::CIF::Parser: an error-correcting CIF parser for the Perl language.
Merkys, Andrius; Vaitkus, Antanas; Butkus, Justas; Okulič-Kazarinas, Mykolas; Kairys, Visvaldas; Gražulis, Saulius
2016-02-01
A syntax-correcting CIF parser, COD::CIF::Parser, is presented that can parse CIF 1.1 files and accurately report the position and the nature of the discovered syntactic problems. In addition, the parser is able to automatically fix the most common and the most obvious syntactic deficiencies of the input files. Bindings for Perl, C and Python programming environments are available. Based on COD::CIF::Parser, the cod-tools package for manipulating the CIFs in the Crystallography Open Database (COD) has been developed. The cod-tools package has been successfully used for continuous updates of the data in the automated COD data deposition pipeline, and to check the validity of COD data against the IUCr data validation guidelines. The performance, capabilities and applications of different parsers are compared.
Morphosyntactic annotation of CHILDES transcripts
SAGAE, KENJI; DAVIS, ERIC; LAVIE, ALON; MACWHINNEY, BRIAN; WINTNER, SHULY
2014-01-01
Corpora of child language are essential for research in child language acquisition and psycholinguistics. Linguistic annotation of the corpora provides researchers with better means for exploring the development of grammatical constructions and their usage. We describe a project whose goal is to annotate the English section of the CHILDES database with grammatical relations in the form of labeled dependency structures. We have produced a corpus of over 18,800 utterances (approximately 65,000 words) with manually curated gold-standard grammatical relation annotations. Using this corpus, we have developed a highly accurate data-driven parser for the English CHILDES data, which we used to automatically annotate the remainder of the English section of CHILDES. We have also extended the parser to Spanish, and are currently working on supporting more languages. The parser and the manually and automatically annotated data are freely available for research purposes. PMID:20334720
Storing files in a parallel computing system based on user-specified parser function
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron
2014-10-21
Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1983-01-01
The current work in progress for the SAGA project is described. The highlights of this research are: a parser-independent SAGA editor, a design for the screen editing facilities of the editor, delivery to NASA of release 1 of Olorin, the SAGA parser generator, personal workstation environment research, release 1 of the SAGA symbol table manager, delta generation in SAGA, requirements for a proof management system, documentation for and testing of the Cyber Pascal make prototype, a prototype Cyber-based slicing facility, a June 1984 demonstration plan, SAGA utility programs, a summary of UNIX software engineering support, and a theorem prover review.
Overview of the ArbiTER edge plasma eigenvalue code
NASA Astrophysics Data System (ADS)
Baver, Derek; Myra, James; Umansky, Maxim
2011-10-01
The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.
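A minimal, hypothetical sketch of the equation-parser idea described above (invented for illustration; this is not ArbiTER's actual input format): model terms arrive as data, as (profile, operator) instructions that are executed sequentially to assemble a matrix for eigenvalue analysis.

```python
import numpy as np

def second_derivative(n, dx):
    """Elementary operator: 1-D second-derivative stencil on n points."""
    d2 = np.zeros((n, n))
    for i in range(1, n - 1):
        d2[i, i - 1 : i + 2] = [1.0, -2.0, 1.0]
    return d2 / dx**2

def build_matrix(instructions, n, dx):
    """Execute (coefficient_profile, operator_name) instructions in order."""
    a = np.zeros((n, n))
    ops = {"d2dx2": second_derivative(n, dx), "identity": np.eye(n)}
    for profile, op_name in instructions:
        a += np.diag(profile) @ ops[op_name]
    return a

n, dx = 8, 0.1
x = np.linspace(0.0, 1.0, n)
# Model: d2u/dx2 + c(x) u, written as two instructions rather than source code.
matrix = build_matrix([(np.ones(n), "d2dx2"), (1.0 + x, "identity")], n, dx)
print(np.linalg.eigvals(matrix).real.max())
```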
Benchmarking natural-language parsers for biological applications using dependency graphs.
Clegg, Andrew B; Shepherd, Adrian J
2007-01-25
Interest is growing in the application of syntactic parsers to natural language processing problems in biology, but assessing their performance is difficult because differences in linguistic convention can falsely appear to be errors. We present a method for evaluating their accuracy using an intermediate representation based on dependency graphs, in which the semantic relationships important in most information extraction tasks are closer to the surface. We also demonstrate how this method can be easily tailored to various application-driven criteria. Using the GENIA corpus as a gold standard, we tested four open-source parsers which have been used in bioinformatics projects. We first present overall performance measures, and test the two leading tools, the Charniak-Lease and Bikel parsers, on subtasks tailored to reflect the requirements of a system for extracting gene expression relationships. These two tools clearly outperform the other parsers in the evaluation, and achieve accuracy levels comparable to or exceeding native dependency parsers on similar tasks in previous biological evaluations. Evaluating using dependency graphs allows parsers to be tested easily on criteria chosen according to the semantics of particular biological applications, drawing attention to important mistakes and soaking up many insignificant differences that would otherwise be reported as errors. Generating high-accuracy dependency graphs from the output of phrase-structure parsers also provides access to the more detailed syntax trees that are used in several natural-language processing techniques. PMID:17254351
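The evaluation idea generalizes naturally: treat each parse as a set of (head, relation, dependent) triples and score precision and recall over them, optionally restricting the set to the relations a given application cares about. A minimal illustrative sketch (not the paper's code; the triples and labels are invented):

```python
def dependency_prf(gold_triples, test_triples):
    """Precision, recall, and F1 over dependency graphs viewed as triple sets."""
    gold, test = set(gold_triples), set(test_triples)
    tp = len(gold & test)
    precision = tp / len(test) if test else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f1

gold = {("binds", "nsubj", "protein"), ("binds", "dobj", "receptor")}
test = {("binds", "nsubj", "protein"), ("binds", "prep_to", "receptor")}
print(dependency_prf(gold, test))  # (0.5, 0.5, 0.5)
```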
Extracting noun phrases for all of MEDLINE.
Bennett, N. A.; He, Q.; Powell, K.; Schatz, B. R.
1999-01-01
A natural language parser that could extract noun phrases for all medical texts would be of great utility in analyzing content for information retrieval. We discuss the extraction of noun phrases from MEDLINE, using a general parser not tuned specifically for any medical domain. The noun phrase extractor is made up of three modules: tokenization; part-of-speech tagging; noun phrase identification. Using our program, we extracted noun phrases from the entire MEDLINE collection, encompassing 9.3 million abstracts. Over 270 million noun phrases were generated, of which 45 million were unique. The quality of these phrases was evaluated by examining all phrases from a sample collection of abstracts. The precision and recall of the phrases from our general parser compared favorably with those from three other parsers we had previously evaluated. We are continuing to improve our parser and evaluate our claim that a generic parser can effectively extract all the different phrases across the entire medical literature. PMID:10566444
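The same three-module pipeline can be sketched with NLTK (an illustration only, not the authors' system; the example sentence and NP grammar are invented):

```python
# Requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk

grammar = "NP: {<DT>?<JJ.*>*<NN.*>+}"  # optional determiner, adjectives, nouns
chunker = nltk.RegexpParser(grammar)

sentence = "The mutant gene produces an abnormal receptor protein."
tokens = nltk.word_tokenize(sentence)   # module 1: tokenization
tagged = nltk.pos_tag(tokens)           # module 2: part-of-speech tagging
tree = chunker.parse(tagged)            # module 3: noun phrase identification

for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
    print(" ".join(word for word, tag in subtree.leaves()))
# expected chunks: "The mutant gene", "an abnormal receptor protein"
```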
A Protocol for Annotating Parser Differences. Research Report. ETS RR-16-02
ERIC Educational Resources Information Center
Bruno, James V.; Cahill, Aoife; Gyawali, Binod
2016-01-01
We present an annotation scheme for classifying differences in the outputs of syntactic constituency parsers when a gold standard is unavailable or undesired, as in the case of texts written by nonnative speakers of English. We discuss its automated implementation and the results of a case study that uses the scheme to choose a parser best suited…
Processing of ICARTT Data Files Using Fuzzy Matching and Parser Combinators
NASA Technical Reports Server (NTRS)
Rutherford, Matthew T.; Typanski, Nathan D.; Wang, Dali; Chen, Gao
2014-01-01
In this paper, the task of parsing and matching inconsistent, poorly formed text data through the use of parser combinators and fuzzy matching is discussed. An object-oriented implementation of the parser combinator technique is used to provide a relatively simple interface for adapting base parsers. For matching tasks, a fuzzy matching algorithm with Levenshtein distance calculations is implemented to match string pairs that are otherwise difficult to match due to the aforementioned irregularities and errors in one or both pair members. Used in concert, the two techniques allow parsing and matching operations to be performed which had previously only been done manually.
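The fuzzy-matching ingredient rests on the Levenshtein distance: the minimum number of single-character insertions, deletions, and substitutions between two strings, so near-misses score low even when exact matching fails. A self-contained sketch with an invented example pair:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming with a rolling row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("NOx_ppbv", "NOx ppbv"))  # 1: a single substitution
```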
The parser generator as a general purpose tool
NASA Technical Reports Server (NTRS)
Noonan, R. E.; Collins, W. R.
1985-01-01
The parser generator has proven to be an extremely useful, general purpose tool. It can be used effectively by programmers having only a knowledge of grammars and no training at all in the theory of formal parsing. Some of the application areas for which a table-driven parser can be used include interactive query languages, menu systems, translators, and programming support tools. Each of these is illustrated by an example grammar.
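A toy example of the table-driven approach (an invented arithmetic grammar, not one from the report): the user supplies only the grammar table, and a generic stack-driven loop does all the parsing.

```python
# Toy LL(1) grammar: E -> T E' ; E' -> '+' T E' | ε ; T -> 'int' | '(' E ')'
TABLE = {
    ("E", "int"): ["T", "E'"], ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "int"): ["int"], ("T", "("): ["(", "E", ")"],
}
NONTERMINALS = {"E", "E'", "T"}

def parse(tokens):
    """Generic table-driven predictive parser; returns True iff input is valid."""
    stack, tokens = ["$", "E"], tokens + ["$"]
    pos = 0
    while stack:
        top = stack.pop()
        if top in NONTERMINALS:
            rule = TABLE.get((top, tokens[pos]))
            if rule is None:
                return False
            stack.extend(reversed(rule))   # push the right-hand side
        elif top == tokens[pos]:
            pos += 1                       # match terminal (or end marker)
        else:
            return False
    return pos == len(tokens)

print(parse(["int", "+", "(", "int", "+", "int", ")"]))  # True
print(parse(["int", "+", "+"]))                          # False
```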
Domain Adaption of Parsing for Operative Notes
Wang, Yan; Pakhomov, Serguei; Ryan, James O.; Melton, Genevieve B.
2016-01-01
Background Full syntactic parsing of clinical text as a part of clinical natural language processing (NLP) is critical for a wide range of applications, such as identification of adverse drug reactions, patient cohort identification, and gene interaction extraction. Several robust syntactic parsers are publicly available to produce linguistic representations for sentences. However, these existing parsers are mostly trained on general English text and often require adaptation for optimal performance on clinical text. Our objective was to adapt an existing general English parser for the clinical text of operative reports via lexicon augmentation, statistics adjusting, and grammar rule modification based on a set of biomedical texts. Method The Stanford unlexicalized probabilistic context-free grammar (PCFG) parser lexicon was expanded with the SPECIALIST lexicon, along with statistics collected from a limited set of operative notes tagged with two POS taggers (the GENIA tagger and MedPost). The most frequently occurring verb entries of the SPECIALIST lexicon were adjusted based on manual review of verb usage in operative notes. Stanford parser grammar production rules were also modified based on linguistic features of operative reports. An analogous approach was then applied to the GENIA corpus to test the generalizability of this approach to biomedical text. Results The new unlexicalized PCFG parser, extended with the extra lexicon from SPECIALIST along with accurate statistics collected from an operative note corpus tagged with the GENIA POS tagger, improved parser performance by 2.26%, from 87.64% to 89.90%. There was a progressive improvement with the addition of multiple approaches. Most of the improvement occurred with lexicon augmentation combined with statistics from the operative notes corpus. Application of this approach to the GENIA corpus showed that parsing performance was boosted by 3.81% with a simple new grammar and the addition of the GENIA corpus lexicon. Conclusion Using statistics collected from clinical text tagged with POS taggers, along with proper modification of the grammars and lexicons of an unlexicalized PCFG parser, can improve parsing performance. PMID:25661593
GBParsy: a GenBank flatfile parser library with high speed.
Lee, Tae-Ho; Kim, Yeon-Ki; Nahm, Baek Hie
2008-07-25
GenBank flatfile (GBF) format is one of the most popular sequence file formats because of its detailed sequence features and ease of readability. To use the data in a file by computer, a parsing process is required, performed according to a given grammar for the sequence and the description in a GBF. Several parser libraries for the GBF have been developed. However, with the accumulation of DNA sequence information from eukaryotic chromosomes, parsing a eukaryotic genome sequence with these libraries inevitably takes a long time, due to the large GBF file and its correspondingly large genomic nucleotide sequence and related feature information. Thus, there is significant need for a parsing program with high speed and efficient use of system memory. We developed GBParsy, a C-based library that parses GBF files. Parsing speed was maximized by using content-specific functions in place of regular expressions, which are flexible but slow. In addition, we optimized an algorithm related to memory usage, which increased both parsing performance and memory efficiency. GBParsy is at least 5-100x faster than current parsers in benchmark tests and is estimated to extract the annotated information from an almost 100 Mb GenBank flatfile of chromosomal sequence within a second. Thus, it should be suitable for a variety of applications such as on-time visualization of a genome at a web site.
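For comparison, the same extraction task expressed in Python with Biopython's SeqIO (this is not GBParsy, which is a C library; "chromosome.gb" is a placeholder filename):

```python
from Bio import SeqIO

# Iterate over records in a GenBank flatfile and pull out CDS annotations.
for record in SeqIO.parse("chromosome.gb", "genbank"):
    print(record.id, len(record.seq), "bp")
    for feature in record.features:
        if feature.type == "CDS":
            gene = feature.qualifiers.get("gene", ["?"])[0]
            print(" ", gene, feature.location)
```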
PDB file parser and structure class implemented in Python.
Hamelryck, Thomas; Manderick, Bernard
2003-11-22
The biopython project provides a set of bioinformatics tools implemented in Python. Recently, biopython was extended with a set of modules that deal with macromolecular structure. Biopython now contains a parser for PDB files that makes the atomic information available in an easy-to-use but powerful data structure. The parser and data structure deal with features that are often left out or handled inadequately by other packages, e.g. atom and residue disorder (if point mutants are present in the crystal), anisotropic B factors, multiple models and insertion codes. In addition, the parser performs some sanity checking to detect obvious errors. The Biopython distribution (including source code and documentation) is freely available (under the Biopython license) from http://www.biopython.org
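A minimal usage sketch of the parser described above (Bio.PDB; the structure id and PDB filename are placeholders):

```python
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)  # suppress warnings raised by sanity checking
structure = parser.get_structure("1abc", "1abc.pdb")

# The data structure follows the SMCRA hierarchy:
# Structure -> Model -> Chain -> Residue -> Atom.
for model in structure:
    for chain in model:
        for residue in chain:
            for atom in residue:
                print(chain.id, residue.get_resname(),
                      atom.get_name(), atom.get_bfactor())
```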
GazeParser: an open-source and multiplatform library for low-cost eye tracking and analysis.
Sogo, Hiroyuki
2013-09-01
Eye movement analysis is an effective method for research on visual perception and cognition. However, recordings of eye movements present practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eyetracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with the PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments are reported as performance tests of GazeParser. These showed that the means and standard deviations for errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on the participant. In gap/overlap tasks and antisaccade tasks, the latency and amplitude of the saccades detected by GazeParser agreed with those detected by a commercial eyetracker. These results show that GazeParser demonstrates adequate performance for use in psychological experiments.
A natural language interface to databases
NASA Technical Reports Server (NTRS)
Ford, D. R.
1988-01-01
The development of a Natural Language Interface which is semantic-based and uses Conceptual Dependency representation is presented. The system was developed using Lisp and currently runs on a Symbolics Lisp machine. A key point is that the parser handles morphological analysis, which expands its capability to understand more words.
Robo-Sensei's NLP-Based Error Detection and Feedback Generation
ERIC Educational Resources Information Center
Nagata, Noriko
2009-01-01
This paper presents a new version of Robo-Sensei's NLP (Natural Language Processing) system which updates the version currently available as the software package "ROBO-SENSEI: Personal Japanese Tutor" (Nagata, 2004). Robo-Sensei's NLP system includes a lexicon, a morphological generator, a word segmentor, a morphological parser, a syntactic…
Automatic Parsing of Parental Verbal Input
Sagae, Kenji; MacWhinney, Brian; Lavie, Alon
2006-01-01
To evaluate theoretical proposals regarding the course of child language acquisition, researchers often need to rely on the processing of large numbers of syntactically parsed utterances, both from children and their parents. Because it is so difficult to do this by hand, there are currently no parsed corpora of child language input data. To automate this process, we developed a system that combined the MOR tagger, a rule-based parser, and statistical disambiguation techniques. The resultant system obtained nearly 80% correct parses for the sentences spoken to children. To achieve this level, we had to construct a particular processing sequence that minimizes problems caused by the coverage/ambiguity trade-off in parser design. These procedures are particularly appropriate for use with the CHILDES database, an international corpus of transcripts. The data and programs are now freely available over the Internet. PMID:15190707
Is human sentence parsing serial or parallel? Evidence from event-related brain potentials.
Hopf, Jens-Max; Bader, Markus; Meng, Michael; Bayer, Josef
2003-01-01
In this ERP study we investigate the processes that occur in syntactically ambiguous German sentences at the point of disambiguation. Whereas most psycholinguistic theories agree on the view that processing difficulties arise when parsing preferences are disconfirmed (so-called garden-path effects), important differences exist with respect to theoretical assumptions about the parser's recovery from a misparse. A key distinction can be made between parsers that compute all alternative syntactic structures in parallel (parallel parsers) and parsers that compute only a single preferred analysis (serial parsers). To distinguish empirically between parallel and serial parsing models, we compare ERP responses to garden-path sentences with ERP responses to truly ungrammatical sentences. Garden-path sentences contain a temporary and ultimately curable ungrammaticality, whereas truly ungrammatical sentences remain so permanently, a difference which gives rise to different predictions in the two classes of parsing architectures. At the disambiguating word, ERPs in both sentence types show negative shifts of similar onset latency, amplitude, and scalp distribution in an initial time window between 300 and 500 ms. In a following time window (500-700 ms), the negative shift to garden-path sentences disappears at right central parietal sites, while it continues in permanently ungrammatical sentences. These data are taken as evidence for a strictly serial parser. The absence of a difference in the early time window indicates that temporary and permanent ungrammaticalities trigger the same kind of parsing responses. Later differences can be related to successful reanalysis in garden-path but not in ungrammatical sentences.
Investigating AI with BASIC and Logo: Helping the Computer to Understand INPUTS.
ERIC Educational Resources Information Center
Mandell, Alan; Lucking, Robert
1988-01-01
Investigates using the microcomputer to develop a sentence parser to simulate intelligent conversation used in artificial intelligence applications. Compares the ability of LOGO and BASIC for this use. Lists and critiques several LOGO and BASIC parser programs. (MVL)
Chen, Hung-Ming; Liou, Yong-Zan
2014-10-01
In a mobile health management system, mobile devices act as the application hosting devices for personal health records (PHRs), and healthcare servers are constructed to exchange and analyze PHRs. One of the most popular PHR standards is the continuity of care record (CCR), which is expressed in XML format. However, parsing is an expensive operation that can degrade XML processing performance. Hence, the objective of this study was to identify the different operational and performance characteristics of several CCR parsing models: the XML DOM parser, the SAX parser, the PULL parser, and the JSON parser applied to JSON data converted from XML-based CCRs. Thus, developers can make sensible choices for their target PHR applications when parsing CCRs on mobile devices or servers with different system resources. Furthermore, simulation experiments on four case studies were conducted to compare parsing performance on Android mobile devices and on the server with large quantities of CCR data.
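The trade-offs among these parsing models can be illustrated with Python's standard library on a toy CCR-like fragment (the real CCR schema is far richer; the element names here are stand-ins):

```python
import io
import json
import xml.etree.ElementTree as ET

xml_doc = "<CCR><Body><Result>120/80 mmHg</Result></Body></CCR>"

# DOM-style: build the whole tree in memory, then navigate it.
root = ET.fromstring(xml_doc)
print(root.find("./Body/Result").text)

# Streaming (SAX/PULL-style): react to events without retaining the tree,
# which keeps memory flat on resource-constrained mobile devices.
for event, elem in ET.iterparse(io.BytesIO(xml_doc.encode()), events=("end",)):
    if elem.tag == "Result":
        print(elem.text)
    elem.clear()  # discard processed elements as we go

# JSON: once the CCR is converted, parsing is a single cheap call.
json_doc = json.dumps({"CCR": {"Body": {"Result": "120/80 mmHg"}}})
print(json.loads(json_doc)["CCR"]["Body"]["Result"])
```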
Progress in The Semantic Analysis of Scientific Code
NASA Technical Reports Server (NTRS)
Stewart, Mark
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
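A toy sketch of the annotate-then-parse idea (invented, and far simpler than the expert parsers described): primitive variables carry semantic declarations, here physical units as base-dimension exponents, and a checker flags additions of dimensionally incompatible quantities, a typical semantic error.

```python
M, L, T = "mass", "length", "time"

# Semantic declarations added for primitive variables.
DECLS = {
    "rho": {M: 1, L: -3},         # density, kg m^-3
    "u":   {L: 1, T: -1},         # velocity, m s^-1
    "p":   {M: 1, L: -1, T: -2},  # pressure, Pa
}

def mul(a, b):
    """Units of a product: add the base-dimension exponents."""
    dims = dict(a)
    for k, v in b.items():
        dims[k] = dims.get(k, 0) + v
    return {k: v for k, v in dims.items() if v}

def check_add(a, b, context):
    """Addition is only meaningful between quantities of identical dimension."""
    if a != b:
        print(f"semantic error in {context}: {a} vs {b}")
    else:
        print(f"{context}: dimensionally consistent")

# p + rho*u^2 is consistent (both are pressures) ...
check_add(DECLS["p"], mul(DECLS["rho"], mul(DECLS["u"], DECLS["u"])), "p + rho*u^2")
# ... but p + rho*u is a semantic error the checker catches.
check_add(DECLS["p"], mul(DECLS["rho"], DECLS["u"]), "p + rho*u")
```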
Colaert, Niklaas; Barsnes, Harald; Vaudel, Marc; Helsens, Kenny; Timmerman, Evy; Sickmann, Albert; Gevaert, Kris; Martens, Lennart
2011-08-05
The Thermo Proteome Discoverer program integrates both peptide identification and quantification into a single workflow for peptide-centric proteomics. Furthermore, its close integration with Thermo mass spectrometers has made it increasingly popular in the field. Here, we present a Java library to parse the msf files that constitute the output of Proteome Discoverer. The parser is also implemented in a graphical user interface allowing convenient access to the information found in the msf files, and in Rover, a program to analyze and validate quantitative proteomics information. All code, binaries, and documentation are freely available at http://thermo-msf-parser.googlecode.com.
Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A
2012-04-01
A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations.
Memory Retrieval in Parsing and Interpretation
ERIC Educational Resources Information Center
Schlueter, Ananda Lila Zoe
2017-01-01
This dissertation explores the relationship between the parser and the grammar in error-driven retrieval by examining the mechanism underlying the illusory licensing of subject-verb agreement violations ("agreement attraction"). Previous work motivates a two-stage model of agreement attraction in which the parser predicts the verb's…
Looking forwards and backwards: The real-time processing of Strong and Weak Crossover
Lidz, Jeffrey; Phillips, Colin
2017-01-01
We investigated the processing of pronouns in Strong and Weak Crossover constructions as a means of probing the extent to which the incremental parser can use syntactic information to guide antecedent retrieval. In Experiment 1 we show that the parser accesses a displaced wh-phrase as an antecedent for a pronoun when no grammatical constraints prohibit binding, but the parser ignores the same wh-phrase when it stands in a Strong Crossover relation to the pronoun. These results are consistent with two possibilities. First, the parser could apply Principle C at antecedent retrieval to exclude the wh-phrase on the basis of the c-command relation between its gap and the pronoun. Alternatively, retrieval might ignore any phrases that do not occupy an Argument position. Experiment 2 distinguished between these two possibilities by testing antecedent retrieval under Weak Crossover. In Weak Crossover binding of the pronoun is ruled out by the argument condition, but not Principle C. The results of Experiment 2 indicate that antecedent retrieval accesses matching wh-phrases in Weak Crossover configurations. On the basis of these findings we conclude that the parser can make rapid use of Principle C and c-command information to constrain retrieval. We discuss how our results support a view of antecedent retrieval that integrates inferences made over unseen syntactic structure into constraints on backward-looking processes like memory retrieval. PMID:28936483
ERIC Educational Resources Information Center
Heift, Trude; Schulze, Mathias
2012-01-01
This book provides the first comprehensive overview of theoretical issues, historical developments and current trends in ICALL (Intelligent Computer-Assisted Language Learning). It assumes a basic familiarity with Second Language Acquisition (SLA) theory and teaching, CALL and linguistics. It is of interest to upper undergraduate and/or graduate…
Linking Parser Development to Acquisition of Syntactic Knowledge
ERIC Educational Resources Information Center
Omaki, Akira; Lidz, Jeffrey
2015-01-01
Traditionally, acquisition of syntactic knowledge and the development of sentence comprehension behaviors have been treated as separate disciplines. This article reviews a growing body of work on the development of incremental sentence comprehension mechanisms and discusses how a better understanding of the developing parser can shed light on two…
The value of parsing as feature generation for gene mention recognition
Smith, Larry H; Wilbur, W John
2009-01-01
We measured the extent to which information surrounding a base noun phrase reflects the presence of a gene name, and evaluated seven different parsers in their ability to provide information for that purpose. Using the GENETAG corpus as a gold standard, we performed machine learning to recognize from its context when a base noun phrase contained a gene name. Starting with the best lexical features, we assessed the gain of adding dependency or dependency-like relations from a full sentence parse. Features derived from parsers improved performance in this partial gene mention recognition task by a small but statistically significant amount. There were virtually no differences between parsers in these experiments. PMID:19345281
A python tool for the implementation of domain-specific languages
NASA Astrophysics Data System (ADS)
Dejanović, Igor; Vaderna, Renata; Milosavljević, Gordana; Simić, Miloš; Vuković, Željko
2017-07-01
In this paper we describe textX, a meta-language and a tool for building Domain-Specific Languages. It is implemented in Python using the Arpeggio PEG (Parsing Expression Grammar) parser library. From a single language description (grammar), textX builds a parser and a meta-model (a.k.a. abstract syntax) of the language. The parser is used to parse textual representations of models conforming to the meta-model. As a result of parsing, a Python object graph is automatically created whose structure conforms to the meta-model defined by the grammar. This approach frees a developer from the need to manually analyse a parse tree and transform it to another, more suitable representation. The textX library is independent of any integrated development environment and can be easily integrated into any Python project. The textX tool works as a grammar interpreter: the parser is configured at run time using the grammar. The textX tool is a free and open-source project available at GitHub.
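A short usage sketch of textX (the grammar and model text are invented examples):

```python
from textx import metamodel_from_str

grammar = """
Model: points+=Point;
Point: 'point' name=ID x=INT y=INT;
"""

mm = metamodel_from_str(grammar)       # meta-model built from the grammar
model = mm.model_from_str("""
point a 1 2
point b 3 4
""")

# The object graph conforms to the meta-model: no parse-tree walking needed.
for p in model.points:
    print(p.name, p.x, p.y)
```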
Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Staat, C.; Mandtler, L.; Pl¨umer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has received more and more attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, and an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated; it is given by probability densities as well as architectural patterns. Since normal distributions cannot always be assumed, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic and topological consistency is ensured.
Structure before Meaning: Sentence Processing, Plausibility, and Subcategorization
Kizach, Johannes; Nyvad, Anne Mette; Christensen, Ken Ramshøj
2013-01-01
Natural language processing is a fast and automatized process. A crucial part of this process is parsing, the online incremental construction of a syntactic structure. The aim of this study was to test whether a wh-filler extracted from an embedded clause is initially attached as the object of the matrix verb with subsequent reanalysis, and if so, whether the plausibility of such an attachment has an effect on reaction time. Finally, we wanted to examine whether subcategorization plays a role. We used a method called G-Maze to measure response time in a self-paced reading design. The experiments confirmed that there is early attachment of fillers to the matrix verb. When this attachment is implausible, the off-line acceptability of the whole sentence is significantly reduced. The on-line results showed that G-Maze was highly suited for this type of experiment. In accordance with our predictions, the results suggest that the parser ignores (or has no access to information about) implausibility and attaches fillers as soon as possible to the matrix verb. However, the results also show that the parser uses the subcategorization frame of the matrix verb. In short, the parser ignores semantic information and allows implausible attachments but adheres to information about which type of object a verb can take, ensuring that the parser does not make impossible attachments. We argue that the evidence supports a syntactic parser informed by syntactic cues, rather than one guided by semantic cues or one that is blind, or completely autonomous. PMID:24116101
Ye, Ye; Wagner, Michael M.; Cooper, Gregory F.; Ferraro, Jeffrey P.; Su, Howard; Gesteland, Per H.; Haug, Peter J.; Millett, Nicholas E.; Aronis, John M.; Nowalk, Andrew J.; Ruiz, Victor M.; López Pineda, Arturo; Shi, Lingyun; Van Bree, Rudy; Ginter, Thomas; Tsui, Fuchiang
2017-01-01
Objectives This study evaluates the accuracy and transferability of Bayesian case detection systems (BCD) that use clinical notes from the emergency department (ED) to detect influenza cases. Methods A BCD uses natural language processing (NLP) to infer the presence or absence of clinical findings from ED notes, which are fed into a Bayesian network classifier (BN) to infer patients' diagnoses. We developed BCDs at the University of Pittsburgh Medical Center (BCDUPMC) and Intermountain Healthcare in Utah (BCDIH). At each site, we manually built a rule-based NLP and trained a Bayesian network classifier from over 40,000 ED encounters between Jan. 2008 and May 2010 using feature selection, machine learning, and an expert debiasing approach. Transferability of a BCD in this study may be impacted by seven factors: development (source) institution, development parser, application (target) institution, application parser, NLP transfer, BN transfer, and classification task. We employed an ANOVA analysis to study their impacts on BCD performance. Results Both BCDs discriminated well between influenza and non-influenza on local test cases (AUCs > 0.92). When tested for transferability using the other institution's cases, BCDUPMC discrimination declined minimally (AUC decreased from 0.95 to 0.94, p<0.01), and BCDIH discrimination declined more (from 0.93 to 0.87, p<0.0001). We attributed the BCDIH decline to the lower recall of the IH parser on UPMC notes. The ANOVA analysis showed five significant factors: development parser, application institution, application parser, BN transfer, and classification task. Conclusion We demonstrated high influenza case detection performance in two large healthcare systems in two geographically separated regions, providing evidentiary support for the use of automated case detection from routinely collected electronic clinical notes in national influenza surveillance. The transferability could be improved by training the Bayesian network classifier locally and increasing the accuracy of the NLP parser. PMID:28380048
An Experiment in Scientific Code Semantic Analysis
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.
1998-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
Parser Combinators: a Practical Application for Generating Parsers for NMR Data
Fenwick, Matthew; Weatherby, Gerard; Ellis, Heidi JC; Gryk, Michael R.
2013-01-01
Nuclear Magnetic Resonance (NMR) spectroscopy is a technique for acquiring protein data at atomic resolution and determining the three-dimensional structure of large protein molecules. A typical structure determination process results in the deposition of large data sets to the BMRB (Bio-Magnetic Resonance Data Bank). This data is stored and shared in a file format called NMR-Star. The format is syntactically and semantically complex, making it challenging to parse. Nevertheless, parsing these files is crucial to applying the vast amounts of biological information stored in NMR-Star files, allowing researchers to harness the results of previous studies to direct and validate future work. One powerful approach for parsing files is to apply a Backus-Naur Form (BNF) grammar, a high-level model of a file format; translation of the grammatical model to an executable parser may then be accomplished automatically. This paper shows how we applied a model BNF grammar of the NMR-Star format to create a free, open-source parser, using a method that originated in the functional programming world known as “parser combinators”. This paper demonstrates the effectiveness of a principled approach to file specification and parsing. It also builds upon our previous work [1], in that 1) it applies concepts from Functional Programming (which is relevant even though the implementation language, Java, is more mainstream than Functional Programming), and 2) all work and accomplishments from this project will be made available under standard open source licenses to provide the community with the opportunity to learn from our techniques and methods. PMID:24352525
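The combinator idea itself fits in a few lines. A minimal Python sketch (illustrative only; the paper's implementation is in Java and targets the full NMR-Star grammar, and the BNF rule below is made up):

```python
def literal(s):
    """Parser that matches the exact string s at the current position."""
    def parse(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None
    return parse

def seq(*parsers):
    """Combinator: run parsers in sequence, threading the position through."""
    def parse(text, pos):
        values = []
        for p in parsers:
            result = p(text, pos)
            if result is None:
                return None
            value, pos = result
            values.append(value)
        return values, pos
    return parse

def alt(*parsers):
    """Combinator: return the result of the first parser that succeeds."""
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

# Combinators compose small parsers into larger ones, mirroring the BNF rule
# save_frame := "save_" ( "yes" | "no" )   (a made-up rule, not NMR-Star).
save_frame = seq(literal("save_"), alt(literal("yes"), literal("no")))
print(save_frame("save_yes", 0))    # (['save_', 'yes'], 8)
print(save_frame("save_maybe", 0))  # None
```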
ImageParser: a tool for finite element generation from three-dimensional medical images
Yin, HM; Sun, LZ; Wang, G; Yamada, T; Wang, J; Vannier, MW
2004-01-01
Background The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It is yet a challenge to build up an FEM mesh directly from a volumetric image, partially because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. Methods A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the context of the image, including neighboring tissues and organs, completes segmentation of different tissues, and meshes the organ into elements. Results The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. Conclusion The ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with customer-defined segmentation information. PMID:15461787
An Experiment in Scientific Program Understanding
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Owen, Karl (Technical Monitor)
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
De Vincenzi, M
1996-01-01
This paper presents three experiments on the parsing of Italian wh-questions that manipulate the wh-type (who vs. which-N) and the wh-extraction site (main clause, or dependent clause with or without complementizer). The aim of these manipulations is to see whether the parser is sensitive to the type of dependency being processed and whether the processing effects can be explained by a unique processing principle, the minimal chain principle (MCP; De Vincenzi, 1991). The results show that the parser, following the MCP, prefers structures with fewer and less complex chains. In particular: (1) there is a processing advantage for wh-subject extractions, the structures with less complex chains; (2) there is a processing dissociation between who and which questions; and (3) the parser respects the principle that governs the well-formedness of empty categories (the ECP).
Designing a Constraint Based Parser for Sanskrit
NASA Astrophysics Data System (ADS)
Kulkarni, Amba; Pokar, Sheetal; Shukl, Devanand
Verbal understanding (śābdabodha) of any utterance requires the knowledge of how words in that utterance are related to each other. Such knowledge is usually available in the form of cognition of grammatical relations. Generative grammars describe how a language codes these relations. Thus the knowledge of what information various grammatical relations convey is available from the generation point of view, not the analysis point of view. In order to develop a parser based on any grammar, one should then know precisely the semantic content of the grammatical relations expressed in a language string, the clues for extracting these relations, and finally whether these relations are expressed explicitly or implicitly. Based on the design principles that emerge from this knowledge, we model the parser as finding a directed tree, given a graph with nodes representing the words and edges representing the possible relations between them. Further, we use the Mīmāṃsā constraint of ākāṅkṣā (expectancy) to rule out non-solutions and sannidhi (proximity) to prioritize the solutions. We have implemented a parser based on these principles, and its performance was found to be satisfactory, giving us the confidence to extend its functionality to handle complex sentences.
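A toy illustration of parsing as directed-tree search under expectancy and proximity constraints; the words, relations, and constraint encodings below are invented stand-ins for the paper's Sanskrit grammar:

```python
from itertools import combinations

words = ["Rama", "eats", "fruit"]                 # nodes 0, 1, 2; the verb (node 1) is root
candidates = [(1, 0, "agent"),                    # (head, dependent, relation)
              (1, 2, "object"),
              (2, 0, "modifier")]

def is_tree(edges, n, root=1):
    heads = {}
    for h, d, _ in edges:
        if d in heads:                            # a word may have only one head
            return False
        heads[d] = h
    if set(heads) != set(range(n)) - {root}:
        return False
    for node in heads:                            # every word must reach the root (no cycles)
        seen, cur = set(), node
        while cur != root:
            if cur in seen:
                return False
            seen.add(cur)
            cur = heads[cur]
    return True

def satisfies_expectancy(edges):
    # akanksa-style filter: the verb must govern an agent
    return any(h == 1 and rel == "agent" for h, _, rel in edges)

def proximity_cost(edges):
    # sannidhi-style preference: shorter head-dependent distances rank first
    return sum(abs(h - d) for h, d, _ in edges)

solutions = [c for c in combinations(candidates, len(words) - 1)
             if is_tree(c, len(words)) and satisfies_expectancy(c)]
print(min(solutions, key=proximity_cost))
# ((1, 0, 'agent'), (1, 2, 'object'))
```

The brute-force enumeration here only conveys the structure of the search; a real parser over long sentences would need a far more efficient tree-finding strategy.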
An Improved Tarpit for Network Deception
2016-03-25
In the design diagrams, GreaseMonkey contains three packet handler classes, and an arrow from Greasy to the config_parser module represents a usage relationship, in which Greasy uses functions from config_parser to parse the configuration.
Extracting BI-RADS Features from Portuguese Clinical Texts.
Nassif, Houssam; Cunha, Filipe; Moreira, Inês C; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês
2012-01-01
In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to the manual method.
A Semantic Analysis Method for Scientific and Engineering Code
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.
1998-01-01
This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
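As a rough illustration of what checking semantic declarations of primitive variables can look like, here is a minimal dimensional-consistency sketch; the declaration format and the check are assumptions for illustration, not the tool's actual design:

```python
def combine(a, b, sign=1):
    """Multiply (sign=+1) or divide (sign=-1) two dimension maps (unit -> exponent)."""
    out = dict(a)
    for unit, exp in b.items():
        out[unit] = out.get(unit, 0) + sign * exp
        if out[unit] == 0:
            del out[unit]
    return out

# semantic declarations for primitive variables
declarations = {
    "distance": {"m": 1},
    "time":     {"s": 1},
    "velocity": {"m": 1, "s": -1},
}

def check_assignment(target, num, den):
    """Check that target = num / den is dimensionally consistent."""
    inferred = combine(declarations[num], declarations[den], sign=-1)
    if inferred != declarations[target]:
        print(f"semantic error: {target} = {num}/{den} has dimension {inferred}")
    else:
        print(f"ok: {target} = {num}/{den}")

check_assignment("velocity", "distance", "time")  # ok
check_assignment("velocity", "time", "distance")  # flagged as a semantic error
```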
Policy-Based Management Natural Language Parser
NASA Technical Reports Server (NTRS)
James, Mark
2009-01-01
The Policy-Based Management Natural Language Parser (PBEM) is a rules-based approach to enterprise management that can be used to automate certain management tasks. The parser simplifies the management of a given endeavor by establishing policies to deal with situations that are likely to occur. Policies are operating rules that can be referred to as a means of maintaining order, security, consistency, or other ways of successfully furthering a goal or mission. PBEM provides a way of managing the configuration of network elements, applications, and processes via a set of high-level rules or business policies rather than by managing individual elements, thus moving control to a higher level. The software allows unique management rules (or commands) to be specified and applied to a cross-section of the Global Information Grid (GIG). It embodies a parser that is capable of recognizing and understanding conversational English. Because all possible dialect variants cannot be anticipated, a unique capability was developed that parses based on conversational intent rather than on the exact way the words are used. The software can increase productivity by enabling a user to converse with the system in conversational English to define network policies. PBEM can be used in both manned and unmanned science-gathering programs. Because policy statements can be domain-independent, the software can be applied equally to a wide variety of applications.
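A minimal sketch of the policy-rule idea, with invented condition and action shapes rather than PBEM's actual rule language: each policy pairs a condition over system state with an action, and the engine applies every policy that matches instead of managing elements one by one:

```python
# Hypothetical policies: condition and action are plain functions over a state dict.
policies = [
    {"name": "throttle-on-congestion",
     "condition": lambda s: s["link_utilization"] > 0.9,
     "action": lambda s: s.update(priority_traffic_only=True)},
    {"name": "restart-dead-process",
     "condition": lambda s: not s["heartbeat_ok"],
     "action": lambda s: s.update(restart_requested=True)},
]

def enforce(state):
    # apply every policy whose condition holds in the current state
    for policy in policies:
        if policy["condition"](state):
            print("applying policy:", policy["name"])
            policy["action"](state)
    return state

print(enforce({"link_utilization": 0.95, "heartbeat_ok": True}))
```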
Effects of Tasks on BOLD Signal Responses to Sentence Contrasts: Review and Commentary
Caplan, David; Gow, David
2010-01-01
Functional neuroimaging studies of syntactic processing have been interpreted as identifying the neural locations of parsing and interpretive operations. However, current behavioral studies of sentence processing indicate that many operations occur simultaneously with parsing and interpretation. In this review, we point to issues that arise in discriminating the effects of these concurrent processes from those of the parser/interpreter in neural measures and to approaches that may help resolve them. PMID:20932562
Adding a Medical Lexicon to an English Parser
Szolovits, Peter
2003-01-01
We present a heuristic method to map lexical (syntactic) information from one lexicon to another, and apply the technique to augment the lexicon of the Link Grammar Parser with an enormous medical vocabulary drawn from the Specialist lexicon developed by the National Library of Medicine. This paper presents and justifies the mapping method and addresses technical problems that have to be overcome. It illustrates the utility of the method with respect to a large corpus of emergency department notes. PMID:14728251
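A toy sketch of the general mapping idea (the category codes below are invented stand-ins for the real Specialist and Link Grammar categories): translate each source entry's syntactic codes to the closest target class, then emit entries for words the target lexicon lacks:

```python
# Hypothetical mapping from source-lexicon codes to target-lexicon classes.
CATEGORY_MAP = {
    ("noun", "count"):   "common-noun",
    ("noun", "uncount"): "mass-noun",
    ("verb", "tran"):    "transitive-verb",
}

source_lexicon = {
    "angioplasty": ("noun", "count"),
    "hemolysis":   ("noun", "uncount"),
    "intubate":    ("verb", "tran"),
}
target_lexicon = {"angioplasty": "common-noun"}   # entry already known

# augment the target lexicon with mapped entries for unknown words
for word, codes in source_lexicon.items():
    if word not in target_lexicon:
        target_lexicon[word] = CATEGORY_MAP[codes]
print(target_lexicon)
```

The hard part, which the paper addresses and this sketch glosses over, is that real category systems do not line up one-to-one, so the mapping must be heuristic.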
Semantic based man-machine interface for real-time communication
NASA Technical Reports Server (NTRS)
Ali, M.; Ai, C.-S.
1988-01-01
A flight expert system (FLES) was developed to assist pilots in monitoring, diagnosing and recovering from in-flight faults. To provide a communications interface between the flight crew and FLES, a natural language interface (NALI) was implemented. Input to NALI is processed by three processors: (1) the semantic parser; (2) the knowledge retriever; and (3) the response generator. First, the semantic parser extracts meaningful words and phrases to generate an internal representation of the query; it is able to map different input forms related to the same concept onto the same internal representation. Then the knowledge retriever analyzes and stores the context of the query to aid in resolving ellipses and pronoun references. At the end of this process, a sequence of retrieval functions is created as a first step in generating the proper response. Finally, the response generator produces the natural language response to the query. The architecture of NALI was designed to process both temporal and nontemporal queries. The architecture and implementation of NALI are described.
Software Development Of XML Parser Based On Algebraic Tools
NASA Astrophysics Data System (ADS)
Georgiev, Bozhidar; Georgieva, Adriana
2011-12-01
This paper presents the development and implementation of an algebraic method for XML data processing that accelerates XML parsing. The proposed nontraditional approach to fast XML navigation with algebraic tools contributes to ongoing efforts toward an easier, user-friendly API for XML transformations. The proposed software for processing XML documents (the parser) is easy to use and can manage files with a strictly defined data structure. The purpose of the presented algorithm is to offer a new approach for searching and restructuring hierarchical XML data. This approach permits fast processing of XML documents, using an algebraic model developed in detail in previous works of the same authors. The proposed parsing mechanism is easily accessible to the web consumer, who is able to control XML file processing: to search for different elements (tags), to delete them, and to add new XML content. Various tests show higher speed and lower resource consumption in comparison with some existing commercial parsers.
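For comparison, the operations the abstract describes (search for tags, delete elements, add content) look like this with Python's standard xml.etree, not with the authors' algebraic parser:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<catalog><book id='1'><title>XML Basics</title></book>"
    "<book id='2'><title>Algebraic Models</title></book></catalog>")

# search: find all <title> elements anywhere in the tree
for title in doc.iter("title"):
    print(title.text)

# delete: remove the first <book> element
doc.remove(doc.find("book"))

# add: append new XML content
new_book = ET.SubElement(doc, "book", {"id": "3"})
ET.SubElement(new_book, "title").text = "Added Content"

print(ET.tostring(doc, encoding="unicode"))
```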
"gnparser": a powerful parser for scientific names based on Parsing Expression Grammar.
Mozzherin, Dmitry Y; Myltsev, Alexander A; Patterson, David J
2017-05-26
Scientific names in biology act as universal links. They allow us to cross-reference information about organisms globally. However, variations in the spelling of scientific names greatly diminish their ability to interconnect data. Such variations may include abbreviations, annotations, misspellings, etc. Authorship is a part of a scientific name and may also differ significantly. To match all possible variations of a name we need to divide it into its elements and classify each element according to its role. We refer to this as 'parsing' the name. Parsing categorizes a name's elements into those that are stable and those that are prone to change. Names are matched first by combining them according to their stable elements; matches are then refined by examining their varying elements. This two-stage process dramatically improves the number and quality of matches. It is especially useful for automatic data exchange within the context of "Big Data" in biology. We introduce Global Names Parser (gnparser), a tool for parsing scientific names written in Scala (a language for the Java Virtual Machine) and based on a Parsing Expression Grammar. The parser can be applied to scientific names of any complexity. It assigns a semantic meaning (such as genus name, species epithet, rank, year of publication, authorship, annotations, etc.) to all elements of a name, and it is able to work with nested structures, as in the names of hybrids. gnparser performs with ≈99% accuracy and processes 30 million name-strings/hour per CPU thread. The gnparser library is compatible with Scala, Java, R, Jython, and JRuby. The parser can be used as a command-line application, a socket server, a web-app or a RESTful HTTP-service. It is released under an open-source MIT license. Global Names Parser (gnparser) is a fast, high-precision tool for biodiversity informaticians and biologists working with large numbers of scientific names. It can replace expensive and error-prone manual parsing and standardization of scientific names in many situations, and can quickly enhance the interoperability of distributed biological information.
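A heavily simplified sketch of the parsing task, with a single regular expression standing in for gnparser's much richer PEG: split a name-string into labeled elements such as genus, epithet, authorship, and year:

```python
import re

# Toy pattern: "Genus species [Authorship][, year]" -- far below gnparser's coverage.
NAME = re.compile(
    r"^(?P<genus>[A-Z][a-z]+)\s+"
    r"(?P<species>[a-z]+)"
    r"(?:\s+(?P<authorship>[A-Z][A-Za-z.]+))?"
    r"(?:,?\s+(?P<year>\d{4}))?$")

def parse_name(name_string):
    m = NAME.match(name_string.strip())
    if m is None:
        return None
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_name("Homo sapiens Linnaeus, 1758"))
# {'genus': 'Homo', 'species': 'sapiens', 'authorship': 'Linnaeus', 'year': '1758'}
```

Under the two-stage matching the abstract describes, genus and epithet would serve as the stable elements for the first pass, with authorship and year examined in the refinement pass.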
Detecting modification of biomedical events using a deep parsing approach.
Mackinlay, Andrew; Martinez, David; Baldwin, Timothy
2012-04-30
This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. analysis of IkappaBalpha phosphorylation, where it is not specified whether phosphorylation did or did not occur) or negated (e.g. inhibition of IkappaBalpha phosphorylation, where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this, we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification.
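The shallow window features are easy to picture; a minimal sketch, with the tokenization and window width illustrative rather than taken from the authors' implementation:

```python
def window_features(tokens, trigger_index, width=3):
    """Bag-of-words features from a small window on either side of the trigger."""
    left = tokens[max(0, trigger_index - width):trigger_index]
    right = tokens[trigger_index + 1:trigger_index + 1 + width]
    features = {f"left:{w}": 1 for w in left}
    features.update({f"right:{w}": 1 for w in right})
    return features

tokens = "analysis of IkappaBalpha phosphorylation in T cells".split()
print(window_features(tokens, tokens.index("phosphorylation")))
# {'left:analysis': 1, 'left:of': 1, 'left:IkappaBalpha': 1,
#  'right:in': 1, 'right:T': 1, 'right:cells': 1}
```

In the paper's system, feature dictionaries of this kind are combined with the deep-parser features before being handed to the Maximum Entropy learner.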
Vosse, Theo; Kempen, Gerard
2009-12-01
We introduce a novel computer implementation of the Unification-Space parser (Vosse and Kempen in Cognition 75:105-143, 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen and Harbusch in Verb constructions in German and Dutch. Benjamins, Amsterdam, 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least qualitatively and rudimentarily, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
Soares, Ana Paula; Fraga, Isabel; Comesaña, Montserrat; Piñeiro, Ana
2010-11-01
This work presents an analysis of the role of animacy in the attachment preferences of relative clauses to complex noun phrases in European Portuguese (EP). How the human parser resolves this kind of syntactic ambiguity has been the focus of extensive research; however, what is known about EP is both limited and puzzling. Additionally, as recent studies have stressed the importance of extra-syntactic variables in this process, two experiments were carried out to assess EP attachment preferences under four animacy conditions: Study 1 used a sentence-completion task, and Study 2 a self-paced reading task. Both studies indicate a significant preference for high attachment in EP. Furthermore, they showed that this preference was modulated by the animacy of the host NP: if the first host was inanimate and the second one animate, the parser's preference changed to low attachment. These findings shed light on previous results regarding EP and strengthen the idea that, even in early stages of processing, the parser is sensitive to extra-syntactic information.
Huang, Yang; Lowe, Henry J; Klein, Dan; Cucina, Russell J
2005-01-01
The aim of this study was to develop and evaluate a method of extracting noun phrases with full phrase structures from a set of clinical radiology reports using natural language processing (NLP) and to investigate the effects of using the UMLS® Specialist Lexicon to improve noun phrase identification within clinical radiology documents. The noun phrase identification (NPI) module is composed of a sentence boundary detector, a statistical natural language parser trained on a nonmedical domain, and a noun phrase (NP) tagger. The NPI module processed a set of 100 XML-represented clinical radiology reports in Health Level 7 (HL7®) Clinical Document Architecture (CDA)-compatible format. Computed output was compared with manual markups made by four physicians and one author for maximal (longest) NPs and those made by one author for base (simple) NPs, respectively. An extended lexicon of biomedical terms was created from the UMLS Specialist Lexicon and used to improve NPI performance. The test set was 50 randomly selected reports. The sentence boundary detector achieved 99.0% precision and 98.6% recall. The overall maximal NPI precision and recall were 78.9% and 81.5% before using the UMLS Specialist Lexicon and 82.1% and 84.6% after. The overall base NPI precision and recall were 88.2% and 86.8% before using the UMLS Specialist Lexicon and 93.1% and 92.6% after, reducing false-positives by 31.1% and false-negatives by 34.3%. The sentence boundary detector performs excellently. After the adaptation using the UMLS Specialist Lexicon, the statistical parser's NPI performance on radiology reports increased to levels comparable to the parser's native performance in its newswire training domain and to that reported by other researchers in the general nonmedical domain.
ANTLR Tree Grammar Generator and Extensions
NASA Technical Reports Server (NTRS)
Craymer, Loring
2005-01-01
A computer program implements two extensions of ANTLR (Another Tool for Language Recognition), a set of software tools for translating source code between different computing languages. ANTLR supports predicated-LL(k) lexer and parser grammars, a notation for annotating parser grammars to direct tree construction, and predicated tree grammars. [LL(k) signifies left-to-right scanning with leftmost derivation and k tokens of look-ahead, referring to certain characteristics of a grammar.] One of the extensions is a syntax for tree transformations. The other is the generation of tree grammars from annotated parser or input tree grammars. These extensions can simplify the process of generating source-to-source language translators, and they make possible an approach, called "polyphase parsing," to translation between computing languages. The typical approach to translator development is to identify high-level semantic constructs such as "expressions," "declarations," and "definitions" as fundamental building blocks in the grammar specification used for language recognition. The polyphase approach is to lump ambiguous syntactic constructs together during parsing and then disambiguate the alternatives in subsequent tree-transformation passes. Polyphase parsing is believed to be useful for generating efficient recognizers for C++ and other languages that, like C++, have significant ambiguities.
The parser doesn't ignore intransitivity, after all
Staub, Adrian
2015-01-01
Several previous studies (Adams, Clifton, & Mitchell, 1998; Mitchell, 1987; van Gompel & Pickering, 2001) have explored the question of whether the parser initially analyzes a noun phrase that follows an intransitive verb as the verb's direct object. Three eyetracking experiments examined this issue in more detail. Experiment 1 strongly replicated the finding (van Gompel & Pickering, 2001) that readers experience difficulty on this noun phrase in normal reading, and found that this difficulty occurs even with a class of intransitive verbs for which a direct object is categorically prohibited. Experiment 2, however, demonstrated that this effect is not due to syntactic misanalysis, but is instead due to disruption that occurs when a comma is absent at a subordinate clause/main clause boundary. Exploring a different construction, Experiment 3 replicated the finding (Pickering & Traxler, 2003; Traxler & Pickering, 1996) that when a noun phrase “filler” is an implausible direct object for an optionally transitive relative clause verb, processing difficulty results; however, there was no evidence for such difficulty when the relative clause verb was strictly intransitive. Taken together, the three experiments undermine the support for the claim that the parser initially ignores a verb's subcategorization restrictions. PMID:17470005
Xu, Hua; AbdelRahman, Samir; Lu, Yanxin; Denny, Joshua C.; Doan, Son
2011-01-01
Semantic-based sublanguage grammars have been shown to be an efficient method for medical language processing. However, given the complexity of the medical domain, parsers using such grammars inevitably encounter ambiguous sentences, which could be interpreted by different groups of production rules and consequently result in two or more parse trees. One possible solution, which has not been extensively explored previously, is to augment productions in medical sublanguage grammars with probabilities to resolve the ambiguity. In this study, we associated probabilities with production rules in a semantic-based grammar for medication findings and evaluated its performance on reducing parsing ambiguity. Using the existing data set from 2009 i2b2 NLP (Natural Language Processing) challenge for medication extraction, we developed a semantic-based CFG (Context Free Grammar) for parsing medication sentences and manually created a Treebank of 4,564 medication sentences from discharge summaries. Using the Treebank, we derived a semantic-based PCFG (probabilistic Context Free Grammar) for parsing medication sentences. Our evaluation using a 10-fold cross validation showed that the PCFG parser dramatically improved parsing performance when compared to the CFG parser. PMID:21856440
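A toy sketch of how rule probabilities are estimated from a treebank and used to rank competing parses (the rules are invented, not the authors' medication grammar): P(A → β) is the rule's count divided by the count of all expansions of A:

```python
from collections import Counter
from math import prod

# rules observed in a hypothetical treebank, one entry per occurrence in a tree
observed = [
    ("MED", ("DRUG", "DOSE", "FREQ")),
    ("MED", ("DRUG", "DOSE", "FREQ")),
    ("MED", ("DRUG", "FREQ")),
    ("DOSE", ("NUM", "UNIT")),
]

counts = Counter(observed)
lhs_totals = Counter(lhs for lhs, _ in observed)
prob = {rule: c / lhs_totals[rule[0]] for rule, c in counts.items()}

def score(parse):
    # a parse is represented here simply as the list of rules it uses
    return prod(prob[rule] for rule in parse)

parse_a = [("MED", ("DRUG", "DOSE", "FREQ")), ("DOSE", ("NUM", "UNIT"))]
parse_b = [("MED", ("DRUG", "FREQ"))]
print(score(parse_a), score(parse_b))   # the PCFG prefers the higher-scoring tree
```

An ambiguous sentence admitted by several production sequences is then resolved by keeping the parse whose rule probabilities multiply to the highest score.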
The Mystro system: A comprehensive translator toolkit
NASA Technical Reports Server (NTRS)
Collins, W. R.; Noonan, R. E.
1985-01-01
Mystro is a system that facilitates the construction of compilers, assemblers, code generators, query interpreters, and similar programs. It provides features to encourage the use of iterative enhancement. Mystro was developed in response to the needs of NASA Langley Research Center (LaRC) and enjoys a number of advantages over similar systems. There are other programs available that can be used in building translators; these typically build parser tables, usually supply the source of a parser and parts of a lexical analyzer, but provide little or no aid for code generation. In general, only the front end of the compiler is addressed. Mystro, on the other hand, emphasizes tools for both ends of a compiler.
Building pathway graphs from BioPAX data in R.
Benis, Nirupama; Schokker, Dirkjan; Kramer, Frank; Smits, Mari A; Suarez-Diez, Maria
2016-01-01
Biological pathways are increasingly available in the BioPAX format, which uses an RDF model for data storage. One can retrieve the information in this data model in the scripting language R using the package rBiopaxParser, which converts the BioPAX format to one readable in R. It also has a function to build a regulatory network from the pathway information. Here we describe an extension of this function. The new function allows the user to build graphs of entire pathways, including regulated as well as non-regulated elements, and therefore provides a maximum of information. This function is available as part of the rBiopaxParser distribution from Bioconductor.
Parsley: a Command-Line Parser for Astronomical Applications
NASA Astrophysics Data System (ADS)
Deich, William
Parsley is a sophisticated keyword + value parser, packaged as a library of routines that offers an easy method for providing command-line arguments to programs. It makes it easy for the user to enter values, and it makes it easy for the programmer to collect and validate the user's entries. Parsley is tuned for astronomical applications: for example, dates entered in Julian, Modified Julian, calendar, or several other formats are all recognized without special effort by the user or by the programmer; angles can be entered using decimal degrees or dd:mm:ss; time-like intervals as decimal hours, hh:mm:ss, or a variety of other units. Vectors of data are accepted as readily as scalars.
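As one example of the flexible value parsing described, a sketch that accepts an angle in decimal degrees or dd:mm:ss; the function name and exact behavior are assumptions for illustration, not Parsley's actual API:

```python
def parse_angle(text):
    """Accept an angle as decimal degrees ('12.5') or dd:mm:ss ('12:30:00')."""
    if ":" in text:
        d, m, s = (float(part) for part in text.split(":"))
        # handle negative angles, including '-0:30:00'
        sign = -1.0 if text.lstrip().startswith("-") else 1.0
        return sign * (abs(d) + m / 60.0 + s / 3600.0)
    return float(text)

print(parse_angle("12.5"))        # 12.5
print(parse_angle("12:30:00"))    # 12.5
print(parse_angle("-0:30:00"))    # -0.5
```

The convenience for the user is that any of these spellings is accepted without the programmer writing format-specific validation.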
Solving LR Conflicts Through Context Aware Scanning
NASA Astrophysics Data System (ADS)
Leon, C. Rodriguez; Forte, L. Garcia
2011-09-01
This paper presents a new algorithm to compute the exact list of tokens expected by any LR syntax analyzer at any point of the scanning process. The lexer can, at any time, compute the exact list of valid tokens and return only tokens in this set. In the case that more than one matching token is in the valid set, the lexer can resort to a nested LR parser to disambiguate. Allowing nested LR parsing requires some slight modifications when building the LR parsing tables. We also show how LR parsers can parse conflictive and inherently ambiguous languages using a combination of nested parsing and context-aware scanning. These expanded lexical analyzers can be generated from high-level specifications.
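The core idea of context-aware scanning can be sketched in a few lines (the token names and grammar are invented): the lexer restricts its matching to the token set the parser's current LR state can accept:

```python
import re

TOKEN_PATTERNS = {
    "NUMBER": r"\d+",
    "IDENT":  r"[a-z]+",
    "UNIT":   r"[a-z]+",   # lexically identical to IDENT: only context disambiguates
}

def next_token(text, pos, valid_tokens):
    # try only the token kinds the parser currently expects
    for kind in valid_tokens:
        m = re.match(TOKEN_PATTERNS[kind], text[pos:])
        if m:
            return kind, m.group(0), pos + m.end()
    raise SyntaxError(f"expected one of {valid_tokens} at position {pos}")

text = "12 km"
pos = 0
kind, value, pos = next_token(text, pos, ["NUMBER"])   # this state expects a number
pos += 1                                               # skip the space
kind, value, pos = next_token(text, pos, ["UNIT"])     # after a number, only UNIT is valid
print(kind, value)  # UNIT km
```

In the paper's scheme, the valid-token set is computed exactly from the LR tables rather than hard-coded as it is here.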
NASA Astrophysics Data System (ADS)
Zhang, Min; Pavlicek, William; Panda, Anshuman; Langer, Steve G.; Morin, Richard; Fetterly, Kenneth A.; Paden, Robert; Hanson, James; Wu, Lin-Wei; Wu, Teresa
2015-03-01
DICOM Index Tracker (DIT) is an integrated platform that harvests the rich information available from Digital Imaging and Communications in Medicine (DICOM) data to improve quality assurance in radiology practices. It is designed to capture and maintain longitudinal patient-specific exam indices of interest for all diagnostic and procedural uses of imaging modalities, effectively serving as a quality-assurance and patient-safety monitoring tool. The foundation of DIT is an intelligent database system that stores the information accepted and parsed via a DICOM receiver and parser; the database system enables basic dosimetry analysis. The success of the DIT implementation at Mayo Clinic Arizona calls for deployment at the enterprise level, which requires significant improvements. First, for geographically distributed multi-site implementation, one bottleneck is communication (network) delay; another is the scalability of the DICOM parser to handle the large volume of exams from different sites. To address these issues, the DICOM receiver and parser are separated and decentralized by site. Second, a notable challenge for enterprise-wide quality assurance (QA) is the great diversity of manufacturers, modalities, and software versions; as a solution, DIT Enterprise provides a standardization tool for device naming, protocol naming, and physician naming across sites. Third, advanced analytic engines are implemented online to support proactive QA in DIT Enterprise.
Toward a theory of distributed word expert natural language parsing
NASA Technical Reports Server (NTRS)
Rieger, C.; Small, S.
1981-01-01
An approach to natural language meaning-based parsing in which the unit of linguistic knowledge is the word rather than the rewrite rule is described. In the word expert parser, knowledge about language is distributed across a population of procedural experts, each representing a word of the language, and each an expert at diagnosing that word's intended usage in context. The parser is structured around a coroutine control environment in which the generator-like word experts ask questions and exchange information in coming to collective agreement on sentence meaning. The word expert theory is advanced as a better cognitive model of human language expertise than the traditional rule-based approach. The technical discussion is organized around examples taken from the prototype LISP system which implements parts of the theory.
The power and limits of a rule-based morpho-semantic parser.
Baud, R. H.; Rassinoux, A. M.; Ruch, P.; Lovis, C.; Scherrer, J. R.
1999-01-01
The advent of the Electronic Patient Record (EPR) implies an increasing amount of medical text readily available for processing, as soon as convenient tools are made available. The chief application is text analysis, from which one can derive other disciplines like indexing for retrieval, knowledge representation, translation and inferencing for medical intelligent systems. Prerequisites for a convenient analyzer of medical texts are: building the lexicon, developing a semantic representation of the domain, having a large corpus of texts available for statistical analysis, and finally mastering robust and powerful parsing techniques in order to satisfy the constraints of the medical domain. This article presents an easy-to-use parser ready to be adapted to different settings, and describes its power together with its practical limitations as experienced by the authors. PMID:10566313
Retrieval Interference in Syntactic Processing: The Case of Reflexive Binding in English.
Patil, Umesh; Vasishth, Shravan; Lewis, Richard L
2016-01-01
It has been proposed that in online sentence comprehension the dependency between a reflexive pronoun such as himself/herself and its antecedent is resolved using exclusively syntactic constraints. Under this strictly syntactic search account, Principle A of the binding theory-which requires that the antecedent c-command the reflexive within the same clause that the reflexive occurs in-constrains the parser's search for an antecedent. The parser thus ignores candidate antecedents that might match agreement features of the reflexive (e.g., gender) but are ineligible as potential antecedents because they are in structurally illicit positions. An alternative possibility accords no special status to structural constraints: in addition to using Principle A, the parser also uses non-structural cues such as gender to access the antecedent. According to cue-based retrieval theories of memory (e.g., Lewis and Vasishth, 2005), the use of non-structural cues should result in increased retrieval times and occasional errors when candidates partially match the cues, even if the candidates are in structurally illicit positions. In this paper, we first show how the retrieval processes that underlie the reflexive binding are naturally realized in the Lewis and Vasishth (2005) model. We present the predictions of the model under the assumption that both structural and non-structural cues are used during retrieval, and provide a critical analysis of previous empirical studies that failed to find evidence for the use of non-structural cues, suggesting that these failures may be Type II errors. We use this analysis and the results of further modeling to motivate a new empirical design that we use in an eye tracking study. The results of this study confirm the key predictions of the model concerning the use of non-structural cues, and are inconsistent with the strictly syntactic search account. These results present a challenge for theories advocating the infallibility of the human parser in the case of reflexive resolution, and provide support for the inclusion of agreement features such as gender in the set of retrieval cues.
Neuroanatomical term generation and comparison between two terminologies.
Srinivas, Prashanti R; Gusfield, Daniel; Mason, Oliver; Gertz, Michael; Hogarth, Michael; Stone, James; Jones, Edward G; Gorin, Fredric A
2003-01-01
An approach and software tools are described for identifying and extracting compound terms (CTs), acronyms and their associated contexts from textual material associated with neuroanatomical atlases. A set of simple syntactic rules was appended to the output of a commercially available part-of-speech (POS) tagger (Qtag v 3.01) to extract CTs and their associated context from the texts of neuroanatomical atlases. This "hybrid" parser appears to be highly sensitive and recognized 96% of the potentially germane neuroanatomical CTs and acronyms present in the cat and primate thalamic atlases. A comparison of neuroanatomical CTs and acronyms between the cat and primate atlas texts was initially performed using exact-term matching. The implementation of string-matching algorithms significantly improved the identification of relevant terms and acronyms between the two domains: the End Gap Free string matcher identified 98% of CTs, and the Needleman-Wunsch (NW) string matcher matched 36% of acronyms between the two atlases. Combining several simple grammatical and lexical rules with the POS tagger (the "hybrid" parser) (1) extracted complex neuroanatomical terms and acronyms from selected cat and primate thalamic atlases and (2) facilitated the semi-automated generation of a highly granular thalamic terminology. The implementation of string-matching algorithms (1) reconciled terminological errors generated by the optical character recognition (OCR) software used to generate the neuroanatomical text information and (2) increased the sensitivity of matching neuroanatomical terms and acronyms between the two neuroanatomical domains generated by the "hybrid" parser.
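A minimal sketch of the rule layer over a POS tagger (Penn-style tags; the single rule here is a simplification of the paper's rule set): take a maximal run of adjectives and nouns ending in a noun as a compound term:

```python
def extract_compound_terms(tagged):
    """Collect maximal adjective/noun runs of length >= 2 that end in a noun."""
    terms, current = [], []
    def flush():
        if len(current) >= 2 and current[-1][1].startswith("NN"):
            terms.append(" ".join(w for w, _ in current))
        current.clear()
    for word, tag in tagged:
        if tag in ("JJ", "NN", "NNS", "NNP"):
            current.append((word, tag))
        else:
            flush()
    flush()
    return terms

tagged = [("the", "DT"), ("ventral", "JJ"), ("posterior", "JJ"),
          ("nucleus", "NN"), ("receives", "VBZ"), ("afferent", "JJ"),
          ("fibers", "NNS")]
print(extract_compound_terms(tagged))
# ['ventral posterior nucleus', 'afferent fibers']
```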
Disambiguating the species of biomedical named entities using natural language parsers
Wang, Xinglong; Tsujii, Jun'ichi; Ananiadou, Sophia
2010-01-01
Motivation: Text mining technologies have been shown to reduce the laborious work involved in organizing the vast amount of information hidden in the literature. One challenge in text mining is linking ambiguous word forms to unambiguous biological concepts. This article reports on a comprehensive study on resolving the ambiguity in mentions of biomedical named entities with respect to model organisms and presents an array of approaches, with a focus on methods utilizing natural language parsers. Results: We build a corpus for organism disambiguation where every occurrence of a protein/gene entity is manually tagged with a species ID, and evaluate a number of methods on it. Promising results are obtained by training a machine learning model on syntactic parse trees, which is then used to decide whether an entity belongs to the model organism denoted by a neighbouring species-indicating word (e.g. yeast). The parser-based approaches are also compared with a supervised classification method, and the results indicate that the former are a more favorable choice when domain portability is of concern. The best overall performance is obtained by combining the strengths of syntactic features and supervised classification. Availability: The corpus and demo are available at http://www.nactem.ac.uk/deca_details/start.cgi, and the software is freely available as U-Compare components (Kano et al., 2009): NaCTeM Species Word Detector and NaCTeM Species Disambiguator. U-Compare is available at http://u-compare.org/ Contact: xinglong.wang@manchester.ac.uk PMID:20053840
An efficient representation of spatial information for expert reasoning in robotic vehicles
NASA Technical Reports Server (NTRS)
Scott, Steven; Interrante, Mark
1987-01-01
The previous generation of robotic vehicles and drones was designed for a specific task, with limited flexibility in executing their mission. This limited flexibility arises because the robotic vehicles do not possess the intelligence and knowledge upon which to make significant tactical decisions. Current development of robotic vehicles is toward increased intelligence and capabilities, adapting to a changing environment and altering mission objectives. The latest techniques in artificial intelligence (AI) are being employed to increase the robotic vehicle's intelligent decision-making capabilities. This document describes the design of the SARA spatial database tool, which is composed of request parser, reasoning, computations, and database modules that collectively manage and derive information useful for robotic vehicles.
FLIP for FLAG model visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wooten, Hasani Omar
A graphical user interface has been developed for FLAG users. FLIP (FLAG Input deck Parser) provides users with an organized view of FLAG models and a means for efficiently and easily navigating and editing nodes, parameters, and variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busby, L.
This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
Friedman, Lee; Rigas, Ioannis; Abdulin, Evgeny; Komogortsev, Oleg V
2018-05-15
Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs (post-saccadic oscillations), and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
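Threshold-based classification of this kind reduces, at its simplest, to the sketch below; the fixed velocity threshold is illustrative, not one of the paper's values, and the real algorithms add filtering, PSO detection, and noise handling on top of it:

```python
def classify(velocities, saccade_threshold=30.0):
    """Label each sample by angular velocity (deg/s); threshold is illustrative."""
    return ["saccade" if v > saccade_threshold else "fixation"
            for v in velocities]

velocities = [3.1, 2.8, 4.0, 150.2, 210.5, 90.3, 5.1, 2.2]
print(classify(velocities))
# ['fixation', 'fixation', 'fixation', 'saccade', 'saccade', 'saccade',
#  'fixation', 'fixation']
```

Replacing adaptive thresholds with fixed ones, as the paper's change (3) describes, amounts to freezing `saccade_threshold` instead of estimating it per recording.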
A translator writing system for microcomputer high-level languages and assemblers
NASA Technical Reports Server (NTRS)
Collins, W. R.; Knight, J. C.; Noonan, R. E.
1980-01-01
In order to implement high-level languages whenever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table-driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes a PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.
Semantic super networks: A case analysis of Wikipedia papers
NASA Astrophysics Data System (ADS)
Kostyuchenko, Evgeny; Lebedeva, Taisiya; Goritov, Alexander
2017-11-01
An algorithm for constructing super-large semantic networks is developed in this work and tested using the "Cosmos" category of the Internet encyclopedia Wikipedia as an example. During implementation, a parser for the syntactic analysis of Wikipedia pages was developed, and a graph was formed from the list of articles and categories. Based on analysis of the resulting graph, algorithms for finding domains of high connectivity were proposed and tested. Algorithms for constructing a domain based on the number of links and on the number of articles in the current subject area are considered; their shortcomings are shown and explained, and an algorithm based on their joint use is developed. The possibility of applying the combined algorithm to obtain the final domain is shown. A problem of instability of the resulting domain was discovered when starting the algorithm from two neighboring vertices belonging to the domain.
Vulnerabilities in Bytecode Removed by Analysis, Nuanced Confinement and Diversification (VIBRANCE)
2015-06-01
The VIBRANCE tool starts with a vulnerable Java application and automatically hardens it against SQL injection, OS command injection, and file path traversal attacks. Its Java front end includes a Java byte code parser.
Chen, W; Kowatch, R; Lin, S; Splaingard, M; Huang, Y
2015-01-01
Nationwide Children's Hospital established an i2b2 (Informatics for Integrating Biology & the Bedside) application for sleep disorder cohort identification. Discrete data were gleaned from semistructured sleep study reports. The system was shown to work more efficiently than the traditional manual chart review method, and it also enabled searching capabilities that were previously not possible. We report on the development and implementation of the sleep disorder i2b2 cohort identification system using natural language processing of semi-structured documents. We developed a natural language processing approach to automatically parse concepts and their values from semi-structured sleep study documents. Two parsers were developed: a regular expression parser for extracting numeric concepts and an NLP-based tree parser for extracting textual concepts. Concepts were further organized into i2b2 ontologies based on document structures and in-domain knowledge. 26,550 concepts were extracted, 99% of them textual. 1.01 million facts were extracted from sleep study documents, including demographic information, sleep study lab results, medications, procedures, and diagnoses, among others. The average accuracy of terminology parsing was over 83% when compared against that of experts. The system is capable of capturing both standard and non-standard terminologies. The time for cohort identification has been reduced significantly, from a few weeks to a few seconds. Natural language processing was shown to be powerful for quickly converting large amounts of semi-structured or unstructured clinical data into discrete concepts, which, in combination with intuitive domain-specific ontologies, allows fast and effective interactive cohort identification through the i2b2 platform for research and clinical use.
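The regular-expression parser for numeric concepts presumably works along these lines; the patterns and report text below are invented for illustration, not the hospital's actual templates:

```python
import re

# Hypothetical patterns mapping report phrases to numeric concepts.
PATTERNS = {
    "apnea_hypopnea_index": r"AHI[:\s]+(\d+(?:\.\d+)?)",
    "sleep_efficiency":     r"[Ss]leep efficiency[:\s]+(\d+(?:\.\d+)?)\s*%",
}

def extract_numeric_concepts(text):
    facts = {}
    for concept, pattern in PATTERNS.items():
        m = re.search(pattern, text)
        if m:
            facts[concept] = float(m.group(1))
    return facts

report = "Summary: AHI: 5.2 events/hour. Sleep efficiency: 88 %."
print(extract_numeric_concepts(report))
# {'apnea_hypopnea_index': 5.2, 'sleep_efficiency': 88.0}
```

Facts extracted this way can then be attached to the i2b2 ontology nodes that the abstract describes, making them queryable for cohort identification.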
Wh-filler-gap dependency formation guides reflexive antecedent search
Frazier, Michael; Ackerman, Lauren; Baumann, Peter; Potter, David; Yoshida, Masaya
2015-01-01
Prior studies on online sentence processing have shown that the parser can resolve non-local dependencies rapidly and accurately. This study investigates the interaction between the processing of two such non-local dependencies: wh-filler-gap dependencies (WhFGD) and reflexive-antecedent dependencies. We show that reflexive-antecedent dependency resolution is sensitive to the presence of a WhFGD, and argue that the filler-gap dependency established by WhFGD resolution is selected online as the antecedent of a reflexive dependency. We investigate the processing of constructions like (1), where two NPs might be possible antecedents for the reflexive, namely which cowgirl and Mary. Even though Mary is linearly closer to the reflexive, the only grammatically licit antecedent for the reflexive is the more distant wh-NP, which cowgirl. (1). Which cowgirl did Mary expect to have injured herself due to negligence? Four eye-tracking text-reading experiments were conducted on examples like (1), differing in whether the embedded clause was non-finite (1 and 3) or finite (2 and 4), and in whether the tail of the wh-dependency intervened between the reflexive and its closest overt antecedent (1 and 2) or the wh-dependency was associated with a position earlier in the sentence (3 and 4). The results of Experiments 1 and 2 indicate the parser accesses the result of WhFGD formation during reflexive antecedent search. The resolution of a wh-dependency alters the representation that reflexive antecedent search operates over, allowing the grammatical but linearly distant antecedent to be accessed rapidly. In the absence of a long-distance WhFGD (Experiments 3 and 4), wh-NPs were not found to impact reading times of the reflexive, indicating that the parser's ability to select distant wh-NPs as reflexive antecedents crucially involves syntactic structure. PMID:26500579
Chen, W.; Kowatch, R.; Lin, S.; Splaingard, M.
2015-01-01
Summary Nationwide Children’s Hospital established an i2b2 (Informatics for Integrating Biology & the Bedside) application for sleep disorder cohort identification. Discrete data were gleaned from semistructured sleep study reports. The system showed to work more efficiently than the traditional manual chart review method, and it also enabled searching capabilities that were previously not possible. Objective We report on the development and implementation of the sleep disorder i2b2 cohort identification system using natural language processing of semi-structured documents. Methods We developed a natural language processing approach to automatically parse concepts and their values from semi-structured sleep study documents. Two parsers were developed: a regular expression parser for extracting numeric concepts and a NLP based tree parser for extracting textual concepts. Concepts were further organized into i2b2 ontologies based on document structures and in-domain knowledge. Results 26,550 concepts were extracted with 99% being textual concepts. 1.01 million facts were extracted from sleep study documents such as demographic information, sleep study lab results, medications, procedures, diagnoses, among others. The average accuracy of terminology parsing was over 83% when comparing against those by experts. The system is capable of capturing both standard and non-standard terminologies. The time for cohort identification has been reduced significantly from a few weeks to a few seconds. Conclusion Natural language processing was shown to be powerful for quickly converting large amount of semi-structured or unstructured clinical data into discrete concepts, which in combination of intuitive domain specific ontologies, allows fast and effective interactive cohort identification through the i2b2 platform for research and clinical use. PMID:26171080
Facilitating Analysis of Multiple Partial Data Streams
NASA Technical Reports Server (NTRS)
Maimone, Mark W.; Liebersbach, Robert R.
2008-01-01
Robotic Operations Automation: Mechanisms, Imaging, Navigation report Generation (ROAMING) is a set of computer programs that facilitates and accelerates both tactical and strategic analysis of time-sampled data, especially the disparate and often incomplete streams of Mars Exploration Rover (MER) telemetry data described in the immediately preceding article. As used here, tactical refers to activities over a relatively short time (one Martian day in the original MER application) and strategic refers to a longer time (the entire multi-year MER missions in the original application). Prior to installation, ROAMING must be configured with the types of data of interest, and parsers must be modified to understand the format of the input data (many example parsers are provided, including for general CSV files). Thereafter, new data from multiple disparate sources are automatically resampled into a single common annotated spreadsheet stored in a readable space-separated format, and these data can be processed or plotted at any time scale. Such processing or plotting makes it possible to study not only the details of a particular activity spanning only a few seconds, but also longer-term trends. ROAMING makes it possible to generate mission-wide plots of multiple engineering quantities [e.g., vehicle tilt as in Figure 1(a), motor current, numbers of images] that heretofore could be found only in thousands of separate files. ROAMING also supports automatic annotation of both images and graphs. In the MER application, labels given to terrain features by rover scientists and engineers are automatically plotted in all received images based on their associated camera models (see Figure 2), times measured in seconds are mapped to Mars local time, and command names or arbitrary time-labeled events can be used to label engineering plots, as in Figure 1(b).
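The resampling of disparate streams onto one common annotated timeline can be sketched as follows (the stream names and contents are invented): each output row carries the most recent value seen from every stream, mirroring the single annotated spreadsheet described above:

```python
import bisect

def resample(streams, timeline):
    """streams: name -> sorted list of (time, value); returns one row per tick."""
    rows = []
    for t in timeline:
        row = {"time": t}
        for name, samples in streams.items():
            times = [s[0] for s in samples]
            i = bisect.bisect_right(times, t) - 1       # latest sample at or before t
            row[name] = samples[i][1] if i >= 0 else None
        rows.append(row)
    return rows

streams = {
    "tilt_deg":   [(0, 1.0), (10, 4.5), (20, 2.0)],
    "motor_amps": [(5, 0.8), (15, 1.1)],
}
for row in resample(streams, timeline=[0, 10, 20]):
    print(row)
# {'time': 0, 'tilt_deg': 1.0, 'motor_amps': None}
# {'time': 10, 'tilt_deg': 4.5, 'motor_amps': 0.8}
# {'time': 20, 'tilt_deg': 2.0, 'motor_amps': 1.1}
```

Holding the last-seen value, as here, is one simple way to handle the incomplete streams the abstract mentions; gaps before a stream's first sample remain empty.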
Construction of a menu-based system
NASA Technical Reports Server (NTRS)
Noonan, R. E.; Collins, W. R.
1985-01-01
The development of the user interface to a software code management system is discussed. The user interface was specified using a grammar and implemented using an LR parser generator. This was found to be an effective method for the rapid prototyping of a menu-based system.
Identifying the null subject: evidence from event-related brain potentials.
Demestre, J; Meltzer, S; García-Albea, J E; Vigil, A
1999-05-01
Event-related brain potentials (ERPs) were recorded during spoken language comprehension to study the on-line effects of gender agreement violations in controlled infinitival complements. Spanish sentences were constructed in which the complement clause contained a predicate adjective marked for syntactic gender. By manipulating the gender of the antecedent (i.e., the controller) of the implicit subject while holding constant the gender of the adjective, pairs of grammatical and ungrammatical sentences were created. The detection of such a gender agreement violation would indicate that the parser had established the coreference relation between the null subject and its antecedent. The results showed a complex biphasic ERP (i.e., an early negativity with prominence at anterior and central sites, followed by a centroparietal positivity) in the violating condition as compared to the non-violating conditions. The brain reacts to NP-adjective gender agreement violations within a few hundred milliseconds of their occurrence. The data imply that the parser has properly coindexed the null subject of an infinitive clause with its antecedent.
BamTools: a C++ API and toolkit for analyzing and managing BAM files.
Barnett, Derek W; Garrison, Erik K; Quinlan, Aaron R; Strömberg, Michael P; Marth, Gabor T
2011-06-15
Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools.
Multimedia CALLware: The Developer's Responsibility.
ERIC Educational Resources Information Center
Dodigovic, Marina
The early computer-assisted language learning (CALL) programs were silent and mostly limited to screen- or printer-supported written text as the prevailing communication resource. The advent of powerful graphics, sound and video combined with AI-based parsers and sound recognition devices gradually turned the computer into a rather anthropomorphic…
Mention Detection: Heuristics for the OntoNotes Annotations
2011-01-01
Mention Detection: Heuristics for the OntoNotes annotations Jonathan K. Kummerfeld, Mohit Bansal, David Burkett and Dan Klein Computer Science...considered the provided parses and parses produced by the Berkeley parser (Petrov et al., 2006) trained on the provided training data. We added a
Grammar as a Programming Language. Artificial Intelligence Memo 391.
ERIC Educational Resources Information Center
Rowe, Neil
Student projects that involve writing generative grammars in the computer language, "LOGO," are described in this paper, which presents a grammar-running control structure that allows students to modify and improve the grammar interpreter itself while learning how a simple kind of computer parser works. Included are procedures for…
The Effect of Syntactic Constraints on the Processing of Backwards Anaphora
ERIC Educational Resources Information Center
Kazanina, Nina; Lau, Ellen F.; Lieberman, Moti; Yoshida, Masaya; Phillips, Colin
2007-01-01
This article presents three studies that investigate when syntactic constraints become available during the processing of long-distance backwards pronominal dependencies ("backwards anaphora" or "cataphora"). Earlier work demonstrated that in such structures the parser initiates an active search for an antecedent for a pronoun, leading to gender…
Brain Responses to Filled Gaps
ERIC Educational Resources Information Center
Hestvik, Arild; Maxfield, Nathan; Schwartz, Richard G.; Shafer, Valerie
2007-01-01
An unresolved issue in the study of sentence comprehension is whether the process of gap-filling is mediated by the construction of empty categories (traces), or whether the parser relates fillers directly to the associated verb's argument structure. We conducted an event-related potentials (ERP) study that used the violation paradigm to examine…
BamTools: a C++ API and toolkit for analyzing and managing BAM files
Barnett, Derek W.; Garrison, Erik K.; Quinlan, Aaron R.; Strömberg, Michael P.; Marth, Gabor T.
2011-01-01
Motivation: Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. Results: We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. Availability: BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools. Contact: barnetde@bc.edu PMID:21493652
Gro2mat: a package to efficiently read gromacs output in MATLAB.
Dien, Hung; Deane, Charlotte M; Knapp, Bernhard
2014-07-30
Molecular dynamics (MD) simulations are a state-of-the-art computational method used to investigate molecular interactions at atomic scale. Interaction processes out of experimental reach can be monitored using MD software, such as Gromacs. Here, we present the gro2mat package that allows fast and easy access to Gromacs output files from Matlab. Gro2mat enables direct parsing of the most common Gromacs output formats, including the binary xtc format. No openly available Matlab parser currently exists for this format. The xtc reader is orders of magnitude faster than other available pdb/ascii workarounds. Gro2mat is especially useful for scientists with an interest in quick prototyping of new mathematical and statistical approaches for Gromacs trajectory analyses. © 2014 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F.
2017-01-01
Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these…
ERIC Educational Resources Information Center
Maxfield, Nathan D.; Lyon, Justine M.; Silliman, Elaine R.
2009-01-01
Bailey and Ferreira (2003) hypothesized and reported behavioral evidence that disfluencies (filled and silent pauses) undesirably affect sentence processing when they appear before disambiguating verbs in Garden Path (GP) sentences. Disfluencies here cause the parser to "linger" on, and apparently accept as correct, an erroneous parse. Critically,…
Evolution of the Generic Lock System at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian Bevins; Yves Roblin
2003-10-13
The Generic Lock system is a software framework that allows highly flexible feedback control of large distributed systems. It allows system operators to implement new feedback loops between arbitrary process variables quickly and with no disturbance to the underlying control system. Several different types of feedback loops are provided and more are being added. This paper describes the further evolution of the system since it was first presented at ICALEPCS 2001 and reports on two years of successful use in accelerator operations. The framework has been enhanced in several key ways. Multiple-input, multiple-output (MIMO) lock types have been added for accelerator orbit and energy stabilization. The general purpose Proportional-Integral-Derivative (PID) locks can now be tuned automatically. The generic lock server now makes use of the Proxy IOC (PIOC) developed at Jefferson Lab to allow the locks to be monitored from any EPICS Channel Access aware client. (Previously clients had to be Cdev aware.) The dependency on the Qt XML parser has been replaced with the freely available Xerces DOM parser from the Apache project.
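The general-purpose PID locks mentioned above follow the textbook discrete PID form. The sketch below is a generic illustration of that form; the gains, setpoint, and toy plant are invented for the example and it is not the Jefferson Lab implementation.

```python
class PID:
    """Minimal discrete PID loop; gains and setpoint are illustrative."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt            # accumulate I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

loop = PID(kp=0.8, ki=0.2, kd=0.05, setpoint=1.0, dt=0.1)
reading = 0.0
for _ in range(5):
    reading += 0.1 * loop.update(reading)  # toy plant: output follows drive
    print(round(reading, 4))
```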
Intelligent interfaces for expert systems
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Wang, Lui
1988-01-01
Vital to the success of an expert system is a user interface that performs intelligently. A generic intelligent interface is being developed for expert systems. This intelligent interface was developed around the in-house Expert System for the Flight Analysis System (ESFAS). The Flight Analysis System (FAS) comprises 84 configuration-controlled FORTRAN subroutines that are used in the preflight analysis of the space shuttle. In order to use FAS proficiently, a person must be knowledgeable in the areas of flight mechanics, the procedures involved in deploying a certain payload, and an overall understanding of the FAS. ESFAS, still in its developmental stage, is taking into account much of this knowledge. The generic intelligent interface involves the integration of a speech recognizer and synthesizer, a preparser, and a natural language parser with ESFAS. The speech recognizer being used is capable of recognizing 1000 words of connected speech. The natural language parser is a commercial software package which uses caseframe instantiation in processing the streams of words from the speech recognizer or the keyboard. The system's configuration is described along with its capabilities and drawbacks.
Two models of minimalist, incremental syntactic analysis.
Stabler, Edward P
2013-07-01
Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.
Reading Orthographically Strange Nonwords: Modelling Backup Strategies in Reading
ERIC Educational Resources Information Center
Perry, Conrad
2018-01-01
The latest version of the connectionist dual process model of reading (CDP++.parser) was tested on a set of nonwords, many of which were orthographically strange (e.g., PSIZ). A grapheme-by-grapheme read-out strategy was used because the normal strategy produced many poor responses. The new strategy allowed the model to produce results similar to…
Working Memory in the Processing of Long-Distance Dependencies: Interference and Filler Maintenance
ERIC Educational Resources Information Center
Ness, Tal; Meltzer-Asscher, Aya
2017-01-01
During the temporal delay between the filler and gap sites in long-distance dependencies, the "active filler" strategy can be implemented in two ways: the filler phrase can be actively maintained in working memory ("maintenance account"), or it can be retrieved only when the parser posits a gap ("retrieval account").…
Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools
Diaz Acosta, B.
2011-01-01
The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative comprises two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework, initially aimed at genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.
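Format parsers of this kind usually reduce to simple record iterators. As a language-agnostic illustration (in Python rather than .NET, and not MBF's actual API), a minimal FASTA reader looks like this:

```python
def parse_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):          # new record begins
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:                         # sequence continuation
                chunks.append(line)
    if header is not None:                     # flush the final record
        yield header, "".join(chunks)

# for name, seq in parse_fasta("proteins.fasta"):
#     print(name, len(seq))
```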
Model-based object classification using unification grammars and abstract representations
NASA Astrophysics Data System (ADS)
Liburdy, Kathleen A.; Schalkoff, Robert J.
1993-04-01
The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.
The development of a program analysis environment for Ada
NASA Technical Reports Server (NTRS)
Brown, David B.; Carlisle, Homer W.; Chang, Kai-Hsiung; Cross, James H.; Deason, William H.; Haga, Kevin D.; Huggins, John R.; Keleher, William R. A.; Starke, Benjamin B.; Weyrich, Orville R.
1989-01-01
A unit-level Ada software module testing system, called Query Utility Environment for Software Testing of Ada (QUEST/Ada), is described. The project calls for the design and development of a prototype system. QUEST/Ada design began with a definition of the overall system structure and a description of component dependencies. The project team was divided into three groups to resolve the preliminary designs of the parser/scanner, the test data generator, and the test coverage analyzer. The Phase 1 report is a working document from which the system documentation will evolve. It provides history, a guide to report sections, a literature review, the definition of the system structure and high-level interfaces, descriptions of the prototype scope, the three major components, and the plan for the remainder of the project. The appendices include specifications, statistics, two papers derived from the current research, a preliminary users' manual, and the proposal and work plan for Phase 2.
UniGene Tabulator: a full parser for the UniGene format.
Lenzi, Luca; Frabetti, Flavia; Facchin, Federica; Casadei, Raffaella; Vitale, Lorenza; Canaider, Silvia; Carinci, Paolo; Zannotti, Maria; Strippoli, Pierluigi
2006-10-15
UniGene Tabulator 1.0 provides a solution for full parsing of the UniGene flat file format; it implements a structured graphical representation of each data field present in UniGene following import into a common database management system usable on a personal computer. This database includes related tables for sequence, protein similarity, sequence-tagged site (STS) and transcript map interval (TXMAP) data, plus a summary table where each record represents a UniGene cluster. UniGene Tabulator enables full local management of UniGene data, allowing parsing, querying, indexing, retrieving, exporting and analysis of UniGene data in relational database form on computers running Mac OS X (10.3.9 or later) or Windows (2000 with Service Pack 4, or XP with Service Pack 2 or later). The current release, including both FileMaker runtime applications, is freely available at http://apollo11.isto.unibo.it/software/
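A flat-file parser of this sort is typically a small state machine over tagged lines. The sketch below assumes the conventional NCBI layout, 'TAG value' lines with records terminated by '//'; treat the field handling as illustrative rather than a faithful reimplementation of UniGene Tabulator.

```python
def parse_unigene(path):
    """Yield one dict per UniGene cluster record.

    Assumes the conventional NCBI flat-file layout: 'TAG value' lines,
    repeated tags collected into lists, records terminated by '//'.
    """
    record = {}
    with open(path) as handle:
        for line in handle:
            line = line.rstrip()
            if line.startswith("//"):          # end of one cluster record
                if record:
                    yield record
                record = {}
            elif line:
                tag, _, value = line.partition(" ")
                record.setdefault(tag, []).append(value.strip())
    if record:                                  # file without trailing '//'
        yield record

# for cluster in parse_unigene("Hs.data"):
#     print(cluster.get("ID"), cluster.get("TITLE"))
```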
Lazzarato, F; Franceschinis, G; Botta, M; Cordero, F; Calogero, R A
2004-11-01
RRE allows the extraction of non-coding regions surrounding a coding sequence [i.e. gene upstream region, 5'-untranslated region (5'-UTR), introns, 3'-UTR, downstream region] from annotated genomic datasets available at NCBI. RRE parser and web-based interface are accessible at http://www.bioinformatica.unito.it/bioinformatics/rre/rre.html
Sorry Dave, I’m Afraid I Can’t Do That: Explaining Unachievable Robot Tasks using Natural Language
2013-06-24
processing components used by Brooks et al. [6]: the Bikel parser [3] combined with the null element (understood subject) restoration of Gabbard et al...Intelligent Robots and Systems (IROS), pages 1988-1993, 2010. [12] Ryan Gabbard, Mitch Marcus, and Seth Kulick. Fully parsing the Penn Treebank. In Human
ERIC Educational Resources Information Center
Metzner, Paul; von der Malsburg, Titus; Vasishth, Shravan; Rösler, Frank
2017-01-01
How important is the ability to freely control eye movements for reading comprehension? And how does the parser make use of this freedom? We investigated these questions using coregistration of eye movements and event-related brain potentials (ERPs) while participants read either freely or in a computer-controlled word-by-word format (also known…
Integrated Intelligence: Robot Instruction via Interactive Grounded Learning
2016-02-14
U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Robotics; Natural Language Processing; Grounded Language ...Logical Forms for Referring Expression Generation, Empirical Methods in Natural Language Processing (EMNLP), 18-OCT-13. Tom Kwiatkowski, Eunsol...Choi, Yoav Artzi, Luke Zettlemoyer. Scaling Semantic Parsers with On-the-fly Ontology Matching, Empirical Methods in Natural Language Processing
ERIC Educational Resources Information Center
Dekydtspotter, Laurent
2001-01-01
From the perspective of Fodor's (1983) theory of mental organization and Chomsky's (1995) Minimalist theory of grammar, considers constraints on the interpretation of French-type and English-type cardinality interrogatives in the task of sentence comprehension, as a function of a universal parsing algorithm and hypotheses embodied in a French-type…
ACPYPE - AnteChamber PYthon Parser interfacE.
Sousa da Silva, Alan W; Vranken, Wim F
2012-07-23
ACPYPE (or AnteChamber PYthon Parser interfacE) is a wrapper script around the ANTECHAMBER software that simplifies the generation of small molecule topologies and parameters for a variety of molecular dynamics programmes like GROMACS, CHARMM and CNS. It is written in the Python programming language and was developed as a tool for interfacing with other Python based applications such as the CCPN software suite (for NMR data analysis) and ARIA (for structure calculations from NMR data). ACPYPE is open source code, under GNU GPL v3, and is available as a stand-alone application at http://www.ccpn.ac.uk/acpype and as a web portal application at http://webapps.ccpn.ac.uk/acpype. We verified the topologies generated by ACPYPE in three ways: by comparing with default AMBER topologies for standard amino acids; by generating and verifying topologies for a large set of ligands from the PDB; and by recalculating the structures for 5 protein-ligand complexes from the PDB. ACPYPE is a tool that simplifies the automatic generation of topology and parameters in different formats for different molecular mechanics programmes, including calculation of partial charges, while being object oriented for integration with other applications.
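Architecturally, ACPYPE is a wrapper script: it shells out to ANTECHAMBER and post-processes the results into per-engine topology formats. A minimal sketch of that wrapper pattern follows; it is not ACPYPE's actual code, though the antechamber flags used (-i/-fi/-o/-fo/-c/-nc) are standard ones.

```python
import subprocess

def run_antechamber(infile, informat, outfile, outformat, net_charge=0):
    """Wrapper-pattern sketch (not ACPYPE's actual code): shell out to
    ANTECHAMBER and fail loudly if it does. Uses the standard antechamber
    options for input/output files and AM1-BCC charge derivation.
    """
    cmd = [
        "antechamber",
        "-i", infile, "-fi", informat,
        "-o", outfile, "-fo", outformat,
        "-c", "bcc", "-nc", str(net_charge),
    ]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

# Requires AmberTools on PATH; file names are hypothetical:
# run_antechamber("ligand.pdb", "pdb", "ligand.mol2", "mol2", net_charge=-1)
```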
Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio
2012-03-01
We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
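The key design point here is the single interface over many formats. A Python rendering of that idea (jmzReader itself is Java, and the class and method names below are hypothetical) might look like:

```python
from abc import ABC, abstractmethod

class SpectrumSource(ABC):
    """Common interface in the jmzReader spirit: callers resolve spectra by
    identifier regardless of the underlying file format (names hypothetical)."""
    @abstractmethod
    def get_spectrum(self, spectrum_id): ...

class MgfSource(SpectrumSource):
    def __init__(self, path): self.path = path
    def get_spectrum(self, spectrum_id):
        return f"MGF spectrum {spectrum_id} from {self.path}"   # stub

class MzxmlSource(SpectrumSource):
    def __init__(self, path): self.path = path
    def get_spectrum(self, spectrum_id):
        return f"mzXML spectrum {spectrum_id} from {self.path}"  # stub

READERS = {".mgf": MgfSource, ".mzxml": MzxmlSource}

def open_source(path):
    ext = "." + path.rsplit(".", 1)[-1].lower()
    return READERS[ext](path)  # one interface, many formats

print(open_source("run1.mgf").get_spectrum(7))
```

The payoff is exactly the one the abstract describes: code that consumes mzIdentML references only ever sees the one interface, never the per-format details.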
Expressions Module for the Satellite Orbit Analysis Program
NASA Technical Reports Server (NTRS)
Edmonds, Karina
2008-01-01
The Expressions Module is a software module that has been incorporated into the Satellite Orbit Analysis Program (SOAP). The module includes an expressions-parser submodule built on top of an analytical system, enabling the user to define logical and numerical variables and constants. The variables can capture output from SOAP orbital-prediction and geometric-engine computations. The module can combine variables and constants with built-in logical operators (such as Boolean AND, OR, and NOT), relational operators (such as >, <, or =), and mathematical operators (such as addition, subtraction, multiplication, division, modulus, exponentiation, differentiation, and integration). Parentheses can be used to specify precedence of operations. The module contains a library of mathematical functions and operations, including logarithms, trigonometric functions, Bessel functions, minimum/maximum operations, and floating-point-to-integer conversions. The module supports combinations of time, distance, and angular units and has a dimensional-analysis component that checks for correct usage of units. A parser based on the Flex language and the Bison program looks for and indicates errors in syntax. SOAP expressions can be built using other expressions as arguments, thus enabling the user to build analytical trees. A graphical user interface facilitates use.
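Expression parsers of this kind are classically written as recursive descent, with one function per precedence level. The sketch below handles only the four arithmetic operators and parentheses, a tiny subset of what the SOAP module supports, and is an illustration of the technique rather than its implementation (SOAP's parser is generated with Flex/Bison).

```python
import operator, re

TOKEN = re.compile(r"\s*(\d+\.?\d*|[()+\-*/])")

def tokenize(text):
    pos, tokens = 0, []
    while pos < len(text):
        match = TOKEN.match(text, pos)
        if not match:
            raise SyntaxError(f"bad character at {pos}")
        tokens.append(match.group(1))
        pos = match.end()
    return tokens

def parse(tokens):
    """Recursive descent: expr -> term {(+|-) term}, term -> factor {(*|/) factor}."""
    def expr(i):
        value, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op = operator.add if tokens[i] == "+" else operator.sub
            rhs, i = term(i + 1)
            value = op(value, rhs)
        return value, i
    def term(i):
        value, i = factor(i)
        while i < len(tokens) and tokens[i] in "*/":
            op = operator.mul if tokens[i] == "*" else operator.truediv
            rhs, i = factor(i + 1)
            value = op(value, rhs)
        return value, i
    def factor(i):
        if tokens[i] == "(":                  # parenthesized subexpression
            value, i = expr(i + 1)
            assert tokens[i] == ")", "missing closing parenthesis"
            return value, i + 1
        return float(tokens[i]), i + 1
    value, i = expr(0)
    assert i == len(tokens), "trailing input"
    return value

print(parse(tokenize("2*(3+4)-1")))  # 13.0
```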
Perruchet, Pierre; Tillmann, Barbara
2010-03-01
This study investigates the joint influences of three factors on the discovery of new word-like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial word-likeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word-like units. Results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability of different models of word segmentation to account for these results. PARSER (Perruchet & Vinter, 1998) is compared to the view that word segmentation relies on the exploitation of transitional probabilities between successive syllables, and with the models based on the Minimum Description Length principle, such as INCDROP. The authors submit arguments suggesting that PARSER has the advantage of accounting for the whole pattern of data without ad-hoc modifications, while relying exclusively on general-purpose learning principles. This study strengthens the growing notion that nonspecific cognitive processes, mainly based on associative learning and memory principles, are able to account for a larger part of early language acquisition than previously assumed. Copyright © 2009 Cognitive Science Society, Inc.
Lexical and sublexical units in speech perception.
Giroux, Ibrahima; Rey, Arnaud
2009-03-01
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (Serial Recurrent Networks: Elman, 1990; and PARSER: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with PARSER's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes. Copyright © 2009, Cognitive Science Society, Inc.
ChemicalTagger: A tool for semantic text-mining in chemistry.
Hawizy, Lezan; Jessop, David M; Adams, Nico; Murray-Rust, Peter
2011-05-16
The primary method for scientific communication is in the form of published scientific articles and theses which use natural language combined with domain-specific terminology. As such, they contain free-flowing unstructured text. Given the usefulness of data extraction from unstructured literature, we aim to show how this can be achieved for the discipline of chemistry. The highly formulaic style of writing most chemists adopt makes their contributions well suited to high-throughput Natural Language Processing (NLP) approaches. We have developed the ChemicalTagger parser as a medium-depth, phrase-based semantic NLP tool for the language of chemical experiments. Tagging is based on a modular architecture and uses a combination of OSCAR, domain-specific regex and English taggers to identify parts-of-speech. The ANTLR grammar is used to structure this into tree-based phrases. Using a metric that allows for overlapping annotations, we achieved machine-annotator agreements of 88.9% for phrase recognition and 91.9% for phrase-type identification (Action names). It is possible to parse chemical experimental text using rule-based techniques in conjunction with a formal grammar parser. ChemicalTagger has been deployed for over 10,000 patents and has identified solvents from their linguistic context with >99.5% precision.
Linking Semantic and Knowledge Representations in a Multi-Domain Dialogue System
2007-06-01
accuracy evaluation presented in the next section shows that the generic version of the grammar performs similarly well on two evaluation domains...of extra insertions; for example, discourse adverbials such as now were inserted if present in the lattice. In addition, different tense and pronoun...automatic lexicon specialization technique improves parser speed and accuracy. 1 Introduction This paper presents an architecture of a language
The Hermod Behavioral Synthesis System
1988-06-08
[Figure residue: block diagram of the Hermod synthesis flow (technology-independent transformation and parser, optimization and hardware libraries, generator, datapath)] ...Proc. 22nd Design Automation Conference, ACM/IEEE, June 1985, pp. 475-481. [7] G. De Micheli, "Synthesis of Control Systems", in Design Systems for...VLSI Circuits: Logic Synthesis and Silicon Compilation, G. De Micheli, A. Sangiovanni-Vincentelli, and P. Antognetti (editors), Martinus Nijhoff
ERIC Educational Resources Information Center
Ouellon, Conrad, Comp.
Presentations from a colloquium on applications of research on natural languages to computer science address the following topics: (1) analysis of complex adverbs; (2) parser use in computerized text analysis; (3) French language utilities; (4) lexicographic mapping of official language notices; (5) phonographic codification of Spanish; (6)…
Xyce Parallel Electronic Simulator : reference guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.
Effective Cyber Situation Awareness (CSA) Assessment and Training
2013-11-01
activity/scenario. y. Save Wireshark Captures. z. Save SNORT logs. aa. Save MySQL databases. 4. After the completion of the scenario, the reversion...line or from custom Java code. • Cisco ASA Parser: Builds normalized vendor-neutral firewall rule specifications from Cisco ASA and PIX firewall...The Service tool lets analysts build Cauldron models from either the command line or from custom Java code. Functionally, it corresponds to the
Xyce™ Parallel Electronic Simulator Reference Guide Version 6.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.
A Formal Model of Ambiguity and its Applications in Machine Translation
2010-01-01
structure indicates linguistically implausible segmentation that might be generated using dictionary-driven approaches...derivation. As was done in the monolingual case, the functions LHS, RHSi, RHSo and υ can be extended to a derivation δ. D(q) where q ∈ V denotes the... monolingual parses. My algorithm runs more efficiently than O(n^6) with many grammars (including those that required using heuristic search with other parsers
Predicting complex syntactic structure in real time: Processing of negative sentences in Russian.
Kazanina, Nina
2017-11-01
In Russian negative sentences the verb's direct object may appear either in the accusative case, which is licensed by the verb (as is common cross-linguistically), or in the genitive case, which is licensed by the negation (Russian-specific "genitive-of-negation" phenomenon). Such sentences were used to investigate whether case marking is employed for anticipating syntactic structure, and whether lexical heads other than the verb can be predicted on the basis of a case-marked noun phrase. Experiment 1, a completion task, confirmed that genitive-of-negation is part of Russian speakers' active grammatical repertoire. In Experiments 2 and 3, the genitive/accusative case manipulation on the preverbal object led to shorter reading times at the negation and verb in the genitive versus accusative condition. Furthermore, Experiment 3 manipulated linear order of the direct object and the negated verb in order to distinguish whether the abovementioned facilitatory effect was predictive or integrative in nature, and concluded that the parser actively predicts a verb and (otherwise optional) negation on the basis of a preceding genitive-marked object. Similarly to a head-final language, case-marking information on preverbal noun phrases (NPs) is used by the parser to enable incremental structure building in a free-word-order language such as Russian.
ChemicalTagger: A tool for semantic text-mining in chemistry
2011-01-01
Background The primary method for scientific communication is in the form of published scientific articles and theses which use natural language combined with domain-specific terminology. As such, they contain free-flowing unstructured text. Given the usefulness of data extraction from unstructured literature, we aim to show how this can be achieved for the discipline of chemistry. The highly formulaic style of writing most chemists adopt makes their contributions well suited to high-throughput Natural Language Processing (NLP) approaches. Results We have developed the ChemicalTagger parser as a medium-depth, phrase-based semantic NLP tool for the language of chemical experiments. Tagging is based on a modular architecture and uses a combination of OSCAR, domain-specific regex and English taggers to identify parts-of-speech. The ANTLR grammar is used to structure this into tree-based phrases. Using a metric that allows for overlapping annotations, we achieved machine-annotator agreements of 88.9% for phrase recognition and 91.9% for phrase-type identification (Action names). Conclusions It is possible to parse chemical experimental text using rule-based techniques in conjunction with a formal grammar parser. ChemicalTagger has been deployed for over 10,000 patents and has identified solvents from their linguistic context with >99.5% precision. PMID:21575201
GENPLOT: A formula-based Pascal program for data manipulation and plotting
NASA Astrophysics Data System (ADS)
Kramer, Matthew J.
Geochemical processes involving alteration, differentiation, fractionation, or migration of elements may be elucidated by a number of discrimination or variation diagrams (e.g., AFM, Harker, Pearce, and many others). The construction of these diagrams involves arithmetic combination of selected elements (major, minor, or trace). GENPLOT utilizes a formula-based algorithm (an expression parser) which enables the program to manipulate multiparameter databases and plot XY, ternary, tetrahedron, and REE type plots without needing to change either the source code or rearrange databases. Formulae may be any quadratic expression whose variables are the column headings of the data matrix. A full-screen editor with limited equation and arithmetic functions (spreadsheet) has been incorporated into the program to aid data entry and editing. Data are stored as ASCII files to facilitate interchange of data between other programs and computers. GENPLOT was developed in Turbo Pascal for the IBM PC and compatibles but is also available in Apple Pascal for the Apple IIe and III. Because the source code is too extensive to list here (about 5200 lines of Pascal code), the expression parsing routine, which is central to GENPLOT's flexibility, is incorporated into a smaller demonstration program named SOLVE. The following paper includes a discussion of how the expression parser works and a detailed description of GENPLOT's capabilities.
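The heart of such a program, evaluating a user formula over named data columns, can be prototyped in a few lines. This Python stand-in (GENPLOT itself is Pascal) leans on the host language's evaluator instead of a hand-written parser, and the oxide columns are invented for the example.

```python
def evaluate_formula(formula, table):
    """Evaluate a column formula row by row, GENPLOT-style: column headings
    become variables. Uses Python's own evaluator with builtins disabled;
    a real expression parser would validate the formula first."""
    headings = list(table)
    rows = zip(*(table[h] for h in headings))
    return [eval(formula, {"__builtins__": {}}, dict(zip(headings, row)))
            for row in rows]

# Hypothetical major-element data, one value per sample:
oxides = {"Na2O": [3.1, 2.8], "K2O": [1.2, 1.5], "SiO2": [58.0, 61.2]}
print(evaluate_formula("Na2O + K2O", oxides))          # alkali sum per sample
print(evaluate_formula("SiO2 / (Na2O + K2O)", oxides))
```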
Speech rhythm facilitates syntactic ambiguity resolution: ERP evidence.
Roncaglia-Denissen, Maria Paula; Schmidt-Kassow, Maren; Kotz, Sonja A
2013-01-01
In the current event-related potential (ERP) study, we investigated how speech rhythm impacts speech segmentation and facilitates the resolution of syntactic ambiguities in auditory sentence processing. Participants listened to syntactically ambiguous German subject- and object-first sentences that were spoken with either regular or irregular speech rhythm. Rhythmicity was established by a constant metric pattern of three unstressed syllables between two stressed ones that created rhythmic groups of constant size. Accuracy rates in a comprehension task revealed that participants understood rhythmically regular sentences better than rhythmically irregular ones. Furthermore, the mean amplitude of the P600 component was reduced in response to object-first sentences only when embedded in rhythmically regular but not rhythmically irregular context. This P600 reduction indicates facilitated processing of sentence structure possibly due to a decrease in processing costs for the less-preferred structure (object-first). Our data suggest an early and continuous use of rhythm by the syntactic parser and support language processing models assuming an interactive and incremental use of linguistic information during language processing.
Speech Rhythm Facilitates Syntactic Ambiguity Resolution: ERP Evidence
Roncaglia-Denissen, Maria Paula; Schmidt-Kassow, Maren; Kotz, Sonja A.
2013-01-01
In the current event-related potential (ERP) study, we investigated how speech rhythm impacts speech segmentation and facilitates the resolution of syntactic ambiguities in auditory sentence processing. Participants listened to syntactically ambiguous German subject- and object-first sentences that were spoken with either regular or irregular speech rhythm. Rhythmicity was established by a constant metric pattern of three unstressed syllables between two stressed ones that created rhythmic groups of constant size. Accuracy rates in a comprehension task revealed that participants understood rhythmically regular sentences better than rhythmically irregular ones. Furthermore, the mean amplitude of the P600 component was reduced in response to object-first sentences only when embedded in rhythmically regular but not rhythmically irregular context. This P600 reduction indicates facilitated processing of sentence structure possibly due to a decrease in processing costs for the less-preferred structure (object-first). Our data suggest an early and continuous use of rhythm by the syntactic parser and support language processing models assuming an interactive and incremental use of linguistic information during language processing. PMID:23409109
On the Shallow Processing (Dis)Advantage: Grammar and Economy.
Koornneef, Arnout; Reuland, Eric
2016-01-01
In the psycholinguistic literature it has been proposed that readers and listeners often adopt a "good-enough" processing strategy in which a "shallow" representation of an utterance driven by (top-down) extra-grammatical processes has a processing advantage over a "deep" (bottom-up) grammatically-driven representation of that same utterance. In the current contribution we claim, both on theoretical and experimental grounds, that this proposal is overly simplistic. Most importantly, in the domain of anaphora there is now an accumulating body of evidence showing that the anaphoric dependencies between (reflexive) pronominals and their antecedents are subject to an economy hierarchy. In this economy hierarchy, deriving anaphoric dependencies by deep (grammatical) operations incurs lower processing costs than deriving them by shallow (extra-grammatical) operations. In addition, in cases of ambiguity, when both a shallow and a deep derivation are available to the parser, the latter is actually preferred. This, we argue, contradicts the basic assumptions of the shallow-deep dichotomy and, hence, a rethinking of the good-enough processing framework is warranted.
2016-02-01
In addition, the parser updates some parameters based on uncertainties. For example, Analytica was very slow to update Pk values based on...moderate range. The additional security environments helped to fill gaps in lower severity. Weapons Effectiveness Pk values were modified to account for two...project is to help improve the value and character of defense resource planning in an era of growing uncertainty and complex strategic challenges
NASA Astrophysics Data System (ADS)
Derriere, Sebastien; Gray, Norman; Demleitner, Markus; Louys, Mireille; Ochsenbein, Francois; Derriere, Sebastien; Gray, Norman
2014-05-01
This document describes a recommended syntax for writing the string representation of unit labels ("VOUnits"). In addition, it describes a set of recognised and deprecated units, which is as far as possible consistent with other relevant standards (BIPM, ISO/IEC and the IAU). The intention is that units written to conform to this specification will likely also be parsable by other well-known parsers. To this end, we include machine-readable grammars for other unit syntaxes.
Xyce parallel electronic simulator reference guide, Version 6.0.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
2014-01-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide [1]. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide [1].
Xyce parallel electronic simulator reference guide, version 6.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
2013-08-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide [1]. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide [1].
2000-01-01
for flight test data, and both generic and specialized tools for data filtering, data calibration, modeling, system identification, and simulation...GRAMMATICAL MODEL AND PARSER FOR AIR TRAFFIC CONTROLLER'S COMMANDS; A SPEECH-CONTROLLED INTERACTIVE VIRTUAL ENVIRONMENT FOR SHIP FAMILIARIZATION; ...MODELING AND SIMULATION IN THE 21ST CENTURY; NEW COTS HARDWARE AND SOFTWARE REDUCE THE COST AND EFFORT IN REPLACING AGING FLIGHT SIMULATORS SUBSYSTEMS
Criteria for Evaluating the Performance of Compilers
1974-10-01
cannot be made to fit, then an auxiliary mechanism outside the parser might be used. Finally, changing the choice of parsing technique to a...was not useful in providing a basis for compiler evaluation. The study of the first question established criteria and methods for assigning four...program. The study of the second question established criteria for defining a "compiler Gibson mix", and established methods for using this "mix" to
Intelligent Agents as a Basis for Natural Language Interfaces
1988-01-01
language analysis component of UC, which produces a semantic representation of the input. This representation is in the form of a KODIAK network (see...Appendix A). Next, UC's Concretion Mechanism performs concretion inferences ([Wilensky, 1983] and [Norvig, 1983]) based on the semantic network...The first step in UC's processing is done by UC's parser/understander component which produces a KODIAK semantic network representation of
Learning for Semantic Parsing with Kernels under Various Forms of Supervision
2007-08-01
natural language sentences to their formal executable meaning representations. This is a challenging problem and is critical for developing computing...sentences are semantically tractable. This indicates that Geoquery is a more challenging domain for semantic parsing than ATIS. In the past, there have been a...Combining parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pp. 187-194
NASA Astrophysics Data System (ADS)
Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea
2016-04-01
The MAPS (Marine Planning and Service Platform) project aims to build a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that gathers, classifies and structures marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost of data collection is very high, and the new paradigm is to collect once and re-use many times (for re-analysis, marine environment assessment, studies on trends, etc.). This paradigm requires access to quality-controlled data and to information that is provided in reports (grey literature) and/or in the relevant scientific literature. Hence, new technology is needed that integrates several disciplines such as data management, information systems and knowledge management. In one of the most important EC projects on data management, namely SeaDataNet (www.seadatanet.org), an initial example of knowledge management is provided through the Common Data Index, which provides links to data and (eventually) to papers. There are efforts to develop search engines that find authors' contributions to scientific literature or publications. This implies the use of persistent identifiers (such as DOI), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited or used, or to data of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; to address this problem, the MAPS project uses semantic technologies for retrieving text and data, which return far more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text: the whole document gets parsed to extract the words that are most meaningful for the main argument of the document, and the extraction is applied in the form of N-grams (mono-grams, bi-grams, tri-grams). • MAPS database - This module is a simple database which contains all the N-grams used by MAPS (physical parameters from SeaDataNet vocabularies) to define our marine "ontology". • Relation identifier - This module performs the most important task of identifying relationships between the N-grams extracted from the text by the parser and the provided oceanographic terminology. It checks N-grams supplied by the Syntactic parser and then matches them with the terms stored in the MAPS database. Found matches are returned to the parser with the inflected form in which they appear in the source text. • A "relaxed" extractor - This option can be activated when the search engine is launched. It was introduced to give the user a chance to create new N-grams by combining existing mono-grams and bi-grams in the database with rich words found within the source text. The innovation of a semantic engine lies in the fact that the process is not just the retrieval of already known documents by means of a simple term query but rather the retrieval of a population of documents whose existence was unknown.
The system answers with a list of results ordered according to the following criteria: • Relevance - of the document with respect to the concept that is searched • Date - of publication of the paper • Source - data provider as defined in the SeaDataNet Common Data Index • Matrix - environmental matrices as defined in the oceanographic field • Geographic area - area specified in the text • Clustering - the process of organizing objects into groups whose members are similar; the clustering returns the related documents as output. For each document the MAPS visualization provides: • Title, author, source/provider of data, web address • Tagging of key terms or concepts • Summary of the document • Visualization of the whole document. The possibility of inserting the number of citations for each document among the criteria of the advanced search is currently under development; in this case the engine should be able to connect to any of the existing bibliographic citation systems (such as Google Scholar, Scopus, etc.).
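The N-gram extraction and matching at the heart of the Relation identifier described above can be sketched briefly. The toy vocabulary below stands in for the SeaDataNet parameter terms, and the matching is exact rather than inflection-aware, so treat it as an illustration of the idea only.

```python
import re

# Illustrative vocabulary; MAPS uses SeaDataNet physical-parameter terms.
VOCABULARY = {("sea", "surface", "temperature"), ("salinity",), ("chlorophyll", "a")}
MAX_N = 3  # mono-, bi- and tri-grams, as in MAPS

def extract_ngrams(text):
    words = re.findall(r"[a-z0-9']+", text.lower())
    for n in range(1, MAX_N + 1):
        for i in range(len(words) - n + 1):
            yield tuple(words[i:i + n])

def match_terms(text):
    """Return vocabulary N-grams found in the text (exact matching only)."""
    return {ng for ng in extract_ngrams(text) if ng in VOCABULARY}

print(match_terms("Monthly sea surface temperature and salinity fields."))
```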
Analysis of the Impact of Data Normalization on Cyber Event Correlation Query Performance
2012-03-01
2003). Organizations use it in planning, target marketing, decision-making, data analysis, and customer services (Shin, 2003). Organizations that...Following this IP address is a router message sequence number. This is a globally unique number for each router terminal and can range from...Appendix G, invokes the Perl parser for the log files from a particular USAF base, and invokes the CTL file that loads the resultant CSV file into the
Open Source Software Projects Needing Security Investments
2015-06-19
modtls, BouncyCastle, gpg, otr, axolotl. 7. Static analyzers: Clang, Frama-C. 8. Nginx. 9. OpenVPN. It was noted that the funding model may be similar...to OpenSSL, where consulting funds the company. It was also noted that OpenVPN needs to correctly use OpenSSL in order to be secure, so focusing on...Dovecot 4. Other high-impact network services: OpenSSH, OpenVPN, BIND, ISC DHCP, University of Delaware NTPD 5. Core infrastructure data parsers
Xyce parallel electronic simulator reference guide, version 6.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
2014-03-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide [1]. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide [1].
System Data Model (SDM) Source Code
2012-08-23
CROSS_COMPILE=/opt/gumstix/build_arm_nofpu/staging_dir/bin/arm-linux-uclibcgnueabi- CC=$(CROSS_COMPILE)gcc CXX=$(CROSS_COMPILE)g++ AR...and flags to pass to it: LEX=flex LEXFLAGS=-B ## The parser generator to invoke and flags to pass to it: YACC=bison YACCFLAGS...# Point to default PetaLinux root directory: ifndef ROOTDIR ROOTDIR=$(PETALINUX)/software/petalinux-dist endif PATH:=$(PATH
Understanding and Capturing People’s Mobile App Privacy Preferences
2013-10-28
The entire apps’ metadata takes up about 500MB of storage space when stored in a MySQL database and all the binary files take approximately 300GB of...functionality that can decompile Dalvik bytecodes to Java source code faster than other decompilers. Given the scale of the app analysis we planned on... Java libraries, such as parsers, SQL connectors, etc. Targeted Ads 137 admob, adwhirl, greystripe… Provided by mobile behavioral ads company to
DSS 13 Microprocessor Antenna Controller
NASA Technical Reports Server (NTRS)
Gosline, R. M.
1984-01-01
A microprocessor-based antenna controller system developed as part of the unattended station project for DSS 13 is described. Both the hardware and software top-level designs are presented and the major problems encountered are discussed. Developments useful to related projects include a JPL standard 15-line interface using a single-board computer, a general-purpose parser, a fast floating-point-to-ASCII conversion technique, and experience gained in using off-board floating-point processors with the 8080 CPU.
Intelligent Information Retrieval for a Multimedia Database Using Captions
1992-07-23
The user was allowed to retrieve any of several multimedia types depending on the descriptors entered. An example mentioned was the assembly of a...statistics showed some performance improvements over a keyword search. Similar work was described by Wong et al. (1987) where a vector space representation...keyword) lists for searching the lexicon (a syntactic parser is not used); a type hierarchy of terms was used in the process. The system then checked the
Extracting BI-RADS Features from Portuguese Clinical Texts
Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês
2013-01-01
In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method. PMID:23797461
Friederici, A D
1995-09-01
This paper presents a model describing the temporal and neurotopological structure of syntactic processes during comprehension. It postulates three distinct phases of language comprehension, two of which are primarily syntactic in nature. During the first phase the parser assigns the initial syntactic structure on the basis of word category information. These early structural processes are assumed to be subserved by the anterior parts of the left hemisphere, as event-related brain potentials show this area to be maximally activated when phrase structure violations are processed and as circumscribed lesions in this area lead to an impairment of the on-line structural assignment. During the second phase lexical-semantic and verb-argument structure information is processed. This phase is neurophysiologically manifest in a negative component in the event-related brain potential around 400 ms after stimulus onset which is distributed over the left and right temporo-parietal areas when lexical-semantic information is processed and over left anterior areas when verb-argument structure information is processed. During the third phase the parser tries to map the initial syntactic structure onto the available lexical-semantic and verb-argument structure information. In case of an unsuccessful match between the two types of information reanalyses may become necessary. These processes of structural reanalysis are correlated with a centroparietally distributed late positive component in the event-related brain potential.(ABSTRACT TRUNCATED AT 250 WORDS)
The Organization of Knowledge in a Multi-Lingual, Integrated Parser.
1984-11-01
presunto maniático sexual que dio muerte a golpes y a puñaladas a una mujer de 55 años, informaron fuentes allegadas a la investigación. Literally in...el hospital la joven Rosa Areas, la que fue herida de bala por un uniformado. English: Rosa Areas is still in the hospital after being shot and wounded...by a soldier. In this sentence, the subject, "joven" (young person), is found after the verb, "se encuentra" (finds herself). To handle situations
Extract and visualize geolocation from any text file
NASA Astrophysics Data System (ADS)
Boustani, M.
2015-12-01
There is a variety of text file formats, such as PDF, HTML and more, that contain words about locations (countries, cities, regions and more). GeoParser was developed as one of the sub-projects under DARPA Memex to help find geolocation information in crawled website data. It is a web application that uses Apache Tika to extract locations from any text file format and visualize the geolocations on a map. https://github.com/MBoustani/GeoParser https://github.com/chrismattmann/tika-python http://www.darpa.mil/program/memex
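The Tika-based extraction step can be approximated in a few lines with the tika-python package linked above. The gazetteer lookup below is a deliberately crude stand-in for GeoParser's actual location resolution, and the file path and place list are invented for the example; tika-python also needs Java available and fetches a Tika server jar on first use.

```python
from tika import parser  # tika-python; needs Java, downloads Tika on first use

# Illustrative gazetteer; GeoParser's actual lookup is far richer.
GAZETTEER = {"Pasadena": (34.147, -118.144), "Mumbai": (19.076, 72.878)}

def extract_locations(path):
    """Extract text from any Tika-supported format, then scan for known places."""
    text = parser.from_file(path).get("content") or ""
    return {place: coords for place, coords in GAZETTEER.items() if place in text}

# Hypothetical crawled page:
# print(extract_locations("crawl/page_0001.html"))
```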
Numerical Function Generators Using LUT Cascades
2007-06-01
either algebraically (for example, sin(x)) or as a table of input/output values. The user defines the numerical function by using the syntax of Scilab ... defined function in Scilab or specify it directly. Note that, by changing the parser of our system, any format can be used for the design entry. First ... Methods for Multiple-Valued Input Address Generators," Proc. 36th IEEE Int'l Symp. Multiple-Valued Logic (ISMVL '06), May 2006. [29] Scilab 3.0, INRIA-ENPC
DBPQL: A view-oriented query language for the Intel Data Base Processor
NASA Technical Reports Server (NTRS)
Fishwick, P. A.
1983-01-01
An interactive query language (DBPQL) for the Intel Data Base Processor (DBP) is defined. DBPQL includes a parser generator package which permits the analyst to easily create and manipulate the query statement syntax and semantics. The prototype language, DBPQL, includes trace and performance commands to aid the analyst when implementing new commands and analyzing the execution characteristics of the DBP. The DBPQL grammar file and associated key procedures are included as an appendix to this report.
Catalog Descriptions Using VOTable Files
NASA Astrophysics Data System (ADS)
Thompson, R.; Levay, K.; Kimball, T.; White, R.
2008-08-01
Additional information is frequently required to describe database table contents and make them understandable to users. For this reason, the Multimission Archive at Space Telescope (MAST) creates "description files" for each table/catalog. After trying various XML and CSV formats, we finally chose VOTable. These files are easy to update via an HTML form, are easily read using an XML parser such as (in our case) the PHP5 SimpleXML extension, and have found multiple uses in our data access/retrieval process.
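MAST reads these files with PHP5 SimpleXML; as a rough equivalent, here is a sketch in Python using the standard-library XML parser. The file name and the FIELD attributes read are illustrative, not MAST's actual schema:

```python
# Sketch: read a VOTable-style description file with a generic XML parser.
import xml.etree.ElementTree as ET

tree = ET.parse("catalog_description.xml")
root = tree.getroot()

# Iterate over FIELD elements regardless of namespace prefix,
# printing the column name, datatype, and unit of each.
for elem in root.iter():
    if elem.tag.split("}")[-1] == "FIELD":
        print(elem.get("name"), elem.get("datatype"), elem.get("unit"))
```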
Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol
2016-01-03
This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. The code is specific to a set of simulations that were run for the purpose of preparing data for a publication. It is necessary to make this code open source in order to publish the model code (Qspp), which has already been released. There is a necessity of assuring that results from using Qspp for a publication
Open Radio Communications Architecture Core Framework V1.1.0 Volume 1 Software Users Manual
2005-02-01
on a PC utilizing the KDE desktop that comes with Red Hat Linux. The default desktop for most Red Hat Linux installations is the GNOME desktop. The ... (SCA) v2.2. The software was designed for a desktop computer running the Linux operating system (OS). It was developed in C++, uses ACE/TAO for CORBA middleware, Xerces for the XML parser, and Red Hat Linux for the operating system. The software is referred to as Open Radio Communication
1990-01-01
Identification of Syntactic Units Exemplar I.A. (#1) Problem (1) The tough coach the young. (2) The tough coach married a star. (3) The tough coach married ... "the tough" vs. "the tough coach" and (b) "people" vs. "married people." The problem could also be considered a problem of determining lexical ... and "married" in example (2). Once the parser specifies a verb, the structure of the rest of the sentence is determined: specifying "coach" as a
Multi-lingual search engine to access PubMed monolingual subsets: a feasibility study.
Darmoni, Stéfan J; Soualmia, Lina F; Griffon, Nicolas; Grosjean, Julien; Kerdelhué, Gaétan; Kergourlay, Ivan; Dahamna, Badisse
2013-01-01
PubMed contains many articles in languages other than English, but it is difficult to find them using the English version of the Medical Subject Headings (MeSH) thesaurus. The aim of this work is to propose a tool allowing access to a PubMed subset in one language, and to evaluate its performance. Translations of MeSH were enriched and gathered in the information system. PubMed subsets in the main European languages were also added to our database, using a dedicated parser. The CISMeF generic semantic search engine was evaluated on the response time for simple queries. MeSH descriptors are currently available in 11 languages in the information system. All 654,000 PubMed citations in French were integrated into the CISMeF database. None of the response times exceeded the threshold defined for usability (2 seconds). It is now possible to freely access biomedical literature in French using a tool in French; health professionals and lay people with low English proficiency may find it useful. It will be extended to several European languages: German, Spanish, Norwegian and Portuguese.
On the Shallow Processing (Dis)Advantage: Grammar and Economy
Koornneef, Arnout; Reuland, Eric
2016-01-01
In the psycholinguistic literature it has been proposed that readers and listeners often adopt a “good-enough” processing strategy in which a “shallow” representation of an utterance driven by (top-down) extra-grammatical processes has a processing advantage over a “deep” (bottom-up) grammatically-driven representation of that same utterance. In the current contribution we claim, both on theoretical and experimental grounds, that this proposal is overly simplistic. Most importantly, in the domain of anaphora there is now an accumulating body of evidence showing that the anaphoric dependencies between (reflexive) pronominals and their antecedents are subject to an economy hierarchy. In this economy hierarchy, deriving anaphoric dependencies by deep—grammatical—operations requires less processing costs than doing so by shallow—extra-grammatical—operations. In addition, in case of ambiguity when both a shallow and a deep derivation are available to the parser, the latter is actually preferred. This, we argue, contradicts the basic assumptions of the shallow–deep dichotomy and, hence, a rethinking of the good-enough processing framework is warranted. PMID:26903897
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Kelly; Budge, Kent; Lowrie, Rob
2016-03-03
Draco is an object-oriented component library geared towards numerically intensive, radiation (particle) transport applications built for parallel computing hardware. It consists of semi-independent packages and a robust build system. The packages in Draco provide a set of components that can be used by multiple clients to build transport codes. The build system can also be extracted for use in clients. Software includes smart pointers, Design-by-Contract assertions, unit test framework, wrapped MPI functions, a file parser, unstructured mesh data structures, a random number generator, root finders and an angular quadrature component.
Xyce™ Parallel Electronic Simulator Reference Guide, Version 6.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting
2016-06-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.
jmzML, an open-source Java API for mzML, the PSI standard for MS data.
Côté, Richard G; Reisinger, Florian; Martens, Lennart
2010-04-01
We here present jmzML, a Java API for the Proteomics Standards Initiative mzML data standard. Based on the Java Architecture for XML Binding and an XPath-based random-access XML indexer and parser, jmzML can handle arbitrarily large files in minimal memory, allowing easy and efficient processing of mzML files using the Java programming language. jmzML also automatically resolves internal XML references on-the-fly. The library (which includes a viewer) can be downloaded from http://jmzml.googlecode.com.
Sterling Software: An NLToolset-based System for MUC-6
1995-11-01
COCA-COLA ADVERTISING *PERIOD*) (*DOUBLEQUOTE* *EO-P* *SO-P* *CAP* ABBREV _MR *CAP* ... "Coca-Cola". Since we weren't using the parser, the part-of-speech obtained by a lexical lookup was of interest mainly if it was something like city-name ... any contextual clues (such as "White House", "Fannie Mae", "Big Board", "Coca-Cola" and "Coke", "Macy's", "Exxon", etc.).
Natural-Language Parser for PBEM
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
A computer program called "Hunter" accepts, as input, a colloquial-English description of a set of policy-based-management rules, and parses that description into a form useable by policy-based enterprise management (PBEM) software. PBEM is a rules-based approach suitable for automating some management tasks. PBEM simplifies the management of a given enterprise through establishment of policies addressing situations that are likely to occur. Hunter was developed to have a unique capability to extract the intended meaning instead of focusing on parsing the exact ways in which individual words are used.
Signal Processing Expert Code (SPEC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ames, H.S.
1985-12-01
The purpose of this paper is to describe a prototype expert system called SPEC, which was developed to demonstrate the utility of providing an intelligent interface for users of SIG, a general-purpose signal processing code. The expert system is written in NIL, runs on a VAX 11/750, and consists of a backward-chaining inference engine and an English-like parser. The inference engine uses knowledge encoded as rules about the formats of SIG commands and about how to perform frequency analyses using SIG. The system demonstrated that expert systems can be used to control existing codes.
Parallel File System I/O Performance Testing On LANL Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiens, Isaac Christian; Green, Jennifer Kathleen
2016-08-18
These are slides from a presentation on parallel file system I/O performance testing on LANL clusters. I/O is a known bottleneck for HPC applications. Performance optimization of I/O is often required. This summer project entailed integrating IOR under Pavilion and automating the results analysis. The slides cover the following topics: scope of the work, tools utilized, IOR-Pavilion test workflow, build script, IOR parameters, how parameters are passed to IOR, *run_ior: functionality, Python IOR-Output Parser, Splunk data format, Splunk dashboard and features, and future work.
Taxa: An R package implementing data standards and methods for taxonomic data
Foster, Zachary S.L.; Chamberlain, Scott; Grünwald, Niklaus J.
2018-01-01
The taxa R package provides a set of tools for defining and manipulating taxonomic data. The recent and widespread application of DNA sequencing to community composition studies is making large data sets with taxonomic information commonplace. However, compared to typical tabular data, this information is encoded in many different ways and the hierarchical nature of taxonomic classifications makes it difficult to work with. There are many R packages that use taxonomic data to varying degrees but there is currently no cross-package standard for how this information is encoded and manipulated. We developed the R package taxa to provide a robust and flexible solution to storing and manipulating taxonomic data in R and any application-specific information associated with it. Taxa provides parsers that can read common sources of taxonomic information (taxon IDs, sequence IDs, taxon names, and classifications) from nearly any format while preserving associated data. Once parsed, the taxonomic data and any associated data can be manipulated using a cohesive set of functions modeled after the popular R package dplyr. These functions take into account the hierarchical nature of taxa and can modify the taxonomy or associated data in such a way that both are kept in sync. Taxa is currently being used by the metacoder and taxize packages, which provide broadly useful functionality that we hope will speed adoption by users and developers. PMID:29707201
Replacing Fortran Namelists with JSON
NASA Astrophysics Data System (ADS)
Robinson, T. E., Jr.
2017-12-01
Maintaining a log of input parameters for a climate model is very important to understanding potential causes for answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure to include code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about location of syntax errors or typos. The output JSON provides a ground truth for values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
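The GFDL parser itself is written in C; purely as an illustration of the quality-control idea (overlay user input on defaults, then log every value the model actually uses), here is a minimal Python sketch. The file names and parameter names are hypothetical:

```python
# Sketch: read a JSON input file, merge it over model defaults, and
# write a single "ground truth" file of all effective settings.
import json

DEFAULTS = {"dt_atmos": 1800, "do_ocean": True, "npes": 32}  # illustrative

with open("input.json") as f:
    user_input = json.load(f)

effective = {**DEFAULTS, **user_input}   # user values override defaults

with open("model_settings_out.json", "w") as f:
    json.dump(effective, f, indent=2)    # parsable log, defaults included
```

The output file serves the role the abstract describes: a record of both the values loaded from input and any defaults that were not overridden.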
pymzML--Python module for high-throughput bioinformatics on mass spectrometry data.
Bald, Till; Barth, Johannes; Niehues, Anna; Specht, Michael; Hippler, Michael; Fufezan, Christian
2012-04-01
pymzML is an extension to Python that offers (i) easy access to mass spectrometry (MS) data that allows the rapid development of tools, (ii) a very fast parser for mzML data, the standard data format in MS, and (iii) a set of functions to compare or handle spectra. pymzML requires Python 2.6.5+ and is fully compatible with Python 3. The module is freely available on http://pymzml.github.com or PyPI, is published under the LGPL license, and requires no additional modules to be installed. christian@fufezan.net.
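A minimal usage sketch for the pymzML reader; attribute and method names follow recent pymzML documentation but may differ between versions, and the file name is illustrative:

```python
# Sketch: iterate spectra in an mzML file with pymzML.
import pymzml

run = pymzml.run.Reader("sample.mzML")
for spectrum in run:
    if spectrum.ms_level == 1:                 # survey scans only
        peaks = spectrum.peaks("centroided")   # (m/z, intensity) pairs
        print(spectrum.ID, len(peaks))
```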
KEGGParser: parsing and editing KEGG pathway maps in Matlab.
Arakelyan, Arsen; Nersisyan, Lilit
2013-02-15
The KEGG pathway database is a collection of manually drawn pathway maps accompanied by KGML-format files intended for use in automatic analysis. KGML files, however, do not contain the information required for a complete reproduction of all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser, a MATLAB-based tool for KEGG pathway parsing, semiautomatic fixing, editing, visualization and analysis in the MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.
Using a CLIPS expert system to automatically manage TCP/IP networks and their components
NASA Technical Reports Server (NTRS)
Faul, Ben M.
1991-01-01
An expert system that can directly manage network components on a Transmission Control Protocol/Internet Protocol (TCP/IP) network is described. Previous expert systems for managing networks have focused on managing network faults after they occur. This proactive expert system, however, can monitor and control network components in near real time. The ability to directly manage network elements from the C Language Integrated Production System (CLIPS) is accomplished by the integration of the Simple Network Management Protocol (SNMP) and an Abstract Syntax Notation (ASN) parser into the CLIPS artificial intelligence language.
Use of General-purpose Negation Detection to Augment Concept Indexing of Medical Documents
Mutalik, Pradeep G.; Deshpande, Aniruddha; Nadkarni, Prakash M.
2001-01-01
Objectives: To test the hypothesis that most instances of negated concepts in dictated medical documents can be detected by a strategy that relies on tools developed for the parsing of formal (computer) languages—specifically, a lexical scanner (“lexer”) that uses regular expressions to generate a finite state machine, and a parser that relies on a restricted subset of context-free grammars, known as LALR(1) grammars. Methods: A diverse training set of 40 medical documents from a variety of specialties was manually inspected and used to develop a program (Negfinder) that contained rules to recognize a large set of negated patterns occurring in the text. Negfinder's lexer and parser were developed using tools normally used to generate programming language compilers. The input to Negfinder consisted of medical narrative that was preprocessed to recognize UMLS concepts: the text of a recognized concept had been replaced with a coded representation that included its UMLS concept ID. The program generated an index with one entry per instance of a concept in the document, where the presence or absence of negation of that concept was recorded. This information was used to mark up the text of each document by color-coding it to make it easier to inspect. The parser was then evaluated in two ways: 1) a test set of 60 documents (30 discharge summaries, 30 surgical notes) marked-up by Negfinder was inspected visually to quantify false-positive and false-negative results; and 2) a different test set of 10 documents was independently examined for negatives by a human observer and by Negfinder, and the results were compared. Results: In the first evaluation using marked-up documents, 8,358 instances of UMLS concepts were detected in the 60 documents, of which 544 were negations detected by the program and verified by human observation (true-positive results, or TPs). Thirteen instances were wrongly flagged as negated (false-positive results, or FPs), and the program missed 27 instances of negation (false-negative results, or FNs), yielding a sensitivity of 95.3 percent and a specificity of 97.7 percent. In the second evaluation using independent negation detection, 1,869 concepts were detected in 10 documents, with 135 TPs, 12 FPs, and 6 FNs, yielding a sensitivity of 95.7 percent and a specificity of 91.8 percent. One of the words “no,” “denies/denied,” “not,” or “without” was present in 92.5 percent of all negations. Conclusions: Negation of most concepts in medical narrative can be reliably detected by a simple strategy. The reliability of detection depends on several factors, the most important being the accuracy of concept matching. PMID:11687566
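Not Negfinder itself (which uses a regular-expression lexer and an LALR(1) parser), but a minimal token-window sketch of the same negation-scoping idea. The trigger list mirrors the words the study found in 92.5 percent of negations; the window size and helper names are illustrative:

```python
# Sketch: flag a concept as negated if all its tokens fall within a
# fixed window after a negation trigger.
TRIGGERS = {"no", "denies", "denied", "not", "without"}

def negated(sentence, concept, window=6):
    tokens = [t.strip(".,;").lower() for t in sentence.split()]
    concept_tokens = concept.lower().split()
    for i, tok in enumerate(tokens):
        if tok in TRIGGERS:
            scope = tokens[i + 1 : i + 1 + window]  # tokens after trigger
            if all(ct in scope for ct in concept_tokens):
                return True
    return False

print(negated("The patient denies chest pain or fever.", "chest pain"))  # True
print(negated("Patient reports chest pain.", "chest pain"))              # False
```

A real system must also handle double negation, conjunction scope, and sentence boundaries, which is what motivates the grammar-based approach the paper takes.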
iBIOMES Lite: Summarizing Biomolecular Simulation Data in Limited Settings
2015-01-01
As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes. PMID:24830957
Structural syntactic prediction measured with ELAN: evidence from ERPs.
Fonteneau, Elisabeth
2013-02-08
The current study used event-related potentials (ERPs) to investigate how and when argument structure information is used during the processing of sentences with a filler-gap dependency. We hypothesize that one specific property - animacy (living vs. non-living) - is used by the parser during the building of the syntactic structure. Participants heard sentences that were rated off-line as having an expected noun (Who did the Lion King chase the caravan with?) or an unexpected noun (Who did the Lion King chase the animal with?). This prediction is based on the relation between the animacy properties of the wh-word and the noun in object position. ERPs from the noun in the unexpected condition (animal) elicited a typical Early Left Anterior Negativity (ELAN)/P600 complex compared to the noun in the expected condition (caravan). First, these results demonstrate that the ELAN reflects not only grammatical-category violations but also animacy-property expectations in filler-gap dependencies. Second, our data suggest that the language comprehension system is able to make detailed predictions about aspects of upcoming words in order to build up the syntactic structure. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Almas, Muhammad Shoaib; Vanfretti, Luigi
2017-01-01
Synchrophasor measurements from Phasor Measurement Units (PMUs) are the primary sensors used to deploy Wide-Area Monitoring, Protection and Control (WAMPAC) systems. PMUs stream out synchrophasor measurements through the IEEE C37.118.2 protocol using TCP/IP or UDP/IP. The proposed method establishes direct communication between two PMUs, thus eliminating the requirement for an intermediate phasor data concentrator, data mediator and/or protocol parser, thereby ensuring minimum communication latency without considering communication link delays. This method allows synchrophasor measurements to be utilized internally in a PMU to deploy custom protection and control algorithms. These algorithms are deployed using protection logic equations, which are supported by all PMU vendors. Moreover, this method reduces overall equipment cost, as the algorithms execute internally in a PMU and therefore do not require any additional controller for their deployment. The proposed method can be utilized for fast prototyping of wide-area measurement-based protection and control applications. It is tested by coupling commercial PMUs as Hardware-in-the-Loop (HIL) with Opal-RT's eMEGAsim Real-Time Simulator (RTS). As an illustrative example, an anti-islanding protection application is deployed using the proposed method and its performance is assessed. The essential points of the method are:
• Bypassing intermediate phasor data concentrators or protocol parsers, as the synchrophasors are communicated directly between the PMUs (minimizes communication delays).
• The wide-area protection and control algorithm is deployed using logic equations in the client PMU, eliminating the requirement for an external hardware controller (cost curtailment).
• An effortless means to exploit PMU measurements in an environment familiar to protection engineers.
Towards comprehensive syntactic and semantic annotations of the clinical narrative
Albright, Daniel; Lanfranchi, Arrick; Fredriksen, Anwen; Styler, William F; Warner, Colin; Hwang, Jena D; Choi, Jinho D; Dligach, Dmitriy; Nielsen, Rodney D; Martin, James; Ward, Wayne; Palmer, Martha; Savova, Guergana K
2013-01-01
Objective To create annotated clinical narratives with layers of syntactic and semantic labels to facilitate advances in clinical natural language processing (NLP). To develop NLP algorithms and open source components. Methods Manual annotation of a clinical narrative corpus of 127 606 tokens following the Treebank schema for syntactic information, PropBank schema for predicate-argument structures, and the Unified Medical Language System (UMLS) schema for semantic information. NLP components were developed. Results The final corpus consists of 13 091 sentences containing 1772 distinct predicate lemmas. Of the 766 newly created PropBank frames, 74 are verbs. There are 28 539 named entity (NE) annotations spread over 15 UMLS semantic groups, one UMLS semantic type, and the Person semantic category. The most frequent annotations belong to the UMLS semantic groups of Procedures (15.71%), Disorders (14.74%), Concepts and Ideas (15.10%), Anatomy (12.80%), Chemicals and Drugs (7.49%), and the UMLS semantic type of Sign or Symptom (12.46%). Inter-annotator agreement results: Treebank (0.926), PropBank (0.891–0.931), NE (0.697–0.750). The part-of-speech tagger, constituency parser, dependency parser, and semantic role labeler are built from the corpus and released open source. A significant limitation uncovered by this project is the need for the NLP community to develop a widely agreed-upon schema for the annotation of clinical concepts and their relations. Conclusions This project takes a foundational step towards bringing the field of clinical NLP up to par with NLP in the general domain. The corpus creation and NLP components provide a resource for research and application development that would have been previously impossible. PMID:23355458
Synonym set extraction from the biomedical literature by lexical pattern discovery.
McCrae, John; Collier, Nigel
2008-03-24
Although there are a large number of thesauri for the biomedical domain, many of them lack coverage in terms and their variant forms. Automatic thesaurus construction based on patterns was first suggested by Hearst [1], but it is still not clear how to automatically construct such patterns for different semantic relations and domains. In particular it is not certain which patterns are useful for capturing synonymy. The assumption of extant resources such as parsers is also a limiting factor for many languages, so it is desirable to find patterns that do not use syntactic analysis. Finally, to give a more consistent and applicable result, it is desirable to use these patterns to form synonym sets in a sound way. We present a method that automatically generates regular expression patterns by expanding seed patterns in a heuristic search and then develops a feature vector based on the occurrence of term pairs in each developed pattern. This allows for a binary classification of term pairs as synonymous or non-synonymous. We then model this result as a probability graph to find synonym sets, which is equivalent to the well-studied problem of finding an optimal set cover. We achieved 73.2% precision and 29.7% recall by our method, out-performing hand-made resources such as MeSH and Wikipedia. We conclude that automatic methods can play a practical role in developing new thesauri or expanding on existing ones, and that this can be done with only a small amount of training data and no need for resources such as parsers. We also conclude that the accuracy can be improved by grouping into synonym sets.
GOC-TX: A Reliable Ticket Synchronization Application for the Open Science Grid
NASA Astrophysics Data System (ADS)
Hayashi, Soichi; Gopu, Arvind; Quick, Robert
2011-12-01
One of the major operational issues faced by large multi-institutional collaborations is permitting its users and support staff to use their native ticket tracking environment while also exchanging these tickets with collaborators. After several failed attempts at email-parser based ticket exchanges, the OSG Operations Group has designed a comprehensive ticket synchronizing application. The GOC-TX application uses web-service interfaces offered by various commercial, open source and other homegrown ticketing systems, to synchronize tickets between two or more of these systems. GOC-TX operates independently from any ticketing system. It can be triggered by one ticketing system via email, active messaging, or a web-services call to check for current sync-status, pull applicable recent updates since prior synchronizations to the source ticket, and apply the updates to a destination ticket. The currently deployed production version of GOC-TX is able to synchronize tickets between the Numara Footprints ticketing system used by the OSG and the following systems: European Grid Initiative's system Global Grid User Support (GGUS) and the Request Tracker (RT) system used by Brookhaven. Additional interfaces to the BMC Remedy system used by Fermilab, and to other instances of RT used by other OSG partners, are expected to be completed in summer 2010. A fully configurable open source version is expected to be made available by early autumn 2010. This paper will cover the structure of the GOC-TX application, its evolution, and the problems encountered by OSG Operations group with ticket exchange within the OSG Collaboration.
Learning to Understand Natural Language with Less Human Effort
2015-05-01
... if one of these has the correct logical form, ℓ_j = ℓ_i, then t_j is taken as the approximate maximizer. 2.3 Discussion: This chapter ... where j indexes entity tuples (e1, e2). Training optimizes the semantic parser parameters θ to predict Y = y_j, Z = z_j given S = s_j. The parameters θ ... [residue of a CCG derivation assigning "beautiful/JJ" the category N1/N1 with logical form λf.f, and "London/NNP" the category N with logical form λx.M(x, "london", CITY); remainder unrecoverable]
How Architecture-Driven Modernization Is Changing the Game in Information System Modernization
2010-04-01
Health Administration, MUMPS to Java, 300K, 4 mo.; State of OR Employee Retirement System, COBOL to C# .Net, 250K, 4 mo.; Civilian: State of WA Off. of Super. of ... Jovial, MUMPS, MagnaX, Natural, PVL, PowerBuilder, SQL, Vax Basic, VB6, + others ... C; Target System "To Be": C#, C ... successfully completed in 4 months • Created a new JANUS(TM) MUMPS parser ... Implementation • Final "To-Be" Documentation • JANUS rules engine
Speed up of XML parsers with PHP language implementation
NASA Astrophysics Data System (ADS)
Georgiev, Bozhidar; Georgieva, Adriana
2012-11-01
In this paper, the authors introduce PHP5's XML implementation and show how to read, parse, and write a short and uncomplicated XML file using SimpleXML in a PHP environment. The possibilities for the mutual work of the PHP5 language and the XML standard are described. The details of the parsing process with SimpleXML are also clarified. A practical PHP-XML-MySQL project presents the advantages of XML implementation in PHP modules. This approach allows a comparatively simple search of hierarchical XML data by means of PHP software tools. The proposed project includes a database, which can be extended with new data and new XML parsing functions.
A Risk Assessment System with Automatic Extraction of Event Types
NASA Astrophysics Data System (ADS)
Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula
In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting as early as possible weak signals of emerging risks ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.
QUEST/Ada: Query utility environment for software testing of Ada
NASA Technical Reports Server (NTRS)
Brown, David B.
1989-01-01
Results of research and development efforts are presented for Task 1, Phase 2 of a general project entitled, The Development of a Program Analysis Environment for Ada. A prototype of the QUEST/Ada system was developed to collect data to determine the effectiveness of the rule-based testing paradigm. The prototype consists of five parts: the test data generator, the parser/scanner, the test coverage analyzer, a symbolic evaluator, and a data management facility, known as the Librarian. These components are discussed at length. Also presented is an experimental design for the evaluations, an overview of the project, and a schedule for its completion.
Automatic Speech Recognition in Air Traffic Control: a Human Factors Perspective
NASA Technical Reports Server (NTRS)
Karlsson, Joakim
1990-01-01
The introduction of Automatic Speech Recognition (ASR) technology into the Air Traffic Control (ATC) system has the potential to improve overall safety and efficiency. However, because ASR technology is inherently a part of the man-machine interface between the user and the system, the human factors issues involved must be addressed. Here, some of the human factors problems are identified and related methods of investigation are presented. Research at M.I.T.'s Flight Transportation Laboratory is being conducted from a human factors perspective, focusing on intelligent parser design, presentation of feedback, error correction strategy design, and optimal choice of input modalities.
Pen-based Interfaces for Engineering and Education
NASA Astrophysics Data System (ADS)
Stahovich, Thomas F.
Sketches are an important problem-solving tool in many fields. This is particularly true of engineering design, where sketches facilitate creativity by providing an efficient medium for expressing ideas. However, despite the importance of sketches in engineering practice, current engineering software still relies on traditional mouse and keyboard interfaces, with little or no capabilities to handle free-form sketch input. With recent advances in machine-interpretation techniques, it is now becoming possible to create practical interpretation-based interfaces for such software. In this chapter, we report on our efforts to create interpretation techniques to enable pen-based engineering applications. We describe work on two fundamental sketch understanding problems. The first is sketch parsing, the task of clustering pen strokes or geometric primitives into individual symbols. The second is symbol recognition, the task of classifying symbols once they have been located by a parser. We have used the techniques that we have developed to construct several pen-based engineering analysis tools. These are used here as examples to illustrate our methods. We have also begun to use our techniques to create pen-based tutoring systems that scaffold students in solving problems in the same way they would ordinarily solve them with paper and pencil. The chapter concludes with a brief discussion of these systems.
Deriving pathway maps from automated text analysis using a grammar-based approach.
Olsson, Björn; Gawronska, Barbara; Erlendsson, Björn
2006-04-01
We demonstrate how automated text analysis can be used to support the large-scale analysis of metabolic and regulatory pathways by deriving pathway maps from textual descriptions found in the scientific literature. The main assumption is that correct syntactic analysis combined with domain-specific heuristics provides a good basis for relation extraction. Our method uses an algorithm that searches through the syntactic trees produced by a parser based on a Referent Grammar formalism, identifies relations mentioned in the sentence, and classifies them with respect to their semantic class and epistemic status (facts, counterfactuals, hypotheses). The semantic categories used in the classification are based on the relation set used in KEGG (Kyoto Encyclopedia of Genes and Genomes), so that pathway maps using KEGG notation can be automatically generated. We present the current version of the relation extraction algorithm and an evaluation based on a corpus of abstracts obtained from PubMed. The results indicate that the method is able to combine a reasonable coverage with high accuracy. We found that 61% of all sentences were parsed, and 97% of the parse trees were judged to be correct. The extraction algorithm was tested on a sample of 300 parse trees and was found to produce correct extractions in 90.5% of the cases.
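The authors' system searches syntactic trees from a Referent Grammar parser; purely as a schematic of the tree-walking idea, here is a Python sketch using nltk's Tree over an invented biomedical example sentence. The labels and the relation triple are illustrative, not the paper's algorithm:

```python
# Sketch: extract an (agent, verb, target) triple from a constituency
# parse of a simple "X phosphorylates Y"-style sentence.
from nltk import Tree

parse = Tree.fromstring(
    "(S (NP (NN RAF1)) (VP (VBZ phosphorylates) (NP (NN MEK1))))")

def extract_relation(tree):
    if tree.label() != "S":
        return None
    np, vp = tree[0], tree[1]
    agent = " ".join(np.leaves())
    verb = vp[0].leaves()[0]             # head verb of the VP
    target = " ".join(vp[1].leaves())
    return agent, verb, target

print(extract_relation(parse))   # ('RAF1', 'phosphorylates', 'MEK1')
```

A real extractor would additionally classify the verb against a KEGG-style relation inventory and record the epistemic status of the statement, as the abstract describes.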
An automatic indexing method for medical documents.
Wagner, M. M.
1991-01-01
This paper describes MetaIndex, an automatic indexing program that creates symbolic representations of documents for the purpose of document retrieval. MetaIndex uses a simple transition network parser to recognize a language that is derived from the set of main concepts in the Unified Medical Language System Metathesaurus (Meta-1). MetaIndex uses a hierarchy of medical concepts, also derived from Meta-1, to represent the content of documents. The goal of this approach is to improve document retrieval performance by better representation of documents. An evaluation method is described, and the performance of MetaIndex on the task of indexing the Slice of Life medical image collection is reported. PMID:1807564
NOBLAST and JAMBLAST: New Options for BLAST and a Java Application Manager for BLAST results.
Lagnel, Jacques; Tsigenopoulos, Costas S; Iliopoulos, Ioannis
2009-03-15
NOBLAST (New Options for BLAST) is an open source program that provides a new user-friendly tabular output format for various NCBI BLAST programs (Blastn, Blastp, Blastx, Tblastn, Tblastx, Mega BLAST and Psi BLAST) without any use of a parser, and provides E-value correction in the case of a segmented BLAST database. JAMBLAST, using the NOBLAST output, allows the user to manage, view and filter the BLAST hits using a number of selection criteria. A distribution package of NOBLAST and JAMBLAST including a detailed installation procedure is freely available from http://sourceforge.net/projects/JAMBLAST/ and http://sourceforge.net/projects/NOBLAST. Supplementary data are available at Bioinformatics online.
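A minimal sketch of the kind of criteria-based hit filtering JAMBLAST provides, written against the common 12-column BLAST tabular layout; NOBLAST's own column order may differ, and the file name and thresholds are illustrative:

```python
# Sketch: stream BLAST tabular output and keep hits passing simple filters.
import csv

def best_hits(path, max_evalue=1e-5, min_identity=90.0):
    with open(path) as f:
        for row in csv.reader(f, delimiter="\t"):
            query, subject = row[0], row[1]
            identity, evalue = float(row[2]), float(row[10])
            if evalue <= max_evalue and identity >= min_identity:
                yield query, subject, identity, evalue

for hit in best_hits("blast_output.tsv"):
    print(hit)
```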
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to different VERA codes.
XAFSmass: a program for calculating the optimal mass of XAFS samples
NASA Astrophysics Data System (ADS)
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows-based program XAFSmass: 1) it is truly platform independent, as provided by the Python language; 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
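Not XAFSmass's actual parser, but a minimal recursive sketch of the feature it highlights: expanding a chemical formula with nested parentheses into element counts. The tokenizer and the example formula are illustrative:

```python
# Sketch: parse a chemical formula with parentheses into element counts.
import re
from collections import Counter

TOKEN = re.compile(r"([A-Z][a-z]?|\(|\))(\d*)")

def parse_formula(formula):
    stack = [Counter()]                     # one Counter per nesting level
    for symbol, count in TOKEN.findall(formula):
        n = int(count) if count else 1
        if symbol == "(":
            stack.append(Counter())         # open a nested group
        elif symbol == ")":
            group = stack.pop()
            for el, c in group.items():
                stack[-1][el] += c * n      # multiply the whole group
        else:
            stack[-1][symbol] += n
    return dict(stack[-1])

print(parse_formula("Ca(OH)2"))   # {'Ca': 1, 'O': 2, 'H': 2}
```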
Parsing Citations in Biomedical Articles Using Conditional Random Fields
Zhang, Qing; Cao, Yong-Gang; Yu, Hong
2011-01-01
Citations are used ubiquitously in biomedical full-text articles and play an important role in representing both the rhetorical structure and the semantic content of the articles. As a result, text mining systems will significantly benefit from a tool that automatically extracts the content of a citation. In this study, we applied the supervised machine-learning algorithm Conditional Random Fields (CRFs) to automatically parse a citation into its fields (e.g., Author, Title, Journal, and Year). With a subset of HTML-format open-access PubMed Central articles, we report an overall 97.95% F1-score. The citation parser can be accessed at: http://www.cs.uwm.edu/~qing/projects/cithit/index.html. PMID:21419403
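A schematic of the token-labeling setup such a parser uses, assuming the sklearn-crfsuite package rather than the authors' original implementation; the features, example tokens, and tag set are illustrative:

```python
# Sketch: label citation tokens with field tags using a linear-chain CRF.
import sklearn_crfsuite

def token_features(tokens, i):
    t = tokens[i]
    return {
        "lower": t.lower(),
        "is_digit": t.isdigit(),     # years, volume numbers
        "is_title": t.istitle(),     # author surnames, journal words
        "has_period": "." in t,      # initials, abbreviations
    }

tokens = ["Smith", "J.", "Parsing", "citations.", "J", "Biomed", "2011"]
labels = ["AUTHOR", "AUTHOR", "TITLE", "TITLE", "JOURNAL", "JOURNAL", "YEAR"]

X = [[token_features(tokens, i) for i in range(len(tokens))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)                 # a real system trains on many citations
print(crf.predict(X)[0])
```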
Development of clinical contents model markup language for electronic health records.
Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon
2012-09-01
To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on analysis of the structure and characteristics of CCM in the clinical domain, we designed the extensible markup language (XML) based CCM markup language (CCML) schema manually. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.
Semi-automated ontology generation and evolution
NASA Astrophysics Data System (ADS)
Stirtzinger, Anthony P.; Anken, Craig S.
2009-05-01
Extending the notion of data models or object models, an ontology can provide rich semantic definition not only to the meta-data but also to the instance data of domain knowledge, making these semantic definitions available in machine-readable form. However, the generation of an effective ontology is a difficult task involving considerable labor and skill. This paper discusses an Ontology Generation and Evolution Processor (OGEP) aimed at automating this process, requesting user input only when unresolvable ambiguous situations occur. OGEP directly attacks the main barrier which prevents automated (or self-learning) ontology generation: the ability to understand the meaning of artifacts and the relationships the artifacts have to the domain space. OGEP leverages existing lexical-to-ontological mappings in the form of WordNet and the Suggested Upper Merged Ontology (SUMO), integrated with a semantic pattern-based structure referred to as the Semantic Grounding Mechanism (SGM) and implemented as a Corpus Reasoner. The OGEP processing is initiated by a Corpus Parser performing a lexical analysis of the corpus, reading in a document (or corpus) and preparing it for processing by annotating words and phrases. After the Corpus Parser is done, the Corpus Reasoner uses the part-of-speech output to determine the semantic meaning of a word or phrase. The Corpus Reasoner is the crux of the OGEP system, analyzing, extrapolating, and evolving data from free text into cohesive semantic relationships. The Semantic Grounding Mechanism provides a basis for identifying and mapping semantic relationships. By blending together the WordNet lexicon and the SUMO ontological layout, the SGM is given breadth and depth in its ability to extrapolate semantic relationships between domain entities. The combination of all these components results in an innovative approach to user-assisted semantic-based ontology generation. This paper describes the OGEP technology in the context of the architectural components referenced above and identifies a potential technology transition path to Scott AFB's Tanker Airlift Control Center (TACC), which serves as the Air Operations Center (AOC) for the Air Mobility Command (AMC).
Locating Anomalies in Complex Data Sets Using Visualization and Simulation
NASA Technical Reports Server (NTRS)
Panetta, Karen
2001-01-01
The research goals are to create a simulation framework that can accept any combination of models written at the gate or behavioral level. The framework provides the ability to fault-simulate and to create scenarios of experiments using concurrent simulation. In order to meet these goals we have had to fulfill the following requirements: the ability to accept models written in VHDL, Verilog or the C languages; the ability to propagate faults through any model type; the ability to create experiment scenarios efficiently without generating every possible combination of variables; and the ability to accept a diversity of fault models beyond the single stuck-at model. Major development work has gone into a parser that can accept models written in various languages. This work has generated considerable attention from other universities and industry for its flexibility and usefulness. The parser uses LEX and YACC to parse Verilog and C. We have also utilized our industrial partnership with Alternative Systems Inc. to import VHDL into our simulator. For multilevel simulation, we needed to modify the simulator architecture to accept models that contained multiple outputs. This enabled us to accept behavioral components. The next major accomplishment was the addition of "functional fault models". Functional fault models change the behavior of a gate or model. For example, a bridging fault can make an OR gate behave like an AND gate. This has applications beyond fault simulation. This modeling flexibility will make the simulator more useful for doing verification and model comparison. For instance, two or more versions of an ALU can be comparatively simulated in a single execution. The results will show where and how the models differed so that the performance and correctness of the models may be evaluated. A considerable amount of time has been dedicated to validating the simulator performance on larger models provided by industry and other universities.
Brain responses to filled gaps.
Hestvik, Arild; Maxfield, Nathan; Schwartz, Richard G; Shafer, Valerie
2007-03-01
An unresolved issue in the study of sentence comprehension is whether the process of gap-filling is mediated by the construction of empty categories (traces), or whether the parser relates fillers directly to the associated verb's argument structure. We conducted an event-related potentials (ERP) study that used the violation paradigm to examine the time course and spatial distribution of brain responses to ungrammatically filled gaps. The results indicate that the earliest brain response to the violation is an early left anterior negativity (eLAN). This ERP indexes an early phase of pure syntactic structure building, temporally preceding ERPs that reflect semantic integration and argument structure satisfaction. The finding is interpreted as evidence that gap-filling is mediated by structurally predicted empty categories, rather than directly by argument structure operations.
Development of Clinical Contents Model Markup Language for Electronic Health Records
Yun, Ji-Hyun; Kim, Yoon
2012-01-01
Objectives To develop dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Methods Based on analysis of the structure and characteristics of CCM in the clinical domain, we designed extensible markup language (XML) based CCM markup language (CCML) schema manually. Results CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. Conclusions CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied for existing electronic health record systems. PMID:23115739
Wright, Adam; Sittig, Dean F.
2008-01-01
In this paper we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. PMID:18434256
An error-resistant linguistic protocol for air traffic control
NASA Technical Reports Server (NTRS)
Cushing, Steven
1989-01-01
The research results described here are intended to enhance the effectiveness of the DATALINK interface that is scheduled by the Federal Aviation Administration (FAA) to be deployed during the 1990's to improve the safety of various aspects of aviation. While voice has a natural appeal as the preferred means of communication both among humans themselves and between humans and machines as the form of communication that people find most convenient, the complexity and flexibility of natural language are problematic, because of the confusions and misunderstandings that can arise as a result of ambiguity, unclear reference, intonation peculiarities, implicit inference, and presupposition. The DATALINK interface will avoid many of these problems by replacing voice with vision and speech with written instructions. This report describes results achieved to date on an on-going research effort to refine the protocol of the DATALINK system so as to avoid many of the linguistic problems that still remain in the visual mode. In particular, a working prototype DATALINK simulator system has been developed consisting of an unambiguous, context-free grammar and parser, based on the current air-traffic-control language and incorporated into a visual display involving simulated touch-screen buttons and three levels of menu screens. The system is written in the C programming language and runs on the Macintosh II computer. After reviewing work already done on the project, new tasks for further development are described.
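The prototype's grammar is written in C and covers the full air-traffic-control phraseology; purely as an illustration of an unambiguous context-free grammar for one clearance type, here is a tiny sketch using nltk's chart parser. The rule names and vocabulary are invented:

```python
# Sketch: a toy unambiguous CFG for a single ATC clearance form.
import nltk

grammar = nltk.CFG.fromstring("""
    CMD    -> CSIGN ACTION
    CSIGN  -> 'delta' NUM
    ACTION -> 'climb' 'to' LEVEL | 'descend' 'to' LEVEL
    LEVEL  -> 'flight' 'level' NUM
    NUM    -> 'one' | 'two' | 'three'
""")
parser = nltk.ChartParser(grammar)

tokens = "delta two climb to flight level three".split()
for tree in parser.parse(tokens):   # exactly one parse: the grammar is unambiguous
    tree.pretty_print()
```

Restricting the protocol to such a grammar is what lets the interface rule out the ambiguity, unclear reference, and implicit inference problems the passage describes.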
NASA Technical Reports Server (NTRS)
Himer, J. T.
1992-01-01
Fortran has largely enjoyed prominence for the past few decades as the computer programming language of choice for numerically intensive scientific, engineering, and process control applications. Fortran's well-understood static language syntax has allowed the resulting parsers and compiler optimization technologies to often generate among the most efficient and fastest run-time executables, particularly on high-end scalar and vector supercomputers. Computing architectures and paradigms have changed considerably since the last ANSI/ISO Fortran release in 1978, and while FORTRAN 77 has more than survived, its aged features provide only partial functionality for today's demanding computing environments. The simple block procedural languages have been necessarily evolving, or giving way, to specialized supercomputing, network resource, and object-oriented paradigms. To address these new computing demands, ANSI has worked for the last 12 years, with three international public reviews, to deliver Fortran 90. Fortran 90 has superseded and replaced ISO FORTRAN 77 internationally as the sole Fortran standard; in the US, Fortran 90 is expected to be adopted as the ANSI standard this summer, coexisting with ANSI FORTRAN 77 until at least 1996. The development path and current state of Fortran will be briefly described, highlighting the many new Fortran 90 syntactic and semantic additions which support (among others): free-form source; array syntax; new control structures; modules and interfaces; pointers; derived data types; dynamic memory; enhanced I/O; operator overloading; data abstraction; optional user arguments; new intrinsics for arrays, bit manipulation, and system inquiry; and enhanced portability through better generic control of underlying system arithmetic models. Examples from dynamical astronomy and signal and image processing will attempt to illustrate Fortran 90's applicability to today's general scalar, vector, and parallel scientific and engineering requirements and object-oriented programming paradigms. Time permitting, current work proceeding on the future development of Fortran 2000 and collateral standards will be introduced.
DOEDEF Software System, Version 2. 2: Operational instructions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meirans, L.
The DOEDEF (Department of Energy Data Exchange Format) Software System is a collection of software routines written to facilitate the manipulation of IGES (Initial Graphics Exchange Specification) data. Typically, the IGES data has been produced by the IGES processors for a Computer-Aided Design (CAD) system, and the data manipulations are user-defined "flavoring" operations. The DOEDEF Software System is used in conjunction with the RIM (Relational Information Management) DBMS from Boeing Computer Services (Version 7, UD18 or higher). The three major pieces of the software system are: the Parser, which reads an ASCII IGES file and converts it to the RIM database equivalent; the Kernel, which provides the user with IGES-oriented interface routines to the database; and the Filewriter, which writes the RIM database to an IGES file.
Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data
NASA Astrophysics Data System (ADS)
Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.
2013-05-01
Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition LANML is flexible; ensuring future extensions of the format will remain backward compatible with analysis software. The XML technology is at the heart of communicating over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web based applications, educational outreach and efficient collaboration between research groups.
A study of actions in operative notes.
Wang, Yan; Pakhomov, Serguei; Burkart, Nora E; Ryan, James O; Melton, Genevieve B
2012-01-01
Operative notes contain rich information about techniques, instruments, and materials used in procedures. To assist development of effective information extraction (IE) techniques for operative notes, we investigated the sublanguage used to describe actions within the operative report 'procedure description' section. Deep parsing results of 362,310 operative notes with an expanded Stanford parser using the SPECIALIST Lexicon resulted in 200 verbs (92% coverage) including 147 action verbs. Nominal action predicates for each action verb were gathered from WordNet, SPECIALIST Lexicon, New Oxford American Dictionary and Stedman's Medical Dictionary. Coverage gaps were seen in existing lexical, domain, and semantic resources (Unified Medical Language System (UMLS) Metathesaurus, SPECIALIST Lexicon, WordNet and FrameNet). Our findings demonstrate the need to construct surgical domain-specific semantic resources for IE from operative notes.
BIOSPIDA: A Relational Database Translator for NCBI.
Hagen, Matthew S; Lee, Eva K
2010-11-13
As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, PubMed, MMDB, and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools enable research scientists to locally integrate databases from NCBI without significant workload or development time. PMID:21347013
NASA Astrophysics Data System (ADS)
Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry
2010-05-01
The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements, and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would never ordinarily interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development
Power estimation on functional level for programmable processors
NASA Astrophysics Data System (ADS)
Schneider, M.; Blume, H.; Noll, T. G.
2004-05-01
In this contribution, different approaches to power estimation for programmable processors are presented and evaluated concerning their capability to be applied to modern digital signal processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is laid on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on the separation of the processor architecture into functional blocks such as the processing unit, clock network, internal memory, and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. Through a parser-based automated analysis of assembler codes of the systems to be estimated, the input parameters of the arithmetic functions, such as the achieved degree of parallelism or the kind and number of memory accesses, can be computed. This approach is demonstrated and evaluated using two modern digital signal processors and a variety of basic algorithms of digital signal processing. The resulting estimation values for the inspected algorithms are compared to physically measured values, yielding a very small maximum estimation error of 3%.
ULTRA: Universal Grammar as a Universal Parser
Medeiros, David P.
2018-01-01
A central concern of generative grammar is the relationship between hierarchy and word order, traditionally understood as two dimensions of a single syntactic representation. A related concern is directionality in the grammar. Traditional approaches posit process-neutral grammars, embodying knowledge of language, put to use with infinite facility both for production and comprehension. This has crystallized in the view of Merge as the central property of syntax, perhaps its only novel feature. A growing number of approaches explore grammars with different directionalities, often with more direct connections to performance mechanisms. This paper describes a novel model of universal grammar as a one-directional, universal parser. Mismatch between word order and interpretation order is pervasive in comprehension; in the present model, word order is language-particular and interpretation order (i.e., hierarchy) is universal. These orders are not two dimensions of a unified abstract object (e.g., precedence and dominance in a single tree); rather, both are temporal sequences, and UG is an invariant real-time procedure (based on Knuth's stack-sorting algorithm) transforming word order into hierarchical order. This shift in perspective has several desirable consequences. It collapses linearization, displacement, and composition into a single performance process. The architecture provides a novel source of brackets (labeled unambiguously and without search), which are understood not as part-whole constituency relations, but as storage and retrieval routines in parsing. It also explains why neutral word order within single syntactic cycles avoids 213-like permutations. The model identifies cycles as extended projections of lexical heads, grounding the notion of phase. This is achieved with a universal processor, dispensing with parameters. The empirical focus is word order in noun phrases. This domain provides some of the clearest evidence for 213-avoidance as a cross-linguistic word order generalization. Importantly, recursive phrase structure “bottoms out” in noun phrases, which are typically a single cycle (though further cycles may be embedded, e.g., relative clauses). By contrast, a simple transitive clause plausibly involves two cycles (vP and CP), embedding further nominal cycles. In the present theory, recursion is fundamentally distinct from structure-building within a single cycle, and different word order restrictions might emerge in larger domains like clauses. PMID:29497394
Wright, Adam; Sittig, Dean F
2008-12-01
In this paper, we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record linked to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care, and a simple personal health record. Some of these use cases utilize existing decision support systems that are either commercially or freely available at present and were developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing by other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. The SANDS architecture for decision support has several significant advantages over other architectures for clinical decision support; the most salient of these are described in the paper.
Object-oriented parsing of biological databases with Python.
Ramu, C; Gemünd, C; Gibson, T J
2000-07-01
While database activities in the biological area are increasing rapidly, rather little has been done on parsing the databases in a simple and object-oriented way. We present here an elegant, simple, yet powerful way of parsing biological flat-file databases, taking EMBL, SWISS-PROT and GenBank as examples. EMBL and SWISS-PROT do not differ much in format structure; GenBank has a very different format structure from EMBL and SWISS-PROT. Extracting the desired fields in an entry (for example, a sub-sequence with an associated feature) for later analysis is a constant need in the biological sequence-analysis community: this is illustrated with tools to make new splice-site databases. The interface to the parser is abstract in the sense that access to all the databases is independent of their different formats, since parsing instructions are hidden.
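To make the object-oriented flat-file idea concrete, here is a minimal Python sketch (not the authors' library) that walks an EMBL-style file, where each line starts with a two-letter code and '//' terminates an entry, and yields entry objects so that calling code never sees the format details.

```python
# Minimal sketch of object-oriented flat-file parsing: each database
# entry is exposed as an object, hiding line-code details. Line codes
# follow the EMBL convention (two-letter codes, '//' as terminator).
from dataclasses import dataclass, field

@dataclass
class Entry:
    fields: dict = field(default_factory=dict)

    def get(self, code):
        # Join continuation lines for a field into one string.
        return " ".join(self.fields.get(code, []))

def parse_embl(stream):
    entry = Entry()
    for line in stream:
        if line.startswith("//"):              # end of one entry
            yield entry
            entry = Entry()
            continue
        code, _, rest = line.rstrip("\n").partition("   ")
        if code:
            entry.fields.setdefault(code, []).append(rest.strip())

sample = ["ID   AB000263;", "DE   mRNA, complete cds.", "//\n"]
for e in parse_embl(sample):
    print(e.get("ID"), "|", e.get("DE"))
```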
NASA Technical Reports Server (NTRS)
Liebowitz, J.
1986-01-01
The development of an expert system prototype for software functional requirement determination for NASA Goddard's Command Management System, as part of its process of transforming general requests into specific near-earth satellite commands, is described. The present knowledge base was formulated through interactions with domain experts, and was then linked to the existing Knowledge Engineering Systems (KES) expert system application generator. Steps in the knowledge-base development include problem-oriented attribute hierarchy development, knowledge management approach determination, and knowledge base encoding. The KES Parser and Inspector, in addition to backcasting and analogical mapping, were used to validate the expert system-derived requirements for one of the major functions of a spacecraft, the Solar Maximum Mission. Knowledge refinement, evaluation, and implementation procedures of the expert system were then accomplished.
Efficient processing of MPEG-21 metadata in the binary domain
NASA Astrophysics Data System (ADS)
Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas
2005-10-01
XML-based metadata is widely adopted across different communities, and plenty of commercial and open-source tools for processing and transforming it are available on the market. However, all of these tools have one thing in common: they operate on plain-text-encoded metadata, which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such metadata encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overhead, i.e., within the binary domain. To this end, we have developed an event-based push parser for BiM-encoded metadata which transforms the metadata with a limited set of processing instructions, based on traditional XML transformation techniques, operating on bit patterns instead of cost-intensive string comparisons.
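The push-parser pattern is easiest to see in the plain-text domain. The sketch below uses Python's standard-library SAX parser as a stand-in: the parser pushes events to a handler, which transforms content on the fly without building a document tree; BiM applies the same event-driven idea to bit patterns instead of text.

```python
# Minimal sketch of event-based (push) parsing: the parser drives the
# handler with start/characters/end events, so content can be
# transformed while streaming, without a full in-memory tree.
import xml.sax

class Retitler(xml.sax.ContentHandler):
    """Toy transformation: upper-case <title> text as it streams by."""
    def __init__(self):
        super().__init__()
        self.in_title = False
    def startElement(self, name, attrs):
        self.in_title = (name == "title")
    def characters(self, content):
        if self.in_title:
            print("title event:", content.upper())
    def endElement(self, name):
        self.in_title = False

xml.sax.parseString(b"<doc><title>clip one</title><body>...</body></doc>",
                    Retitler())
```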
Syntactic Prediction in Language Comprehension: Evidence From Either…or
Staub, Adrian; Clifton, Charles
2006-01-01
Readers’ eye movements were monitored as they read sentences in which two noun phrases or two independent clauses were connected by the word or (NP-coordination and S-coordination, respectively). The word either could be present or absent earlier in the sentence. When either was present, the material immediately following or was read more quickly, across both sentence types. In addition, there was evidence that readers misanalyzed the S-coordination structure as an NP-coordination structure only when either was absent. The authors interpret the results as indicating that the word either enabled readers to predict the arrival of a coordination structure; this predictive activation facilitated processing of this structure when it ultimately arrived, and in the case of S-coordination sentences, enabled readers to avoid the incorrect NP-coordination analysis. The authors argue that these results support parsing theories according to which the parser can build predictable syntactic structure before encountering the corresponding lexical input. PMID:16569157
Suggestions for Improvement of User Access to GOCE L2 Data
NASA Astrophysics Data System (ADS)
Tscherning, C. C.
2011-07-01
ESA has required that most GOCE L2 products be delivered in XML format. This creates difficulties for users, because a parser written in Perl is needed to convert the files to files without XML tags. Several products, such as the spherical harmonic coefficients, are, however, made available in standard form through the International Center for Global Gravity Field Models. The variance-covariance information for the gravity field models is only available without XML tags. It is suggested that all XML products be made available in the Virtual Data Archive as files without tags. Besides making the data directly usable by a FORTRAN program, this would also reduce the size (storage requirements) of the products to about 30% of their current size. A further reduction in storage could be achieved by tuning the number of digits for the individual quantities in the products so that it corresponds to the actual number of significant digits.
DIEGO: detection of differential alternative splicing using Aitchison's geometry.
Doose, Gero; Bernhart, Stephan H; Wagener, Rabea; Hoffmann, Steve
2018-03-15
Alternative splicing is a biological process of fundamental importance in most eukaryotes. It plays a pivotal role in cell differentiation and gene regulation and has been associated with a number of different diseases. The widespread availability of RNA-Sequencing capacities allows an ever closer investigation of differentially expressed isoforms. However, most tools for differential alternative splicing (DAS) analysis do not take split reads, i.e. the most direct evidence for a splice event, into account. Here, we present DIEGO, a compositional data analysis method able to detect DAS between two sets of RNA-Seq samples based on split reads. The Python tool DIEGO works without isoform annotations and is fast enough to analyze large experiments while being robust and accurate. We provide Python and Perl parsers for common formats. The software is available at: www.bioinf.uni-leipzig.de/Software/DIEGO. steve@bioinf.uni-leipzig.de. Supplementary data are available at Bioinformatics online.
Integrated verification and testing system (IVTS) for HAL/S programs
NASA Technical Reports Server (NTRS)
Senn, E. H.; Ames, K. R.; Smith, K. A.
1983-01-01
The IVTS is a large software system designed to support user-controlled verification analysis and testing activities for programs written in the HAL/S language. The system is composed of a user interface and user command language, analysis tools and an organized data base of host system files. The analysis tools are of four major types: (1) static analysis, (2) symbolic execution, (3) dynamic analysis (testing), and (4) documentation enhancement. The IVTS requires a split HAL/S compiler, divided at the natural separation point between the parser/lexical analyzer phase and the target machine code generator phase. The IVTS uses the internal program form (HALMAT) between these two phases as primary input for the analysis tools. The dynamic analysis component requires some way to 'execute' the object HAL/S program. The execution medium may be an interpretive simulation or an actual host or target machine.
Software Vulnerability Taxonomy Consolidation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polepeddi, Sriram S.
2004-12-07
In today's environment, computers and networks are increasingly exposed to a number of software vulnerabilities. Information about these vulnerabilities is collected and disseminated via various large publicly available databases such as BugTraq, OSVDB and ICAT. Each of these databases, individually, does not cover all aspects of a vulnerability, and they lack a standard format among them, making it difficult for end-users to easily compare vulnerabilities. A central database of vulnerabilities has not been available until today for a number of reasons, such as the non-uniform methods by which current vulnerability database providers receive information, disagreement over which features of a particular vulnerability are important and how best to present them, and the non-utility of the information presented in many databases. The goal of this software vulnerability taxonomy consolidation project is to address the need for a universally accepted vulnerability taxonomy that classifies vulnerabilities in an unambiguous manner. A consolidated vulnerability database (CVDB) was implemented that coalesces and organizes vulnerability data from disparate data sources. Based on the work done in this paper, there is strong evidence that a consolidated taxonomy encompassing and organizing all relevant data can be achieved. However, three primary obstacles remain: the lack of a common "primary key" for referencing, unstructured and free-form descriptions of necessary vulnerability data, and the lack of data on all aspects of a vulnerability. This work has only considered data that can be unambiguously extracted from various data sources by straightforward parsers. It is felt that even with the use of more advanced information mining tools, which can wade through the sea of unstructured vulnerability data, this integration methodology would still provide repeatable, unambiguous, and exhaustive results. Though the goal of coalescing all available data of use to system administrators, software developers, and vulnerability researchers is not yet achieved, this work has resulted in the most exhaustive collection of vulnerability data to date.
A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.
Smelter, Andrey; Moseley, Hunter N B
2018-01-01
The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study are deposited, stored, and accessed via files in the domain-specific 'mwTab' flat-file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat-file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and the required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework and a continuous integration service to run tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. The package also provides facilities to convert 'mwTab' files into a JSON-formatted equivalent, enabling easy reuse of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
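A hedged sketch of the package in use follows. The read_files() generator and dict-style section access reflect the design described above, but the exact function and key names are assumptions to be checked against the package's 'User Guide' before use.

```python
# Hedged sketch of the 'mwtab' package described above. read_files()
# and the section/key names are assumptions based on the documented
# dictionary-style design; verify against the package's User Guide.
import json
import mwtab  # pip install mwtab

for mwfile in mwtab.read_files("ST000001_AN000001.txt"):  # hypothetical file
    # An mwTab file object behaves like a dict of sections, so standard
    # dictionary access applies (section and key names assumed here).
    print(mwfile["METABOLOMICS WORKBENCH"]["STUDY_ID"])
    # JSONized equivalent, as described in the abstract.
    with open("study.json", "w") as fh:
        json.dump(mwfile, fh, indent=2)
```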
HTSeq--a Python framework to work with high-throughput sequencing data.
Anders, Simon; Pyl, Paul Theodor; Huber, Wolfgang
2015-01-15
A large choice of tools exists for many standard tasks in the analysis of high-throughput sequencing (HTS) data. However, once a project deviates from standard workflows, custom scripts are needed. We present HTSeq, a Python library to facilitate the rapid development of such scripts. HTSeq offers parsers for many common data formats in HTS projects, as well as classes to represent data such as genomic coordinates, sequences, sequencing reads, alignments, gene model information and variant calls, and provides data structures that allow for querying via genomic coordinates. We also present htseq-count, a tool developed with HTSeq that preprocesses RNA-Seq data for differential expression analysis by counting the overlap of reads with genes. HTSeq is released as open-source software under the GNU General Public Licence and available from http://www-huber.embl.de/HTSeq or from the Python Package Index at https://pypi.python.org/pypi/HTSeq. © The Author 2014. Published by Oxford University Press.
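The sketch below shows the counting pattern that htseq-count is built on, as presented in the HTSeq documentation: build a genomic array of exon sets from a GFF file, then count reads that overlap exactly one gene. The file names are placeholders; consult the HTSeq documentation for the authoritative version.

```python
# Sketch of the HTSeq read-counting pattern (file names are
# placeholders): index exons by genomic interval, then assign each
# aligned read to a gene when the overlap is unambiguous.
import collections
import HTSeq

exons = HTSeq.GenomicArrayOfSets("auto", stranded=False)
for feature in HTSeq.GFF_Reader("annotation.gtf"):
    if feature.type == "exon":
        exons[feature.iv] += feature.attr["gene_id"]

counts = collections.Counter()
for aln in HTSeq.BAM_Reader("sample.bam"):
    if not aln.aligned:
        continue
    gene_ids = set()
    for iv, step_set in exons[aln.iv].steps():
        gene_ids |= step_set
    if len(gene_ids) == 1:          # count unambiguous overlaps only
        counts[gene_ids.pop()] += 1

for gene, n in counts.most_common(10):
    print(gene, n)
```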
Activity Scratchpad Prototype: Simplifying the Rover Activity Planning Cycle
NASA Technical Reports Server (NTRS)
Abramyan, Lucy
2005-01-01
The Mars Exploration Rover mission depends on the Science Activity Planner as its primary interface to the Spirit and Opportunity rovers. Scientists alternate between a series of mouse clicks and keyboard inputs to create a set of instructions for the rovers. To accelerate planning by minimizing mouse usage, a rover planning editor should receive the majority of inputted commands from the keyboard. A thorough investigation of the Eclipse platform's Java editor provided an understanding of the base model for the Activity Scratchpad. Desirable Eclipse features can be mapped to specific rover planning commands, such as auto-completion for activity titles and content assist for target names. A custom editor imitating the Java editor's features was created with an XML parser for experimentation. The prototype editor minimized the effort spent on redundant tasks and significantly improved the visual representation of XML syntax by highlighting keywords, coloring rules, folding projections, and providing hover assist, templates, and an outline view of the code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohn, Michael; Adams, Paul
2006-09-05
The L3 system is a computational steering environment for image processing and scientific computing. It consists of an interactive graphical language and interface. Its purpose is to help advanced users control their computational software and to assist in the management of data accumulated during numerical experiments. L3 provides a combination of features not found in other environments: textual and graphical construction of programs; persistence of programs and associated data; direct mapping between the scripts, the parameters, and the produced data; implicit hierarchical data organization; full programmability, including conditionals and functions; and incremental execution of programs. The software includes the L3 language and the graphical environment. The language is a single-assignment functional language; the implementation consists of a lexer, parser, interpreter, storage handler, and editing support. The graphical environment is an event-driven nested list viewer/editor providing graphical elements corresponding to the language. These elements are both the representation of a user's program and active interfaces to the values computed by that program.
Khumrin, Piyapong; Chumpoo, Pitupoom
2016-03-01
Electrocardiography is one of the most important non-invasive diagnostic tools for diagnosing coronary heart disease. The electrocardiography information system in Maharaj Nakorn Chiang Mai Hospital required a massive manual labor effort. In this article, we propose an approach toward the integration of heterogeneous electrocardiography data and the implementation of an integrated electrocardiography information system into the existing Hospital Information System. The system integrates different electrocardiography formats into a consistent electrocardiography rendering using Java software. The interface acts as middleware to seamlessly integrate the different electrocardiography formats. Instead of using a common electrocardiography protocol, we applied a central format, based on Java classes, for mapping the different electrocardiography formats; it contains a specific parser for each electrocardiography format to acquire the same information. Our observations showed that the new system improved the effectiveness of data management, workflow, and data quality; increased the availability of information; and ultimately improved quality of care. © The Author(s) 2014.
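The central-format approach is essentially an adapter pattern: downstream code depends on one record shape, and each source format gets a small parser that maps into it. The original system implements this with Java classes; the sketch below shows the same pattern in Python, with invented format names and fields.

```python
# Minimal sketch of the central-format idea: one record class plus a
# per-format parser, so downstream code sees a single shape. Format
# names and field layouts are invented for illustration.
from dataclasses import dataclass

@dataclass
class ECGRecord:
    patient_id: str
    sample_rate_hz: int
    leads: dict  # lead name -> list of samples

def parse_format_a(raw: dict) -> ECGRecord:
    return ECGRecord(raw["pid"], raw["fs"], raw["signals"])

def parse_format_b(raw: dict) -> ECGRecord:
    return ECGRecord(raw["patient"]["id"], raw["hz"], raw["waveforms"])

PARSERS = {"A": parse_format_a, "B": parse_format_b}

def load(fmt: str, raw: dict) -> ECGRecord:
    # Dispatch to the format-specific parser; the caller only ever
    # receives the central ECGRecord shape.
    return PARSERS[fmt](raw)

rec = load("B", {"patient": {"id": "HN-42"}, "hz": 500,
                 "waveforms": {"I": [0, 1, 2]}})
print(rec.patient_id, rec.sample_rate_hz)
```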
Predictive processing of novel compounds: evidence from Japanese.
Hirose, Yuki; Mazuka, Reiko
2015-03-01
Our study argues that pre-head anticipatory processing operates at a level below the level of the sentence. A visual-world eye-tracking study demonstrated that, in processing of Japanese novel compounds, the compound structure can be constructed prior to the head if the prosodic information on the preceding modifier constituent signals that the Compound Accent Rule (CAR) is being applied. This prosodic cue rules out the single head analysis of the modifier noun, which would otherwise be a natural and economical choice. Once the structural representation for the head is computed in advance, the parser becomes faster in identifying the compound meaning. This poses a challenge to models maintaining that structural integration and word recognition are separate processes. At the same time, our results, together with previous findings, suggest the possibility that there is some degree of staging during the processing of different sources of information during the comprehension of compound nouns. Copyright © 2014 Elsevier B.V. All rights reserved.
RCrawler: An R package for parallel web crawling and scraping
NASA Astrophysics Data System (ADS)
Khalil, Salim; Fakir, Mohamed
RCrawler is a contributed R package for domain-based web crawling and content scraping. As the first implementation of a parallel web crawler in the R environment, RCrawler can crawl, parse, and store pages, extract content, and produce data that can be directly employed for web content mining applications. However, it is also flexible, and could be adapted to other applications. The main features of RCrawler are multi-threaded crawling, content extraction, and duplicate content detection. In addition, it includes functionalities such as URL and content-type filtering, depth-level controlling, and a robots.txt parser. Our crawler has a highly optimized system, and can download a large number of pages per second while being robust against certain crashes and spider traps. In this paper, we describe the design and functionality of RCrawler, and report on our experience of implementing it in an R environment, including the different optimizations that handle the limitations of R. Finally, we discuss our experimental results.
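The robots.txt check that a polite crawler performs is easy to demonstrate. The sketch below illustrates it with Python's standard library (urllib.robotparser) rather than RCrawler's own R API; the site URL is hypothetical.

```python
# Minimal sketch of robots.txt compliance checking, the same courtesy
# rule RCrawler enforces, shown here with Python's standard library.
# The URLs are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the robots.txt file

# Before downloading a page, ask whether our user agent may fetch it.
ok = rp.can_fetch("MyCrawler/1.0", "https://example.com/private/page.html")
print("allowed" if ok else "disallowed")
```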
KEGGtranslator: visualizing and converting the KEGG PATHWAY database to various formats.
Wrzodek, Clemens; Dräger, Andreas; Zell, Andreas
2011-08-15
The KEGG PATHWAY database provides a widely used service for metabolic and non-metabolic pathways. It contains manually drawn pathway maps with information about the genes, reactions, and relations contained therein. To store these pathways, KEGG uses KGML, a proprietary XML format. Parsers and translators are needed to process the pathway maps for use in other applications and algorithms. We have developed KEGGtranslator, an easy-to-use stand-alone application that can visualize and convert KGML-formatted XML files into multiple output formats. Unlike other translators, KEGGtranslator supports a plethora of output formats, is able to augment the information in translated documents (e.g. MIRIAM annotations) beyond the scope of the KGML document, and supplements fragmentary reactions within the pathway with their missing components to allow simulations on them. KEGGtranslator is freely available as a Java(™) Web Start application and for download at http://www.cogsys.cs.uni-tuebingen.de/software/KEGGtranslator/. KGML files can be downloaded from within the application. clemens.wrzodek@uni-tuebingen.de Supplementary data are available at Bioinformatics online.
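KGML itself is ordinary XML, so the kind of parsing KEGGtranslator performs before conversion can be sketched with Python's standard library. The sketch below reads only a small subset of KGML (entries and relations); the file name is a placeholder for a pathway file downloaded from KEGG.

```python
# Minimal sketch of reading a KGML pathway file: list the pathway's
# entries and the relations that connect them. Only a small subset of
# KGML is handled; the file name is a placeholder.
import xml.etree.ElementTree as ET

tree = ET.parse("hsa00010.xml")           # a KGML file from KEGG
pathway = tree.getroot()
print("pathway:", pathway.get("name"), "-", pathway.get("title"))

# KGML entries carry an internal id plus KEGG object names.
entries = {e.get("id"): e.get("name") for e in pathway.findall("entry")}

# Relations reference two entries by their internal KGML ids.
for rel in pathway.findall("relation"):
    print(entries[rel.get("entry1")], "->",
          entries[rel.get("entry2")], "(", rel.get("type"), ")")
```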
JS-MS: a cross-platform, modular javascript viewer for mass spectrometry signals.
Rosen, Jebediah; Handy, Kyle; Gillan, André; Smith, Rob
2017-11-06
Despite the ubiquity of mass spectrometry (MS), data processing tools can be surprisingly limited. To date, there is no stand-alone, cross-platform 3-D visualizer for MS data. Available visualization toolkits require large libraries with multiple dependencies and are not well suited for custom MS data processing modules, such as MS storage systems or data processing algorithms. We present JS-MS, a 3-D, modular JavaScript client application for viewing MS data. JS-MS provides several advantages over existing MS viewers, such as a dependency-free, browser-based, one-click, cross-platform install and better navigation interfaces. The client includes a modular Java backend with a novel streaming .mzML parser to demonstrate the API-based serving of MS data to the viewer. JS-MS enables custom MS data processing and evaluation by providing fast, 3-D visualization using improved navigation without dependencies. JS-MS is publicly available with a GPLv2 license at github.com/optimusmoose/jsms.
Zhou, Li; Plasek, Joseph M; Mahoney, Lisa M; Karipineni, Neelima; Chang, Frank; Yan, Xuemin; Chang, Fenny; Dimaggio, Dana; Goldman, Debora S.; Rocha, Roberto A.
2011-01-01
Clinical information is often coded using different terminologies and therefore is not interoperable. Our goal is to develop a general natural language processing (NLP) system, called the Medical Text Extraction, Reasoning and Mapping System (MTERMS), which encodes clinical text using different terminologies and simultaneously establishes dynamic mappings between them. MTERMS applies a modular, pipeline approach flowing from a preprocessor, semantic tagger, terminology mapper, context analyzer, and parser to structure inputted clinical notes. Evaluators manually reviewed 30 free-text and 10 structured outpatient clinical notes and compared them to MTERMS output. MTERMS achieved an overall F-measure of 90.6 for free-text notes and 94.0 for structured notes for medication and temporal information. The local medication terminology had 83.0% coverage, compared to RxNorm's 98.0% coverage, for free-text notes. 61.6% of mappings between the terminologies are exact matches. Capture of duration was significantly improved (91.7% vs. 52.5%) over systems in the third i2b2 challenge. PMID:22195230
Recognition of speaker-dependent continuous speech with KEAL
NASA Astrophysics Data System (ADS)
Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.
1989-04-01
A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, and word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry containing various phonological forms against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
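For readers unfamiliar with Earley parsing, the sketch below is a compact textbook Earley recognizer in Python (not KEAL's implementation): each chart item is a dotted rule with its origin position, and predict, scan, and complete steps are run to a fixpoint at every input position.

```python
# Textbook Earley recognizer sketch. Items are (head, body, dot, origin);
# terminals are any symbols that are not grammar keys.
def earley_recognize(tokens, grammar, start="S"):
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, body, 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                     # predict/complete to fixpoint
            changed = False
            for head, body, dot, origin in list(chart[i]):
                if dot < len(body) and body[dot] in grammar:    # predict
                    for prod in grammar[body[dot]]:
                        item = (body[dot], prod, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item); changed = True
                elif dot == len(body):                          # complete
                    for h2, b2, d2, o2 in list(chart[origin]):
                        if d2 < len(b2) and b2[d2] == head:
                            item = (h2, b2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item); changed = True
        if i < len(tokens):                                     # scan
            for head, body, dot, origin in chart[i]:
                if dot < len(body) and body[dot] == tokens[i]:
                    chart[i + 1].add((head, body, dot + 1, origin))
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])

grammar = {
    "S":  [("NP", "VP")],
    "NP": [("det", "noun"), ("noun",)],
    "VP": [("verb", "NP")],
}
print(earley_recognize(["det", "noun", "verb", "noun"], grammar))  # True
```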
Highs and Lows in English Attachment.
Grillo, Nino; Costa, João; Fernandes, Bruno; Santi, Andrea
2015-11-01
Grillo and Costa (2014) claim that Relative-Clause attachment ambiguity resolution is largely dependent on whether or not a Pseudo-Relative interpretation is available. Data from Italian, and other languages allowing Pseudo-Relatives, support this hypothesis. Pseudo-Relative availability, however, covaries with the semantics of the main predicate (e.g., perceptual vs. stative). Experiment 1 assesses whether this predicate distinction alone can account for prior attachment results by testing it with a language that disallows Pseudo-Relatives (i.e. English). Low Attachment was found independent of Predicate-Type. Predicate-Type did however have a minor modulatory role. Experiment 2 shows that English, traditionally classified as a Low Attachment language, can demonstrate High Attachment with sentences globally ambiguous between a Small-Clause and a reduced Relative-Clause interpretation. These results support a grammatical account of previous effects and provide novel evidence for the parser's preference of a Small-Clause over a Restrictive interpretation, crosslinguistically. Copyright © 2015 Elsevier B.V. All rights reserved.
Modular implementation of a digital hardware design automation system
NASA Astrophysics Data System (ADS)
Masud, M.
An automation system based on AHPL (A Hardware Programming Language) was developed. The project may be divided into three distinct phases: (1) upgrading AHPL to make it more universally applicable; (2) implementing a compiler for the language; and (3) illustrating how the compiler may be used to support several phases of design activities. Several new features were added to AHPL. These include: application-dependent parameters, multiple clocks, asynchronous results, functional registers, and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. The parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two databases from the AHPL description of a circuit. The first is a tabular representation of the circuit, and the second is a detailed interconnection linked list. The two databases provide a means to interface the compiler to application-dependent CAD systems.
Xiang, Ming; Grove, Julian; Giannakidou, Anastasia
2013-01-01
Previous psycholinguistic studies have shown that when forming a long-distance dependency in online processing, the parser sometimes accepts a sentence even though the required grammatical constraints are only partially met. A mechanistic account of how such errors arise sheds light on both the underlying linguistic representations involved and the processing mechanisms that put such representations together. In the current study, we contrast the negative polarity item (NPI) interference effect, as shown by the acceptance of an ungrammatical sentence like “The bills that democratic senators have voted for will ever become law,” with the well-known phenomenon of agreement attraction (“The key to the cabinets are … ”). On the surface, these two types of errors look alike and thereby can be explained as being driven by the same source: similarity-based memory interference. However, we argue that the linguistic representations involved in NPI licensing are substantially different from those of subject-verb agreement, and therefore the interference effects in each domain potentially arise from distinct sources. In particular, we show that NPI interference at least partially arises from pragmatic inferences. In a self-paced reading study with an acceptability judgment task, we showed that NPI interference was modulated by participants' general pragmatic communicative skills, as quantified by the Autism-Spectrum Quotient (AQ, Baron-Cohen et al., 2001), especially in offline tasks. Participants with more autistic traits were actually less prone to the NPI interference effect than those with fewer autistic traits. This result contrasted with agreement attraction conditions, which were not influenced by individual pragmatic skill differences. We also show that different NPI licensors seem to have distinct interference profiles. We discuss two kinds of interference effects for NPI licensing: memory-retrieval based and pragmatically triggered. PMID:24109468
Fan, Jung-Wei; Friedman, Carol
2011-01-01
Biomedical natural language processing (BioNLP) is a useful technique that unlocks valuable information stored in textual data for practice and/or research. Syntactic parsing is a critical component of BioNLP applications that rely on correctly determining the sentence and phrase structure of free text. In addition to dealing with the vast amount of domain-specific terms, a robust biomedical parser needs to model the semantic grammar to obtain viable syntactic structures. With either a rule-based or corpus-based approach, the grammar engineering process requires substantial time and knowledge from experts, and does not always yield a semantically transferable grammar. To reduce the human effort and to promote semantic transferability, we propose an automated method for deriving a probabilistic grammar based on a training corpus consisting of concept strings and semantic classes from the Unified Medical Language System (UMLS), a comprehensive terminology resource widely used by the community. The grammar is designed to specify noun phrases only, due to the nominal nature of the majority of biomedical terminological concepts. Evaluated on manually parsed clinical notes, the derived grammar achieved a recall of 0.644, precision of 0.737, and average cross-bracketing of 0.61, demonstrating better performance than a control grammar with the semantic information removed. Error analysis revealed shortcomings that could be addressed to improve performance. The results indicate the feasibility of an approach that automatically incorporates terminology semantics in the building of an operational grammar. Although the current performance of the unsupervised solution does not adequately replace manual engineering, we believe that once the performance issues are addressed, it could serve as an aid in a semi-supervised solution. PMID:21549857
JBioWH: an open-source Java framework for bioinformatics data integration
Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor
2013-01-01
The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh PMID:23846595
Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects
NASA Astrophysics Data System (ADS)
Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander
Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created, which have to be synchronized and updated. This is a complex and time-consuming task because BOs vary considerably in their structure according to the distribution, number, and size of their elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered when minimizing the processing time during synchronization. Predicting the parsing time for BOs is a significant prerequisite for the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation, incorporating four different XML parsers, examine the dependencies between the distribution of elements and the parsing time. Finally, a general cost model is validated and simplified according to the results of the experimental setup.
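The measurement itself is easy to reproduce in miniature. The sketch below, using only Python's standard library rather than the parsers from the study, times how parsing cost grows with the number of elements in a business-object-like document; the document shape and sizes are illustrative.

```python
# Minimal sketch of the parsing-time measurement: generate synthetic
# business-object-like XML of increasing size and time how long one
# parse takes. Document shape and sizes are illustrative only.
import timeit
import xml.etree.ElementTree as ET

def make_bo(n_items):
    items = "".join(f"<item id='{i}'><qty>1</qty></item>"
                    for i in range(n_items))
    return f"<order><header/><body>{items}</body></order>"

for n in (10, 100, 1000):
    doc = make_bo(n)
    t = timeit.timeit(lambda: ET.fromstring(doc), number=200)
    print(f"{n:5d} elements: {t / 200 * 1e6:8.1f} us per parse")
```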
PPInterFinder—a mining tool for extracting causal relations on human proteins from literature
Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar
2013-01-01
One of the most common and challenging problems in biomedical text mining is to mine protein–protein interactions (PPIs) from MEDLINE abstracts and full-text research articles, because PPIs play a major role in understanding the various biological processes and the impact of proteins in diseases. We implemented PPInterFinder, a web-based text mining tool to extract human PPIs from the biomedical literature. PPInterFinder uses relation keyword co-occurrences with protein names to extract information on PPIs from MEDLINE abstracts and consists of three phases. First, it identifies the relation keyword using a parser with Tregex and a relation keyword dictionary. Next, it automatically identifies the candidate PPI pairs with a set of rules related to PPI recognition. Finally, it extracts the relations by matching the sentence with a set of 11 specific patterns based on the syntactic nature of the PPI pair. We find that PPInterFinder is capable of predicting PPIs with an accuracy of 66.05% on the AIMED corpus and outperforms most of the existing systems. Database URL: http://www.biomining-bu.in/ppinterfinder/ PMID:23325628
A person is not a number: discourse involvement in subject-verb agreement computation.
Mancini, Simona; Molinaro, Nicola; Rizzi, Luigi; Carreiras, Manuel
2011-09-02
Agreement is a very important mechanism for language processing. Mainstream psycholinguistic research on subject-verb agreement processing has emphasized the purely formal and encapsulated nature of this phenomenon, positing an equivalent access to person and number features. However, person and number are intrinsically different, because person conveys extra-syntactic information concerning the participants in the speech act. To test the person-number dissociation hypothesis we investigated the neural correlates of subject-verb agreement in Spanish, using person and number violations. While number agreement violations produced a left-anterior negativity followed by a P600 with a posterior distribution, the negativity elicited by person anomalies had a centro-posterior maximum and was followed by a P600 effect that was frontally distributed in the early phase and posteriorly distributed in the late phase. These data reveal that the parser is differentially sensitive to the two features and that it deals with the two anomalies by adopting different strategies, due to the different levels of analysis affected by the person and number violations. Copyright © 2011 Elsevier B.V. All rights reserved.
Patson, Nikole D; Ferreira, Fernanda
2009-05-01
In three eyetracking studies, we investigated the role of conceptual plurality in initial parsing decisions in temporarily ambiguous sentences with reciprocal verbs (e.g., While the lovers kissed the baby played alone). We varied the subject of the first clause using three types of plural noun phrases: conjoined noun phrases (the bride and the groom), plural definite descriptions (the lovers), and numerically quantified noun phrases (the two lovers). We found no evidence for garden-path effects when the subject was conjoined (Ferreira & McClure, 1997), but traditional garden-path effects were found with the other plural noun phrases. In addition, we tested plural anaphors that had a plural antecedent present in the discourse. We found that when the antecedent was conjoined, garden-path effects were absent compared to cases in which the antecedent was a plural definite description. Our results indicate that the parser is sensitive to the conceptual representation of a plural constituent. In particular, it appears that a Complex Reference Object (Moxey et al., 2004) automatically activates a reciprocal reading of a reciprocal verb.
NASA Astrophysics Data System (ADS)
Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo, Yuyi; Lueking, Lee
2010-04-01
The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this we have added a generalized query system, in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use, without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We will describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data discovery system architecture.
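The join-discovery idea can be sketched independently of ANTLR: treat tables as nodes of a graph, foreign keys as edges, and search for a path between the tables a query touches. The table names below are invented for illustration and are not the actual DBS schema.

```python
# Minimal sketch of join-condition discovery: BFS over a schema graph
# whose edges are foreign-key links. Table names are invented.
from collections import deque

schema = {  # table -> tables reachable via a foreign key
    "dataset": ["block", "tier"],
    "block":   ["dataset", "file"],
    "file":    ["block"],
    "tier":    ["dataset"],
}

def join_path(src, dst):
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path                 # sequence of tables to join
        for nxt in schema[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(join_path("file", "tier"))        # ['file', 'block', 'dataset', 'tier']
```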
INITIATE: An Intelligent Adaptive Alert Environment.
Jafarpour, Borna; Abidi, Samina Raza; Ahmad, Ahmad Marwan; Abidi, Syed Sibte Raza
2015-01-01
Exposure to a large volume of alerts generated by medical Alert Generating Systems (AGS), such as drug-drug interaction software or clinical decision support systems, overwhelms users and causes alert fatigue. Effects of alert fatigue include ignoring crucial alerts and longer response times. A common approach to avoiding alert fatigue is to devise mechanisms in the AGS that stop them from generating alerts deemed irrelevant. In this paper, we present a novel framework called INITIATE: an INtellIgent adapTIve AlerT Environment, which avoids alert fatigue by managing alerts generated by one or more AGS. We have identified and categorized the lifecycles of different alerts and have developed alert management logic according to each alert's lifecycle. Our framework incorporates an ontology that represents the alert management strategy and an alert management engine that executes this strategy. Our alert management framework offers the following features: (1) adaptability based on users' feedback; (2) personalization and aggregation of messages; and (3) connection to Electronic Medical Records by implementing an HL7 Clinical Document Architecture parser.
Assembling proteomics data as a prerequisite for the analysis of large scale experiments
Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleißner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R
2009-01-01
Background: Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present the results of large-scale proteomics experiments. Results: In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data were transferred to our public 2D-PAGE database using a Java-based interface (Data Transfer Tool) following the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable from SQL-LIMS via XML. Conclusion: The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, the unique data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially where personnel fluctuations are high. Moreover, large-scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS to query results in a systematic manner. On the other hand, these database systems are expensive and require at least one full-time administrator and specialized lab managers. Moreover, the high technical dynamics in proteomics may cause problems in adjusting to new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling, especially in skilled processes such as gel electrophoresis or mass spectrometry, and fulfilled the PSI standardization criteria. The data transfer into a public domain via DTT facilitated validation of proteomics data. Additionally, evaluation of mass spectra by post-processing using MS-Screener improved the reliability of mass analysis and prevented storage of data junk. PMID:19166578
Specification, Design, and Analysis of Advanced HUMS Architectures
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2004-01-01
During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using a scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas: (a) to improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) to improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) to evaluate the software architecture. 2) We have defined a new architectural language called HADL, or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) selection of solutions from a large space of designs; (b) synthesis of designs. The automation process is not a purely Artificial Intelligence (AI) approach, though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solving problems rather than creating a new design in the literal sense. Since searching is adopted as the main technique, the challenges involved are: (a) to minimize the effort of searching a database in which a very large number of possibilities exist; (b) to develop representations that conveniently depict design knowledge evolved over many years; (c) to capture the information required to aid the automation process.
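The claim that stock XML parsers suffice for HADL can be demonstrated with a toy specification; the element names below are invented for the example and are not actual HADL syntax.

    # Parsing a HADL-like XML architecture spec with the standard library.
    import xml.etree.ElementTree as ET

    spec = """
    <architecture name="hums-demo">
      <component id="sensor" type="accelerometer"/>
      <component id="monitor" type="health-reasoner"/>
      <connector from="sensor" to="monitor"/>
    </architecture>
    """
    root = ET.fromstring(spec)
    for comp in root.findall("component"):
        print(comp.get("id"), "is a", comp.get("type"))
    for conn in root.findall("connector"):
        print("link:", conn.get("from"), "->", conn.get("to"))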
Graph-based layout analysis for PDF documents
NASA Astrophysics Data System (ADS)
Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao
2013-03-01
To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digital-born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be straightforwardly exploited. To integrate traditional image-based document analysis with the inherent metadata provided by a PDF parser, the page primitives, including text, image and path elements, are processed to produce text and non-text layers for separate analysis. The graph-based method operates at the superpixel representation level, and page text elements corresponding to vertices are used to construct an undirected graph. Euclidean distance between adjacent vertices is applied in a top-down manner to cut the graph tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects, in turn, are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
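The grouping step can be sketched as single-linkage clustering: run Kruskal's algorithm over text-element centroids but refuse edges longer than a cut threshold, so each resulting component is one text block. The coordinates and threshold below are made up for illustration.

    # Kruskal-style grouping of text-element centroids with a distance cut.
    import math
    from itertools import combinations

    points = [(10, 10), (22, 10), (34, 10), (10, 80), (24, 80)]
    edges = sorted((math.dist(a, b), i, j)
                   for (i, a), (j, b) in combinations(enumerate(points), 2))

    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    CUT = 30.0   # edges longer than this separate blocks
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj and d <= CUT:
            parent[ri] = rj

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    print(list(groups.values()))   # two clusters: top row vs. bottom row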
Spatialized audio improves call sign recognition during multi-aircraft control.
Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J
2018-07-01
We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system, as opposed to monophonic, binaural presentation, significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, R.C.
This thesis involved the construction of (1) a grammar that incorporates knowledge on base invariancy and secondary structure in a molecule and (2) a parser engine that uses the grammar to position bases into the structural subunits of the molecule. These concepts were combined with a novel pinning technique to form a tool that semi-automates insertion of a new species into the alignment for the 16S rRNA molecule (a component of the ribosome) maintained by Dr. Carl Woese's group at the University of Illinois at Urbana. The tool was tested on species extracted from the alignment and on a group of entirely new species. The results were very encouraging, and the tool should be a substantial aid to the curators of the 16S alignment. The construction of the grammar was itself automated, allowing application of the tool to alignments for other molecules. The logic programming language Prolog was used to construct all programs involved. The computational linguistics approach used here was found to be a useful way to attack the problem of insertion into an alignment.
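A small Python sketch (the thesis itself used Prolog) conveys the flavor of the grammar's structural knowledge: alignment columns asserted to pair in a helix must hold complementary bases for a newly inserted sequence. The sequence and column pairs are invented.

    # Toy structural check: do asserted helix columns pair in a new sequence?
    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def helix_ok(seq, col_pairs):
        return all((seq[i], seq[j]) in PAIRS for i, j in col_pairs)

    print(helix_ok("GGCUAAGCC", [(0, 8), (1, 7), (2, 6)]))   # True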
Recon2Neo4j: applying graph database technologies for managing comprehensive genome-scale networks.
Balaur, Irina; Mazein, Alexander; Saqi, Mansoor; Lysenko, Artem; Rawlings, Christopher J; Auffray, Charles
2017-04-01
The goal of this work is to offer a computational framework for exploring data from the Recon2 human metabolic reconstruction model. Advanced user access features have been developed using the Neo4j graph database technology, and this paper describes key features such as efficient management of the network data, examples of network querying for addressing particular tasks, and how query results are converted back to the Systems Biology Markup Language (SBML) standard format. The Neo4j-based metabolic framework facilitates exploration of highly connected and comprehensive human metabolic data and identification of metabolic subnetworks of interest. A Java-based parser component has been developed to convert query results (available in the JSON format) into SBML and SIF formats in order to facilitate further exploration, enhancement or sharing of results. The Neo4j-based metabolic framework is freely available from: https://diseaseknowledgebase.etriks.org/metabolic/browser/ . The Java code files developed for this work are available from the following URL: https://github.com/ibalaur/MetabolicFramework . ibalaur@eisbm.org. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
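The post-query conversion step can be sketched as below; the JSON layout is invented for illustration (the real input is Neo4j query output), and SIF is simply tab-separated source/relation/target lines.

    # Sketch: convert a JSON-shaped reaction list into SIF lines.
    import json

    result = json.loads("""
    {"reactions": [
      {"substrate": "glucose", "relation": "converted_to", "product": "g6p"},
      {"substrate": "g6p", "relation": "converted_to", "product": "f6p"}
    ]}
    """)

    sif_lines = [f'{r["substrate"]}\t{r["relation"]}\t{r["product"]}'
                 for r in result["reactions"]]
    print("\n".join(sif_lines))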
Recon2Neo4j: applying graph database technologies for managing comprehensive genome-scale networks
Mazein, Alexander; Saqi, Mansoor; Lysenko, Artem; Rawlings, Christopher J.; Auffray, Charles
2017-01-01
Abstract Summary: The goal of this work is to offer a computational framework for exploring data from the Recon2 human metabolic reconstruction model. Advanced user access features have been developed using the Neo4j graph database technology, and this paper describes key features such as efficient management of the network data, examples of network querying for addressing particular tasks, and how query results are converted back to the Systems Biology Markup Language (SBML) standard format. The Neo4j-based metabolic framework facilitates exploration of highly connected and comprehensive human metabolic data and identification of metabolic subnetworks of interest. A Java-based parser component has been developed to convert query results (available in the JSON format) into SBML and SIF formats in order to facilitate further exploration, enhancement or sharing of results. Availability and Implementation: The Neo4j-based metabolic framework is freely available from: https://diseaseknowledgebase.etriks.org/metabolic/browser/. The Java code files developed for this work are available from the following URL: https://github.com/ibalaur/MetabolicFramework. Contact: ibalaur@eisbm.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27993779
Saying What You're Looking For: Linguistics Meets Video Search.
Barrett, Daniel Paul; Barbu, Andrei; Siddharth, N; Siskind, Jeffrey Mark
2016-10-01
We present an approach to searching large video corpora for clips which depict a natural-language query in the form of a sentence. Compositional semantics is used to encode subtle meaning differences lost in other approaches, such as the difference between two sentences which have identical words but entirely different meaning: The person rode the horse versus The horse rode the person. Given a sentential query and a natural-language parser, we produce a score indicating how well a video clip depicts that sentence for each clip in a corpus and return a ranked list of clips. Two fundamental problems are addressed simultaneously: detecting and tracking objects, and recognizing whether those tracks depict the query. Because both tracking and object detection are unreliable, our approach uses the sentential query to focus the tracker on the relevant participants and ensures that the resulting tracks are described by the sentential query. While most earlier work was limited to single-word queries which correspond to either verbs or nouns, we search for complex queries which contain multiple phrases, such as prepositional phrases, and modifiers, such as adverbs. We demonstrate this approach by searching for 2,627 naturally elicited sentential queries in 10 Hollywood movies.
The role of parallelism in the real-time processing of anaphora.
Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P
2012-06-01
Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.
Transformation as a Design Process and Runtime Architecture for High Integrity Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bespalko, S.J.; Winter, V.L.
1999-04-05
We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifested by the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the process of translating data into information as a (possibly context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus, in order for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specification results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that sophisticated multi-lookahead backtracking parsing technology is central to the task of demonstrating the existence of high integrity software (HIS).
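A minimal backtracking recursive-descent parser makes the idea concrete; the toy grammar below is illustrative and unrelated to the paper's transformation system.

    # expr := term '+' expr | term   (try the longer alternative, then back off)
    def parse_expr(tokens, pos):
        t = parse_term(tokens, pos)
        if t is None:
            return None
        value, p = t
        if p < len(tokens) and tokens[p] == "+":
            rest = parse_expr(tokens, p + 1)
            if rest is not None:
                return value + rest[0], rest[1]
        return value, p   # backtrack: settle for the plain term

    def parse_term(tokens, pos):
        if pos < len(tokens) and tokens[pos].isdigit():
            return int(tokens[pos]), pos + 1
        return None

    print(parse_expr("1 + 2 + 3".split(), 0))   # (6, 5)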
Masanz, James J; Ogren, Philip V; Zheng, Jiaping; Sohn, Sunghwan; Kipper-Schuler, Karin C; Chute, Christopher G
2010-01-01
We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. cTAKES builds on existing open-source technologies: the Unstructured Information Management Architecture framework and the OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text. PMID:20819853
Heavy NP shift is the parser’s last resort: Evidence from eye movements
Staub, Adrian; Clifton, Charles; Frazier, Lyn
2006-01-01
Two eye movement experiments explored the roles of verbal subcategorization possibilities and transitivity biases in the processing of heavy NP shift sentences in which the verb’s direct object appears to the right of a post-verbal phrase. In Experiment 1, participants read sentences in which a prepositional phrase immediately followed the verb, which was either obligatorily transitive or had a high transitivity bias (e.g., Jack praised/watched from the stands his daughter’s attempt to shoot a basket). Experiment 2 compared unshifted sentences to sentences in which an adverb intervened between the verb and its object, and obligatorily transitive verbs to optionally transitive verbs with widely varying transitivity biases. In both experiments, evidence of processing difficulty appeared on the material that intervened between the verb and its object when the verb was obligatorily transitive, and on the shifted direct object when the verb was optionally transitive, regardless of transitivity bias. We conclude that the parser adopts the heavy NP shift analysis only when it is forced to by the grammar, which we interpret in terms of a preference for immediate incremental interpretation. PMID:17047731
Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F
2017-03-01
Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these alternatives, the performance of participants on transparent (foolish), quasi-transparent (bookish), opaque (vanish), and orthographic control words (bucket) was examined in a series of 5 experiments. In Experiments 1-3 variants of a masked priming lexical-decision task were used; Experiment 4 used a masked priming semantic decision task, and Experiment 5 used a single-word (nonpriming) semantic decision task with a color-boundary manipulation. In addition to the behavioral data, event-related potential (ERP) data were collected in Experiments 1, 2, 4, and 5. Across all experiments, we observed a graded effect of semantic transparency in behavioral and ERP data, with the largest effect for semantically transparent words, the next largest for quasi-transparent words, and the smallest for opaque words. The results are discussed in terms of decomposition versus PDP approaches to morphological processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Xyce Parallel Electronic Simulator Reference Guide Version 6.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users' Guide [1]. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1]. The information herein is subject to change without notice. Copyright (c) 2002-2017 Sandia Corporation. All rights reserved. Trademarks: Xyce(TM) Electronic Simulator and Xyce(TM) are trademarks of Sandia Corporation. Orcad, Orcad Capture, PSpice and Probe are registered trademarks of Cadence Design Systems, Inc. Microsoft, Windows and Windows 7 are registered trademarks of Microsoft Corporation. Medici, DaVinci and Taurus are registered trademarks of Synopsys Corporation. Amtec and TecPlot are trademarks of Amtec Engineering, Inc. All other trademarks are property of their respective owners. Contacts: World Wide Web: http://xyce.sandia.gov and https://info.sandia.gov/xyce (Sandia only). Email: xyce@sandia.gov (outside Sandia), xyce-sandia@sandia.gov (Sandia only). Bug reports (Sandia only): http://joseki-vm.sandia.gov/bugzilla and http://morannon.sandia.gov/bugzilla
Activate/Inhibit KGCS Gateway via Master Console EIC Pad-B Display
NASA Technical Reports Server (NTRS)
Ferreira, Pedro Henrique
2014-01-01
My internship consisted of two major projects for the Launch Control System. The purpose of the first project was to implement the Application Control Language (ACL) to Activate Data Acquisition (ADA) and Inhibit Data Acquisition (IDA) on the Kennedy Ground Control Sub-Systems (KGCS) Gateway, to update the existing Pad-B End Item Control (EIC) Display by programming the ADA and IDA buttons with the new ACL, and to test and release the ACL Display. The second project consisted of unit testing all of the Application Services Framework (ASF) by March 21st. The XmlFileReader was unit tested and reached 100% coverage. The XmlFileReader class is used to grab information from XML files and use it to initialize elements in the other framework components via the Xerces C++ XML Parser, which is open-source, commercial off-the-shelf software. The ScriptThread was also tested; ScriptThread manages the creation and activation of script threads. A large amount of time was spent initializing the environment, learning how to set up unit tests, and getting familiar with the specific segments of the project that were assigned to us.
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
Xyce parallel electronic simulator : reference guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.
2011-05-01
This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures, including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users' Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers are available that compile more than one language. We have developed a meta framework that intends to combine two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax of both languages. Two intermediate representations are used between the source program and the machine code. An Abstract Syntax Tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent, stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is essentially organised into an output class file that can be used to produce an interpreted output. The results, especially in providing C++ concepts in Java, give insight into the potentially strong features of the resultant meta-language.
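The two intermediate representations can be shown in miniature: an abstract syntax tree lowered to a stack-based byte-code and then interpreted. The opcode names are invented for this example.

    # AST -> stack byte-code -> interpreted result (toy arithmetic only).
    def compile_ast(node, code):
        if isinstance(node, int):
            code.append(("PUSH", node))
        else:
            op, left, right = node
            compile_ast(left, code)
            compile_ast(right, code)
            code.append(("ADD" if op == "+" else "MUL", None))
        return code

    def run(code):
        stack = []
        for op, arg in code:
            if op == "PUSH":
                stack.append(arg)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if op == "ADD" else a * b)
        return stack.pop()

    ast = ("+", 2, ("*", 3, 4))          # 2 + 3 * 4
    print(run(compile_ast(ast, [])))     # 14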
Harmony Search Algorithm for Word Sense Disambiguation.
Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia
2015-01-01
Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them is correct. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, while the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to perform the HSA fitness function. Our proposed method was evaluated on benchmark datasets, yielding results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we apply the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used.
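A bare-bones harmony search loop for sense selection might look as follows; the fitness function is a stand-in for the paper's jcn/Lesk similarity scoring, and all parameter values are illustrative assumptions.

    import random

    senses_per_word = [3, 2, 4]            # candidate sense counts per ambiguous word
    def fitness(assignment):               # placeholder for semantic-similarity scoring
        return -sum(abs(a - b) for a, b in zip(assignment, assignment[1:]))

    HMS, HMCR, PAR, ITERS = 10, 0.9, 0.3, 200
    memory = [[random.randrange(n) for n in senses_per_word] for _ in range(HMS)]
    for _ in range(ITERS):
        new = [random.choice(memory)[i] if random.random() < HMCR
               else random.randrange(n)
               for i, n in enumerate(senses_per_word)]
        if random.random() < PAR:          # pitch adjustment: perturb one decision
            i = random.randrange(len(new))
            new[i] = random.randrange(senses_per_word[i])
        worst = min(range(HMS), key=lambda k: fitness(memory[k]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new
    print(max(memory, key=fitness))        # best sense assignment found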
Rinaldi, Fabio; Schneider, Gerold; Kaljurand, Kaarel; Hess, Michael; Andronis, Christos; Konstandi, Ourania; Persidis, Andreas
2007-02-01
The amount of new discoveries (as published in the scientific literature) in the biomedical area is growing at an exponential rate. This growth makes it very difficult to filter the most relevant results, and thus the extraction of the core information becomes very expensive. Therefore, there is a growing interest in text processing approaches that can deliver selected information from scientific publications, limiting the amount of human intervention normally needed to gather those results. This paper presents and evaluates an approach aimed at automating the process of extracting functional relations (e.g. interactions between genes and proteins) from scientific literature in the biomedical domain. The approach, using a novel dependency-based parser, is based on a complete syntactic analysis of the corpus. We have implemented a state-of-the-art text mining system for biomedical literature, based on a deep-linguistic, full-parsing approach. The results are validated on two different corpora: the manually annotated genomics information access (GENIA) corpus and the automatically annotated Arabidopsis thaliana circadian rhythms (ATCR) corpus. We show how a deep-linguistic approach (contrary to common belief) can be used in a real-world text mining application, offering high-precision relation extraction while retaining sufficient recall.
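Schematically, relation extraction over a dependency analysis reads a trigger verb's arguments off the parse. In this sketch the parse is hard-coded triples standing in for real parser output, and the entity and trigger lists are invented.

    parse = [   # (token_id, word, head_id, relation)
        (1, "GerE", 2, "nsubj"),
        (2, "inhibits", 0, "root"),
        (3, "transcription", 2, "dobj"),
        (4, "of", 3, "case"),
        (5, "sigK", 3, "nmod"),
    ]
    ENTITIES = {"GerE", "sigK"}
    TRIGGERS = {"inhibits", "activates", "binds"}

    for tid, word, head, rel in parse:
        if word in TRIGGERS:
            subj = next((w for _, w, h, r in parse if h == tid and r == "nsubj"), None)
            obj_id = next((t for t, w, h, r in parse if h == tid and r == "dobj"), None)
            targets = [w for _, w, h, r in parse if h == obj_id and w in ENTITIES]
            if subj in ENTITIES and targets:
                print(subj, word, targets[0])   # GerE inhibits sigK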
DIATOM (Data Initialization and Modification) Library Version 7.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, David A.; Schmitt, Robert G.; Hensinger, David M.
DIATOM is a library that provides numerical simulation software with a computational geometry front end that can be used to build up complex problem geometries from collections of simpler shapes. The library provides a parser which allows for application-independent geometry descriptions to be embedded in simulation software input decks. Descriptions take the form of collections of primitive shapes and/or CAD input files and material properties that can be used to describe complex spatial and temporal distributions of numerical quantities (often called “database variables” or “fields”) to help define starting conditions for numerical simulations. The capability is designed to be general purpose, robust and computationally efficient. By using a combination of computational geometry and recursive divide-and-conquer approximation techniques, a wide range of primitive shapes are supported to arbitrary degrees of fidelity, controllable through user input and limited only by machine resources. Through the use of call-back functions, numerical simulation software can request the value of a field at any time or location in the problem domain. Typically, this is used only for defining initial conditions, but the capability is not limited to just that use. The most recent version of DIATOM provides the ability to import the solution field from one numerical solution as input for another.
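The recursive divide-and-conquer approximation can be sketched in two dimensions: estimate how much of a grid cell a shape covers by subdividing until cells are decidable or a fidelity limit is reached. This toy version assumes a circle primitive; the real library is far more general.

    # Recursive estimate of a cell's fill fraction for a unit circle.
    def inside(x, y, r=1.0):
        return x * x + y * y <= r * r

    def fill_fraction(x0, y0, x1, y1, depth=6):
        corners = [inside(x, y) for x in (x0, x1) for y in (y0, y1)]
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        if all(corners):
            return 1.0
        if not any(corners) and not inside(xm, ym):
            return 0.0
        if depth == 0:                      # fidelity limit: sample the centre
            return 1.0 if inside(xm, ym) else 0.0
        return 0.25 * sum(fill_fraction(*q, depth - 1) for q in
                          [(x0, y0, xm, ym), (xm, y0, x1, ym),
                           (x0, ym, xm, y1), (xm, ym, x1, y1)])

    print(fill_fraction(-1, -1, 1, 1))      # approaches pi/4 = 0.785...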
An English language interface for constrained domains
NASA Technical Reports Server (NTRS)
Page, Brenda J.
1989-01-01
The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.
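A highly simplified stand-in for this two-phase translation pairs a regex "parse" with a rule that emits a directive; the directive spellings below are invented, not actual STOL.

    import re

    RULES = [
        (re.compile(r"(?:bring up|call up|show) (?:the )?(\w+) page", re.I),
         lambda m: f"PAGE {m.group(1).upper()}"),
        (re.compile(r"(?:start|run) (?:the )?(\w+) procedure", re.I),
         lambda m: f"START {m.group(1).upper()}"),
        (re.compile(r"stop (?:the )?(\w+) procedure", re.I),
         lambda m: f"STOP {m.group(1).upper()}"),
    ]

    def translate(sentence):
        for pattern, make_directive in RULES:
            m = pattern.search(sentence)
            if m:
                return make_directive(m)
        return None   # no grammatically valid reading found

    print(translate("Please call up the telemetry page"))   # PAGE TELEMETRY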
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Active and Passive Hybrid Sensor; Quick-Response Thermal Actuator for Use as a Heat Switch; System for Hydrogen Sensing; Method for Detecting Perlite Compaction in Large Cryogenic Tanks; Using Thin-Film Thermometers as Heaters in Thermal Control Applications; Directional Spherical Cherenkov Detector; AlGaN Ultraviolet Detectors for Dual-Band UV Detection; K-Band Traveling-Wave Tube Amplifier; Simplified Load-Following Control for a Fuel Cell System; Modified Phase-meter for a Heterodyne Laser Interferometer; Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety; Sideband-Separating, Millimeter-Wave Heterodyne Receiver; Coaxial Propellant Injectors With Faceplate Annulus Control; Adaptable Diffraction Gratings With Wavefront Transformation; Optimizing a Laser Process for Making Carbon Nanotubes; Thermogravimetric Analysis of Single-Wall Carbon Nanotubes; Robotic Arm Comprising Two Bending Segments; Magnetostrictive Brake; Low-Friction, Low-Profile, High-Moment Two-Axis Joint; Foil Gas Thrust Bearings for High-Speed Turbomachinery; Miniature Multi-Axis Mechanism for Hand Controllers; Digitally Enhanced Heterodyne Interferometry; Focusing Light Beams To Improve Atomic-Vapor Optical Buffers; Landmark Detection in Orbital Images Using Salience Histograms; Efficient Bit-to-Symbol Likelihood Mappings; Capacity Maximizing Constellations; Natural-Language Parser for PBEM; Policy Process Editor for P(sup 3)BM Software; A Quality System Database; Trajectory Optimization: OTIS 4; and Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
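ALPS implements the revised simplex method itself; as a modern stand-in, the same kind of problem can be posed with scipy.optimize.linprog.

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
    from scipy.optimize import linprog

    res = linprog(c=[-3, -2],               # negate to turn maximize into minimize
                  A_ub=[[1, 1], [1, 3]],
                  b_ub=[4, 6],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                  # optimum at x=4, y=0, objective 12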
Archetype Model-Driven Development Framework for EHR Web System.
Kobayashi, Shinji; Kimura, Eizen; Ishihara, Ken
2013-12-01
This article describes the Web application framework for Electronic Health Records (EHRs) we have developed to reduce construction costs for EHR systems. The openEHR project has developed a clinical model driven architecture for future-proof interoperable EHR systems. This project provides the specifications to standardize clinical domain model implementations, upon which the ISO/CEN 13606 standards are based. The reference implementation has been formally described in Eiffel, and C# and Java implementations have been developed as references. While scripting languages have become more popular in recent years because of their higher efficiency and faster development, they had not been involved in the openEHR implementations. Since 2007, we have used the Ruby language and Ruby on Rails (RoR) as an agile development platform to implement EHR systems in conformity with the openEHR specifications. We implemented almost all of the specifications, the Archetype Definition Language parser, and an RoR scaffold generator from archetypes. Although some problems have emerged, most of them have been resolved. We have provided an agile EHR Web framework which can build up Web systems from archetype models using RoR. The feasibility of the archetype model to provide semantic interoperability of EHRs has been demonstrated, and we have verified that it is suitable for the construction of EHR systems.
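The scaffold-generation idea can be miniaturized as deriving a model class from a parsed field list; the fields below are invented, and real ADL archetypes are far richer than this sketch.

    # Toy 'scaffold from archetype': build a typed model class dynamically.
    fields = {"systolic": float, "diastolic": float, "position": str}

    BloodPressure = type("BloodPressure", (), {
        "__init__": lambda self, **kw: self.__dict__.update(
            {k: fields[k](v) for k, v in kw.items()})
    })

    obs = BloodPressure(systolic=120, diastolic=80, position="sitting")
    print(obs.systolic, obs.position)   # 120.0 sitting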
EST Express: PHP/MySQL based automated annotation of ESTs from expression libraries
Smith, Robin P; Buchser, William J; Lemmon, Marcus B; Pardinas, Jose R; Bixby, John L; Lemmon, Vance P
2008-01-01
Background Several biological techniques result in the acquisition of functional sets of cDNAs that must be sequenced and analyzed. The emergence of redundant databases such as UniGene and centralized annotation engines such as Entrez Gene has allowed the development of software that can analyze a great number of sequences in a matter of seconds. Results We have developed "EST Express", a suite of analytical tools that identify and annotate ESTs originating from specific mRNA populations. The software consists of a user-friendly GUI powered by PHP and MySQL that allows for online collaboration between researchers and continuity with UniGene, Entrez Gene and RefSeq. Two key features of the software include a novel, simplified Entrez Gene parser and tools to manage cDNA library sequencing projects. We have tested the software on a large data set (2,016 samples) produced by subtractive hybridization. Conclusion EST Express is an open-source, cross-platform web server application that imports sequences from cDNA libraries, such as those generated through subtractive hybridization or yeast two-hybrid screens. It then provides several layers of annotation based on Entrez Gene and RefSeq to allow the user to highlight useful genes and manage cDNA library projects. PMID:18402700
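A drastically simplified gene-annotation parser in this spirit might read NCBI gene_info-style tab-delimited lines; the column layout is abbreviated here for illustration.

    # Sketch: parse tab-delimited gene records into a lookup dictionary.
    SAMPLE = "9606\t348\tAPOE\tapolipoprotein E\n9606\t4535\tND1\tNADH dehydrogenase subunit 1"

    def parse_gene_info(text):
        genes = {}
        for line in text.splitlines():
            tax_id, gene_id, symbol, description = line.split("\t")
            genes[gene_id] = {"tax": tax_id, "symbol": symbol, "desc": description}
        return genes

    print(parse_gene_info(SAMPLE)["348"]["symbol"])   # APOE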
DEEPEN: A negation detection system for clinical text incorporating dependency relation into NegEx
Mehrabi, Saeed; Krishnan, Anand; Sohn, Sunghwan; Roch, Alexandra M; Schmidt, Heidi; Kesterson, Joe; Beesley, Chris; Dexter, Paul; Schmidt, C. Max; Liu, Hongfang; Palakal, Mathew
2018-01-01
In Electronic Health Records (EHRs), much of the valuable information regarding patients' conditions is embedded in free text format. Natural language processing (NLP) techniques have been developed to extract clinical information from free text. One challenge faced in clinical NLP is that the meaning of clinical entities is heavily affected by modifiers such as negation. A negation detection algorithm, NegEx, applies a simplistic approach that has been shown to be powerful in clinical NLP. However, due to the failure to consider the contextual relationship between words within a sentence, NegEx fails to correctly capture the negation status of concepts in complex sentences. Incorrect negation assignment could cause inaccurate diagnosis of patients' conditions or contaminated study cohorts. We developed a negation algorithm called DEEPEN to decrease NegEx's false positives by taking into account the dependency relationship between negation words and concepts within a sentence using the Stanford dependency parser. The system was developed and tested using EHR data from Indiana University (IU), and it was further evaluated on a Mayo Clinic dataset to assess its generalizability. The evaluation results demonstrate that DEEPEN, which incorporates dependency parsing into NegEx, can reduce the number of incorrect negation assignments for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs. PMID:25791500
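The dependency-aware check can be sketched as: a concept counts as negated only when a negation token attaches to it in the parse, rather than merely preceding it in the text. The hard-coded triples below stand in for real Stanford-parser output.

    parse = [   # (token_id, word, head_id, relation)
        (1, "no", 3, "neg"),
        (2, "pleural", 3, "amod"),
        (3, "effusion", 4, "nsubj"),
        (4, "seen", 0, "root"),
        (5, "lungs", 4, "nmod"),
    ]
    NEG_WORDS = {"no", "not", "without", "denies"}

    def is_negated(concept_id):
        # direct dependency: a negation word whose head is the concept token
        return any(w in NEG_WORDS and h == concept_id for _, w, h, _ in parse)

    print(is_negated(3))   # True: "effusion" is negated
    print(is_negated(5))   # False: "lungs" is not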
Processing Control Information in a Nominal Control Construction: An Eye-Tracking Study.
Kwon, Nayoung; Sturt, Patrick
2016-08-01
In an eye-tracking experiment, we examined the processing of the nominal control construction. Participants' eye-movements were monitored while they read sentences that included either giver control nominals (e.g. promise in Luke's promise to Sophia to photograph himself) or recipient control nominals (e.g. plea in Luke's plea to Sophia to photograph herself). In order to examine both the initial access of control information, and its later use in on-line processing, we combined a manipulation of nominal control with a gender match/mismatch paradigm. Results showed that there was evidence of processing difficulty for giver control sentences (relative to recipient control sentences) at the point where the control dependency was initially created, suggesting that control information was accessed during the early parsing stages. This effect is attributed to a recency preference in the formation of control dependencies; the parser prefers to assign a recent antecedent to PRO. In addition, readers slowed down after reading a reflexive pronoun that mismatched with the gender of the antecedent indicated by the control nominal (e.g. Luke's promise to Sophia to photograph herself). The mismatch cost suggests that control information of the nominal control construction was used to constrain dependency formation involving a controller, PRO and a reflexive, confirming the use of control information in on-line interpretation.
EST Express: PHP/MySQL based automated annotation of ESTs from expression libraries.
Smith, Robin P; Buchser, William J; Lemmon, Marcus B; Pardinas, Jose R; Bixby, John L; Lemmon, Vance P
2008-04-10
Several biological techniques result in the acquisition of functional sets of cDNAs that must be sequenced and analyzed. The emergence of redundant databases such as UniGene and centralized annotation engines such as Entrez Gene has allowed the development of software that can analyze a great number of sequences in a matter of seconds. We have developed "EST Express", a suite of analytical tools that identify and annotate ESTs originating from specific mRNA populations. The software consists of a user-friendly GUI powered by PHP and MySQL that allows for online collaboration between researchers and continuity with UniGene, Entrez Gene and RefSeq. Two key features of the software include a novel, simplified Entrez Gene parser and tools to manage cDNA library sequencing projects. We have tested the software on a large data set (2,016 samples) produced by subtractive hybridization. EST Express is an open-source, cross-platform web server application that imports sequences from cDNA libraries, such as those generated through subtractive hybridization or yeast two-hybrid screens. It then provides several layers of annotation based on Entrez Gene and RefSeq to allow the user to highlight useful genes and manage cDNA library projects.
Getting DNA copy numbers without control samples
2012-01-01
Background The selection of the reference used to scale the data in a copy number analysis is of paramount importance for achieving accurate estimates. Usually this reference is generated using control samples included in the study. However, control samples are not always available, and in these cases an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples, minimizing possible batch effects. Results Five human datasets (a subset of HapMap samples and Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and, therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to detect recurrent aberrations more accurately than other state-of-the-art methods. Conclusions NSA provides a robust and accurate reference for scaling probe signal data to CN values without the need for control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, the NSA scaling approach helps to detect recurrent CNAs better than current methods. The automatic selection of references makes it useful for performing bulk analysis of many GEO or ArrayExpress experiments without the need to develop a parser to find the normal samples or possible batches within the data. The method is available in the open-source R package NSA, which is an add-on to the aroma.cn framework. http://www.aroma-project.org/addons. PMID:22898240
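The scaling idea reduces to: average the probe signal over regions predicted normal to form the reference, then express every probe relative to it. The values and the "normal" mask below are invented.

    # Toy NSA-style scaling: reference from predicted-normal probes only.
    probe_signal = [2.1, 2.0, 4.2, 1.9, 2.2, 0.9]
    predicted_normal = [True, True, False, True, True, False]

    reference = (sum(s for s, ok in zip(probe_signal, predicted_normal) if ok)
                 / sum(predicted_normal))
    copy_number = [2.0 * s / reference for s in probe_signal]   # diploid baseline
    print([round(c, 2) for c in copy_number])   # gain at index 2, loss at index 5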
Resolving anaphoras for the extraction of drug-drug interactions in pharmacological documents
2010-01-01
Background Drug-drug interactions are frequently reported in the increasing amount of biomedical literature. Information Extraction (IE) techniques have been devised as a useful instrument to manage this knowledge. Nevertheless, IE at the sentence level has a limited effect because of the frequent references to previous entities in the discourse, a phenomenon known as 'anaphora'. DrugNerAR, a drug anaphora resolution system, is presented to address the problem of co-referring expressions in pharmacological literature. This development is part of a larger and innovative study about automatic drug-drug interaction extraction. Methods The system uses a set of linguistic rules drawn from Centering Theory over the analysis provided by a biomedical syntactic parser. Semantic information provided by the Unified Medical Language System (UMLS) is also integrated in order to improve the recognition and resolution of nominal drug anaphors. Besides, a corpus has been developed in order to analyze the phenomena and evaluate the current approach. Each possible case of anaphoric expression was examined to determine the most effective way of resolution. Results An F-score of 0.76 in anaphora resolution was achieved, outperforming the baseline significantly by almost 73%. This ad-hoc reference line was developed to check the results, as there is no previous work on anaphora resolution in pharmacological documents. The obtained results resemble those found in related semantic domains. Conclusions The present approach shows very promising results in the challenge of accounting for anaphoric expressions in pharmacological texts. DrugNerAR obtains results similar to other approaches dealing with anaphora resolution in the biomedical domain, but, unlike these approaches, it focuses on documents reflecting drug interactions. Centering Theory has proved effective at the selection of antecedents in anaphora resolution. A key component in the success of this framework is the analysis provided by the MMTx program and the DrugNer system, which allows it to deal with the complexity of the pharmacological language. It is expected that the positive results of the resolver will increase the performance of our future drug-drug interaction extraction system. PMID:20406499
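A crude recency-based antecedent selection, echoing the Centering-style preference for recent, salient antecedents, can be sketched as follows; the entity spans and positions are fabricated.

    # Pick the most recent preceding drug mention as the anaphor's antecedent.
    entities = [("warfarin", 4), ("aspirin", 9)]   # (drug mention, token position)
    anaphor_pos = 14                               # e.g. "it" at token 14

    antecedent = max((e for e in entities if e[1] < anaphor_pos),
                     key=lambda e: e[1])
    print(antecedent[0])   # aspirin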
Getting DNA copy numbers without control samples.
Ortiz-Estevez, Maria; Aramburu, Ander; Rubio, Angel
2012-08-16
The selection of the reference used to scale the data in a copy number analysis is of paramount importance for achieving accurate estimates. Usually this reference is generated using control samples included in the study. However, control samples are not always available, and in these cases an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples, minimizing possible batch effects. Five human datasets (a subset of HapMap samples and Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and, therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to detect recurrent aberrations more accurately than other state-of-the-art methods. NSA provides a robust and accurate reference for scaling probe signal data to CN values without the need for control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, the NSA scaling approach helps to detect recurrent CNAs better than current methods. The automatic selection of references makes it useful for performing bulk analysis of many GEO or ArrayExpress experiments without the need to develop a parser to find the normal samples or possible batches within the data. The method is available in the open-source R package NSA, which is an add-on to the aroma.cn framework. http://www.aroma-project.org/addons.
Özgür, Arzucan; Hur, Junguk; He, Yongqun
2016-01-01
The Interaction Network Ontology (INO) logically represents biological interactions, pathways, and networks. INO has been demonstrated to be valuable in providing a set of structured ontological terms and associated keywords to support literature mining of gene-gene interactions from biomedical literature. However, previous work using INO focused on single keyword matching, while many interactions are represented with two or more interaction keywords used in combination. This paper reports our extension of INO to include combinatory patterns of two or more literature mining keywords co-existing in one sentence to represent specific INO interaction classes. Such keyword combinations and related INO interaction type information could be automatically obtained via SPARQL queries, exported in Excel format, and used in an INO-supported SciMiner, an in-house literature mining program. We studied the gene interaction sentences from the commonly used benchmark Learning Logic in Language (LLL) dataset and one internally generated vaccine-related dataset to identify and analyze interaction types containing multiple keywords. Patterns obtained from the dependency parse trees of the sentences were used to identify the interaction keywords that are related to each other and collectively represent an interaction type. The INO ontology currently has 575 terms, including 202 terms under the interaction branch. The relations between the INO interaction types and associated keywords are represented using the INO annotation relations 'has literature mining keywords' and 'has keyword dependency pattern'. The keyword dependency patterns were generated by running the Stanford Parser to obtain dependency relation types. Out of the 107 interactions in the LLL dataset represented with two-keyword interaction types, 86 were identified using direct dependency relations. The LLL dataset contained 34 gene regulation interaction types, each of which was associated with multiple keywords. A hierarchical display of these 34 interaction types and their ancestor terms in INO resulted in the identification of specific gene-gene interaction patterns from the LLL dataset. The phenomenon of having multi-keyword interaction types was also frequently observed in the vaccine dataset. By modeling and representing multiple textual keywords for interaction types, the extended INO enabled the identification of complex biological gene-gene interactions represented with multiple keywords.
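The combinatory-keyword idea can be sketched as requiring all keywords of an interaction type to occur in the sentence; the full method additionally requires them to be linked in the dependency parse. The type names and keyword sets below are illustrative, not actual INO terms.

    TYPES = {
        "positive regulation of transcription": {"activates", "transcription"},
        "negative regulation of transcription": {"represses", "transcription"},
        "binding": {"binds"},
    }

    def match_types(sentence):
        tokens = set(sentence.lower().split())
        return [t for t, keywords in TYPES.items() if keywords <= tokens]

    print(match_types("GerE activates transcription of cotD"))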
Discovery of Predicate-Oriented Relations among Named Entities Extracted from Thai Texts
NASA Astrophysics Data System (ADS)
Tongtep, Nattapong; Theeramunkong, Thanaruk
Extracting named entities (NEs) and their relations is more difficult in Thai than in other languages due to several Thai specific characteristics, including no explicit boundaries for words, phrases and sentences; few case markers and modifier clues; high ambiguity in compound words and serial verbs; and flexible word orders. Unlike most previous works which focused on NE relations of specific actions, such as work_for, live_in, located_in, and kill, this paper proposes more general types of NE relations, called predicate-oriented relation (PoR), where an extracted action part (verb) is used as a core component to associate related named entities extracted from Thai Texts. Lacking a practical parser for the Thai language, we present three types of surface features, i.e. punctuation marks (such as token spaces), entity types and the number of entities and then apply five alternative commonly used learning schemes to investigate their performance on predicate-oriented relation extraction. The experimental results show that our approach achieves the F-measure of 97.76%, 99.19%, 95.00% and 93.50% on four different types of predicate-oriented relation (action-location, location-action, action-person and person-action) in crime-related news documents using a data set of 1,736 entity pairs. The effects of NE extraction techniques, feature sets and class unbalance on the performance of relation extraction are explored.
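A minimal version of the learning setup, with surface features feeding a scikit-learn classifier standing in for the five schemes compared, is sketched below; the features and labels are fabricated.

    from sklearn.tree import DecisionTreeClassifier

    # features: [tokens_between, first_is_action (0/1), entities_in_sentence]
    X = [[1, 1, 2], [2, 0, 2], [0, 1, 3], [3, 0, 2], [1, 1, 2], [2, 0, 3]]
    y = ["action-location", "location-action", "action-person",
         "location-action", "action-location", "person-action"]

    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(clf.predict([[1, 1, 2]]))   # ['action-location']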
Detection of signals in mRNAs that influence translation.
Brown, Chris M; Jacobs, Grant; Stockwell, Peter; Schreiber, Mark
2003-01-01
Genome sequencing efforts mean that we now have extensive data from a wide range of organisms to study. Understanding the differing natures of the biology of these organisms is an important aim of genome analysis. We are interested in signals that affect translation of mRNAs. Some signals in the mRNA influence how efficiently it is translated into protein. Previous studies have indicated that many important signals are located around the initiation and termination codons. We have developed tools, described here, to extract the relevant sequence regions from GenBank. To create databases organised by species or higher taxonomic groupings (e.g., plants), a program was developed to dynamically view and edit the taxonomy database. Data from relevant species were then extracted using our GenBank feature table parser. We analysed all available sequences, particularly those from complete genomes. Patterns were then identified using information theory. The software is available from http://transterm.otago.ac.nz. Patterns around the initiation codons for most of the organisms fall into two groups, containing the previously known Shine-Dalgarno and Kozak efficiency signals. However, we have identified several organisms that appear to utilise novel systems. Our analysis indicates that some organisms with extremely high-GC% genomes do not have a strong dependence on base-pairing ribosome binding sites, as the complementary sequence is absent from many genes.
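A minimal sketch of the kind of extraction step described, using Biopython's GenBank parser (the file name and window sizes are arbitrary choices, not the authors' tool):

```python
# Sketch: pull the sequence windows flanking each CDS initiation codon
# from a GenBank file, in the spirit of the feature-table extraction
# described above.
from Bio import SeqIO

for record in SeqIO.parse("example.gb", "genbank"):
    for feature in record.features:
        if feature.type != "CDS":
            continue
        start = int(feature.location.start)  # 0-based start of the CDS
        if feature.location.strand == 1 and start >= 20:
            region = record.seq[start - 20:start + 10]
            print(record.id, region)  # 20 nt upstream + first 10 nt of CDS
```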
The InSAR Scientific Computing Environment
NASA Technical Reports Server (NTRS)
Rosen, Paul A.; Gurrola, Eric; Sacco, Gian Franco; Zebker, Howard
2012-01-01
We have developed a flexible and extensible Interferometric SAR (InSAR) Scientific Computing Environment (ISCE) for geodetic image processing. ISCE was designed from the ground up as a geophysics community tool for generating stacks of interferograms that lend themselves to various forms of time-series analysis, with attention paid to accuracy, extensibility, and modularity. The framework is Python-based, with code elements rigorously componentized by separating input/output operations from the processing engines. This allows greater flexibility and extensibility in the data models, and creates algorithmic code that is less susceptible to unnecessary modification when new data types and sensors become available. In addition, the components support provenance and checkpointing to facilitate reprocessing and algorithm exploration. The algorithms, based on legacy processing codes, have been adapted to assume a common reference track approach for all images acquired from nearby orbits, simplifying and systematizing the geometry for time-series analysis. The framework is designed to easily allow user contributions, and is distributed for free use by researchers. ISCE can process data from the ALOS, ERS, EnviSAT, Cosmo-SkyMed, RadarSAT-1, RadarSAT-2, and TerraSAR-X platforms, starting from Level 0 or Level 1 as provided from the data source, and going as far as Level 3 geocoded deformation products. With its flexible design, it can be extended with raw/metadata parsers to enable it to work with radar data from other platforms.
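The componentized design described above might be sketched as follows; this is an illustration of the design principle, not the actual ISCE API:

```python
# Illustration: I/O is isolated in reader components that emit a common
# data model, so adding a sensor means adding a reader, not touching
# the processing engine. All names are hypothetical.
class SensorReader:
    """I/O component: understands one sensor's raw format."""
    def read(self, path):
        # Placeholder: parse raw/metadata files into a common model.
        return {"samples": [], "metadata": {"track": "common-reference"}}

class ProcessingEngine:
    """Processing component: consumes only the common data model."""
    def run(self, product):
        # Placeholder for a processing step (e.g., forming interferograms).
        return product

def pipeline(path):
    product = SensorReader().read(path)      # format details isolated here
    return ProcessingEngine().run(product)   # algorithm untouched by formats
```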
Archetype Model-Driven Development Framework for EHR Web System
Kimura, Eizen; Ishihara, Ken
2013-01-01
Objectives This article describes the Web application framework for Electronic Health Records (EHRs) we have developed to reduce construction costs for EHR systems. Methods The openEHR project has developed a clinical-model-driven architecture for future-proof interoperable EHR systems. This project provides the specifications to standardize clinical domain model implementations, upon which the ISO/CEN 13606 standards are based. The reference implementation has been formally described in Eiffel, and C# and Java reference implementations have also been developed. Although scripting languages have become more popular in recent years because of their higher efficiency and faster development, they had not been used in openEHR implementations. Since 2007, we have used the Ruby language and Ruby on Rails (RoR) as an agile development platform to implement EHR systems in conformity with the openEHR specifications. Results We implemented almost all of the specifications, the Archetype Definition Language parser, and an RoR scaffold generator from archetypes. Although some problems emerged, most of them have been resolved. Conclusions We have provided an agile EHR Web framework that can build up Web systems from archetype models using RoR. The feasibility of the archetype model to provide semantic interoperability of EHRs has been demonstrated, and we have verified that it is suitable for the construction of EHR systems. PMID:24523991
SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER
NASA Technical Reports Server (NTRS)
Scotti, S. J.
1994-01-01
SOL is a computer language geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN, but also includes the additional power of non-linear mathematical programming methods (i.e., numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability, which effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of more than 100 ADS optimization choices, such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices for the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser generator. Hence, unlike ad hoc, manually coded interfaces, the generated parser ensures that the SOL compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute.
Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
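For readers unfamiliar with grammar-driven compiler construction, here is a toy analogue of the approach in Python using the PLY parser generator, which builds LALR(1) tables from grammar rules declared in docstrings; the grammar is a minimal stand-in, not SOL's:

```python
# Toy LALR(1) grammar for assignments with + and * (requires: pip install ply).
import ply.lex as lex
import ply.yacc as yacc

tokens = ("NAME", "NUMBER", "PLUS", "TIMES", "EQUALS")
t_PLUS = r"\+"
t_TIMES = r"\*"
t_EQUALS = r"="
t_NAME = r"[a-zA-Z_][a-zA-Z0-9_]*"
t_ignore = " \t"

def t_NUMBER(t):
    r"\d+(\.\d+)?"
    t.value = float(t.value)
    return t

def t_error(t):
    print(f"Illegal character {t.value[0]!r}")
    t.lexer.skip(1)

# Grammar rules: yacc generates the LALR(1) parse tables automatically,
# so every legal sentence of the grammar is recognized by construction.
def p_statement(p):
    "statement : NAME EQUALS expression"
    p[0] = (p[1], p[3])

def p_expression_plus(p):
    "expression : expression PLUS term"
    p[0] = p[1] + p[3]

def p_expression_term(p):
    "expression : term"
    p[0] = p[1]

def p_term_times(p):
    "term : term TIMES factor"
    p[0] = p[1] * p[3]

def p_term_factor(p):
    "term : factor"
    p[0] = p[1]

def p_factor(p):
    "factor : NUMBER"
    p[0] = p[1]

def p_error(p):
    print("Syntax error")

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse("x = 2 + 3 * 4"))  # ('x', 14.0)
```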
Smelter, Andrey; Astra, Morgan; Moseley, Hunter N B
2017-03-17
The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib's implementation is very efficient, both in design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e. "saveframe" and "loop" records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in original NMR-STAR can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of all entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. The nmrstarlib package is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values. Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats.
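The JSONization idea is straightforward once saveframes and loops live in plain Python containers; a minimal sketch with a made-up saveframe fragment (not a real BMRB entry, and not the nmrstarlib API itself):

```python
# Sketch of the "JSONized NMR-STAR" idea: once saveframes and loops are
# parsed into plain dictionaries and lists, serialization is trivial.
# The saveframe content below is an invented fragment for illustration.
import json

entry = {
    "entry_information": {               # a "saveframe" as a dict
        "Entry.ID": "15000",
        "loop_0": [                      # a "loop" as a list of dicts
            {"Author.Family_name": "Smith", "Author.Given_name": "A."},
        ],
    },
}

with open("bmr15000.json", "w") as fh:
    json.dump(entry, fh, indent=2)       # readable by any JSON-capable language

round_tripped = json.load(open("bmr15000.json"))
assert round_tripped == entry            # lossless round trip
```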
MolTalk – a programming library for protein structures and structure analysis
Diemand, Alexander V; Scheib, Holger
2004-01-01
Background Two of the largely unsolved but increasingly urgent problems for modern biologists are a) to quickly and easily analyse protein structures and b) to comprehensively mine the wealth of information which is distributed along with the 3D co-ordinates by the Protein Data Bank (PDB). Tools which address these issues need to be highly flexible and powerful but at the same time must be freely available and easy to learn. Results We present MolTalk, an elaborate programming language, which consists of the programming library libmoltalk, implemented in Objective-C, and the Smalltalk-based interpreter MolTalk. MolTalk combines the advantages of an easy-to-learn and programmable procedural scripting with the flexibility and power of a full programming language. An overview of currently available applications of MolTalk is given, and with PDBChainSaw one such application is described in more detail. PDBChainSaw is a MolTalk-based parser and information-extraction utility for PDB files. Weekly updates of the PDB are synchronised with PDBChainSaw and are available for free download from the MolTalk project page (http://www.moltalk.org) following the link to PDBChainSaw. For each chain in a protein structure, PDBChainSaw extracts the sequence from its co-ordinates and provides additional information from the PDB-file header section, such as scientific organism, compound name, and EC code. Conclusion MolTalk provides a rich set of methods to analyse and even modify experimentally determined or modelled protein structures. These methods vary in complexity and are thus suitable for beginners and advanced programmers alike. We envision MolTalk to be most valuable in the following applications: 1) To analyse protein structures repetitively in large-scale, i.e. to benchmark protein structure prediction methods or to evaluate structural models. The quality of the resulting 3D-models can be assessed by e.g. calculating a Ramachandran-Sasisekharan plot. 2) To quickly retrieve information for (a limited number of) macro-molecular structures, i.e. H-bonds, salt bridges, contacts between amino acids and ligands or at the interface between two chains. 3) To programme more complex structural bioinformatics software and to implement demanding algorithms through its portability to Objective-C, e.g. iMolTalk. 4) To be used as a front end to databases, e.g. PDBChainSaw. PMID:15096277
Radiology metrics for safe use and regulatory compliance with CT imaging
NASA Astrophysics Data System (ADS)
Paden, Robert; Pavlicek, William
2018-03-01
The MACRA Act creates a Merit-based Incentive Payment System, with monitoring patient exposure from CT providing one possible quality metric for meeting merit requirements. Quality metrics are also required by The Joint Commission, ACR, and CMS, as facilities are tasked to perform reviews of CT irradiation events outside of expected ranges, review protocols for appropriateness, and validate parameters for low-dose lung cancer screening. In order to efficiently collect and analyze irradiation events and the associated DICOM tags, all clinical CT devices were connected via DICOM to a parser which extracted dose-related information for storage into a database. Dose data from every exam is compared to the appropriate external standard for the exam type. AAPM-recommended CTDIvol values for adult and pediatric head and torso, coronary, and perfusion exams are used for this study. CT doses outside the expected range were automatically formatted into a report for analysis and review documentation. CT technologist textual content, the reason for proceeding with an irradiation above the recommended threshold, is captured for inclusion in the follow-up reviews by physics staff. The use of a knowledge-based approach in labeling individual protocol and device settings is a practical solution resulting in efficient analysis and review. Manual methods would require approximately 150 person-hours for our facility, exclusive of travel time and independent of device availability. Use of this informatics tool yields a time savings of 89%, including the low-dose CT comparison review and the low-dose lung cancer screening requirements set forth by CMS.
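A minimal sketch of the dose-extraction step using pydicom; the threshold values are placeholders rather than the AAPM recommendations themselves, and the snippet assumes CTDIvol is recorded in the image header:

```python
# Sketch: read CTDIvol from CT headers and flag exams above an expected
# range. Thresholds are placeholder values for illustration only.
import pydicom

THRESHOLDS_MGY = {"ADULT_HEAD": 80.0, "ADULT_TORSO": 50.0}  # placeholders

def check_exam(path, protocol_class):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    ctdi = getattr(ds, "CTDIvol", None)          # DICOM tag (0018,9345)
    limit = THRESHOLDS_MGY.get(protocol_class)
    if ctdi is not None and limit is not None and float(ctdi) > limit:
        # This record would be formatted into the physics review report.
        return {"exam": path, "CTDIvol": float(ctdi), "limit": limit}
    return None
```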
The neurobiology of syntax: beyond string sets.
Petersson, Karl Magnus; Hagoort, Peter
2012-07-19
The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
Aural mapping of STEM concepts using literature mining
NASA Astrophysics Data System (ADS)
Bharadwaj, Venkatesh
Recent technological applications have made people's lives heavily dependent on Science, Technology, Engineering, and Mathematics (STEM) and its applications. Understanding basic science is necessary in order to use and contribute to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations, and presentations. This leaves visually impaired students with very few options to learn science and secure a career in STEM-related areas. Recent experiments have shown that small aural cues called audemes are helpful for understanding and memorization of science concepts among visually impaired students. Audemes are non-verbal sound translations of a science concept. In order to present science concepts as audemes for visually impaired students, this thesis presents an automatic system for audeme generation from STEM textbooks. This thesis describes the systematic application of multiple Natural Language Processing tools and techniques, such as a dependency parser, a POS tagger, information retrieval algorithms, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an audeme. We present a rule-based classification method for all STEM-related concepts. This work also presents a novel way of mapping and extracting the most related sounds for the words used in a textbook. Additionally, machine learning methods are used in the system to guarantee the customization of output according to a user's perception. The system being presented is robust, scalable, fully automatic, and dynamically adaptable for audeme generation.
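As an illustration of the NLP front end (not the thesis system itself), one might use spaCy's POS tagger and dependency parser to pick out candidate concept words to map onto sounds:

```python
# Sketch: use POS tags and dependency labels to select the content
# nouns of a textbook sentence as candidates for sound mapping.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The water cycle moves water between the ocean and the atmosphere.")

for token in doc:
    if token.dep_ in ("nsubj", "dobj", "pobj") and token.pos_ == "NOUN":
        # candidate concept words, with their modifiers for context
        print(token.text, token.dep_, [c.text for c in token.children])
```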
'Isotopo' a database application for facile analysis and management of mass isotopomer data.
Ahmed, Zeeshan; Zeeshan, Saman; Huber, Claudia; Hensel, Michael; Schomburg, Dietmar; Münch, Richard; Eylert, Eva; Eisenreich, Wolfgang; Dandekar, Thomas
2014-01-01
The composition of stable-isotope-labelled isotopologues/isotopomers in metabolic products can be measured by mass spectrometry and supports the analysis of pathways and fluxes. As a prerequisite, the original mass spectra have to be processed, managed and stored to rapidly calculate, analyse and compare isotopomer enrichments to study, for instance, bacterial metabolism in infection. For such applications, we provide here the database application 'Isotopo'. This software package includes (i) a database to store and process isotopomer data, (ii) a parser to upload and translate different data formats for such data and (iii) an improved application to process and convert signal intensities from mass spectra of 13C-labelled metabolites such as tert-butyldimethylsilyl derivatives of amino acids. Relative mass intensities and isotopomer distributions are calculated applying a partial least squares method with iterative refinement for high-precision data. The data output includes formats such as graphs for overall enrichments in amino acids. The package is user-friendly for easy and robust data management of multiple experiments. The 'Isotopo' software is available at the following web link (section Download): http://spp1316.uni-wuerzburg.de/bioinformatics/isotopo/. The package contains three additional files: a software executable setup (installer), one data set file (discussed in this article) and one Excel file (which can be used to convert data from Excel to '.iso' format). The 'Isotopo' software is compatible only with the Microsoft Windows operating system.
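One step of such processing can be sketched as a small least-squares problem; the 2x2 correction matrix below is a toy stand-in for a real natural-abundance correction, whereas 'Isotopo' itself applies a partial least squares method with iterative refinement:

```python
# Toy sketch: recover isotopomer fractions from measured intensities.
# Column j of the correction matrix is the expected mass pattern of a
# pure species j; values here are invented for illustration.
import numpy as np

measured = np.array([0.70, 0.30])        # normalized M+0, M+1 intensities
correction = np.array([[0.95, 0.00],
                       [0.05, 1.00]])
fractions, *_ = np.linalg.lstsq(correction, measured, rcond=None)
fractions = np.clip(fractions, 0, None)  # physical (non-negative) solution
print(fractions / fractions.sum())       # relative isotopomer abundances
```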
Omaki, Akira; Lau, Ellen F.; Davidson White, Imogen; Dakan, Myles L.; Apple, Aaron; Phillips, Colin
2015-01-01
Much work has demonstrated that speakers of verb-final languages are able to construct rich syntactic representations in advance of verb information. This may reflect general architectural properties of the language processor, or it may only reflect a language-specific adaptation to the demands of verb-finality. The present study addresses this issue by examining whether speakers of a verb-medial language (English) wait to consult verb transitivity information before constructing filler-gap dependencies, where internal arguments are fronted and hence precede the verb. This configuration makes it possible to investigate whether the parser actively makes representational commitments on the gap position before verb transitivity information becomes available. A key prediction of the view that rich pre-verbal structure building is a general architectural property is that speakers of verb-medial languages should predictively construct dependencies in advance of verb transitivity information, and therefore that disruption should be observed when the verb has intransitive subcategorization frames that are incompatible with the predicted structure. In three reading experiments (self-paced and eye-tracking) that manipulated verb transitivity, we found evidence for reading disruption when the verb was intransitive, although no such reading difficulty was observed when the critical verb was embedded inside a syntactic island structure, which blocks filler-gap dependency completion. These results are consistent with the hypothesis that in English, as in verb-final languages, information from preverbal noun phrases is sufficient to trigger active dependency completion without having access to verb transitivity information. PMID:25914658
Knowledge portal for Six Sigma DMAIC process
NASA Astrophysics Data System (ADS)
ThanhDat, N.; Claudiu, K. V.; Zobia, R.; Lobont, Lucian
2016-08-01
Knowledge plays a crucial role in the success of DMAIC (Define, Measure, Analyze, Improve, Control) execution. It is therefore necessary to share and renew that knowledge. Yet one problem that arises is how to create a place where knowledge is collected and shared effectively. We believe that a Knowledge Portal (KP) is an important solution to this problem. In this article, work concerning requirements and functionalities for KPs is first reviewed. Afterwards, a procedure with the necessary tools to develop and implement a KP for DMAIC (KPD) is proposed. In particular, the KPD is built on the basis of free and open-source content and learning management systems and ontology engineering. To structure and store knowledge, tools such as Protégé, OWL, and OWL-RDF parsers are used. A Knowledge Reasoner module is developed in PHP, ARC2, MySQL, and a SPARQL endpoint for the purpose of querying and inferring knowledge available from the ontologies. To validate the feasibility of the procedure, a KPD is built with the proposed functionalities and tools. The authors find that the KPD benefits an organization by letting it construct Web sites itself with simple implementation steps and low initial costs. It creates a space for knowledge exchange and effectively supports collecting DMAIC reports as well as sharing the knowledge created. The authors' evaluation shows that DMAIC knowledge is found accurately, with a high success rate and good query response times.
HIGH-PRECISION BIOLOGICAL EVENT EXTRACTION: EFFECTS OF SYSTEM AND OF DATA
Cohen, K. Bretonnel; Verspoor, Karin; Johnson, Helen L.; Roeder, Chris; Ogren, Philip V.; Baumgartner, William A.; White, Elizabeth; Tipney, Hannah; Hunter, Lawrence
2013-01-01
We approached the problems of event detection, argument identification, and negation and speculation detection in the BioNLP’09 information extraction challenge through concept recognition and analysis. Our methodology involved using the OpenDMAP semantic parser with manually written rules. The original OpenDMAP system was updated for this challenge with a broad ontology defined for the events of interest, new linguistic patterns for those events, and specialized coordination handling. We achieved state-of-the-art precision for two of the three tasks, scoring the highest of 24 teams at precision of 71.81 on Task 1 and the highest of 6 teams at precision of 70.97 on Task 2. We provide a detailed analysis of the training data and show that a number of trigger words were ambiguous as to event type, even when their arguments are constrained by semantic class. The data is also shown to have a number of missing annotations. Analysis of a sampling of the comparatively small number of false positives returned by our system shows that major causes of this type of error were failing to recognize second themes in two-theme events, failing to recognize events when they were the arguments to other events, failure to recognize nontheme arguments, and sentence segmentation errors. We show that specifically handling coordination had a small but important impact on the overall performance of the system. The OpenDMAP system and the rule set are available at http://bionlp.sourceforge.net. PMID:25937701
GO Explorer: A gene-ontology tool to aid in the interpretation of shotgun proteomics data.
Carvalho, Paulo C; Fischer, Juliana Sg; Chen, Emily I; Domont, Gilberto B; Carvalho, Maria Gc; Degrave, Wim M; Yates, John R; Barbosa, Valmir C
2009-02-24
Spectral counting is a shotgun proteomics approach comprising the identification and relative quantitation of thousands of proteins in complex mixtures. However, this strategy generates bewildering amounts of data whose biological interpretation is a challenge. Here we present a new algorithm, termed GO Explorer (GOEx), that leverages the gene ontology (GO) to aid in the interpretation of proteomic data. GOEx stands out because it combines data from protein fold changes with GO over-representation statistics to help draw conclusions. Moreover, it is tightly integrated within the PatternLab for Proteomics project and, thus, lies within a complete computational environment that provides parsers and pattern recognition tools designed for spectral counting. GOEx offers three independent methods to query data: an interactive directed acyclic graph, a specialist mode where key words can be searched, and an automatic search. Its usefulness is demonstrated by applying it to help interpret the effects of perillyl alcohol, a natural chemotherapeutic agent, on glioblastoma multiforme cell lines (A172). We used a new multi-surfactant shotgun proteomic strategy and identified more than 2600 proteins; GOEx pinpointed key sets of differentially expressed proteins related to cell cycle, alcohol catabolism, the Ras pathway, apoptosis, and stress response, to name a few. GOEx facilitates organism-specific studies by leveraging GO and providing a rich graphical user interface. It is a simple-to-use tool, specialized for biologists who wish to analyze spectral counting data from shotgun proteomics. GOEx is available at http://pcarvalho.com/patternlab.
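The over-representation side of such an analysis typically reduces to a hypergeometric test; a sketch with made-up counts (not data from the paper):

```python
# Sketch: is a GO term enriched among differentially expressed proteins?
from scipy.stats import hypergeom

M = 2600   # proteins identified in the study (background)
n = 120    # of those, proteins annotated with the GO term
N = 300    # differentially expressed proteins
k = 30     # differentially expressed proteins carrying the GO term

# P(X >= k): survival function evaluated at k-1
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p = {p_value:.3g}")
```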
Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang
1999-01-01
Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
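A toy version of such a document model, with an illustrative DTD and element names (not the authors' actual model), can be validated with lxml:

```python
# Sketch: a structured component whose elements point back into the
# original narrative via character offsets, validated against a DTD.
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO(
    "<!ELEMENT report (text, findings)>"
    "<!ELEMENT text (#PCDATA)>"
    "<!ELEMENT findings (finding*)>"
    "<!ELEMENT finding EMPTY>"
    "<!ATTLIST finding code CDATA #REQUIRED start CDATA #REQUIRED end CDATA #REQUIRED>"
))

doc = etree.fromstring(
    "<report>"
    "<text>No evidence of pneumonia.</text>"
    "<findings><finding code='pneumonia' start='15' end='24'/></findings>"
    "</report>"
)
print(dtd.validate(doc))  # True: structured component conforms to the DTD
```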
Semantic Web Infrastructure Supporting NextFrAMES Modeling Platform
NASA Astrophysics Data System (ADS)
Lakhankar, T.; Fekete, B. M.; Vörösmarty, C. J.
2008-12-01
Emerging modeling frameworks offer modelers new ways to develop model applications by providing a wide range of software components to handle common modeling tasks such as managing space and time, distributing computational tasks in a parallel processing environment, performing input/output, and providing diagnostic facilities. NextFrAMES, the next-generation update to the Framework for Aquatic Modeling of the Earth System, originally developed at the University of New Hampshire and currently hosted at The City College of New York, takes a step further by hiding most of these services from the modeler behind a platform-agnostic modeling platform that allows scientists to focus on the implementation of scientific concepts, in the form of a new modeling markup language and a minimalist application programming interface that provides the means to implement model processes. At the core of the NextFrAMES modeling platform is a run-time engine that interprets the modeling markup language, loads the module plugins, establishes the model I/O, and executes the model defined by the modeling XML and the accompanying plugins. The current implementation of the run-time engine is designed for single-processor or symmetric multiprocessing (SMP) systems, but future implementations of the run-time engine optimized for different hardware architectures are anticipated. The modeling XML and the accompanying plugins define the model structure and the computational processes in a highly abstract manner, which is not only suitable for the run-time engine but has the potential to integrate into semantic web infrastructure, where intelligent parsers can extract information about the model configuration, such as input/output requirements, applicable space and time scales, and the underlying modeling processes. The NextFrAMES run-time engine itself is also designed to tap into web-enabled data services directly; therefore it can be incorporated into complex workflows to implement end-to-end applications from observation to the delivery of highly aggregated information. Our presentation will discuss the web services, ranging from OpenDAP and WaterOneFlow data services to metadata provided through catalog services, that could serve NextFrAMES modeling applications. We will also discuss the support infrastructure needed to streamline the integration of NextFrAMES into an end-to-end application to deliver highly processed information to end users. The end-to-end application will be demonstrated through examples from the State of the Global Water System effort, which builds on data services provided through WMO's Global Terrestrial Network for Hydrology to deliver water-resources-related information to policy makers for better water management. Key components of this E2E system are promoted as Community of Practice examples for the Global Earth Observation System of Systems; therefore the State of the Global Water System can be viewed as a test case for the interoperability of the incorporated web service components.
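The run-time-engine idea can be sketched in a few lines: an XML model definition names plugin modules, which the engine imports and executes. The XML schema, module names, and one-method plugin API are all assumptions for illustration:

```python
# Sketch: a minimal "run-time engine" that interprets a model XML and
# loads the named plugins. Plugin modules must be importable and are
# assumed to expose a single execute() entry point.
import importlib
import xml.etree.ElementTree as ET

MODEL_XML = """
<model name="demo">
  <module plugin="runoff_plugin"/>
  <module plugin="routing_plugin"/>
</model>
"""

def run(xml_text):
    root = ET.fromstring(xml_text)
    for node in root.findall("module"):
        # The engine knows nothing about the science inside a plugin.
        plugin = importlib.import_module(node.get("plugin"))
        plugin.execute()
```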
An OpenEarth Framework (OEF) for Integrating and Visualizing Earth Science Data
NASA Astrophysics Data System (ADS)
Moreland, J. L.; Nadeau, D. R.; Baru, C.; Crosby, C. J.
2009-12-01
The integration of data is essential to make transformative progress in understanding the complex processes operating at the Earth's surface and within its interior. While our current ability to collect massive amounts of data, develop structural models, and generate high-resolution dynamics models is well developed, our ability to quantitatively integrate these data and models into holistic interpretations of Earth systems is poorly developed. We lack the basic tools to realize a first-order goal in Earth science of developing integrated 4D models of Earth structure and processes using a complete range of available constraints, at a time when the research agenda of major efforts such as EarthScope demand such a capability. Among the challenges to 3D data integration are data that may be in different coordinate spaces, units, value ranges, file formats, and data structures. While several file format standards exist, they are infrequently or incorrectly used. Metadata is often missing, misleading, or relegated to README text files alongside the data. This leaves much of the work to integrate data bogged down by simple data management tasks. The OpenEarth Framework (OEF) being developed by GEON addresses these data management difficulties. The software incorporates file format parsers, data interpretation heuristics, user interfaces to prompt for missing information, and visualization techniques to merge data into a common visual model. The OEF's data access libraries parse formal and de facto standard file formats and map their data into a common data model. The software handles file format quirks, storage details, caching, local and remote file access, and web service protocol handling. Heuristics are used to determine coordinate spaces, units, and other key data features. Where multiple data structure, naming, and file organization conventions exist, those heuristics check for each convention's use to find a high-confidence interpretation of the data. When no convention or embedded data yields a suitable answer, the user is prompted to fill in the blanks. The OEF's interaction libraries assist in the construction of user interfaces for data management. These libraries support data import, data prompting, data introspection, the management of the contents of a common data model, and the creation of derived data to support visualization. Finally, visualization libraries provide interactive visualization using an extended version of NASA WorldWind. The OEF viewer supports visualization of terrains, point clouds, 3D volumes, imagery, cutting planes, isosurfaces, and more. Data may be color coded, shaded, and displayed above or below the terrain, and always registered into a common coordinate space. The OEF architecture is open, and cross-platform software libraries are available separately for use with other software projects, while modules from other projects may be integrated into the OEF to extend its features. The OEF is currently being used to visualize data from EarthScope-related research in the Western US.
Local anaphor licensing in an SOV language: implications for retrieval strategies
Kush, Dave; Phillips, Colin
2014-01-01
Because morphological and syntactic constraints govern the distribution of potential antecedents for local anaphors, local antecedent retrieval might be expected to make equal use of both syntactic and morphological cues. However, previous research (e.g., Dillon et al., 2013) has shown that local antecedent retrieval is not susceptible to the same morphological interference effects observed during the resolution of morphologically-driven grammatical dependencies, such as subject-verb agreement checking (e.g., Pearlmutter et al., 1999). Although this lack of interference has been taken as evidence that syntactic cues are given priority over morphological cues in local antecedent retrieval, the absence of interference could also be the result of a confound in the materials used: the post-verbal position of local anaphors in prior studies may obscure morphological interference that would otherwise be visible if the critical anaphor were in a different position. We investigated the licensing of local anaphors (reciprocals) in Hindi, an SOV language, in order to determine whether pre-verbal anaphors are subject to morphological interference from feature-matching distractors in a way that post-verbal anaphors are not. Computational simulations using a version of the ACT-R parser (Lewis and Vasishth, 2005) predicted that a feature-matching distractor should facilitate the processing of an unlicensed reciprocal if morphological cues are used in antecedent retrieval. In a self-paced reading study we found no evidence that distractors eased processing of an unlicensed reciprocal. However, the presence of a distractor increased difficulty of processing following the reciprocal. We discuss the significance of these results for theories of cue selection in retrieval. PMID:25414680
Computer-assisted update of a consumer health vocabulary through mining of social network data.
Doing-Harris, Kristina M; Zeng-Treitler, Qing
2011-05-17
Consumer health vocabularies (CHVs) have been developed to aid consumer health informatics applications. This purpose is best served if the vocabulary evolves with consumers' language. Our objective was to create a computer assisted update (CAU) system that works with live corpora to identify new candidate terms for inclusion in the open access and collaborative (OAC) CHV. The CAU system consisted of three main parts: a Web crawler and an HTML parser, a candidate term filter that utilizes natural language processing tools including term recognition methods, and a human review interface. In evaluation, the CAU system was applied to the health-related social network website PatientsLikeMe.com. The system's utility was assessed by comparing the candidate term list it generated to a list of valid terms hand extracted from the text of the crawled webpages. The CAU system identified 88,994 unique terms 1- to 7-grams ("n-grams" are n consecutive words within a sentence) in 300 crawled PatientsLikeMe.com webpages. The manual review of the crawled webpages identified 651 valid terms not yet included in the OAC CHV or the Unified Medical Language System (UMLS) Metathesaurus, a collection of vocabularies amalgamated to form an ontology of medical terms, (ie, 1 valid term per 136.7 candidate n-grams). The term filter selected 774 candidate terms, of which 237 were valid terms, that is, 1 valid term among every 3 or 4 candidates reviewed. The CAU system is effective for generating a list of candidate terms for human review during CHV development.
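The candidate-generation step described above amounts to enumerating word n-grams; a minimal sketch (the tokenization rule and sample text are arbitrary):

```python
# Sketch: enumerate all 1- to 7-grams from crawled text, the raw
# candidate pool that the term filter then narrows for human review.
import re

def ngrams(text, max_n=7):
    words = re.findall(r"[a-z0-9'-]+", text.lower())
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

candidates = set(ngrams("I get brain fog and muscle twitches most mornings"))
print(len(candidates), sorted(candidates)[:5])
```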
Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun
2015-04-01
The objectives of this research were to develop and evaluate a cell phone application based on the standard protocol for personal health devices and the standard information model for personal health records to support effective blood glucose management and standardized service for patients with diabetes. An application was developed for Android 4.0.3. In addition, an IEEE 11073 Manager, a Medical Device Encoding Rule, and a Bluetooth Health Device Profile Connector were developed for standardized health communication with a glucometer, and a Continuity of Care Document (CCD) Composer and CCD Parser were developed for CCD document exchange. The developed application was evaluated by five healthcare professionals and 87 users through a questionnaire comprising the following variables: usage intention, effort expectancy, social influence, facilitating condition, perceived risk, and voluntariness. The usability evaluation confirmed that the developed application is useful for blood glucose self-monitoring by diabetic patients. In particular, the healthcare professionals noted that the application is useful for observing trends in blood glucose through its automatic function, which records blood glucose levels measured over Bluetooth, and its function for checking accumulated blood glucose records. The evaluation of usage intention scored 3.52 ± 0.42 out of 5 points. Verification by healthcare professionals confirmed that accurate feedback can be provided to healthcare professionals during the management of diabetic patients or education for glucose management.
3D gain modeling of LMJ and NIF amplifiers
NASA Astrophysics Data System (ADS)
LeTouze, Geoffroy; Cabourdin, Olivier; Mengue, J. F.; Guenet, Mireille; Grebot, Eric; Seznec, Stephane E.; Jancaitis, Kenneth S.; Marshall, Christopher D.; Zapata, Luis E.; Erlandson, A. E.
1999-07-01
A 3D ray-trace model has been developed to predict the performance of flashlamp-pumped laser amplifiers. The computer program, written in C++, includes a graphical display option using the Open Inventor library, as well as a parser and a loader allowing the user to easily model complex multi-segment amplifier systems. It runs both on a workstation cluster at LLNL and on the T3E Cray at CEA. We will discuss how we have reduced the required computation time without changing precision by optimizing the parameters which set the discretization level of the calculation. As an example, the sampling of calculation points is chosen to fit the pumping profile through the thickness of amplifier slabs. We will show the difference in pump rates with our latest model as opposed to those produced by our earlier 2.5D code AmpModel. We will also present the results of calculations which model surfaces and other 3D effects, such as top and bottom reflector positions and reflectivity, which could not be included in the 2.5D model. This new computer model also includes a full 3D calculation of the amplified spontaneous emission rate in the laser slab, as opposed to the 2.5D model, which tracked only the variation in the gain across the transverse dimensions of the slab. We will present the impact of this evolution of the model on the predicted stimulated decay rate and the resulting gain distribution. Comparisons with the most recent AmpLab experimental results will be presented for the different typical NIF and LMJ configurations.
Critical evaluation of reverse engineering tool Imagix 4D!
Yadav, Rashmi; Patel, Ravindra; Kothari, Abhay
2016-01-01
Legacy code is difficult to comprehend. Various commercial reengineering tools are available, each with a unique working style and its own inherent capabilities and shortcomings. The focus of the available tools is on visualizing static behavior, not dynamic behavior. This makes life difficult for people who work in software product maintenance, code understanding, and reengineering/reverse engineering. Consequently, the need for a comprehensive reengineering/reverse engineering tool arises. We found Imagix 4D useful, as it generates the most extensive pictorial representations in the form of flow charts, flow graphs, class diagrams, metrics and, to a partial extent, dynamic visualizations. We evaluated Imagix 4D with the help of a case study involving a few samples of source code. The behavior of the tool was analyzed on multiple small code samples and on a large code base, the gcc C parser. The large-code evaluation was performed to uncover dead code, unstructured code, and the effect of not including required files at the preprocessing level. The utility of Imagix 4D in preparing decision density and complexity metrics for a large code base was found helpful in determining how much reengineering is required. At the outset, Imagix 4D showed limitations in dynamic visualization, flow chart separation (for large code), and parsing loops. The outcome of the evaluation will eventually help in upgrading Imagix 4D, and it points to the need for full-featured tools in the area of software reengineering/reverse engineering. It will also help the research community, especially those interested in software reengineering tool building.
A new polymorphic and multicopy MHC gene family related to nonmammalian class I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leelayuwat, C.; Degli-Esposti, M.A.; Abraham, L.J.
1994-12-31
The authors have used genomic analysis to characterize a region of the central major histocompatibility complex (MHC) spanning approximately 300 kilobases (kb) between TNF and HLA-B. This region has been suggested to carry genetic factors relevant to the development of autoimmune diseases such as myasthenia gravis (MG) and insulin-dependent diabetes mellitus (IDDM). Genomic sequence was analyzed for coding potential using two neural network programs, GRAIL and GeneParser. A genomic probe, JAB, containing putative coding sequences (PERB11) located 60 kb centromeric of HLA-B, was used for northern analysis of human tissues. Multiple transcripts were detected. Southern analysis of genomic DNA and overlapping YAC clones, covering the region from BAT1 to HLA-F, indicated that there are at least five copies of PERB11, four of which are located within this region of the MHC. The partial cDNA sequence of PERB11 was obtained from poly-A RNA derived from skeletal muscle. The putative amino acid sequence of PERB11 shares approximately 30% identity with MHC class I molecules from various species, including reptiles, chickens, and frogs, as well as with other MHC class I-like molecules, such as the IgG FcR of the mouse and rat and the human Zn-α2-glycoprotein. From direct comparison of amino acid sequences, it is concluded that PERB11 is a distinct molecule more closely related to nonmammalian than to known mammalian MHC class I molecules. Genomic sequence analysis of PERB11 from five MHC ancestral haplotypes (AH) indicated that the gene is polymorphic at both the DNA and protein levels. The results suggest that the authors have identified a novel polymorphic gene family with multiple copies within the MHC. 48 refs., 10 figs., 2 tabs.
Normalization of relative and incomplete temporal expressions in clinical narratives.
Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem
2015-09-01
To improve the normalization of relative and incomplete temporal expressions (RI-TIMEXes) in clinical narratives. We analyzed the RI-TIMEXes in temporally annotated corpora and propose two hypotheses regarding the normalization of RI-TIMEXes in the clinical narrative domain: the anchor point hypothesis and the anchor relation hypothesis. We annotated the RI-TIMEXes in three corpora to study the characteristics of RI-TIMEXes in different domains. This informed the design of our RI-TIMEX normalization system for the clinical domain, which consists of an anchor point classifier, an anchor relation classifier, and a rule-based RI-TIMEX text span parser. We experimented with different feature sets and performed an error analysis for each system component. The annotation confirmed our hypotheses: the RI-TIMEX normalization task can be simplified by using two multi-label classifiers. Our system achieves anchor point classification, anchor relation classification, and rule-based parsing accuracy of 74.68%, 87.71%, and 57.2% (82.09% under relaxed matching criteria), respectively, on the held-out test set of the 2012 i2b2 temporal relation challenge. Experiments with feature sets reveal some interesting findings, such as that the verbal tense feature does not inform the anchor relation classification in clinical narratives as much as the tokens near the RI-TIMEX. Error analysis showed that underrepresented anchor point and anchor relation classes are difficult to detect. We formulate the RI-TIMEX normalization problem as a pair of multi-label classification problems. Considering only RI-TIMEX extraction and normalization, the system achieves a statistically significant improvement over the RI-TIMEX results of the best systems in the 2012 i2b2 challenge.
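To give a flavor of the rule-based text span parser component, the sketch below uses regular expressions to find candidate relative ("two days before admission") and incomplete ("next week") temporal expressions. The patterns are illustrative assumptions only, not the system's actual rule set.

```python
import re

# Illustrative rules only: the published system's rule set is richer than
# the two assumed patterns shown here.
RELATIVE = re.compile(
    r'\b(\d+|a|an|one|two|three)\s+'
    r'(day|week|month|year|hour)s?\s+'
    r'(before|after|prior to|following)\b', re.I)
INCOMPLETE = re.compile(r'\b(last|next|this)\s+(week|month|year|\w+day)\b', re.I)

def find_ri_timex_spans(text: str):
    """Return (start, end, matched text) spans for candidate RI-TIMEXes."""
    spans = []
    for pattern in (RELATIVE, INCOMPLETE):
        spans += [(m.start(), m.end(), m.group(0)) for m in pattern.finditer(text)]
    return sorted(spans)

print(find_ri_timex_spans("Symptoms began two days before admission; follow-up next week."))
```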
Xyce Parallel Electronic Simulator Reference Guide Version 6.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Mei, Ting; Russo, Thomas V.
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide [1]. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1]. Trademarks The information herein is subject to change without notice. Copyright © 2002-2015 Sandia Corporation. All rights reserved. Xyce™ Electronic Simulator and Xyce™ are trademarks of Sandia Corporation. Portions of the Xyce™ code are: Copyright © 2002, The Regents of the University of California. Produced at the Lawrence Livermore National Laboratory. Written by Alan Hindmarsh, Allan Taylor, Radu Serban. UCRL-CODE-2002-59 All rights reserved. Orcad, Orcad Capture, PSpice and Probe are registered trademarks of Cadence Design Systems, Inc. Microsoft, Windows and Windows 7 are registered trademarks of Microsoft Corporation. Medici, DaVinci and Taurus are registered trademarks of Synopsys Corporation. Amtec and TecPlot are trademarks of Amtec Engineering, Inc. Xyce's expression library is based on that inside Spice 3F5 developed by the EECS Department at the University of California. The EKV3 MOSFET model was developed by the EKV Team of the Electronics Laboratory-TUC of the Technical University of Crete. All other trademarks are property of their respective owners. Contacts Bug Reports (Sandia only) http://joseki.sandia.gov/bugzilla http://charleston.sandia.gov/bugzilla World Wide Web http://xyce.sandia.gov http://charleston.sandia.gov/xyce (Sandia only) Email xyce@sandia.gov (outside Sandia) xyce-sandia@sandia.gov (Sandia only)
Kim, Hwa Sun; Cho, Hune
2011-01-01
Objectives The Health Level Seven Interface Engine (HL7 IE), developed by Kyungpook National University, has been employed in health information systems; however, users without a background in programming have reported difficulties in using it. Therefore, we developed a graphical user interface (GUI) engine to make the use of the HL7 IE more convenient. Methods The GUI engine was directly connected with the HL7 IE to handle HL7 version 2.x messages. Furthermore, the information exchange rules (called the mapping data), represented by a conceptual graph in the GUI engine, were transformed into program objects that were made available to the HL7 IE; the mapping data were stored as binary files for reuse. The usefulness of the GUI engine was examined through information exchange tests between an HL7 version 2.x message and a health information database system. Results Users could easily create HL7 version 2.x messages by creating a conceptual graph through the GUI engine without requiring assistance from programmers. In addition, time could be saved when creating new information exchange rules by reusing the stored mapping data. Conclusions The GUI engine was not able to incorporate information types (e.g., extensible markup language, XML) other than the HL7 version 2.x messages and the database, because it was designed exclusively for the HL7 IE protocol. However, in future work, by including additional parsers to manage XML-based information such as Continuity of Care Documents (CCD) and Continuity of Care Records (CCR), we plan to ensure that the GUI engine will be more widely accessible for the health field. PMID:22259723
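For readers unfamiliar with the HL7 version 2.x wire format handled by the engine, the sketch below parses a pipe-delimited v2.x message into segments and fields. It is a minimal illustration, not the HL7 IE's implementation, and it ignores v2.x subtleties such as escape sequences and the special field numbering of the MSH segment.

```python
# Minimal sketch of HL7 v2.x segment parsing; field positions follow the
# standard v2.x layout (segments end with carriage returns, fields with '|').
SAMPLE = "\r".join([
    "MSH|^~\\&|SENDER|HOSP|RECEIVER|LAB|202401151200||ADT^A01|MSG001|P|2.5",
    "PID|1||12345^^^HOSP^MR||DOE^JOHN||19800101|M",
])

def parse_hl7(message: str) -> dict:
    """Split a v2.x message into {segment_id: [list of field lists]}."""
    segments = {}
    for line in filter(None, message.split("\r")):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

msg = parse_hl7(SAMPLE)
# PID-5 is the patient name field; components are separated by '^'.
print(msg["PID"][0][4].split("^"))  # ['DOE', 'JOHN']
```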
A New Paradigm to Analyze Data Completeness of Patient Data.
Nasir, Ayan; Gurupur, Varadraj; Liu, Xinliang
2016-08-03
There is a need to develop a tool that will measure data completeness of patient records using sophisticated statistical metrics. Patient data integrity is important in providing timely and appropriate care. Completeness is an important step, with an emphasis on understanding the complex relationships between data fields and their relative importance in delivering care. This tool will not only help understand where data problems are but also help uncover the underlying issues behind them. Develop a tool that can be used alongside a variety of health care database software packages to determine the completeness of individual patient records as well as aggregate patient records across health care centers and subpopulations. The methodology of this project is encapsulated within the Data Completeness Analysis Package (DCAP) tool, with the major components including concept mapping, CSV parsing, and statistical analysis. The results from testing DCAP with Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) data show that this tool is successful in identifying relative data completeness at the patient, subpopulation, and database levels. These results also solidify a need for further analysis and call for hypothesis driven research to find underlying causes for data incompleteness. DCAP examines patient records and generates statistics that can be used to determine the completeness of individual patient data as well as the general thoroughness of record keeping in a medical database. DCAP uses a component that is customized to the settings of the software package used for storing patient data as well as a Comma Separated Values (CSV) file parser to determine the appropriate measurements. DCAP itself is assessed through a proof of concept exercise using hypothetical data as well as available HCUP SID patient data.
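A minimal sketch of the kind of completeness measurement DCAP performs is shown below: per-field fill rates computed from a CSV export, plus a weighted record-level score reflecting the relative importance of fields. The field names and weights are hypothetical; DCAP's actual concept mapping and statistics are more elaborate.

```python
import csv
from collections import Counter

# Hypothetical field weights standing in for DCAP's notion that some
# fields matter more than others when delivering care.
WEIGHTS = {"patient_id": 1.0, "diagnosis": 0.8, "discharge_date": 0.5}

def completeness(csv_path: str) -> dict:
    """Fraction of non-empty values per column of a CSV file."""
    filled, rows = Counter(), 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            rows += 1
            for name, value in row.items():
                filled[name] += bool(value and value.strip())
    return {name: filled[name] / rows for name in filled} if rows else {}

def weighted_score(per_field: dict, weights: dict = WEIGHTS) -> float:
    """Aggregate completeness, weighting important fields more heavily."""
    total = sum(weights.values())
    return sum(per_field.get(f, 0.0) * w for f, w in weights.items()) / total
```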
A New Paradigm to Analyze Data Completeness of Patient Data
Nasir, Ayan; Liu, Xinliang
2016-01-01
Summary Background There is a need to develop a tool that will measure data completeness of patient records using sophisticated statistical metrics. Patient data integrity is important in providing timely and appropriate care. Completeness is an important step, with an emphasis on understanding the complex relationships between data fields and their relative importance in delivering care. This tool will not only help understand where data problems are but also help uncover the underlying issues behind them. Objectives Develop a tool that can be used alongside a variety of health care database software packages to determine the completeness of individual patient records as well as aggregate patient records across health care centers and subpopulations. Methods The methodology of this project is encapsulated within the Data Completeness Analysis Package (DCAP) tool, with the major components including concept mapping, CSV parsing, and statistical analysis. Results The results from testing DCAP with Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) data show that this tool is successful in identifying relative data completeness at the patient, subpopulation, and database levels. These results also solidify a need for further analysis and call for hypothesis driven research to find underlying causes for data incompleteness. Conclusion DCAP examines patient records and generates statistics that can be used to determine the completeness of individual patient data as well as the general thoroughness of record keeping in a medical database. DCAP uses a component that is customized to the settings of the software package used for storing patient data as well as a Comma Separated Values (CSV) file parser to determine the appropriate measurements. DCAP itself is assessed through a proof of concept exercise using hypothetical data as well as available HCUP SID patient data. PMID:27484918
Durand, Patrick; Labarre, Laurent; Meil, Alain; Divo, Jean-Louis; Vandenbrouck, Yves; Viari, Alain; Wojcik, Jérôme
2006-01-17
A large variety of biological data can be represented by graphs. These graphs can be constructed from heterogeneous data coming from genomic and post-genomic technologies, but there is still a need for tools for exploring and analysing such graphs. This paper describes GenoLink, a software platform for the graphical querying and exploration of graphs. GenoLink provides a generic framework for representing and querying data graphs. This framework provides a graph data structure, a graph query engine for retrieving sub-graphs from the entire data graph, and several graphical interfaces to express such queries and to further explore their results. A query consists of a graph pattern with constraints attached to the vertices and edges. A query result is the set of all sub-graphs of the entire data graph that are isomorphic to the pattern and satisfy the constraints. The graph data structure does not rely upon any particular data model but can dynamically accommodate any user-supplied data model. However, for genomic and post-genomic applications, we provide a default data model and several parsers for the most popular data sources. GenoLink does not require any programming skill, since all operations on graphs and the analysis of the results can be carried out graphically through several dedicated graphical interfaces. GenoLink is a generic and interactive tool allowing biologists to graphically explore various sources of information. GenoLink is distributed either as a standalone application or as a component of the Genostar/Iogma platform. Both distributions are free for academic research and teaching purposes and can be requested at academy@genostar.com. A commercial licence can be obtained for for-profit companies at info@genostar.com. See also http://www.genostar.org.
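The query model described above, sub-graphs isomorphic to a pattern under vertex and edge constraints, can be illustrated in a few lines. GenoLink exposes this graphically; the sketch below reproduces the idea with the networkx library and a toy gene/protein graph.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy data graph: two genes, two proteins, typed vertices and edges.
data = nx.Graph()
data.add_edge("geneA", "protA", kind="codes_for")
data.add_edge("protA", "protB", kind="interacts")
data.add_edge("geneB", "protB", kind="codes_for")
nx.set_node_attributes(data, {"geneA": "gene", "geneB": "gene",
                              "protA": "protein", "protB": "protein"}, "type")

# Query pattern: a gene vertex linked to a protein vertex by a codes_for edge.
pattern = nx.Graph()
pattern.add_edge("g", "p", kind="codes_for")
nx.set_node_attributes(pattern, {"g": "gene", "p": "protein"}, "type")

matcher = isomorphism.GraphMatcher(
    data, pattern,
    node_match=lambda n1, n2: n1["type"] == n2["type"],
    edge_match=lambda e1, e2: e1["kind"] == e2["kind"])
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)  # e.g. {'geneA': 'g', 'protA': 'p'}
```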
Kim, Hwa Sun; Cho, Hune; Lee, In Keun
2011-12-01
The Health Level Seven Interface Engine (HL7 IE), developed by Kyungpook National University, has been employed in health information systems; however, users without a background in programming have reported difficulties in using it. Therefore, we developed a graphical user interface (GUI) engine to make the use of the HL7 IE more convenient. The GUI engine was directly connected with the HL7 IE to handle HL7 version 2.x messages. Furthermore, the information exchange rules (called the mapping data), represented by a conceptual graph in the GUI engine, were transformed into program objects that were made available to the HL7 IE; the mapping data were stored as binary files for reuse. The usefulness of the GUI engine was examined through information exchange tests between an HL7 version 2.x message and a health information database system. Users could easily create HL7 version 2.x messages by creating a conceptual graph through the GUI engine without requiring assistance from programmers. In addition, time could be saved when creating new information exchange rules by reusing the stored mapping data. The GUI engine was not able to incorporate information types (e.g., extensible markup language, XML) other than the HL7 version 2.x messages and the database, because it was designed exclusively for the HL7 IE protocol. However, in future work, by including additional parsers to manage XML-based information such as Continuity of Care Documents (CCD) and Continuity of Care Records (CCR), we plan to ensure that the GUI engine will be more widely accessible for the health field.
Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)
NASA Astrophysics Data System (ADS)
Votava, P.; Michaelis, A.; Nemani, R. R.
2013-12-01
NASA Earth Exchange (NEX) is a data, computing and knowledge collaboratory that houses NASA satellite, climate and ancillary data, where a focused community can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of NEX users and providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. Knowledge contributions to NEX can take many forms: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process covers three main facets - data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways, ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community, where knowledge is both generated and easily shared.
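The abstract does not detail the entity-extraction step, so the following is only one plausible realization using spaCy's pretrained named-entity recognizer (it assumes the en_core_web_sm model is installed; NEX's actual pipeline may differ entirely).

```python
# Illustrative only: one common way to extract entities from web-based
# text before linking them into a knowledge graph.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model has been downloaded
doc = nlp("MODIS land cover data were processed on the Pleiades "
          "supercomputer at NASA Ames Research Center.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # entity surface form and predicted type
```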
Durand, Patrick; Labarre, Laurent; Meil, Alain; Divo, Jean-Louis; Vandenbrouck, Yves; Viari, Alain; Wojcik, Jérôme
2006-01-01
Background A large variety of biological data can be represented by graphs. These graphs can be constructed from heterogeneous data coming from genomic and post-genomic technologies, but there is still a need for tools for exploring and analysing such graphs. This paper describes GenoLink, a software platform for the graphical querying and exploration of graphs. Results GenoLink provides a generic framework for representing and querying data graphs. This framework provides a graph data structure, a graph query engine for retrieving sub-graphs from the entire data graph, and several graphical interfaces to express such queries and to further explore their results. A query consists of a graph pattern with constraints attached to the vertices and edges. A query result is the set of all sub-graphs of the entire data graph that are isomorphic to the pattern and satisfy the constraints. The graph data structure does not rely upon any particular data model but can dynamically accommodate any user-supplied data model. However, for genomic and post-genomic applications, we provide a default data model and several parsers for the most popular data sources. GenoLink does not require any programming skill, since all operations on graphs and the analysis of the results can be carried out graphically through several dedicated graphical interfaces. Conclusion GenoLink is a generic and interactive tool allowing biologists to graphically explore various sources of information. GenoLink is distributed either as a standalone application or as a component of the Genostar/Iogma platform. Both distributions are free for academic research and teaching purposes and can be requested at academy@genostar.com. A commercial licence can be obtained for for-profit companies at info@genostar.com. See also http://www.genostar.org. PMID:16417636
The state and profile of open source software projects in health and medical informatics.
Janamanchi, Balaji; Katsamakas, Evangelos; Raghupathi, Wullianallur; Gao, Wei
2009-07-01
Little has been published about the application profiles and development patterns of open source software (OSS) in health and medical informatics. This study explores these issues with an analysis of health and medical informatics related OSS projects on SourceForge, a large repository of open source projects. A search was conducted on the SourceForge website during the period from May 1 to 15, 2007, to identify health and medical informatics OSS projects. This search resulted in a sample of 174 projects. A Java-based parser was written to extract data for several key variables of each project. Descriptive statistics and visualizations were generated to analyze the profiles of the OSS projects. Many of the projects have sponsors, implying a growing interest in OSS among organizations. Sponsorship, we discovered, has a significant impact on project success metrics. Nearly two-thirds of the projects have a restrictive license type. Restrictive licensing may indicate tighter control over the development process. Our sample includes a wide range of projects that are at various stages of development (status). Projects targeted towards the advanced end user are primarily focused on bio-informatics, data formats, database and medical science applications. We conclude that there exists an active and thriving OSS development community that is focusing on health and medical informatics. A wide range of OSS applications are in development, from bio-informatics to hospital information systems. A profile of OSS in health and medical informatics emerges that is distinct and unique to the health care field. Future research can focus on OSS acceptance and diffusion and impact on cost, efficiency and quality of health care.
Dependency-based Siamese long short-term memory network for learning sentence representations
Zhu, Wenhao; Ni, Jianyue; Wei, Baogang; Lu, Zhiguo
2018-01-01
Textual representations play an important role in the field of natural language processing (NLP). The efficiency of NLP tasks, such as text comprehension and information extraction, can be significantly improved with proper textual representations. As neural networks are gradually applied to learn the representation of words and phrases, fairly efficient models for learning short text representations have been developed, such as the continuous bag of words (CBOW) and skip-gram models, and they have been extensively employed in a variety of NLP tasks. Because longer texts, such as sentences, have more complex structure, algorithms appropriate for learning short textual representations are not applicable to learning long textual representations. One method of learning long textual representations is the Long Short-Term Memory (LSTM) network, which is suitable for processing sequences. However, the standard LSTM does not adequately address the primary sentence structure (subject, predicate and object), which is an important factor for producing appropriate sentence representations. To resolve this issue, this paper proposes the dependency-based LSTM model (D-LSTM). The D-LSTM divides a sentence representation into two parts: a basic component and a supporting component. The D-LSTM uses a pre-trained dependency parser to obtain the primary sentence information and generate supporting components, and it also uses a standard LSTM model to generate the basic sentence components. A weight factor that can adjust the ratio of the basic and supporting components in a sentence is introduced to generate the sentence representation. Compared with the representation learned by the standard LSTM, the sentence representation learned by the D-LSTM contains a greater amount of useful information. The experimental results show that the D-LSTM is superior to the standard LSTM on the Sentences Involving Compositional Knowledge (SICK) data set. PMID:29513748
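A minimal sketch of the D-LSTM combination step follows. It assumes the LSTM hidden states are already computed; the mean pooling, the choice of head positions, and the weight factor alpha are illustrative stand-ins for the paper's exact formulation.

```python
import numpy as np

# Sketch of combining a basic component (whole-sentence LSTM states) with
# a supporting component (states at subject/predicate/object positions
# selected by a dependency parser). Shapes and alpha are assumptions.
def sentence_representation(h_all: np.ndarray, head_idx: list, alpha: float = 0.6):
    """h_all: (seq_len, dim) hidden states; head_idx: parser-selected positions."""
    basic = h_all.mean(axis=0)              # basic component over all tokens
    support = h_all[head_idx].mean(axis=0)  # supporting component from parse heads
    return alpha * basic + (1.0 - alpha) * support

h = np.random.randn(7, 4)  # toy hidden states for a 7-token sentence
print(sentence_representation(h, head_idx=[0, 2, 5]).shape)  # (4,)
```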
Deductive Coordination of Multiple Geospatial Knowledge Sources
NASA Astrophysics Data System (ADS)
Waldinger, R.; Reddy, M.; Culy, C.; Hobbs, J.; Jarvis, P.; Dungan, J. L.
2002-12-01
Deductive inference is applied to choreograph the cooperation of multiple knowledge sources to respond to geospatial queries. When no one source can provide an answer, the response may be deduced from pieces of the answer provided by many sources. Examples of sources include (1) the Alexandria Digital Library Gazetteer, a repository that gives the locations for almost six million place names, (2) the CIA World Factbook, an online almanac with basic information about more than 200 countries, (3) the SRI TerraVision 3D Terrain Visualization System, which displays a flight-simulator-like interactive display of geographic data held in a database, (4) the NASA GDACC WebGIS client for searching satellite and other geographic data available through OpenGIS Consortium (OGC) Web Map Servers, and (5) the Northern Arizona University Latitude/Longitude Distance Calculator. Queries are phrased in English and are translated into logical theorems by the Gemini Natural Language Parser. The theorems are proved by SNARK, a first-order-logic theorem prover, in the context of an axiomatic geospatial theory. The theory embodies a representational scheme that takes into account the fact that the same place may have many names, and the same name may refer to many places. SNARK has built-in procedures (RCC8 and the Allen calculus, respectively) for reasoning about spatial and temporal concepts. External knowledge sources may be consulted by SNARK as the proof is in progress, so that most knowledge need not be stored axiomatically. The Open Agent Architecture (OAA) facilitates communication between sources that may be implemented on different machines in different computer languages. An answer to the query, in the form of text or an image, is extracted from the proof. Currently, three-dimensional images are displayed by TerraVision, but other displays are possible. The combined system is called Geo-Logica. Some example queries that can be handled by Geo-Logica include: (1) show the petrified forests in Oregon north of Portland, (2) show the lake in Argentina with the highest elevation, and (3) show the IGBP land cover classification of Montana, derived using MODIS, for July 2000. Use of a theorem prover allows sources to cooperate even if they adopt different notational conventions and representation schemes and were never designed to work together. New sources can be added without reprogramming the system, by providing axioms that advertise their capabilities. Future directions include entering into a dialogue with the user to clarify ambiguities, elaborate on previous questions, or provide new information necessary to answer the question. In addition, of particular interest is dealing with temporally varying data, with answers displayed as animated images.
A systems approach for designing a radio station layout for the U.S. National Airspace
NASA Astrophysics Data System (ADS)
Boci, Erton S.
Today's National Airspace System (NAS) is managed using an aging surveillance radar system. Current radar technology is not adequate to sustain the rapid growth of the commercial, civil, and federal aviation sectors and cannot be adapted to use emerging 21st century airspace surveillance technologies. With 87,000 flights to manage per day, America's ground-based radar system has hit a growth ceiling. Consequently, the FAA has embarked on a broad-reaching effort called the Next Generation Air Transportation System (NextGen) that seeks to transform today's aviation airspace management and ensure increased safety and capacity in our NAS. This dissertation presents a systems approach to Service Volume (SV) engineering, a relatively new field of engineering that has emerged in support of the FAA's Automatic Dependent Surveillance -- Broadcast (ADS-B) Air Traffic Modernization Program. SV engineering is responsible for a radio station layout design that provides the required radio frequency (RF) coverage over a set of Service Volumes, each of which represents a section of controlled airspace that is served by a particular air control facility or service. The radio station layout must be optimized to meet system performance, safety, and interference requirements while minimizing the number of radio station sites required to provide RF coverage of the entire airspace of the United States. The interference level requirements at the victim (of interference) receivers are the most important and stringent requirements imposed on the ADS-B radio station layout and configuration. In this dissertation, we show a novel and practical way to achieve this optimality by developing and employing several key techniques, such as reverse radio line-of-sight (RLOS) and complex entity-relationship modeling, to address the greater challenges of engineering this complex system. Given that numerous NAS radar facilities are clustered together in relatively close proximity to each other, we can optimize site selection placement for coverage through a process of coverage aggregation if we anticipate and leverage the emergent properties that manifest from their aggregation. This optimization process across the NAS significantly reduces the total number of RS sites necessary for complete coverage. Furthermore, in this dissertation, we show the approach taken to develop an entity-relationship model that supports the data capture and distribution of RF SV design. We utilize the CORE software environment to develop a geospatial/RF design entity-relationship (ER) model schema that, in conjunction with the development of several advanced parsers, facilitates effective data management and the communication of complex model logical and parametric detail. Author's note: While the modern standard for scientific papers is to use the International System of Units (SI), this paper was written using the units of measure of the civilian aviation domain to make this research accessible and useful to that community.
The eNanoMapper database for nanomaterial safety information
Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon
2015-01-01
Summary Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR). PMID:26425413
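The retrieve-then-analyze pattern enabled by the REST API can be sketched as follows. The base URL, endpoint path, and response fields here are placeholders rather than the documented eNanoMapper API; consult the project's API documentation for the real paths.

```python
import requests

# Placeholder base URL and field names: this only sketches the pattern of
# fetching assay data over a REST API before preprocessing and modelling.
BASE = "https://example.org/enanomapper/api"  # hypothetical endpoint

def fetch_substances(query: str) -> list:
    """GET a substance search and return the parsed JSON list."""
    resp = requests.get(f"{BASE}/substance", params={"search": query}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("substance", [])

if __name__ == "__main__":
    for s in fetch_substances("TiO2"):
        print(s.get("name"), s.get("ownerName"))
```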
Text data extraction for a prospective, research-focused data mart: implementation and validation
2012-01-01
Background Translational research typically requires data abstracted from medical records as well as data collected specifically for research. Unfortunately, many data within electronic health records are represented as text that is not amenable to aggregation for analyses. We present a scalable open source SQL Server Integration Services package, called Regextractor, for including regular expression parsers into a classic extract, transform, and load workflow. We have used Regextractor to abstract discrete data from textual reports from a number of ‘machine generated’ sources. To validate this package, we created a pulmonary function test data mart and analyzed the quality of the data mart versus manual chart review. Methods Eleven variables from pulmonary function tests performed closest to the initial clinical evaluation date were studied for 100 randomly selected subjects with scleroderma. One research assistant manually reviewed, abstracted, and entered relevant data into a database. Correlation with data obtained from the automated pulmonary function test data mart within the Northwestern Medical Enterprise Data Warehouse was determined. Results There was a near perfect (99.5%) agreement between results generated from the Regextractor package and those obtained via manual chart abstraction. The pulmonary function test data mart has been used subsequently to monitor disease progression of patients in the Northwestern Scleroderma Registry. In addition to the pulmonary function test example presented in this manuscript, the Regextractor package has been used to create cardiac catheterization and echocardiography data marts. The Regextractor package was released as open source software in October 2009 and has been downloaded 552 times as of 6/1/2012. Conclusions Collaboration between clinical researchers and biomedical informatics experts enabled the development and validation of a tool (Regextractor) to parse, abstract and assemble structured data from text data contained in the electronic health record. Regextractor has been successfully used to create additional data marts in other medical domains and is available to the public. PMID:22970696
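In the spirit of Regextractor's regular-expression approach, the sketch below pulls a few discrete values out of machine-generated pulmonary function test text. The variable names and report wording are assumptions, not the package's actual rules (Regextractor itself runs inside SQL Server Integration Services rather than Python).

```python
import re

# Illustrative patterns for structuring 'machine generated' report text;
# real reports vary, and Regextractor's rule set is more extensive.
PATTERNS = {
    "fvc_liters":  re.compile(r"FVC[:\s]+([\d.]+)\s*L", re.I),
    "fev1_liters": re.compile(r"FEV1[:\s]+([\d.]+)\s*L", re.I),
    "dlco_pct":    re.compile(r"DLCO[:\s]+([\d.]+)\s*%\s*predicted", re.I),
}

def extract(report: str) -> dict:
    """Return {variable: float} for every pattern that matches the report."""
    out = {}
    for name, pattern in PATTERNS.items():
        m = pattern.search(report)
        if m:
            out[name] = float(m.group(1))
    return out

print(extract("FVC: 3.21 L (78% predicted). FEV1: 2.44 L. DLCO 61 % predicted."))
```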
A Modular Framework for Transforming Structured Data into HTML with Machine-Readable Annotations
NASA Astrophysics Data System (ADS)
Patton, E. W.; West, P.; Rozell, E.; Zheng, J.
2010-12-01
There is a plethora of web-based Content Management Systems (CMS) available for maintaining projects and data, among other things. However, each system varies in its capabilities, and content is often stored separately and accessed via non-uniform web interfaces. Moving from one CMS to another (e.g., MediaWiki to Drupal) can be cumbersome, especially if a large quantity of data must be adapted to the new system. To standardize the creation, display, management, and sharing of project information, we have assembled a framework that uses existing web technologies to transform data provided by any service that supports SPARQL Protocol and RDF Query Language (SPARQL) queries into HTML fragments, allowing it to be embedded in any existing website. The framework utilizes a two-tier XML Stylesheet Transformation (XSLT) that uses existing ontologies (e.g., Friend-of-a-Friend, Dublin Core) to interpret query results and render them as HTML documents. These ontologies can be used in conjunction with custom ontologies suited to individual needs (e.g., domain-specific ontologies for describing data records). Furthermore, this transformation process encodes machine-readable annotations, namely the Resource Description Framework in attributes (RDFa), into the resulting HTML, so that capable parsers and search engines can extract the relationships between entities (e.g., people, organizations, datasets). To facilitate editing of content, the framework provides a web-based form system, mapping each query to a dynamically generated form that can be used to modify and create entities, while keeping the native data store up-to-date. This open framework makes it easy to duplicate data across many different sites, allowing researchers to distribute their data in many different online forums. In this presentation we will outline the structure of queries and the stylesheets used to transform them, followed by a brief walkthrough that follows the data from storage to human- and machine-accessible web page. We conclude with a discussion of content caching and steps toward performing queries across multiple domains.
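The final step, emitting HTML that carries machine-readable RDFa annotations, can be sketched as below. The bindings and vocabulary usage are illustrative; the framework itself performs this step with XSLT over SPARQL results rather than in Python.

```python
from xml.sax.saxutils import escape

# Hard-coded rows standing in for SPARQL query bindings (hypothetical URIs).
rows = [{"uri": "http://example.org/people/jdoe", "name": "J. Doe",
         "home": "http://example.org/~jdoe"}]

def render(rows):
    """Render bindings as HTML with RDFa attributes using the FOAF vocabulary."""
    out = ['<div xmlns:foaf="http://xmlns.com/foaf/0.1/">']
    for r in rows:
        out.append(
            f'  <p about="{r["uri"]}" typeof="foaf:Person">'
            f'<span property="foaf:name">{escape(r["name"])}</span>'
            f' (<a rel="foaf:homepage" href="{r["home"]}">home page</a>)</p>')
    out.append("</div>")
    return "\n".join(out)

print(render(rows))  # RDFa parsers can recover the person/name/homepage triples
```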
chemf: A purely functional chemistry toolkit.
Höck, Stefan; Riedl, Rainer
2012-12-20
Although programming in a type-safe and referentially transparent style offers several advantages over working with mutable data structures and side effects, this style of programming has not seen much use in chemistry-related software. Since functional programming languages were designed with referential transparency in mind, these languages offer a lot of support when writing immutable data structures and side-effect-free code. We therefore started implementing our own toolkit based on the above programming paradigms in a modern, versatile programming language. We present our initial results with functional programming in chemistry by first describing an immutable data structure for molecular graphs, together with a couple of simple algorithms to calculate basic molecular properties, before writing a complete SMILES parser in accordance with the OpenSMILES specification. Along the way we show how to deal with input validation, error handling, bulk operations, and parallelization in a purely functional way. At the end we also analyze and improve our algorithms and data structures in terms of performance and compare them to existing toolkits, both object-oriented and purely functional. All code was written in Scala, a modern multi-paradigm programming language with strong support for functional programming and a highly sophisticated type system. We have successfully made the first important steps towards a purely functional chemistry toolkit. The data structures and algorithms presented in this article perform well while at the same time they can be safely used in parallelized applications, such as computer aided drug design experiments, without further adjustments. This stands in contrast to existing object-oriented toolkits, where thread safety of data structures and algorithms is a deliberate design decision that can be hard to implement. Finally, the level of type-safety achieved by Scala highly increased the reliability of our code as well as the productivity of the programmers involved in this project.
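chemf is written in Scala, but the core idea, an immutable molecular graph consumed by pure functions, can be sketched in any language. The toy types and trimmed mass table below are assumptions for illustration, not chemf's actual data model.

```python
from dataclasses import dataclass

# Immutable molecular graph: frozen dataclass plus tuples means no code
# can mutate a molecule after construction, so sharing it across threads
# (e.g., in parallel screening) is safe by construction.
@dataclass(frozen=True)
class Molecule:
    atoms: tuple   # element symbols, e.g. ("C", "C", "O")
    bonds: tuple   # index pairs into atoms, e.g. ((0, 1), (1, 2))

ATOMIC_MASS = {"C": 12.011, "O": 15.999, "H": 1.008}  # trimmed table

def molecular_mass(m: Molecule) -> float:
    """Pure function: no mutation, same input always yields same output."""
    return sum(ATOMIC_MASS[a] for a in m.atoms)

ethanol_heavy = Molecule(atoms=("C", "C", "O"), bonds=((0, 1), (1, 2)))
print(round(molecular_mass(ethanol_heavy), 3))  # heavy atoms only: 40.021
```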
Text data extraction for a prospective, research-focused data mart: implementation and validation.
Hinchcliff, Monique; Just, Eric; Podlusky, Sofia; Varga, John; Chang, Rowland W; Kibbe, Warren A
2012-09-13
Translational research typically requires data abstracted from medical records as well as data collected specifically for research. Unfortunately, many data within electronic health records are represented as text that is not amenable to aggregation for analyses. We present a scalable open source SQL Server Integration Services package, called Regextractor, for including regular expression parsers into a classic extract, transform, and load workflow. We have used Regextractor to abstract discrete data from textual reports from a number of 'machine generated' sources. To validate this package, we created a pulmonary function test data mart and analyzed the quality of the data mart versus manual chart review. Eleven variables from pulmonary function tests performed closest to the initial clinical evaluation date were studied for 100 randomly selected subjects with scleroderma. One research assistant manually reviewed, abstracted, and entered relevant data into a database. Correlation with data obtained from the automated pulmonary function test data mart within the Northwestern Medical Enterprise Data Warehouse was determined. There was a near perfect (99.5%) agreement between results generated from the Regextractor package and those obtained via manual chart abstraction. The pulmonary function test data mart has been used subsequently to monitor disease progression of patients in the Northwestern Scleroderma Registry. In addition to the pulmonary function test example presented in this manuscript, the Regextractor package has been used to create cardiac catheterization and echocardiography data marts. The Regextractor package was released as open source software in October 2009 and has been downloaded 552 times as of 6/1/2012. Collaboration between clinical researchers and biomedical informatics experts enabled the development and validation of a tool (Regextractor) to parse, abstract and assemble structured data from text data contained in the electronic health record. Regextractor has been successfully used to create additional data marts in other medical domains and is available to the public.
Syntactic Constraints and Individual Differences in Native and Non-Native Processing of Wh-Movement
Johnson, Adrienne; Fiorentino, Robert; Gabriele, Alison
2016-01-01
There is a debate as to whether second language (L2) learners show qualitatively similar processing profiles as native speakers or whether L2 learners are restricted in their ability to use syntactic information during online processing. In the realm of wh-dependency resolution, research has examined whether learners, similar to native speakers, attempt to resolve wh-dependencies in grammatically licensed contexts but avoid positing gaps in illicit contexts such as islands. Also at issue is whether the avoidance of gap filling in islands is due to adherence to syntactic constraints or whether islands simply present processing bottlenecks. One approach has been to examine the relationship between processing abilities and the establishment of wh-dependencies in islands. Grammatical accounts of islands do not predict such a relationship as the parser should simply not predict gaps in illicit contexts. In contrast, a pattern of results showing that individuals with more processing resources are better able to establish wh-dependencies in islands could conceivably be compatible with certain processing accounts. In a self-paced reading experiment which examines the processing of wh-dependencies, we address both questions, examining whether native English speakers and Korean learners of English show qualitatively similar patterns and whether there is a relationship between working memory, as measured by counting span and reading span, and processing in both island and non-island contexts. The results of the self-paced reading experiment suggest that learners can use syntactic information on the same timecourse as native speakers, showing qualitative similarity between the two groups. Results of regression analyses did not reveal a significant relationship between working memory and the establishment of wh-dependencies in islands but we did observe significant relationships between working memory and the processing of licit wh-dependencies. As the contexts in which these relationships emerged differed for learners and native speakers, our results call for further research examining individual differences in dependency resolution in both populations. PMID:27148152
A search engine to access PubMed monolingual subsets: proof of concept and evaluation in French.
Griffon, Nicolas; Schuers, Matthieu; Soualmia, Lina Fatima; Grosjean, Julien; Kerdelhué, Gaétan; Kergourlay, Ivan; Dahamna, Badisse; Darmoni, Stéfan Jacques
2014-12-01
PubMed contains numerous articles in languages other than English. However, existing solutions to access these articles in the language in which they were written remain unconvincing. The aim of this study was to propose a practical search engine, called Multilingual PubMed, which will permit access to a PubMed subset in 1 language and to evaluate the precision and coverage for the French version (Multilingual PubMed-French). To create this tool, translations of MeSH were enriched (eg, adding synonyms and translations in French) and integrated into a terminology portal. PubMed subsets in several European languages were also added to our database using a dedicated parser. The response time for the generic semantic search engine was evaluated for simple queries. BabelMeSH, Multilingual PubMed-French, and 3 different PubMed strategies were compared by searching for literature in French. Precision and coverage were measured for 20 randomly selected queries. The results were evaluated as relevant to title and abstract, the evaluator being blind to search strategy. More than 650,000 PubMed citations in French were integrated into the Multilingual PubMed-French information system. The response times were all below the threshold defined for usability (2 seconds). Two search strategies (Multilingual PubMed-French and 1 PubMed strategy) showed high precision (0.93 and 0.97, respectively), but coverage was 4 times higher for Multilingual PubMed-French. It is now possible to freely access biomedical literature using a practical search tool in French. This tool will be of particular interest for health professionals and other end users who do not read or query sufficiently in English. The information system is theoretically well suited to expand the approach to other European languages, such as German, Spanish, Norwegian, and Portuguese.
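The "dedicated parser" step can be illustrated with a toy MEDLINE-format filter that keeps French-language citations (the LA field carries the language code). Real MEDLINE records have continuation lines and many more fields; this sketch ignores them.

```python
# Toy MEDLINE-format records; real exports are richer than this.
SAMPLE = """PMID- 11111111
LA  - fre
TI  - Exemple de titre.

PMID- 22222222
LA  - eng
TI  - An English title.
"""

def parse_record(record: str) -> dict:
    """Map MEDLINE tags to values for one record (no continuation lines)."""
    fields = {}
    for line in record.splitlines():
        if "-" in line:
            key, _, value = line.partition("-")
            fields[key.strip()] = value.strip()
    return fields

def french_pmids(medline_text: str) -> list:
    records = (parse_record(r) for r in medline_text.strip().split("\n\n"))
    return [r["PMID"] for r in records if r.get("LA", "").lower() == "fre"]

print(french_pmids(SAMPLE))  # ['11111111']
```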
A Search Engine to Access PubMed Monolingual Subsets: Proof of Concept and Evaluation in French
Schuers, Matthieu; Soualmia, Lina Fatima; Grosjean, Julien; Kerdelhué, Gaétan; Kergourlay, Ivan; Dahamna, Badisse; Darmoni, Stéfan Jacques
2014-01-01
Background PubMed contains numerous articles in languages other than English. However, existing solutions to access these articles in the language in which they were written remain unconvincing. Objective The aim of this study was to propose a practical search engine, called Multilingual PubMed, which will permit access to a PubMed subset in 1 language and to evaluate the precision and coverage for the French version (Multilingual PubMed-French). Methods To create this tool, translations of MeSH were enriched (eg, adding synonyms and translations in French) and integrated into a terminology portal. PubMed subsets in several European languages were also added to our database using a dedicated parser. The response time for the generic semantic search engine was evaluated for simple queries. BabelMeSH, Multilingual PubMed-French, and 3 different PubMed strategies were compared by searching for literature in French. Precision and coverage were measured for 20 randomly selected queries. The results were evaluated as relevant to title and abstract, the evaluator being blind to search strategy. Results More than 650,000 PubMed citations in French were integrated into the Multilingual PubMed-French information system. The response times were all below the threshold defined for usability (2 seconds). Two search strategies (Multilingual PubMed-French and 1 PubMed strategy) showed high precision (0.93 and 0.97, respectively), but coverage was 4 times higher for Multilingual PubMed-French. Conclusions It is now possible to freely access biomedical literature using a practical search tool in French. This tool will be of particular interest for health professionals and other end users who do not read or query sufficiently in English. The information system is theoretically well suited to expand the approach to other European languages, such as German, Spanish, Norwegian, and Portuguese. PMID:25448528
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
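For readers new to LPs, here is a small resource-allocation-style problem of the kind ALPS solves, expressed with SciPy purely for illustration (ALPS itself is an APL2 program with its own editor and formulation parser).

```python
from scipy.optimize import linprog

# maximize 3x + 5y  subject to  x + 2y <= 14,  3x - y >= 0,  x - y <= 2,
# with x, y >= 0. linprog minimizes, so the objective is negated, and the
# ">=" constraint is flipped into "-3x + y <= 0".
res = linprog(c=[-3, -5],
              A_ub=[[1, 2], [-3, 1], [1, -1]],
              b_ub=[14, 0, 2],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point [6. 4.] with maximized objective 38.0
```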
chemf: A purely functional chemistry toolkit
2012-01-01
Background Although programming in a type-safe and referentially transparent style offers several advantages over working with mutable data structures and side effects, this style of programming has not seen much use in chemistry-related software. Since functional programming languages were designed with referential transparency in mind, these languages offer a lot of support when writing immutable data structures and side-effects free code. We therefore started implementing our own toolkit based on the above programming paradigms in a modern, versatile programming language. Results We present our initial results with functional programming in chemistry by first describing an immutable data structure for molecular graphs together with a couple of simple algorithms to calculate basic molecular properties before writing a complete SMILES parser in accordance with the OpenSMILES specification. Along the way we show how to deal with input validation, error handling, bulk operations, and parallelization in a purely functional way. At the end we also analyze and improve our algorithms and data structures in terms of performance and compare it to existing toolkits both object-oriented and purely functional. All code was written in Scala, a modern multi-paradigm programming language with a strong support for functional programming and a highly sophisticated type system. Conclusions We have successfully made the first important steps towards a purely functional chemistry toolkit. The data structures and algorithms presented in this article perform well while at the same time they can be safely used in parallelized applications, such as computer aided drug design experiments, without further adjustments. This stands in contrast to existing object-oriented toolkits where thread safety of data structures and algorithms is a deliberate design decision that can be hard to implement. Finally, the level of type-safety achieved by Scala highly increased the reliability of our code as well as the productivity of the programmers involved in this project. PMID:23253942
Chen, Henry W; Du, Jingcheng; Song, Hsing-Yi; Liu, Xiangyu; Jiang, Guoqian
2018-01-01
Background Today, there is an increasing need to centralize and standardize electronic health data within clinical research as the volume of data continues to balloon. Domain-specific common data elements (CDEs) are emerging as a standard approach to clinical research data capturing and reporting. Recent efforts to standardize clinical study CDEs have been of great benefit in facilitating data integration and data sharing. The importance of the temporal dimension of clinical research studies has been well recognized; however, very few studies have focused on the formal representation of temporal constraints and temporal relationships within clinical research data in the biomedical research community. In particular, temporal information can be extremely powerful to enable high-quality cancer research. Objective The objective of the study was to develop and evaluate an ontological approach to represent the temporal aspects of cancer study CDEs. Methods We used CDEs recorded in the National Cancer Institute (NCI) Cancer Data Standards Repository (caDSR) and created a CDE parser to extract time-relevant CDEs from the caDSR. Using the Web Ontology Language (OWL)–based Time Event Ontology (TEO), we manually derived representative patterns to semantically model the temporal components of the CDEs using an observing set of randomly selected time-related CDEs (n=600) to create a set of TEO ontological representation patterns. In evaluating TEO’s ability to represent the temporal components of the CDEs, this set of representation patterns was tested against two test sets of randomly selected time-related CDEs (n=425). Results It was found that 94.2% (801/850) of the CDEs in the test sets could be represented by the TEO representation patterns. Conclusions In conclusion, TEO is a good ontological model for representing the temporal components of the CDEs recorded in caDSR. Our representative model can harness the Semantic Web reasoning and inferencing functionalities and present a means for temporal CDEs to be machine-readable, streamlining meaningful searches. PMID:29472179
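As a stand-in for the study's CDE parser, which is not reproduced in the abstract, the sketch below flags caDSR-style data element names that carry temporal semantics using an assumed keyword list.

```python
import re

# Assumed temporal keywords; the actual parser's criteria may differ.
TIME_WORDS = re.compile(
    r"\b(date|time|duration|interval|onset|prior|post|baseline|follow[- ]?up)\b",
    re.I)

cdes = [
    "Primary Tumor Diagnosis Date",
    "Patient Gender Code",
    "Chemotherapy Regimen Duration",
]
time_related = [c for c in cdes if TIME_WORDS.search(c)]
print(time_related)  # ['Primary Tumor Diagnosis Date', 'Chemotherapy Regimen Duration']
```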
Sheppard, Shannon M; Love, Tracy; Midgley, Katherine J; Holcomb, Phillip J; Shapiro, Lewis P
2017-12-01
Event-related potentials (ERPs) were used to examine how individuals with aphasia and a group of age-matched controls use prosody and thematic fit information in sentences containing temporary syntactic ambiguities. Two groups of individuals with aphasia were investigated: those demonstrating relatively good sentence comprehension whose primary language difficulty is anomia (Individuals with Anomic Aphasia (IWAA)), and those who demonstrate impaired sentence comprehension whose primary diagnosis is Broca's aphasia (Individuals with Broca's Aphasia (IWBA)). The stimuli had early closure syntactic structure and contained a temporary early closure (correct)/late closure (incorrect) syntactic ambiguity. The prosody was manipulated to be either congruent or incongruent, and the temporarily ambiguous NP was also manipulated to be either a plausible or an implausible continuation of the subordinate verb (e.g., "While the band played the song/the beer pleased all the customers."). It was hypothesized that an implausible NP in sentences with incongruent prosody may provide the parser with a plausibility cue that could be used to predict syntactic structure. The results revealed that incongruent prosody paired with a plausibility cue resulted in an N400-P600 complex at the implausible NP (the beer) in both the controls and the IWAAs, yet incongruent prosody without a plausibility cue resulted in an N400-P600 at the critical verb (pleased) only in healthy controls. IWBAs did not show evidence of N400 or P600 effects at the ambiguous NP or critical verb, although they did show evidence of a delayed N400 effect at the sentence-final word in sentences with incongruent prosody. These results suggest that IWAAs have difficulty integrating prosodic cues with underlying syntactic structure when lexical-semantic information is not available to aid their parse. IWBAs have difficulty integrating both prosodic and lexical-semantic cues with syntactic structure, likely due to a processing delay.
The eNanoMapper database for nanomaterial safety information.
Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon
2015-01-01
The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure-activity relationships for nanomaterials (NanoQSAR).
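For a feel of the API-driven access described above, here is a minimal sketch of querying substances over REST; the base URL, resource path, and JSON layout are assumptions for illustration rather than the documented eNanoMapper API:

```python
import requests

# Sketch of a REST substance query; the endpoint and response structure
# below are assumptions, not the documented eNanoMapper API contract.
BASE = "https://data.enanomapper.net"  # assumed public instance

resp = requests.get(f"{BASE}/substance",
                    params={"search": "TiO2"},
                    headers={"Accept": "application/json"},
                    timeout=30)
resp.raise_for_status()
for substance in resp.json().get("substance", []):
    print(substance.get("name"), substance.get("substanceType"))
```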
Data Fusion and Visualization with the OpenEarth Framework (OEF)
NASA Astrophysics Data System (ADS)
Nadeau, D. R.; Baru, C.; Fouch, M. J.; Crosby, C. J.
2010-12-01
Data fusion is an increasingly important problem to solve as we strive to integrate data from multiple sources and build better models of the complex processes operating at the Earth’s surface and its interior. These data are often large, multi-dimensional, and subject to differing conventions for file formats, data structures, coordinate spaces, units of measure, and metadata organization. When visualized, these data require differing, and often conflicting, conventions for visual representations, dimensionality, icons, color schemes, labeling, and interaction. These issues make the visualization of fused Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data fusion and visualization suite of software being developed at the Supercomputer Center at the University of California, San Diego. Funded by the NSF, the project is leveraging virtual globe technology from NASA’s WorldWind to create interactive 3D visualization tools that combine layered data from a variety of sources to create a holistic view of features at, above, and beneath the Earth’s surface. The OEF architecture is cross-platform, multi-threaded, modular, and based upon Java. The OEF’s modular approach yields a collection of compatible mix-and-match components for assembling custom applications. Available modules support file format handling, web service communications, data management, data filtering, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats. Each one imports data into a general-purpose data representation that supports multidimensional grids, topography, points, lines, polygons, images, and more. From there, these data may be manipulated, merged, filtered, reprojected, and visualized. Visualization features support conventional and new visualization techniques for looking at topography, tomography, maps, and feature geometry. 3D grid data such as seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery along with data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers and a common 3D+time coordinate space. Data management within the OEF handles and hides the quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Derived data are computed automatically to support interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization is accelerated by 3D graphics hardware found on today’s computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.
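As a rough illustration of the "general-purpose data representation" idea (a sketch in Python, not OEF's actual Java classes), a gridded layer type with a cutting-plane-style slice operation might look like this:

```python
from dataclasses import dataclass, field
import numpy as np

# Illustrative common data model for one gridded layer; field names and
# the slicing convention are assumptions, not OEF's implementation.
@dataclass
class GridLayer:
    name: str
    values: np.ndarray            # e.g. (nz, ny, nx) tomography grid
    origin: tuple                 # (lon, lat, depth_km) of the grid corner
    spacing: tuple                # cell size along each axis
    units: str = "unknown"
    metadata: dict = field(default_factory=dict)

    def slice_depth(self, k: int) -> np.ndarray:
        """Return one horizontal slice, analogous to a cutting plane."""
        return self.values[k]

layer = GridLayer("vp_tomography", np.random.rand(10, 50, 50),
                  origin=(-125.0, 32.0, 0.0), spacing=(0.1, 0.1, 5.0),
                  units="km/s")
print(layer.slice_depth(3).shape)  # (50, 50)
```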
Bader, Markus
2018-01-01
This paper presents three acceptability experiments investigating German verb-final clauses in order to explore possible sources of sentence complexity during human parsing. The point of departure was De Vries et al.'s (2011) generalization that sentences with three or more crossed or nested dependencies are too complex to be processed by the human parsing mechanism without difficulties. This generalization is partially based on findings from Bach et al. (1986) concerning the acceptability of complex verb clusters in German and Dutch. The first experiment tests this generalization by comparing two sentence types: (i) sentences with three nested dependencies within a single clause that contains three verbs in a complex verb cluster; (ii) sentences with four nested dependencies distributed across two embedded clauses, one center-embedded within the other, each containing a two-verb cluster. The results show that sentences with four nested dependencies are judged to be as acceptable as control sentences with only two nested dependencies, whereas sentences with three nested dependencies are judged as only marginally acceptable. This argues against De Vries et al.'s (2011) claim that the human parser can process no more than two nested dependencies. The results are used to refine the Verb-Cluster Complexity Hypothesis of Bader and Schmid (2009a). The second and the third experiment investigate sentences with four nested dependencies in more detail in order to explore alternative sources of sentence complexity: the number of predicted heads to be held in working memory (storage cost in terms of the Dependency Locality Theory [DLT], Gibson, 2000) and the length of the involved dependencies (integration cost in terms of the DLT). Experiment 2 investigates sentences for which storage cost and integration cost make conflicting predictions. The results show that storage cost outweighs integration cost. Experiment 3 shows that increasing integration cost in sentences with two degrees of center embedding leads to decreased acceptability. Taken together, the results argue in favor of a multifactorial account of the limitations on center embedding in natural languages. PMID:29410633
EOS ODL Metadata On-line Viewer
NASA Astrophysics Data System (ADS)
Yang, J.; Rabi, M.; Bane, B.; Ullman, R.
2002-12-01
We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes: 1) an EOS metadata file in Object Description Language (ODL), and 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints from the science community that EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read, by allowing users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all of the metadata or only part of it, such as Collection metadata, Granule metadata, or Unsupported metadata. Choices of display styles include: 1) Web: a mouseable display with tabs and turn-down menus; 2) Outline: formatted and colored text, suitable for printing; 3) Generic: simple indented text, a direct representation of the underlying ODL metadata; and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file into viewable HTML in two steps. The first step converts the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step transforms the XML to HTML using stylesheets. Both operations are done on the server side. This allows a lot of flexibility in the final result, and is highly portable across platforms. Perl CGI behind the Apache web server is used to run the Java ODL2XML, and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet Explorer 5.0+ or Netscape 4.7+.
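The second step of this pipeline (XML to HTML via a stylesheet) can be sketched in a few lines; the example below assumes the ODL has already been converted to XML by a tool like ODL2XML, and the element names are made up for illustration:

```python
from lxml import etree

# Sketch of the XML -> HTML transformation step using an XSLT stylesheet;
# the <Collection>/<ShortName> elements here are illustrative only.
xml_doc = etree.fromstring(
    "<Collection><ShortName>MOD09</ShortName></Collection>")
xslt_doc = etree.fromstring("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Collection">
    <html><body><h1><xsl:value-of select="ShortName"/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt_doc)   # compile the stylesheet
print(str(transform(xml_doc)))     # serialized HTML result
```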
SEMG signal compression based on two-dimensional techniques.
de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino
2016-04-18
Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques, before the two-dimensional encoding procedure, in order to provide a suitable data organization, whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called multidimensional multiscale parser (MMP). That encoder was modified in order to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and inter-segment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records, acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference [Formula: see text] compression factor figures, for low and high compression factors, respectively. Regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors; the combination of SbS and HEVC proved competitive for high compression factors; and JPEG2000 combined with PDS provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference [Formula: see text] compression factor. The proposed schemes are effective; in particular, the modified MMP algorithm can be considered an interesting alternative to traditional SEMG encoders for isometric signals. Besides, the approach based on off-the-shelf image encoders has the potential of fast implementation and dissemination, given that many embedded systems may already have such encoders available in the underlying hardware/software architecture.
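The core preprocessing idea behind all of these 2D schemes is simple: reshape the 1D record into a matrix so an image or video encoder can exploit inter-segment correlation. A minimal sketch, with the segment length and 8-bit quantization as assumptions (and without any SbS-style reordering):

```python
import numpy as np

# Reshape a 1D SEMG record into an 8-bit image for off-the-shelf codecs.
# Segment length and quantization are illustrative choices, not the
# paper's tuned parameters.
def to_image(signal, segment_len=512):
    n_seg = len(signal) // segment_len
    mat = signal[:n_seg * segment_len].reshape(n_seg, segment_len)
    lo, hi = mat.min(), mat.max()
    return np.uint8(255 * (mat - lo) / (hi - lo + 1e-12))

semg = np.random.randn(100_000)   # stand-in for a real SEMG record
img = to_image(semg)
print(img.shape, img.dtype)       # (195, 512) uint8
```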
User-defined functions in the Arden Syntax: An extension proposal.
Karadimas, Harry; Ebrahiminia, Vahid; Lepage, Eric
2015-12-11
The Arden Syntax is a knowledge-encoding standard, started in 1989, and now in its 10th revision, maintained by the health level seven (HL7) organization. It has constructs borrowed from several language concepts that were available at that time (mainly the HELP hospital information system and the Regenstrief medical record system (RMRS), but also the Pascal language, functional languages and the data structure of frames, used in artificial intelligence). The syntax has a rationale for its constructs, and has restrictions that follow this rationale. The main goal of the Standard is to promote knowledge sharing, by avoiding the complexity of traditional programs, so that a medical logic module (MLM) written in the Arden Syntax can remain shareable and understandable across institutions. One of the restrictions of the syntax is that you cannot define your own functions and subroutines inside an MLM. An MLM can, however, call another MLM, where that MLM serves as a function. This adds an additional dependency between MLMs, a known criticism of the Arden Syntax knowledge model. This article explains why we believe the Arden Syntax would benefit from a construct for user-defined functions, and discusses the need, the benefits and the limitations of such a construct. We used the recent grammar of the Arden Syntax v.2.10, and both the Arden Syntax standard document and the Arden Syntax Rationale article as guidelines. We gradually introduced production rules to the grammar. We used the CUP parsing tool to verify that no ambiguities were detected. A new grammar was produced that supports user-defined functions. Twenty-two production rules were added to the grammar. A parser was built using the CUP parsing tool. A few examples are given to illustrate the concepts. All examples were parsed correctly. It is possible to add user-defined functions to the Arden Syntax in a way that remains coherent with the standard. We believe that this enhances the readability and the robustness of MLMs. A detailed proposal will be submitted by the end of the year to the HL7 workgroup on Arden Syntax. Copyright © 2015 Elsevier B.V. All rights reserved.
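The flavor of such a grammar extension can be suggested with a toy, much-simplified parser. The sketch below uses Python's Lark library rather than CUP, and the rule names and function syntax are hypothetical stand-ins, not the 22 production rules of the actual proposal:

```python
from lark import Lark

# Toy grammar fragment illustrating how a user-defined function construct
# could be added as new production rules; hypothetical syntax, not Arden.
grammar = r"""
    start: fundef+
    fundef: NAME ":=" "FUNCTION" "(" params ")" body "ENDFUNCTION" ";"
    params: NAME ("," NAME)*
    body: "RETURN" expr ";"
    expr: NAME "+" NAME   -> add
        | NAME
    NAME: /[a-zA-Z_][a-zA-Z_0-9]*/
    %import common.WS
    %ignore WS
"""

parser = Lark(grammar, start="start")
tree = parser.parse(
    "bmi := FUNCTION (weight, height) RETURN weight + height; ENDFUNCTION;")
print(tree.pretty())
```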
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open-source code for modeling wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code, and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).
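The Cummins formulation the abstract refers to can be written in its standard textbook form; this is a sketch of the general equation, not WEC-Sim's exact implementation:

```latex
% Cummins time-domain equation of motion for the WEC displacement x(t):
(M + A_\infty)\,\ddot{x}(t)
  + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau
  + C\,x(t) = F_{\mathrm{exc}}(t)
% M: mass matrix; A_infty: added mass at infinite frequency;
% K: radiation impulse-response kernel; C: hydrostatic restoring matrix;
% F_exc: wave excitation force. In 6 DOF these are 6x6 matrices / 6-vectors,
% with the kernel K typically obtained from BEM coefficients (e.g. via BEMIO).
```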
The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data
NASA Astrophysics Data System (ADS)
Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris
2010-05-01
Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery, maps, and data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers within a common 3D coordinate space. Data management within the OEF handles and hides the inevitable quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Heuristics are used to extract necessary metadata used to guide data and visual operations. Derived data representations are computed to better support fluid interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization makes use of 3D graphics hardware support found on today's computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.
Xyce Parallel Electronic Simulator Reference Guide Version 6.6.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide [1]. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users' Guide [1].
Ground station software for receiving and handling Irecin telemetry data
NASA Astrophysics Data System (ADS)
Ferrante, M.; Petrozzi, M.; Di Ciolo, L.; Ortenzi, A.; Troso, G
2004-11-01
The on-board resources needed to perform mission tasks are very limited in nano-satellites. This paper proposes a software system to receive, manage, and process in real time the telemetry data coming from the IRECIN nanosatellite, and to transmit manual operator commands and operative procedures. During the receiving phase, it shows the IRECIN subsystem physical values, visualizes the IRECIN attitude, and performs other suitable functions. The IRECIN Ground Station program is in charge of exchanging information between IRECIN and the ground segment. In real time during the IRECIN transmission phase, it draws the IRECIN attitude and sun direction, computes the power received from the Sun, and visualizes the telemetry data and the Earth's magnetic field, among other functions. The received data are stored and interpreted by a parser module and distributed to the appropriate modules. Moreover, the system allows sending manual and automatic commands: manual commands are issued by an operator, whereas automatic commands are provided by pre-configured operative procedures. Operative procedures are developed in a previous phase called the configuration phase. The program can also carry out a test session by means of the scheduler and commanding modules, allowing execution of specific tasks without operator control. A log module stores received and transmitted data. A post-analysis phase to analyze, filter, and visualize the collected data offline is based on data extraction from the log module. At the same time, the Ground Station software can work over a network, allowing data and commands to be managed, received, and sent from different sites. The proposed system constitutes the software of the IRECIN Ground Station. IRECIN is a modular nanosatellite weighing less than 2 kg, consisting of sixteen external sides with surface-mounted solar cells and three internal Al plates, held together by four steel bars. Lithium-ion batteries are used. Attitude is determined by two three-axis magnetometers and the solar panel data. Control is provided by an active magnetic control system. The spacecraft will be spin-stabilized with the spin axis normal to the orbit. All IRECIN electronic components use SMD technology in order to reduce weight and size. The electronic boards were developed, built, and tested at Vitrociset S.P.A. under the supervision of the Research and Development Group.
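The parse-and-dispatch idea at the heart of such a ground station can be sketched in a few lines; the frame layout (a one-byte subsystem id followed by two floats) and the port are assumptions for illustration, since IRECIN's real telemetry format is not given in the abstract:

```python
import socket
import struct

# Hypothetical telemetry parser/dispatcher: receive frames over UDP,
# decode them, and hand the payload to the module registered for that
# subsystem. Frame layout and port number are illustrative assumptions.
handlers = {}

def register(subsystem_id):
    def deco(fn):
        handlers[subsystem_id] = fn
        return fn
    return deco

@register(0x01)
def attitude(payload):
    roll, pitch = struct.unpack("<ff", payload)   # assumed payload layout
    print(f"attitude roll={roll:.1f} pitch={pitch:.1f}")

def serve(port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        frame, _ = sock.recvfrom(1024)
        subsystem, payload = frame[0], frame[1:]
        handler = handlers.get(subsystem)
        if handler:
            handler(payload)    # distribute to the appropriate module

if __name__ == "__main__":
    serve()
```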
Mehraein-Ghomi, Farideh; Church, Dawn R.; Schreiber, Cynthia L.; Weichmann, Ashley M.; Basu, Hirak S.; Wilding, George
2015-01-01
Accumulating evidence shows that androgen receptor (AR) activation and signaling plays a key role in growth and progression in all stages of prostate cancer, even under low androgen levels or in the absence of androgen in the castration-resistant prostate cancer. Sustained activation of AR under androgen-deprived conditions may be due to its interaction with co-activators, such as p52 NF-κB subunit, and/or an increase in its stability by phosphorylation that delays its degradation. Here we identified a specific inhibitor of AR/p52 interaction, AR/p52-02, via a high throughput screen based on the reconstitution of Gaussia Luciferase. We found that AR/p52-02 markedly inhibited growth of both castration-resistant C4-2 (IC50 ∼6 μM) and parental androgen-dependent LNCaP (IC50 ∼4 μM) human prostate cancer cells under low androgen conditions. Growth inhibition was associated with significantly reduced nuclear p52 levels and DNA binding activity, as well as decreased phosphorylation of AR at serine 81, increased AR ubiquitination, and decreased AR transcriptional activity as indicated by decreased prostate-specific antigen (PSA) mRNA levels in both cell lines. AR/p52-02 also caused a reduction in levels of p21WAF/CIP1, which is a direct AR targeted gene in that its expression correlates with androgen stimulation and mitogenic proliferation in prostate cancer under physiologic levels of androgen, likely by disrupting the AR signaling axis. The reduced level of cyclinD1 reported previously for this compound may be due to the reduction in nuclear presence and activity of p52, which directly regulates cyclinD1 expression, as well as the reduction in p21WAF/CIP1, since p21WAF/CIP1 is reported to stabilize nuclear cyclinD1 in prostate cancer. Overall, the data suggest that specifically inhibiting the interaction of AR with p52 and blocking activity of p52 and pARser81 may be an effective means of reducing castration-resistant prostate cancer cell growth. PMID:26622945
2011-01-01
Background Several tools have been developed to perform global gene expression profile data analysis, to search for specific chromosomal regions whose features meet defined criteria, and to study neighbouring gene expression. However, most of these tools are tailored for a specific use in a particular context (e.g. they are species-specific, or limited to a particular data format) and they typically accept only gene lists as input. Results TRAM (Transcriptome Mapper) is a new general tool that allows the simple generation and analysis of quantitative transcriptome maps, starting from any source listing gene expression values for a given gene set (e.g. expression microarrays), implemented as a relational database. It includes a parser able to assign unambiguous, up-to-date gene symbols to gene identifiers from different data sources. Moreover, TRAM is able to perform intra-sample and inter-sample data normalization, including an original variant of quantile normalization (scaled quantile), useful to normalize data from platforms with highly different numbers of investigated genes. When in 'Map' mode, the software generates a quantitative representation of the transcriptome of a sample (or of a pool of samples) and identifies whether segments of defined length are over- or under-expressed compared to the desired threshold. When in 'Cluster' mode, the software searches for a set of over- or under-expressed consecutive genes. Statistical significance for all results is calculated with respect to genes localized on the same chromosome or to all genome genes. Transcriptome maps, showing differential expression between two sample groups, relative to two different biological conditions, may be easily generated. We present the results of a biological model test, based on a meta-analysis comparison between a sample pool of human CD34+ hematopoietic progenitor cells and a sample pool of megakaryocytic cells. Biologically relevant chromosomal segments and gene clusters with differential expression during the differentiation toward megakaryocytes were identified. Conclusions TRAM is designed to create, and statistically analyze, quantitative transcriptome maps, based on gene expression data from multiple sources. The release includes a FileMaker Pro runtime database application and is freely available at http://apollo11.isto.unibo.it/software/, along with preconfigured implementations for mapping of human, mouse and zebrafish transcriptomes. PMID:21333005
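Plain quantile normalization, the baseline on which TRAM's scaled-quantile variant builds, is compact enough to sketch; the variant's scaling for platforms with different gene counts is not reproduced here:

```python
import numpy as np

# Baseline quantile normalization across samples: every column is forced
# onto the mean quantile profile, so all samples share one distribution.
def quantile_normalize(X):
    """X: genes x samples expression matrix."""
    order = np.argsort(X, axis=0)                 # per-sample ordering
    ranks = np.argsort(order, axis=0)             # rank of each value
    reference = np.sort(X, axis=0).mean(axis=1)   # mean quantile profile
    return reference[ranks]

X = np.random.lognormal(size=(1000, 4))
Xn = quantile_normalize(X)
# After normalization, all sample columns have identical sorted values:
print(np.allclose(np.sort(Xn, axis=0)[:, 0], np.sort(Xn, axis=0)[:, 1]))
```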
2012-01-01
Background In the scientific biodiversity community, the need to build a bridge between molecular and traditional biodiversity studies is increasingly perceived. We believe that information technology could have a preeminent role in integrating the information generated by these studies with the large amount of molecular data we can find in bioinformatics public databases. This work is primarily aimed at building a bioinformatic infrastructure for the integration of public and private biodiversity data through the development of GIDL, an Intelligent Data Loader coupled with the Molecular Biodiversity Database. The system presented here organizes in an ontological way and locally stores the sequence and annotation data contained in the GenBank primary database. Methods The GIDL architecture consists of a relational database and of intelligent data loader software. The relational database schema is designed to manage biodiversity information (Molecular Biodiversity Database) and is organized in four areas: MolecularData, Experiment, Collection and Taxonomy. The MolecularData area is inspired by an established standard in Generic Model Organism Databases, the Chado relational schema. The peculiarity of Chado, and also its strength, is the adoption of an ontological schema which makes use of the Sequence Ontology. The Intelligent Data Loader (IDL) component of GIDL is Extract, Transform and Load (ETL) software able to parse data, to discover hidden information in the GenBank entries and to populate the Molecular Biodiversity Database. The IDL is composed of three main modules: the Parser, able to parse GenBank flat files; the Reasoner, which automatically builds CLIPS facts mapping the biological knowledge expressed by the Sequence Ontology; and the DBFiller, which translates the CLIPS facts into ordered SQL statements used to populate the database. In GIDL, Semantic Web technologies have been adopted due to their advantages in data representation, integration and processing. Results and conclusions Entries coming from the Virus (814,122), Plant (1,365,360) and Invertebrate (959,065) divisions of GenBank rel. 180 have been loaded into the Molecular Biodiversity Database by GIDL. Our system, combining the Sequence Ontology and the Chado schema, allows more powerful query expressiveness compared with the most commonly used sequence retrieval systems like Entrez or SRS. PMID:22536971
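A minimal sketch of the Parser-to-DBFiller idea, using Biopython's GenBank flat-file parser and SQLite in place of GIDL's own components; the single-table layout is an assumption, not the Chado-based Molecular Biodiversity Database schema:

```python
import sqlite3
from Bio import SeqIO   # Biopython's GenBank flat-file parser

# Parse a GenBank flat file and load a few fields into a toy table;
# "plants.gb" is a placeholder path and the schema is illustrative only.
con = sqlite3.connect("biodiversity.db")
con.execute("""CREATE TABLE IF NOT EXISTS entry
               (accession TEXT PRIMARY KEY, organism TEXT, length INT)""")

for rec in SeqIO.parse("plants.gb", "genbank"):
    con.execute("INSERT OR REPLACE INTO entry VALUES (?, ?, ?)",
                (rec.id, rec.annotations.get("organism", ""), len(rec.seq)))
con.commit()
```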
Ma, Handong; Weng, Chunhua
2016-04-01
To link public data resources for predicting post-marketing drug safety label changes by analyzing the Convergent Focus Shift patterns among drug testing trials. We identified 256 top-selling prescription drugs between 2003 and 2013 and divided them into 83 BBW drugs (drugs with at least one black box warning label) and 173 ROBUST drugs (drugs without any black box warning label) based on their FDA black box warning (BBW) records. We retrieved 7499 clinical trials that each had at least one of these drugs for intervention from the ClinicalTrials.gov. We stratified all the trials by pre-marketing or post-marketing status, study phase, and study start date. For each trial, we retrieved drug and disease concepts from clinical trial summaries to model its study population using medParser and SNOMED-CT. Convergent Focus Shift (CFS) pattern was calculated and used to assess the temporal changes in study populations from pre-marketing to post-marketing trials for each drug. Then we selected 68 candidate drugs, 18 with BBW warning and 50 without, that each had at least nine pre-marketing trials and nine post-marketing trials for predictive modeling. A random forest predictive model was developed to predict BBW acquisition incidents based on CFS patterns among these drugs. Pre- and post-marketing trials of BBW and ROBUST drugs were compared to look for their differences in CFS patterns. Among the 18 BBW drugs, we consistently observed that the post-marketing trials focused more on recruiting patients with medical conditions previously unconsidered in the pre-marketing trials. In contrast, among the 50 ROBUST drugs, the post-marketing trials involved a variety of medications for testing their associations with target intervention(s). We found it feasible to predict BBW acquisitions using different CFS patterns between the two groups of drugs. Our random forest predictor achieved an AUC of 0.77. We also demonstrated the feasibility of the predictor for identifying long-term BBW acquisition events without compromising prediction accuracy. This study contributes a method for post-marketing pharmacovigilance using Convergent Focus Shift (CFS) patterns in clinical trial study populations mined from linked public data resources. These signals are otherwise unavailable from individual data resources. We demonstrated the added value of linked public data and the feasibility of integrating ClinicalTrials.gov summaries and drug safety labels for post-marketing surveillance. Future research is needed to ensure better accessibility and linkage of heterogeneous drug safety data for efficient pharmacovigilance. Copyright © 2016 Elsevier Inc. All rights reserved.
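The prediction step described above can be sketched compactly; this assumes each drug has already been summarized as a numeric CFS feature vector (the paper's exact features are not reproduced here), and uses synthetic stand-in data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Sketch of the BBW-acquisition predictor; 10 hypothetical CFS features
# per drug and random labels stand in for the study's real inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(68, 10))        # 68 drugs x assumed CFS features
y = rng.integers(0, 2, size=68)      # stand-in labels: 1 = acquired BBW

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUROC: {auc:.2f}")
```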
NASA Astrophysics Data System (ADS)
Cardille, J. A.; Gonzales, R.; Parrott, L.; Bai, J.
2009-12-01
How should researchers store and share data? For most of history, scientists with results and data to share have been mostly limited to books and journal articles. In recent decades, the advent of personal computers and shared data formats has made it feasible, though often cumbersome, to transfer data between individuals or among small groups. Meanwhile, the use of automatic samplers, simulation models, and other data-production techniques has increased greatly. The result is that there is more and more data to store, and a greater expectation that they will be available at the click of a button. In 10 or 20 years, will we still send emails to each other to learn about what data exist? The development of, and widespread familiarity with, virtual globes like Google Earth and NASA WorldWind have created the potential, in just the last few years, to revolutionize the way we share data, search for and search through data, and understand the relationship between individual projects in research networks, where sharing and dissemination of knowledge is encouraged. For the last two years, we have been building the GeoSearch application, a cutting-edge online resource for the storage, sharing, search, and retrieval of data produced by research networks. Linking NASA’s WorldWind globe platform, the data browsing toolkit prefuse, and SQL databases, GeoSearch’s version 1.0 enables flexible searches and novel geovisualizations of large amounts of related scientific data. These data may be submitted to the database by individual researchers and processed by GeoSearch’s data parser. Ultimately, data from research groups gathered in a research network would be shared among users via the platform. Access is not limited to the scientists themselves; administrators can determine which data can be presented publicly and which require group membership. Under the auspices of Canada’s Sustainable Forestry Management Network of Excellence, we have created a moderate-sized database of ecological measurements in forests; we expect to extend the approach to a Quebec lake research network encompassing decades of lake measurements. In this session, we will describe and present four related components of the new system: GeoSearch’s globe-based searching and display of scientific data; prefuse-based visualization of social connections among members of a scientific research network; geolocation of research projects using Google Spreadsheets, KML, and Google Earth/Maps; and collaborative construction of a geolocated database of research articles. Each component is designed to have applications for scientists themselves as well as the general public. Although each implementation is in its infancy, we believe they could be useful to other research networks.
Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data
NASA Technical Reports Server (NTRS)
Baxes, Gregory; Mixon, Brian; Linger, TIm
2013-01-01
Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application is continually issuing data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method has been developed for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent dynamically generated KML code that directs the client application to make follow-on requests for higher level of detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset. The approach provides an efficient data traversal path and mechanism that can be flexibly established for any dataset regardless of size or other characteristics. The method yields significant improvements in user-interactive geospatial client and data server interaction and associated network bandwidth requirements. The innovation uses a C- or PHP-code-like grammar that provides a high degree of processing flexibility. A set of language lexer and parser elements is provided that offers a complete language grammar for writing and executing language directives. A script is wrapped and passed to the geospatial data server by a client application as a component of a standard KML-compliant statement. The approach provides an efficient means for a geospatial client application to request server preprocessing of data prior to client delivery. Data is structured in a quadtree format: as the user zooms into the dataset, geographic regions are subdivided into four child regions; conversely, as the user zooms out, four child regions collapse into a single, lower-LOD region.
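The cascading-KML mechanism can be sketched as a server-side generator that embeds a NetworkLink for each child quadrant, fetched only when the viewer zooms in far enough; the URL and the 128-pixel LOD threshold below are illustrative values, not the innovation's actual parameters:

```python
# Generate KML NetworkLinks for the four child quadrants of a region.
# Region/Lod/NetworkLink are standard KML elements; the tile URL and
# minLodPixels value are assumptions for illustration.
CHILD = """<NetworkLink>
  <Region>
    <LatLonAltBox><north>{n}</north><south>{s}</south>
      <east>{e}</east><west>{w}</west></LatLonAltBox>
    <Lod><minLodPixels>128</minLodPixels></Lod>
  </Region>
  <Link><href>https://example.org/tile?n={n}&amp;s={s}&amp;e={e}&amp;w={w}</href>
    <viewRefreshMode>onRegion</viewRefreshMode></Link>
</NetworkLink>"""

def children(n, s, e, w):
    midlat, midlon = (n + s) / 2, (e + w) / 2
    quads = [(n, midlat, e, midlon), (n, midlat, midlon, w),
             (midlat, s, e, midlon), (midlat, s, midlon, w)]
    return "\n".join(CHILD.format(n=a, s=b, e=c, w=d) for a, b, c, d in quads)

print(children(40.0, 30.0, -100.0, -110.0))
```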
Next Generation Flight Displays Using HTML5
NASA Technical Reports Server (NTRS)
Greenwood, Brian
2016-01-01
The Human Integrated Vehicles and Environments (HIVE) lab at Johnson Space Center (JSC) is focused on bringing together inter-disciplinary talent to design and integrate innovative human interface technologies for next generation manned spacecraft. As part of this objective, my summer internship project centered on an ongoing investigation into building flight displays using the HTML5 standard. Specifically, the goals of my project were to build and demo "flight-like" crew and wearable displays as well as create a webserver for live systems being developed by the Advanced Exploration Systems (AES) program. In parallel to my project, a LabVIEW application, called a display server, was created by the HIVE that uses an XTCE (XML (Extensible Markup Language) Telemetry and Command Exchange) parser and CCSDS (Consultative Committee for Space Data System) space packet decoder to translate telemetry items sent by the CFS (Core Flight Software) over User Datagram Protocol (UDP). It was the webserver's job to receive these UDP messages and send them to the displays. To accomplish this functionality, I utilized Node.js and the accompanying Express framework. On the display side, I was responsible for creating the power system (AMPS) displays. I did this by using HTML5, CSS and JavaScript to create web pages that could update and change dynamically based on the data they received from the webserver. At this point, I have not started on the commanding portion of the displays (being able to send back to the CFS), but hope to have this functionality working by the completion of my internship. I also created a way to test the webserver's functionality without the display server by making a JavaScript application that read in a comma-separated values (CSV) file and converted it to XML which was then sent over UDP. One of the major requirements of my project was to build everything using as little preexisting code as possible, which I accomplished by only using a handful of JavaScript libraries. As a side project, I created a model of the HIVE lab and Building 29 using SketchUp. I obtained the floorplans of the building from the JSC Geographic Information Systems (GIS), which were computer-aided design (CAD) files, and imported them into SketchUp. I then took those floorplans and created a 3D model of the building from them. Working in conjunction with the Hybrid Reality lab in Building 32, the SketchUp model was imported into Unreal Engine for use with the HTC Vive. Using the Vive, I was able to interact with the model I created in virtual reality (VR). The purpose of this side project was to be able to visualize potential lab layouts and mockup designs as they are in development in order to finalize design decisions. Pending approval, the model that I created will be used in the Build-As-You-Test: Can Hybrid Reality Improve the SE/HSI Design Process project in the fall. Getting the opportunity to work at NASA has been one of the most memorable experiences of my life. Over the course of my internship, I improved my programming and web development abilities substantially. I will take all the skills and experiences I have had while at NASA back to school with me in the fall and hope to pursue a career in the aerospace industry after graduating in the spring.
Adjustable direct current and pulsed circuit fault current limiter
Boenig, Heinrich J.; Schillig, Josef B.
2003-09-23
A fault current limiting system for direct current circuits and for pulsed power circuits. In these circuits, a current source biases a diode that is in series with the circuit's transmission line. If the fault current in a circuit exceeds the current from the source biasing the diode open, the diode will cease conducting and route the fault current through the current source and an inductor. This limits the rate of rise and the peak value of the fault current.
The magnetospheric currents - An introduction
NASA Technical Reports Server (NTRS)
Akasofu, S.-I.
1984-01-01
It is pointed out that the scientific discipline concerned with magnetospheric currents has grown out of geomagnetism and, in particular, from geomagnetic storm studies. The International Geophysical Year (IGY) introduced a new era for this discipline by making 'man-made satellites' available for the exploration of space around the earth. In this investigation, a brief description is provided of the magnetospheric currents in terms of eight component current systems. Attention is given to the Sq current, the Chapman-Ferraro current, the ring current (the symmetric component), the current systems driven by the solar wind-magnetosphere dynamo (SMD), the cross-tail current system, the average ionospheric current pattern, an example of an instantaneous current pattern, field-aligned currents, and driving mechanisms and models.
Detection of rip current using camera monitoring techniques
NASA Astrophysics Data System (ADS)
Kim, T.
2016-02-01
Rip currents are approximately shore-normal seaward flows that are strong, localized, and rather narrow. Water stacked near the shore by longshore currents suddenly flows back out to sea as rip currents. They are transient phenomena whose generation time and location are unpredictable. They also play significant roles in offshore sediment transport and beach erosion. Rip currents can be very hazardous to swimmers or floaters because of their strong seaward flows and the sudden depth changes caused by narrow, strong flows. Because of their importance for safety, shoreline evolution, and pollutant transport, a number of studies have attempted to uncover their mechanisms. However, rip currents are still not understood well enough to warn people in the water by predicting their location and timing. This paper investigates the development of rip currents using camera images. Since rip currents are developed by longshore currents, the observed longshore current variations in space and time can be used to detect rip current generation. Most of the time, the convergence of two longshore currents flowing in opposite directions marks the outbreak of a rip current. In order to observe longshore currents, an optical current meter (OCM) technique proposed by Chickadel et al. (2003) is used. The relationship between rip current generation time and the longshore current velocity variation observed by the OCM is analyzed from images taken on the shore. Direct measurement of rip current velocity is also tested using image analysis techniques. Quantitative estimation of rip current strength is also conducted using average and variance images of the rip current area. These efforts will contribute to reducing hazards to swimmers through prediction of and warning about rip current generation.
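The core of an OCM-style estimate is a time-lag cross-correlation: the velocity follows from the lag that best aligns pixel-intensity time series at two points a known distance apart. A minimal sketch with synthetic data standing in for real shoreline video:

```python
import numpy as np

# Estimate alongshore surface velocity from the cross-correlation lag
# between two pixel time series; synthetic signals replace real video.
dt, dx = 0.5, 2.0                   # frame interval (s), pixel spacing (m)
t = np.arange(0, 120, dt)
upstream = np.sin(2 * np.pi * t / 10) + 0.1 * np.random.randn(t.size)
lag_true = 4                        # 4 frames -> v = dx / (4*dt) = 1 m/s
downstream = np.roll(upstream, lag_true)

a = upstream - upstream.mean()
b = downstream - downstream.mean()
xcorr = np.correlate(b, a, mode="full")
lag = np.argmax(xcorr) - (a.size - 1)   # lag (frames) of best alignment
print(f"estimated longshore velocity: {dx / (lag * dt):.2f} m/s")
```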
Scrape-off-layer currents during MHD activity and disruptions in HBT-EP
NASA Astrophysics Data System (ADS)
Levesque, J. P.; Desanto, S.; Battey, A.; Bialek, J.; Brooks, J. W.; Mauel, M. E.; Navratil, G. A.
2017-10-01
We report scrape-off layer (SOL) current measurements during MHD mode activity and disruptions in the HBT-EP tokamak. Currents are measured via Rogowski coils mounted on tiles in the low-field-side SOL, toroidal jumpers between otherwise-isolated vessel sections, and segmented plasma current Rogowski coils. These currents strongly depend on the plasma's major radius, mode amplitude, and mode phase. Plasma current asymmetries and SOL currents during disruptions reach 4% of the plasma current. Asymmetric toroidal currents between vessel sections rotate at tens of kHz through most of the current quench, then symmetrize once Ip reaches 30% of its pre-disruptive value. Toroidal jumper currents oscillate between co- and counter-Ip, with co-Ip being dominant on average during disruptions. Increases in local plasma current correlate with counter-Ip current in the nearest toroidal jumper. Measurements are interpreted in the context of two models that produce contrary predictions for the toroidal vessel current polarity during disruptions. Plasma current asymmetries are consistent with both models, and scale with plasma displacement toward the wall. Progress of ongoing SOL current diagnostic upgrades is also presented. Supported by U.S. DOE Grant DE-FG02-86ER53222.
Measurement technology of RF interference current in high current system
NASA Astrophysics Data System (ADS)
Zhao, Zhihua; Li, Jianxuan; Zhang, Xiangming; Zhang, Lei
2018-06-01
The current probe is a detection method commonly used in electromagnetic compatibility (EMC) testing. With the development of power electronics technology, the power level of power conversion devices is constantly increasing, and the power current of the electric energy conversion device in an electromagnetic launch system can reach 10 kA. Current probes conventionally used in EMC detection cannot meet the test requirements of such high-current systems due to magnetic saturation. The conventional high-current sensor is also not suitable for measuring RF (radio frequency) interference current in high-current power devices, due to the high noise level at the output of its active amplifier. In this paper, a passive flexible current probe based on a Rogowski coil and a matching resistance is proposed that can withstand high current and has a low noise level, solving the measurement problems of interference current in high-current power converters. Both differential-mode and common-mode current detection can easily be carried out with the proposed probe because of its flexible structure.
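The principle behind such a probe can be summarized with the textbook Rogowski-coil relations; this is a sketch of the general physics, not the paper's specific design values:

```latex
% Differentiating mode: the coil emf follows the current derivative,
e(t) = M \,\frac{dI(t)}{dt}
% (M: mutual inductance between the measured conductor and the coil).
% Self-integrating mode: terminated in a matching resistance R_t with
% R_t \ll \omega L_c (L_c: coil self-inductance), the output becomes
v_o(t) \approx \frac{M R_t}{L_c}\, I(t)
% i.e. directly proportional to the measured current, with no active
% integrator and hence a low output noise floor.
```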
Anomalous-viscosity current drive
Stix, Thomas H.; Ono, Masayuki
1988-01-01
An apparatus and method for maintaining a steady-state current in a toroidal magnetically confined plasma. An electric current is generated in an edge region at or near the outermost good magnetic surface of the toroidal plasma. The edge current is generated in a direction parallel to the flow of current in the main plasma and such that its current density is greater than the average density of the main plasma current. The current flow in the edge region is maintained in a direction parallel to the main current for a period of one or two of its characteristic decay times. Current from the edge region will penetrate radially into the plasma and augment the main plasma current through the mechanism of anomalous viscosity. In another aspect of the invention, current flow driven between a cathode and an anode is used to establish a start-up plasma current. The plasma-current channel is magnetically detached from the electrodes, leaving a plasma magnetically insulated from contact with any material obstructions including the cathode and anode.
Alternating current photovoltaic building block
Bower, Ward Issac; Thomas, Michael G.; Ruby, Douglas S.
2004-06-15
A modular apparatus for, and method of, alternating current photovoltaic power generation, comprising: generating power in the form of direct current via a photovoltaic module; and converting the direct current to alternating current and exporting power via one or more power conversion and transfer units attached to the module, each unit comprising a unitary housing extending the length or width of the module, which housing comprises: contact means for receiving direct current from the module; one or more direct current-to-alternating current inverters; an alternating current bus; and contact means for receiving alternating current from the one or more inverters.
ERIC Educational Resources Information Center
Department of the Interior, Denver, CO. Engineering and Research Center.
Subjects covered in this text are controlling the hydroelectric generator, generator excitation, basic principles of direct current generation, direction of current flow, basic alternating current generator, alternating and direct current voltage outputs, converting alternating current to direct current, review of the basic generator and…
Measurement of scrape-off-layer current dynamics during MHD activity and disruptions in HBT-EP
NASA Astrophysics Data System (ADS)
Levesque, J. P.; Brooks, J. W.; Abler, M. C.; Bialek, J.; Byrne, P. J.; Hansen, C. J.; Hughes, P. E.; Mauel, M. E.; Navratil, G. A.; Rhodes, D. J.
2017-08-01
We report scrape-off layer (SOL) current measurements during magnetohydrodynamic (MHD) mode activity, resonant magnetic perturbations (RMPs), and disruptions in the High Beta Tokamak-Extended Pulse (HBT-EP) device. Currents are measured via segmented plasma current Rogowski coils, jumpers running toroidally between otherwise-isolated vessel sections, and a grounded electrode in the scrape-off layer. These currents strongly depend on the plasma's major radius, and amplitude and phase of non-axisymmetric field components. SOL currents connecting through the vessel are seen to reach ∼0.2-0.5% of the plasma current during typical kink activity and RMPs. Plasma current asymmetries and scrape-off-layer currents generated during disruptions, which are commonly called halo currents, reach ∼4% of Ip. Asymmetric toroidal currents between vessel sections rotate at tens of kHz through most of the current quench, then symmetrize once Ip reaches ∼30% of its pre-disruptive value. Toroidal jumper currents oscillate between co- and counter-Ip, with co-Ip being dominant on average during disruptions. A relative increase in local plasma current measured by a segmented Ip Rogowski coil correlates with counter-Ip current in the nearest toroidal jumper. Measurements are interpreted in the context of two models that produce contrary predictions for the toroidal vessel current polarity during disruptions. Plasma current asymmetry measurements are consistent with both models, and SOL currents scale with plasma displacement toward the vessel wall. The design of an upcoming SOL current diagnostic and control upgrade is also briefly presented.
Power conversion apparatus and method
Su, Gui-Jia [Knoxville, TN
2012-02-07
A power conversion apparatus includes an interfacing circuit that enables a current source inverter to operate from a voltage energy storage device (voltage source), such as a battery, ultracapacitor or fuel cell. The interfacing circuit, also referred to as a voltage-to-current converter, transforms the voltage source into a current source that feeds a DC current to a current source inverter. The voltage-to-current converter also provides means for controlling and maintaining a constant DC bus current that supplies the current source inverter. The voltage-to-current converter also enables the current source inverter to charge the voltage energy storage device, such as during dynamic braking of a hybrid electric vehicle, without the need of reversing the direction of the DC bus current.
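The abstract does not spell out the control loop; one plausible minimal sketch is a discrete PI regulator that holds the DC bus current at its setpoint by adjusting the converter duty cycle (all names and gain values here are hypothetical, not from the patent):

```python
def v2i_pi_step(i_bus, i_ref, integ, kp=0.05, ki=20.0, dt=1e-4):
    """One discrete PI step for a voltage-to-current converter: drive the
    measured DC bus current i_bus toward the setpoint i_ref via the duty
    cycle. Returns (duty, updated integrator state). Gains are illustrative."""
    err = i_ref - i_bus       # bus-current error (A)
    integ += err * dt         # integrator state
    duty = kp * err + ki * integ
    return min(max(duty, 0.0), 1.0), integ
```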
Current responsive devices for synchronous generators
Karlicek, Robert F.
1983-01-01
A device for detecting current imbalance between phases of a polyphase alternating current generator. A detector responds to the maximum peak current in the generator, and detecting means generates an output for each phase proportional to the peak current of each phase. Comparing means generates an output when the maximum peak current exceeds the phase peak current.
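A minimal sketch of the comparator logic described, per-phase peak detection measured against the generator-wide maximum peak (the sample layout and the tolerance margin are hypothetical):

```python
def imbalance_flags(phase_samples, margin=1.05):
    """Return True for each phase whose peak current the generator-wide
    maximum peak exceeds by more than a (hypothetical) tolerance margin."""
    peaks = [max(abs(s) for s in samples) for samples in phase_samples]
    i_max = max(peaks)
    return [i_max > margin * p for p in peaks]

# Three phases, phase C sagging:
print(imbalance_flags([[10, -9.8], [9.9, -10.1], [8.0, -7.9]]))
# [False, False, True]
```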
Umans, Stephen D.
2008-11-11
Apparatus and methods are provided for a system for measurement of a current in a conductor, such that the conductor current may be momentarily directed to a current measurement element in order to maintain proper current without significantly increasing the power dissipation attributable to the current measurement element or adding resistance to assist in current measurement. The apparatus and methods described herein are useful in superconducting circuits, where it is necessary to monitor current carried by the superconducting elements while minimizing the effects of power dissipation attributable to the current measurement element.
An Optimal Current Observer for Predictive Current Controlled Buck DC-DC Converters
Min, Run; Chen, Chen; Zhang, Xiaodong; Zou, Xuecheng; Tong, Qiaoling; Zhang, Qiao
2014-01-01
In digital current-mode-controlled DC-DC converters, conventional current sensors may not provide isolation while minimizing price, power loss, and size. Therefore, a current observer, which can be realized with the digital circuit itself, is a possible substitute. However, the observed current may diverge due to the parasitic resistors and the forward conduction voltage of the diode. Moreover, the divergence of the observed current will cause steady-state errors in the output voltage. In this paper, an optimal current observer is proposed. It achieves the highest observation accuracy by compensating for all the known parasitic parameters. By employing the optimal current observer-based predictive current controller, a buck converter is implemented. The converter's observed inductor current is convergent and accurate, and the converter shows a better transient response than a conventional voltage-mode-controlled converter. In addition, cost, power loss, and size are minimized, since the strategy requires no additional hardware for current sensing. The effectiveness of the proposed optimal current observer is demonstrated experimentally. PMID:24854061
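A minimal sketch of the kind of parasitic-compensated observer prediction the paper describes, using averaged buck dynamics (the parameter values and the exact compensation terms are illustrative assumptions, not the paper's):

```python
L_IND = 100e-6   # inductance (H)          - hypothetical
R_L = 0.08       # inductor ESR (ohm)      - hypothetical parasitic
R_ON = 0.05      # switch on-resistance    - hypothetical parasitic
V_D = 0.4        # diode forward drop (V)  - hypothetical parasitic
TS = 1e-5        # switching period (s)

def observe_current(i_prev, v_in, v_out, duty):
    """One-step prediction of the average inductor current over a switching
    period, compensating the known parasitics so the observed current does
    not diverge from the true one."""
    v_l = duty * (v_in - i_prev * R_ON) - (1.0 - duty) * V_D \
          - i_prev * R_L - v_out
    return i_prev + TS * v_l / L_IND
```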
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, B.E.; Biewer, T.M.; Chattopadhyay, P.K.
2000-09-01
Auxiliary edge current drive is routinely applied in the Madison Symmetric Torus [R. N. Dexter, D. W. Kerst, T. W. Lovell et al., Fusion Technol. 19, 131 (1991)] with the goal of modifying the parallel current profile to reduce current-driven magnetic fluctuations and the associated particle and energy transport. Provided by an inductive electric field, the current drive successfully reduces energy transport. First-time measurements of the modified edge current profile reveal that, relative to discharges without auxiliary current drive, the edge current density decreases. This decrease is explicable in terms of newly measured reductions in the dynamo (fluctuation-based) electric field and the electrical conductivity. Induced by the current drive, these two changes to the edge plasma play as much of a role in determining the resultant edge current profile as does the current drive itself.
100-kA vacuum current breaker of a modular design
NASA Astrophysics Data System (ADS)
Ivanov, V. P.; Vozdvijenskii, V. A.; Jagnov, V. A.; Solodovnikov, S. G.; Mazulin, A. V.; Ryjkov, V. M.
1994-05-01
A direct current breaker of modular design has been developed for the strong-field tokamak power supply system. The power supply system comprises four 800 MW alternating current generators with 4 GJ flywheels and thyristor rectifiers that charge the inductive stores with currents of up to 100 kA for 1-4 s. To form current pulses of various shapes in the tokamak windings, current breakers with either pneumatic or explosive drives are used, with current switching synchronized to within 100 µs. Current breakers of these types require that the current-conducting elements be replaced after each shot. In recent years, vacuum arc-quenching chambers with an axial magnetic field have been successfully employed as repetitive current breakers, mainly for currents up to 40 kA. This report presents results of studies of a modular vacuum switch used as a prototype switch for currents of the order of 100 kA.
Verma, Dharmendra; Kapadia, Asha; Adler, Douglas G
2007-08-01
Endoscopic biliary sphincterotomy (ES) can cause bleeding, pancreatitis, and perforation. This has, in part, been attributed to the type of electrosurgical current used for ES. No consensus exists on the optimal type of electrosurgical current for ES to maximize safety. To compare the rates of complications in patients undergoing ES via pure current versus mixed current. A systematic review of published, prospective, randomized trials that compared pure current with mixed current for ES. Patients undergoing ES, with random assignment to either current group. Data were standardized for pancreatitis and postsphincterotomy bleeding. There were insufficient data to analyze perforation risk. A random-effects model was used. Bleeding, pancreatitis, and perforation. A total of 804 patients from 4 trials that compared pure current to mixed current were analyzed. The aggregated rate of pancreatitis was 3.8%, 95% confidence interval (CI) 1.0%-6.6%, for the pure-current group versus 7.9%, 95% CI 3.1%-12.7%, for the mixed-current group; the difference was not statistically significant. The rate of bleeding (all severity groups) for the pure-current group was 37.3% (95% CI 27.3%, 47.3%), which was significantly higher than that of the mixed-current group (12.2% [95% CI 4.1%, 20.3%]). Mild bleeding was significantly more frequent with pure current (28.9% [95% CI 16.3%, 41.4%]) compared with mixed current (9.4% [95% CI 2.1%, 16.8%]). Variables, including endoscopist skill and cannulation difficulty, were difficult to measure. The rate of pancreatitis in patients who underwent ES when using pure current was not significantly different from those when using mixed current. Pure current was associated with more episodes of bleeding, primarily mild bleeding. Data were insufficient to analyze the perforation risk.
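The abstract names a random-effects model but not its formulas; as a rough illustration of how a pooled rate and 95% CI of the kind quoted above can be computed, here is a minimal DerSimonian-Laird sketch (the counts in the usage line are invented, not the study's data):

```python
import math

def pooled_rate_dl(events, sizes):
    """DerSimonian-Laird random-effects pooling of per-trial proportions.
    Sketch only: assumes 0 < rate < 1 in every trial."""
    rates = [e / n for e, n in zip(events, sizes)]
    var = [r * (1 - r) / n for r, n in zip(rates, sizes)]   # binomial variance
    w = [1.0 / v for v in var]                              # fixed-effect weights
    fixed = sum(wi * ri for wi, ri in zip(w, rates)) / sum(w)
    q = sum(wi * (ri - fixed) ** 2 for wi, ri in zip(w, rates))  # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(rates) - 1)) / c)             # between-trial variance
    w_re = [1.0 / (v + tau2) for v in var]                  # random-effects weights
    est = sum(wi * ri for wi, ri in zip(w_re, rates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical counts for four trials (NOT the study's data):
print(pooled_rate_dl(events=[3, 5, 2, 6], sizes=[100, 120, 90, 110]))
```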
Blaxter, T J; Carlen, P L; Niesen, C
1989-01-01
1. Rat dentate granule neurones in hippocampal slices were voltage-clamped at 21-23 degrees C using CsCl-filled microelectrodes. The perfusate contained TTX and K+ channel blockers to isolate pharmacologically inward Ca2+ currents. 2. From hyperpolarized holding potentials of -65 to -85 mV, depolarizing test potentials to between -50 and -40 mV elicited a transient (100-200 ms) low-threshold (TLT) current which was also elicited from more depolarized holding potentials following hyperpolarizing voltage steps of -40 mV or greater. 3. Larger depolarizing steps from a hyperpolarized holding potential triggered a large (2-6 nA), transient high-threshold (THT) inward current, rapidly peaking and decaying over 500 ms, followed by a sustained inward current component. 4. At depolarized holding potentials (-50 to -20 mV), the THT current was apparently inactivated and a sustained high-threshold (SHT) inward current was evident during depolarizing voltage steps of 10 mV or more. 5. From hyperpolarized holding potentials with depolarizing voltage steps of 10-30 mV, most neurones demonstrated a small-amplitude, sustained low-threshold (SLT) inward current with similar characteristics to the SHT current. 6. Zero-Ca2+ perfusate or high concentrations of Ca2+ channel blockers (Cd2+, Mn2+ or Ni2+) diminished or abolished all inward currents. 7. Repetitive voltage step activation of each current at 0.5 Hz reduced the large THT current to less than 25% of an unconditioned control current, reduced the SHT current by 50%, but had little effect on the TLT current. 8. A low concentration of Cd2+ (50 microM) blocked the THT and SHT currents with little effect on the TLT current. Nimodipine (1 microM) attenuated the SHT current. Ni2+ (100 microM) selectively attenuated the TLT current. 9. In low-Ca2+ perfusate, high concentrations of Ca2+ (10-15 mM), focally applied to different parts of the neurone, increased the THT current when applied to the dendrites, the SHT current when applied to the soma and the TLT current at all locations. Conversely, in regular perfusate, Cd2+ (1-5 mM), focally applied to the dendrites decreased the THT current and somatic applications decreased the SHT current. The TLT current was diminished regardless of the site of Cd2+ application. 10. These results suggest the existence of three different Ca2+ currents in dentate granule cells separable by their activation and inactivation characteristics, pharmacology and site of initiation. PMID:2557433
NASA Technical Reports Server (NTRS)
Le, Guan; Slavin, J. A.; Strangeway, Robert
2011-01-01
In this study, we use the in-situ magnetic field observations from Space Technology 5 mission to quantify the imbalance of Region 1 (R1) and Region 2 (R2) currents. During the three-month duration of the ST5 mission, geomagnetic conditions range from quiet to moderately active. We find that the R1 current intensity is consistently stronger than the R2 current intensity both for the dawnside and the duskside large-scale field-aligned current system. The net currents flowing into (out of) the ionosphere in the dawnside (duskside) are in the order of 5% of the total R1 currents. We also find that the net currents flowing into or out of the ionosphere are controlled by the solar wind-magnetosphere interaction in the same way as the field-aligned currents themselves are. Since the net currents due to the imbalance of the R1 and R2 currents require that their closure currents flow across the polar cap from dawn to dusk as Pedersen currents, our results indicate that the total amount of the cross-polar cap Pedersen currents is in the order of 0.1 MA. This study, although with a very limited dataset, is one of the first attempts to quantify the cross-polar cap Pedersen currents. Given the importance of the Joule heating due to Pedersen currents to the high-latitude ionospheric electrodynamics, quantifying the cross-polar cap Pedersen currents and associated Joule heating is needed for developing models of the magnetosphere-ionosphere coupling.
NASA Technical Reports Server (NTRS)
Le, Guan; Slavin, J. A.; Strangeway, Robert
2010-01-01
In this study, we use the in-situ magnetic field observations from Space Technology 5 mission to quantify the imbalance of Region 1 (R1) and Region 2 (R2) currents. During the three-month duration of the ST5 mission, geomagnetic conditions range from quiet to moderately active. We find that the R1 current intensity is consistently stronger than the R2 current intensity both for the dawnside and the duskside large-scale field-aligned current system. The net currents flowing into (out of) the ionosphere in the dawnside (duskside) are in the order of 5% of the total R1 currents. We also find that the net currents flowing into or out of the ionosphere are controlled by the solar wind-magnetosphere interaction in the same way as the field-aligned currents themselves are. Since the net currents due to the imbalance of the R1 and R2 currents require that their closure currents flow across the polar cap from dawn to dusk as Pedersen currents, our results indicate that the total amount of the cross-polar cap Pedersen currents is in the order of approximately 0.1 MA. This study, although with a very limited dataset, is one of the first attempts to quantify the cross-polar cap Pedersen currents. Given the importance of the Joule heating due to Pedersen currents to the high-latitude ionospheric electrodynamics, quantifying the cross-polar cap Pedersen currents and associated Joule heating is needed for developing models of the magnetosphere-ionosphere coupling.
Current responsive devices for synchronous generators
Karlicek, R.F.
1983-09-27
A device for detecting current imbalance between phases of a polyphase alternating current generator. A detector responds to the maximum peak current in the generator, and detecting means generates an output for each phase proportional to the peak current of each phase. Comparing means generates an output when the maximum peak current exceeds the phase peak current. 11 figs.
Hossack, A. C.; Sutherland, D. A.; Jarboe, T. R.
2017-02-01
A derivation is given showing that the current inside a closed-current volume can be sustained against resistive dissipation by appropriately phased magnetic perturbations. Imposed-dynamo current drive (IDCD) theory is used to predict the toroidal current evolution in the HIT-SI experiment as a function of magnetic fluctuations at the edge. Analysis of magnetic fields from a HIT-SI discharge shows that the injector-imposed fluctuations are sufficient to sustain the measured toroidal current without instabilities whereas the small, plasma-generated magnetic fluctuations are not sufficiently large to sustain the current.
Li, Bingchu; Ling, Xiao; Huang, Yixiang; Gong, Liang; Liu, Chengliang
2017-01-01
This paper presents a fixed-switching-frequency model predictive current controller using a multiplexed current sensor for switched reluctance machine (SRM) drives. The converter was modified to distinguish currents from simultaneously excited phases during the sampling period. The single current sensor installed in the converter was time-division multiplexed for phase current sampling. During the commutation stage, the control steps of adjacent phases were shifted so that their sampling times were staggered. The maximum and minimum duty ratios of the pulse width modulation (PWM) were limited to keep enough sampling time for analog-to-digital (A/D) conversion. Current sensor multiplexing was realized without complex adjustment of either the driver circuit or the control algorithms, and it helps to reduce the cost and the errors introduced in current sampling by inconsistency between sensors. The proposed controller is validated by both simulation and experimental results with a 1.5 kW three-phase 12/8 SRM. Satisfactory current sampling is achieved, with little difference compared with independent phase current sensors for each phase. The proposed controller tracks the reference current profile as accurately as a model predictive current controller with independent phase current sensors, while exhibiting only minor tracking errors compared with a hysteresis current controller. PMID:28513554
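As a concrete illustration of the duty-ratio limiting described above, a minimal sketch (the 20 kHz switching frequency and 2 µs conversion time are hypothetical values, not taken from the paper):

```python
T_PWM = 1.0 / 20e3   # hypothetical switching period for 20 kHz PWM (s)
T_ADC = 2e-6         # hypothetical time for one A/D conversion (s)

def clamp_duty(duty):
    """Clamp the PWM duty ratio so that both the on-time and the off-time
    stay long enough to fit one A/D conversion of the multiplexed sensor."""
    d_min = T_ADC / T_PWM      # shortest usable on-time
    d_max = 1.0 - d_min        # longest on-time that leaves an off-time window
    return min(max(duty, d_min), d_max)
```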
Oxygen concentration sensor for an internal combustion engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, T.; Okada, Y.; Mieno, T.
1988-09-29
This patent describes an oxygen concentration sensor, comprising: an oxygen ion conductive solid electrolyte member forming a gas diffusion restricted region into which a measuring gas is introduced; a pair of electrodes sandwiching the solid electrolyte member; pump current supply means applying a pump voltage to the pair of electrodes through a current detection element to generate a pump current; and a heater element connected to the solid electrolyte member for heating the solid electrolyte member when a heater current is supplied from a heater current source; wherein the oxygen concentration sensor detects an oxygen concentration in the measuring gas in terms of a current value of the pump current supplied through the current detection element and controls the oxygen concentration in the gas diffusion restricted region by conducting oxygen ions through the solid electrolyte member in accordance with the flow of the pump current; and wherein the current detection element is connected to the electrode of the pair of electrodes facing the gas diffusion restricted region for ensuring that the current value is representative of the pump current and possible leakage current from the heater current.
Yoosefinejad, Amin Kordi; Motealleh, Alireza; Abbasnia, Keramatollah
2016-01-01
Iontophoresis is the noninvasive delivery of ions using direct current. Direct current has some disadvantages, such as skin burning. Interferential current is a kind of alternating current without the limitations of direct current, so the purpose of this study was to investigate and compare the effects of lidocaine, interferential current, and lidocaine iontophoresis using interferential current. Thirty healthy women aged 20-24 years participated in this randomized clinical trial. Pressure, tactile, and pain thresholds were evaluated before and after the application of the treatment methods. Pressure, tactile, and pain sensitivity increased significantly after the application of lidocaine alone (p < 0.005) and lidocaine iontophoresis using interferential current (p < 0.0001). Lidocaine iontophoresis using interferential current can increase the perception thresholds of pain, tactile stimulus, and pressure sense more significantly than lidocaine or interferential current alone.
Analysis and modeling of leakage current sensor under pulsating direct current
NASA Astrophysics Data System (ADS)
Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo
2017-05-01
In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary-side current and the excitation current. The transformation process of the current sensor is illustrated in detail and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed for comparative evaluation, and both simulation and experimental results are presented to verify the correctness of the theoretical analysis.
Oscillatory nonohmic current drive for maintaining a plasma current
Fisch, N.J.
1984-01-01
Apparatus and methods are described for maintaining a plasma current with an oscillatory nonohmic current drive. Each cycle of operation has a generation period in which current driving energy is applied to the plasma, and a relaxation period in which current driving energy is removed. Plasma parameters, such as plasma temperature or plasma average ionic charge state, are modified during the generation period so as to oscillate plasma resistivity in synchronism with the application of current driving energy. The invention improves overall current drive efficiencies.
Oscillatory nonohmic current drive for maintaining a plasma current
Fisch, Nathaniel J.
1986-01-01
Apparatus and method of the invention maintain a plasma current with an oscillatory nonohmic current drive. Each cycle of operation has a generation period in which current driving energy is applied to the plasma, and a relaxation period in which current driving energy is removed. Plasma parameters, such as plasma temperature or plasma average ionic charge state, are modified during the generation period so as to oscillate plasma resistivity in synchronism with the application of current driving energy. The invention improves overall current drive efficiencies.
NASA Technical Reports Server (NTRS)
Le, G.
2008-01-01
A major unsolved question in the physics of ionosphere-magnetosphere coupling is how field-aligned currents (FACs) close. In order to maintain the divergence-free condition, overall downward FACs (carried mainly by upward electrons) must eventually balance the overall upward FACs associated with the precipitating electrons through ionospheric Pedersen currents. Although much of the current closure may take place via local Pedersen currents flowing between Region 1 (R1) and Region 2 (R2) FACs, there is generally an imbalance in total currents between them, i.e., more current in R1 than in R2. The net currents may be closed within R1 via cross-polar cap Pedersen currents. In this study, we use the magnetic field observations from the Space Technology 5 mission to quantify the imbalance of R1 and R2 currents. We will determine the net R1-R2 currents under various solar wind conditions and discuss the implication of such imbalance for the ionospheric closure currents.
NASA Technical Reports Server (NTRS)
Taguchi, S.; Sugiura, M.; Winningham, J. D.; Slavin, J. A.
1993-01-01
The magnetic field and plasma data from 47 passes of DE-2 are used to study the IMF By-dependent distribution of field-aligned currents in the cleft region. It is proposed that the low-latitude cleft current (LCC) region is not an extension of the region 1 or region 2 current system and that a pair of LCCs and high-latitude cleft currents (HCCs) constitutes the cleft field-aligned current regime. The proposed pair of cleft field-aligned currents is explained with a qualitative model in which this pair of currents is generated on open field lines that have just been reconnected on the dayside magnetopause. The electric fields are transmitted along the field lines to the ionosphere, creating a poleward electric field and a pair of field-aligned currents when By is positive; the pair of field-aligned currents consists of a downward current at lower latitudes and an upward current at higher latitudes. In the By negative case, the model explains the reversal of the field-aligned current direction in the LCC and HCC regions.
Magnetic Configurations of the Tilted Current Sheets and Dynamics of Their Flapping in Magnetotail
NASA Astrophysics Data System (ADS)
Shen, C.; Rong, Z. J.; Li, X.; Dunlop, M.; Liu, Z. X.; Malova, H. V.; Lucek, E.; Carr, C.
2009-04-01
Based on multiple spacecraft measurements, the geometrical structures of tilted current sheets and tail flapping waves have been analyzed and some features of the tilted current sheets have been made clear for the first time. The geometrical features of the tilted current sheet revealed in this investigation are as follows: (1) The magnetic field lines (MFLs) are generally plane curves and the osculating planes in which the MFLs lie are roughly vertical to the magnetic equatorial plane, while the tilted current sheet may lean severely to the dawn or dusk side. (2) The tilted current sheet may become very thin; its half thickness is generally much less than the minimum radius of curvature of the MFLs. (3) In the neutral sheet, the field-aligned current density becomes very large and has a maximum value at the center of the current sheet. (4) In some cases, the current density is bifurcated, and the two humps of the current density often coincide with two peaks in the gradient of the magnetic field strength, indicating that the magnetic gradient drift current is possibly responsible for the formation of the two humps of the current density in some tilted current sheets. Tilted current sheets often appear along with thick tail current sheet flapping waves. It is found that, in the tail flapping current sheets, the minimum curvature radius of the MFLs in the current sheet is rather large, with values around 1 RE, while the neutral sheet may be very thin, with a half thickness of several tenths of RE. During the flapping waves, the current sheet is tilted substantially, and the maximum tilt angle is generally larger than 45°.
2015-01-01
The basic properties of the near-Earth current sheet from 8 RE to 12 RE were determined based on Time History of Events and Macroscale Interactions during Substorms (THEMIS) observations from 2007 to 2013. Ampere's law was used to estimate the current density when the locations of two spacecraft were suitable for the calculation. A total of 3838 current density observations were obtained to study the vertical profile. For typical solar wind conditions, the current density near (off) the central plane of the current sheet ranged from 1 to 2 nA/m^2 (1 to 8 nA/m^2). All the high current densities appeared off the central plane of the current sheet, indicating the formation of a bifurcated current sheet structure when the current density increased above 2 nA/m^2. The median profile also showed a bifurcated structure, in which the half thickness was about 3 RE. The distance between the peak of the current density and the central plane of the current sheet was 0.5 to 1 RE. High current densities above 4 nA/m^2 were observed in some cases that occurred preferentially during substorms, but they also occurred in quiet times. In contrast to the commonly accepted picture, these high current densities can form without a high solar wind dynamic pressure. In addition, these high current densities can appear in two magnetic configurations: tail-like and dipolar structures. At least two mechanisms, magnetic flux depletion and new current system formation during the expansion phase, other than plasma sheet compression are responsible for the formation of the bifurcated current sheets. PMID:27722039
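A minimal sketch of the two-spacecraft Ampere's-law estimate the authors describe, under the usual locally planar sheet approximation (the function name and unit choices are mine):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def sheet_current_density_nA(bx1_nT, bx2_nT, dz_km):
    """Estimate the cross-tail current density j_y ~ (1/mu0) * dBx/dz from
    two spacecraft separated in z, treating the sheet as locally planar."""
    dbx_T = (bx2_nT - bx1_nT) * 1e-9   # nT -> T
    dz_m = dz_km * 1e3                 # km -> m
    return abs(dbx_T / (MU0 * dz_m)) * 1e9   # A/m^2 -> nA/m^2

# A 10 nT change in Bx across 1000 km gives ~8 nA/m^2, at the upper end of
# the 1-8 nA/m^2 range reported above.
print(round(sheet_current_density_nA(0.0, 10.0, 1000.0), 1))
```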
Particle-bearing currents in uniform density and two-layer fluids
NASA Astrophysics Data System (ADS)
Sutherland, Bruce R.; Gingras, Murray K.; Knudson, Calla; Steverango, Luke; Surma, Christopher
2018-02-01
Lock-release gravity current experiments are performed to examine the evolution of a particle-bearing flow that propagates either in a uniform-density fluid or in a two-layer fluid. In all cases, the current is composed of fresh water plus micrometer-scale particles, the ambient fluid is saline, and the current advances initially either over the surface as a hypopycnal current or at the interface of the two-layer fluid as a mesopycnal current. In most cases the tank is tilted so that the ambient fluid becomes deeper with distance from the lock. For hypopycnal currents advancing in a uniform-density fluid, the current typically slows as particles rain out of the current. While the loss of particles alone from the current should increase the current's buoyancy and speed, in practice the current's speed decreases because the particles carry with them interstitial fluid from the current. Meanwhile, rather than settling on the sloping bottom of the tank, the particles form a hyperpycnal (turbidity) current that advances until enough particles rain out that the relatively less dense interstitial fluid returns to the surface, carrying some particles back upward. When a hypopycnal current runs over the surface of a two-layer fluid, the particles that rain out temporarily halt their descent as they reach the interface, eventually passing through it and again forming a hyperpycnal current. Dramatically, a mesopycnal current in a two-layer fluid first advances along the interface and then reverses direction as particles rain out below and fresh interstitial fluid rises above.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, B. E.; Biewer, T. M.; Chattopadhyay, P. K.
2000-09-01
Auxiliary edge current drive is routinely applied in the Madison Symmetric Torus [R. N. Dexter, D. W. Kerst, T. W. Lovell et al., Fusion Technol. 19, 131 (1991)] with the goal of modifying the parallel current profile to reduce current-driven magnetic fluctuations and the associated particle and energy transport. Provided by an inductive electric field, the current drive successfully reduces fluctuations and transport. First-time measurements of the modified edge current profile reveal that, relative to discharges without auxiliary current drive, the edge current density decreases. This decrease is explicable in terms of newly measured reductions in the dynamo (fluctuation-based) electric field and the electrical conductivity. Induced by the current drive, these two changes to the edge plasma play as much of a role in determining the resultant edge current profile as does the current drive itself. (c) 2000 American Institute of Physics.
Relationship between Birkeland current regions, particle precipitation, and electric fields
NASA Technical Reports Server (NTRS)
De La Beaujardiere, O.; Watermann, J.; Newell, P.; Rich, F.
1993-01-01
The relationship of the large-scale dayside Birkeland currents to large-scale particle precipitation patterns, currents, and convection is examined using DMSP and Sondrestrom radar observations. It is found that the local time of the mantle currents is not limited to the longitude of the cusp proper, but covers a larger local time extent. The mantle currents flow entirely on open field lines. About half of region 1 currents flow on open field lines, consistent with the assumption that the region 1 currents are generated by the solar wind dynamo and flow within the surface that separates open and closed field lines. More than 80 percent of the Birkeland current boundaries do not correspond to particle precipitation boundaries. Region 2 currents extend beyond the plasma sheet poleward boundary; region 1 currents flow in part on open field lines; mantle currents and mantle particles are not coincident. On most passes when a triple current sheet is observed, the convection reversal is located on closed field lines.
NASA Astrophysics Data System (ADS)
Coxon, John C.; Rae, I. Jonathan; Forsyth, Colin; Jackman, Caitriona M.; Fear, Robert C.; Anderson, Brian J.
2017-06-01
We conduct a superposed epoch analysis of Birkeland current densities from AMPERE (Active Magnetosphere and Planetary Electrodynamics Response Experiment) using isolated substorm expansion phase onsets identified by an independently derived data set. In order to evaluate whether R1 and R2 currents contribute to the substorm current wedge, we rotate global maps of Birkeland currents into a common coordinate system centered on the magnetic local time of substorm onset. When the latitude of substorm onset is taken into account, it is clear that both R1 and R2 current systems play a role in substorm onset, contrary to previous studies which found that R2 current did not contribute. The latitude of substorm onset is colocated with the interface between R1 and R2 currents, allowing us to infer that R1 current closes just tailward and R2 current closes just earthward of the associated current disruption in the tail. AMPERE is the first data set to give near-instantaneous measurements of Birkeland current across the whole polar cap, and this study addresses apparent discrepancies in previous studies which have used AMPERE to examine the morphology of the substorm current wedge. Finally, we present evidence for an extremely localized reduction in current density immediately prior to substorm onset, and we interpret this as the first statistical signature of auroral dimming in Birkeland current.
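For readers unfamiliar with the method, a superposed epoch analysis simply re-registers each event onto a common time axis centered on onset and averages; a minimal sketch (the array names and the interpolation choice are mine):

```python
import numpy as np

def superposed_epoch(times, values, onsets, half_window, dt):
    """Average a signal over epochs centered on event onset times.
    `times` must be monotonically increasing for np.interp to be valid."""
    lags = np.arange(-half_window, half_window + dt, dt)
    stack = np.array([np.interp(t0 + lags, times, values) for t0 in onsets])
    return lags, stack.mean(axis=0)
```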
NASA Astrophysics Data System (ADS)
Hossack, A. C.; Sutherland, D. A.; Jarboe, T. R.
2017-02-01
A derivation is given showing that the current inside a closed-current volume can be sustained against resistive dissipation by appropriately phased magnetic perturbations. Imposed-dynamo current drive theory is used to predict the toroidal current evolution in the helicity injected torus with steady inductive helicity injection (HIT-SI) experiment as a function of magnetic fluctuations at the edge. Analysis of magnetic fields from a HIT-SI discharge shows that the injector-imposed fluctuations are sufficient to sustain the measured toroidal current without instabilities whereas the small, plasma-generated magnetic fluctuations are not sufficiently large to sustain the current.
The Experiment of Modulated Toroidal Current on HT-7 and HT-6M Tokamak
NASA Astrophysics Data System (ADS)
Mao, Jian-shan; P, Phillips; Luo, Jia-rong; Xu, Yu-hong; Zhao, Jun-yu; Zhang, Xian-mei; Wan, Bao-nian; Zhang, Shou-yin; Jie, Yin-xian; Wu, Zhen-wei; Hu, Li-qun; Liu, Sheng-xia; Shi, Yue-jiang; Li, Jian-gang; HT-6M; HT-7 Group
2003-02-01
Experiments on modulated toroidal current were performed on the HT-6M tokamak and the HT-7 superconducting tokamak. The toroidal current was modulated by programming the Ohmic heating field. Modulation of the plasma current has been used successfully to suppress MHD activity in discharges near the density limit, where large MHD m = 2 tearing modes were suppressed by sufficiently large plasma current oscillations. An improved Ohmic confinement phase was observed during modulated toroidal current (MTC) operation on the Hefei Tokamak-6M (HT-6M) and the Hefei superconducting Tokamak-7 (HT-7). A frequency-modulated toroidal current, induced by a modulated loop voltage, was added to the plasma equilibrium current. The ratio of the AC amplitude of the plasma current to the main plasma current, ΔIp/Ip, is about 12%-30%. Different formats of the frequency-modulated toroidal current were compared.
The auroral current circuit and field-aligned currents observed by FAST
NASA Astrophysics Data System (ADS)
Elphic, R. C.; Bonnell, J. W.; Strangeway, R. J.; Kepko, L.; Ergun, R. E.; McFadden, J. P.; Carlson, C. W.; Peria, W.; Cattell, C. A.; Klumpar, D.; Shelley, E.; Peterson, W.; Moebius, E.; Kistler, L.; Pfaff, R.
FAST observes signatures of small-scale downward-going current at the edges of the inverted-V regions where the primary (auroral) electrons are found. In the winter pre-midnight auroral zone these downward currents are carried by upward flowing low- and medium-energy (up to several keV) electron beams. FAST instrumentation shows agreement between the current densities inferred from both the electron distributions and gradients in the magnetic field. FAST data taken near apogee (˜4000 km altitude) commonly show downward current magnetic field deflections consistent with the observed upward flux of ˜10^9 electrons cm^-2 s^-1, or current densities of several µA m^-2. The electron, field-aligned current and electric field signatures indicate the downward currents may be associated with “black aurora” and auroral ionospheric cavities. The field-aligned voltage-current relationship in the downward current region is nonlinear.
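As a consistency check on the quoted numbers (my arithmetic, not the authors'): an electron number flux of $10^9\,\mathrm{cm^{-2}\,s^{-1}}$ corresponds to a current density of

$$ j = e\,\Phi = (1.6\times10^{-19}\,\mathrm{C})\,(10^{9}\,\mathrm{cm^{-2}\,s^{-1}}) = 1.6\times10^{-10}\,\mathrm{A\,cm^{-2}} = 1.6\,\mu\mathrm{A\,m^{-2}}, $$

i.e. of the order of the several $\mu\mathrm{A\,m^{-2}}$ reported.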
NASA Astrophysics Data System (ADS)
Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag
2018-06-01
Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.
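The episodes described are near-inertial, so their period follows directly from the local Coriolis frequency; a minimal sketch (the 61°N latitude is an illustrative northern North Sea value, not taken from the paper):

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def inertial_period_hours(lat_deg):
    """Inertial period T = 2*pi / f with Coriolis frequency
    f = 2*Omega*sin(latitude)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return 2.0 * math.pi / f / 3600.0

print(round(inertial_period_hours(61.0), 1))  # ~13.7 h at 61 N
```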
NASA Astrophysics Data System (ADS)
Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag
2018-04-01
Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.
The Rogowski Coil Sensor in High Current Application: A Review
NASA Astrophysics Data System (ADS)
Nazmy Nanyan, Ayob; Isa, Muzamir; Hamid, Haziah Abdul; Nur Khairul Hafizi Rohani, Mohamad; Ismail, Baharuddin
2018-03-01
The Rogowski coil is used for measuring alternating current (AC) and high-speed current pulses. Advances in technology have brought further improvements and modifications to the Rogowski coil (RC), and it is still being studied for new applications today. The Rogowski coil has several advantages over the high frequency current transformer (HFCT). A brief review of the basic theory and of previous researchers' applications of the Rogowski coil as a current sensor is presented and discussed in this paper. Additionally, the review focuses on the capability of the Rogowski coil for high-current measurement and its application to fault detection, overvoltage sensing, lightning current sensing, and high-impulse current detection. The experimental setups, techniques, and measurement parameters in the models are also discussed. Finally, a brief review of the performance of the Rogowski coil as a current sensor, including sensitivity and maximum detectable current, is presented as a guideline for other researchers developing an advanced RC as a high-current sensor in the future. This review reveals that the RC performs very well in high-current detection, with response times down to a few nanoseconds, high bandwidth, excellent detection of high fault currents, and the ability to measure lightning currents up to 400 kA, and that it has many advantages compared to the conventional current transformer (CT).
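The coil's output is the derivative of the measured current scaled by the mutual inductance, so reconstruction is an integration; a minimal sketch (the drift compensation real integrators need is omitted, and all names are mine):

```python
import numpy as np

def rogowski_current(v_coil, dt, mutual_H):
    """Reconstruct i(t) from Rogowski coil output v(t) = -M di/dt by
    cumulative integration. v_coil in volts, dt in seconds, mutual_H
    (the coil's mutual inductance M) in henries."""
    return -np.cumsum(v_coil) * dt / mutual_H
```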
Frequency behavior of the residual current devices
NASA Astrophysics Data System (ADS)
Erdei, Z.; Horgos, M.; Lung, C.; Pop-Vadean, A.; Muresan, R.
2017-01-01
This paper presents an experimental investigation into the operating characteristics of residual current devices in the presence of a residual current at a frequency of 60 Hz. In order to protect persons and equipment effectively, residual current devices are made to be very sensitive to the ground fault current or the touch current. Because of their high sensitivity, residual current circuit breakers are prone to tripping under no-fault conditions.
Dahl, David A.; Appelhans, Anthony D.; Olson, John E.
1997-01-01
A current measuring system comprising a current measuring device having a first electrode at ground potential, and a second electrode; a current source having an offset potential of at least three hundred volts, the current source having an output electrode; and a capacitor having a first electrode electrically connected to the output electrode of the current source and having a second electrode electrically connected to the second electrode of the current measuring device.
Emission current control system for multiple hollow cathode devices
NASA Technical Reports Server (NTRS)
Beattie, John R. (Inventor); Hancock, Donald J. (Inventor)
1988-01-01
An emission current control system for balancing the individual emission currents from an array of hollow cathodes has current sensors for determining the current drawn by each cathode from a power supply. Each current sensor has an output signal which has a magnitude proportional to the current. The current sensor output signals are averaged, the average value so obtained being applied to a respective controller for controlling the flow of an ion source material through each cathode. Also applied to each controller are the respective sensor output signals for each cathode and a common reference signal. The flow of source material through each hollow cathode is thereby made proportional to the current drawn by that cathode, the average current drawn by all of the cathodes, and the reference signal. Thus, the emission current of each cathode is controlled such that each is made substantially equal to the emission current of each of the other cathodes. When utilized as a component of a multiple hollow cathode ion propulsion motor, the emission current control system of the invention provides for balancing the thrust of the motor about the thrust axis and also for preventing premature failure of a hollow cathode source due to operation above a maximum rated emission current.
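The control law sketched in the abstract makes each cathode's flow command a function of its own current, the array average, and a common reference; a minimal sketch with hypothetical gains (the linear combination is one plausible reading of the patent text, not its exact circuit):

```python
def flow_commands(currents, i_ref, k_self=0.5, k_avg=0.3, k_ref=0.2):
    """Per-cathode flow commands proportional to the cathode's own emission
    current, the array-average current, and a common reference signal.
    Gains are illustrative, not from the patent."""
    avg = sum(currents) / len(currents)
    return [k_self * i + k_avg * avg + k_ref * i_ref for i in currents]
```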
Ionospheric convection driven by NBZ currents
NASA Technical Reports Server (NTRS)
Rasmussen, C. E.; Schunk, R. W.
1987-01-01
Computer simulations of Birkeland currents and electric fields in the polar ionosphere during periods of northward IMF were conducted. When the IMF z component is northward, an additional current system, called the NBZ current system, is present in the polar cap. These simulations show the effect of the addition of NBZ currents on ionospheric convection, particularly in the polar cap. When the total current in the NBZ system is roughly 25 to 50 percent of the net region 1 and 2 currents, convection in the central portion of the polar cap reverses direction and turns sunward. This creates a pattern of four-cell convection with two small cells located in the polar cap, rotating in an opposite direction from the larger cells. When the Birkeland currents are fixed (constant current source), the electric field is reduced in regions of relatively high conductivity, which affects the pattern of ionospheric convection. Day-night asymmetries in conductivity change convection in such a way that the two polar-cap cells are located within the large dusk cell. When ionospheric convection is fixed (constant voltage source), Birkeland currents are increased in regions of relatively high conductivity. Ionospheric currents, which flow horizontally to close the Birkeland currents, are changed appreciably by the NBZ current system. The principal effect is an increase in ionospheric current in the polar cap.
Calcium currents in a fast-twitch skeletal muscle of the rat.
Donaldson, P L; Beam, K G
1983-10-01
Slow ionic currents were measured in the rat omohyoid muscle with the three-microelectrode voltage-clamp technique. Sodium and delayed rectifier potassium currents were blocked pharmacologically. Under these conditions, depolarizing test pulses elicited an early outward current, followed by a transient slow inward current, followed in turn by a late outward current. The early outward current appeared to be a residual delayed rectifier current. The slow inward current was identified as a calcium current on the basis that (a) its magnitude depended on extracellular calcium concentration, (b) it was blocked by the addition of the divalent cations cadmium or nickel, and reduced in magnitude by the addition of manganese or cobalt, and (c) barium was able to replace calcium as an inward current carrier. The threshold potential for inward calcium current was around -20 mV in 10 mM extracellular calcium and about -35 mV in 2 mM calcium. Currents were net inward over part of their time course for potentials up to at least +30 mV. At temperatures of 20-26 degrees C, the peak inward current (at approximately 0 mV) was 139 +/- 14 microA/cm2 (mean +/- SD), increasing to 226 +/- 28 microA/cm2 at temperatures of 27-37 degrees C. The late outward current exhibited considerable fiber-to-fiber variability. In some fibers it was primarily a time-independent, nonlinear leakage current. In other fibers it appeared to be the sum of both leak and a slowly activated outward current. The rate of activation of inward calcium current was strongly temperature dependent. For example, in a representative fiber, the time-to-peak inward current for a +10-mV test pulse decreased from approximately 250 ms at 20 degrees C to 100 ms at 30 degrees C. At 37 degrees C, the time-to-peak current was typically approximately 25 ms. The earliest phase of activation was difficult to quantify because the ionic current was partially obscured by nonlinear charge movement. Nonetheless, at physiological temperatures, the rate of calcium channel activation in rat skeletal muscle is about five times faster than activation of calcium channels in frog muscle. This pathway may be an important source of calcium entry in mammalian muscle.
Iberiotoxin-sensitive and -insensitive BK currents in Purkinje neuron somata
Benton, Mark D.; Lewis, Amanda H.; Bant, Jason S.
2013-01-01
Purkinje cells have specialized intrinsic ionic conductances that generate high-frequency action potentials. Disruptions of their Ca or Ca-activated K (KCa) currents correlate with altered firing patterns in vitro and impaired motor behavior in vivo. To examine the properties of somatic KCa currents, we recorded voltage-clamped KCa currents in Purkinje cell bodies isolated from postnatal day 17–21 mouse cerebellum. Currents were evoked by endogenous Ca influx with approximately physiological Ca buffering. Purkinje somata expressed voltage-activated, Cd-sensitive KCa currents with iberiotoxin (IBTX)-sensitive (>100 nS) and IBTX-insensitive (>75 nS) components. IBTX-sensitive currents activated and partially inactivated within milliseconds. Rapid, incomplete macroscopic inactivation was also evident during 50- or 100-Hz trains of 1-ms depolarizations. In contrast, IBTX-insensitive currents activated more slowly and did not inactivate. These currents were insensitive to the small- and intermediate-conductance KCa channel blockers apamin, scyllatoxin, UCL1684, bicuculline methiodide, and TRAM-34, but were largely blocked by 1 mM tetraethylammonium. The underlying channels had single-channel conductances of ∼150 pS, suggesting that the currents are carried by IBTX-resistant (β4-containing) large-conductance KCa (BK) channels. IBTX-insensitive currents were nevertheless increased by small-conductance KCa channel agonists EBIO, chlorzoxazone, and CyPPA. During trains of brief depolarizations, IBTX-insensitive currents flowed during interstep intervals, and the accumulation of interstep outward current was enhanced by EBIO. In current clamp, EBIO slowed spiking, especially during depolarizing current injections. The two components of BK current in Purkinje somata likely contribute differently to spike repolarization and firing rate. Moreover, augmentation of BK current may partially underlie the action of EBIO and chlorzoxazone to alleviate disrupted Purkinje cell firing associated with genetic ataxias. PMID:23446695
NASA Astrophysics Data System (ADS)
Bradley, T. J.; Cowley, S. W. H.; Provan, G.; Hunt, G. J.; Bunce, E. J.; Wharton, S. J.; Alexeev, I. I.; Belenkaya, E. S.; Kalegaev, V. V.; Dougherty, M. K.
2018-05-01
We present a new analysis of Cassini magnetic field data from the 2012/2013 Saturn northern spring interval of highly inclined orbits and compare them with similar data from late southern summer in 2008, thus providing unique information on the seasonality of the currents that couple momentum between Saturn's ionosphere and magnetosphere. Inferred meridional ionospheric currents in both cases consist of a steady component related to plasma subcorotation, together with the rotating current systems of the northern and southern planetary period oscillations (PPOs). Subcorotation currents during the two intervals show opposite north-south polar region asymmetries, with strong equatorward currents flowing in the summer hemispheres but only weak currents flowing to within a few degrees of the open-closed boundary (OCB) in the winter hemispheres, inferred to be due to weak polar ionospheric conductivities. Currents peak at 1 MA rad^-1 in both hemispheres just equatorward of the OCB, associated with total downward polar currents of 6 MA, then fall across the narrow auroral upward current region to small values at subauroral latitudes. PPO-related currents have a similar form in both summer and winter, with principal upward and downward field-aligned currents peaking at 1.25 MA rad^-1 essentially collocated with the auroral upward current and approximately equal in strength. Though northern and southern PPO currents were approximately equal during both intervals, the currents in both hemispheres were dual modulated by both systems during 2012/2013, with approximately half the main current closing in the opposite ionosphere and half cross-field in the magnetosphere, while only the northern hemisphere currents were similarly dual modulated in 2008.
Space Technology 5 (ST-5) Observations of the Imbalance of Region 1 and 2 Field-Aligned Currents
NASA Technical Reports Server (NTRS)
Le, Guan
2010-01-01
Space Technology 5 (ST-5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this study, we use the in-situ magnetic field observations from the Space Technology 5 mission to quantify the imbalance of Region 1 (R1) and Region 2 (R2) currents. During the three-month duration of the ST5 mission, geomagnetic conditions range from quiet to moderately active. We find that the R1 current intensity is consistently stronger than the R2 current intensity both for the dawnside and the duskside large-scale field-aligned current system. The net currents flowing into (out of) the ionosphere in the dawnside (duskside) are in the order of 5% of the total R1 currents. We also find that the net currents flowing into or out of the ionosphere are controlled by the solar wind-magnetosphere interaction in the same way as the field-aligned currents themselves are. Since the net currents due to the imbalance of the R1 and R2 currents require that their closure currents flow across the polar cap from dawn to dusk as Pedersen currents, our results indicate that the total amount of the cross-polar cap Pedersen currents is in the order of approx. 0.1 MA. This study, although with a very limited dataset, is one of the first attempts to quantify the cross-polar cap Pedersen currents. Given the importance of the Joule heating due to Pedersen currents to the high-latitude ionospheric electrodynamics, quantifying the cross-polar cap Pedersen currents and associated Joule heating is needed for developing models of the magnetosphere-ionosphere coupling.
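Taken at face value, the two quoted figures give a rough scale for the total R1 current (my inference, not stated in the abstract):

$$ I_{\mathrm{net}} \approx 0.05\,I_{\mathrm{R1}} \approx 0.1\ \mathrm{MA} \;\Rightarrow\; I_{\mathrm{R1}} \approx 2\ \mathrm{MA}. $$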
Substorm Birkeland currents and Cowling channels in the ionosphere
NASA Astrophysics Data System (ADS)
Fujii, R.
2016-12-01
Field-aligned currents (FACs) connect the ionosphere electromagnetically with the magnetosphere and play important roles in the dynamics and energetics of both regions. In particular, connections between FACs in the ionosphere give important information on the various current sources in the magnetosphere and the linkage between them, although the connection between FACs in the ionosphere does not straightforwardly give that in the magnetosphere. FACs in the ionosphere close on each other through ionospheric currents determined by the electric field and the Hall and Pedersen conductivities. The electric field and the conductivities are not independently distributed; rather, they are harmonized with each other spatially and temporally in a physically consistent manner to give a certain FAC. In particular, the divergence of the Hall current due to the inhomogeneity of the Hall conductivity either flows in or out to the magnetosphere as a secondary FAC or accumulates excess charges that produce a secondary electric field. This electric field drives a current circuit connecting the Hall current with the Pedersen current: a Cowling channel current circuit. The FAC (the electric field) we observe is the sum of the primary and secondary FACs (electric fields). This talk will present characteristics of FACs and the associated electric fields and auroras during substorms, and the ionospheric current closures between the FACs. A statistical study has shown that the majority of region 1 currents are connected to their adjacent region 2 or region 0 currents, indicating that Pedersen current closure rather than Hall current closure is dominant. On the other hand, the Pedersen currents associated with surges and substorm-related auroras are often connected to the Hall currents, forming a Cowling channel current circuit within the ionosphere.
NASA Astrophysics Data System (ADS)
Liu, J.; Angelopoulos, V.; Chu, X.; McPherron, R. L.
2016-12-01
Although Earth's Region 1 and 2 currents are related to activities such as substorm initiation, their magnetospheric origin remains unclear. Utilizing the triangular configuration of the THEMIS probes at 8-12 RE downtail, we seek the origin of nightside Region 1 and 2 currents. The triangular configuration allows a curlometer-like technique which does not rely on active-time boundary crossings, so we can examine the current distribution in quiet times as well as active times. Our statistical study reveals that both Region 1 and 2 currents exist in the plasma sheet during quiet and active times. Notably, this is the first unequivocal, in-situ evidence of the existence of Region 2 currents in the plasma sheet. Farther away from the neutral sheet than the Region 2 currents lie the Region 1 currents, which extend at least to the plasma sheet boundary layer. At geomagnetically quiet times, the separation between the two currents is located 2.5 RE from the neutral sheet. These findings suggest that the plasma sheet is a source of Region 1 and 2 currents regardless of geomagnetic activity level. During substorms, the separation between Region 1 and 2 currents migrates toward (away from) the neutral sheet as the plasma sheet thins (thickens). This migration indicates that the deformation of Region 1 and 2 currents is associated with redistribution of FAC sources in the magnetotail. In some substorms, when the THEMIS probes encounter a dipolarization, a substorm current wedge (SCW) can be inferred from our technique, and it shows a distinctively larger current density than the pre-existing Region 1 currents. This difference suggests that the SCW is not just an enhancement of the pre-existing Region 1 current; the SCW and the Region 1 currents have different sources.
Some Comments on Topological Approaches to the π-Electron Currents in Conjugated Systems.
Dickens, Timothy K; Gomes, José A N F; Mallion, Roger B
2011-11-08
Within the past two years, three sets of independent authors (Mandado, Ciesielski et al., and Randić) have proposed methods in which π-electron currents in conjugated systems are estimated by invoking the concept of circuits of conjugation. These methods are here compared with ostensibly similar approaches published more than 30 years ago by two of the present authors (Gomes and Mallion) and (likewise independently) by Gayoso. Patterns of bond currents and ring currents computed by these methods for the nonalternant isomer of coronene that was studied by Randić are also systematically compared with those calculated by the Hückel-London-Pople-McWeeny (HLPM) "topological" approach and with the ab initio, "ipso-centric" current-density maps of Balaban et al. These all agree that a substantial diamagnetic π-electron current flows around the periphery of the selected structure (which could be thought of as a "perturbed" [18]-annulene), and consideration is given to the differing trends predicted by these several methods for the π-electron currents around its central six-membered ring and in its internal bonds. It is observed that, for any method in which calculated π-electron currents respect Kirchhoff's laws of current conservation at a junction, consideration of bond currents, as an alternative to the more traditional ring currents, can give a different insight into the magnetic properties of conjugated systems. However, provided that charge/current conservation is guaranteed (or Kirchhoff's First Law holds for bond currents instead of the more general current densities), then ring currents represent a more efficient way of describing the molecular reaction to the external magnetic field: ring currents are independent quantities, while bond currents are not.
High Performance CMOS Light Detector with Dark Current Suppression in Variable-Temperature Systems.
Lin, Wen-Sheng; Sung, Guo-Ming; Lin, Jyun-Long
2016-12-23
This paper presents a dark current suppression technique for a light detector in a variable-temperature system. The light detector architecture comprises a photodiode for sensing the ambient light, a dark current diode for conducting dark current suppression, and a current subtractor that is embedded in the current amplifier with enhanced dark current cancellation. The measured dark current of the proposed light detector is lower than that of the epichlorohydrin photoresistor or cadmium sulphide photoresistor. This is advantageous in variable-temperature systems, especially for those with many infrared light-emitting diodes. Experimental results indicate that the maximum dark current of the proposed current amplifier is approximately 135 nA at 125 °C, a near-zero dark current is achieved at temperatures lower than 50 °C, and dark current and temperature exhibit an exponential relation at temperatures higher than 50 °C. The dark current of the proposed light detector is lower than 9.23 nA and the linearity is approximately 1.15 μA/lux at an external resistance RSS = 10 kΩ and environmental temperatures from 25 °C to 85 °C. PMID:28025530
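The reported temperature dependence is concrete enough to sketch numerically. Below is a toy model assuming a simple exponential above 50 °C; the 135 nA anchor at 125 °C is from the abstract, while the characteristic scale T0 is a guessed fit parameter, not a published value:

```python
import math

# Toy model of the reported behavior: near-zero dark current below 50 C and
# exponential growth above it, anchored to the measured 135 nA at 125 C.
# The characteristic scale t0_c = 15 C is a guess, not a published value.

def dark_current_na(temp_c: float, i125_na: float = 135.0, t0_c: float = 15.0) -> float:
    if temp_c <= 50.0:
        return 0.0  # "near zero" regime reported below 50 C
    return i125_na * math.exp((temp_c - 125.0) / t0_c)

for t in (25, 60, 85, 125):
    print(t, round(dark_current_na(t), 2))  # ~9.4 nA at 85 C, near the reported 9.23 nA
```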
NASA Technical Reports Server (NTRS)
Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)
2017-01-01
A current source logic gate with depletion-mode field-effect transistors ("FETs") and resistors may include a current source, a current steering switch input stage, and a resistor divider level shifting output stage. The current source may include a transistor and a current source resistor. The current steering switch input stage may include a transistor to steer current to set an output stage bias point depending on an input logic signal state. The resistor divider level shifting output stage may include a first resistor and a second resistor to set the output stage bias point and produce valid output logic signal states. The transistor of the current steering switch input stage may function as a switch to provide at least two operating points.
Enhanced distributed energy resource system
Atcitty, Stanley [Albuquerque, NM; Clark, Nancy H [Corrales, NM; Boyes, John D [Albuquerque, NM; Ranade, Satishkumar J [Las Cruces, NM
2007-07-03
A power transmission system including a direct current power source electrically connected to a conversion device for converting direct current into alternating current, a conversion device connected to a power distribution system through a junction, an energy storage device capable of producing direct current connected to a converter, where the converter, such as an insulated gate bipolar transistor, converts direct current from an energy storage device into alternating current and supplies the current to the junction and subsequently to the power distribution system. A microprocessor controller, connected to a sampling and feedback module and the converter, determines when the current load is higher than a set threshold value, requiring triggering of the converter to supply supplemental current to the power transmission system.
Non-inductive current generation in fusion plasmas with turbulence
NASA Astrophysics Data System (ADS)
Wang, Weixing; Ethier, S.; Startsev, E.; Chen, J.; Hahm, T. S.; Yoo, M. G.
2017-10-01
It is found that plasma turbulence may strongly influence non-inductive current generation, which may have a radical impact on various aspects of tokamak physics. Our simulation study employs a global gyrokinetic model coupling self-consistent neoclassical and turbulent dynamics, with a focus on the electron current. Distinct phases in electron current generation are illustrated in the initial value simulation. In the early phase, before turbulence develops, the electron bootstrap current is established on a time scale of a few electron collision times, which closely agrees with the neoclassical prediction. The second phase follows when turbulence begins to saturate, during which turbulent fluctuations are found to strongly affect the electron current. The profile structure, amplitude, and phase-space structure of the electron current density are all significantly modified relative to the neoclassical bootstrap current by the presence of turbulence. Both electron parallel acceleration and the parallel residual stress drive are shown to play important roles in turbulence-induced current generation. The current density profile is modified in a way that correlates with the fluctuation intensity gradient through its effect on k//-symmetry breaking in the fluctuation spectrum. Turbulence is shown to reduce (enhance) the plasma self-generated current in the low (high) collisionality regime, and the reduction of the total electron current relative to the neoclassical bootstrap current increases as collisionality decreases. The implication of this result for fully non-inductive current operation in the steady-state burning plasma regime should be investigated. Finally, significant non-inductive current is observed in the flat-pressure region, which is a nonlocal effect resulting from turbulence-spreading-induced current diffusion. Work supported by U.S. DOE Contract DE-AC02-09-CH11466.
The Substorm Current Wedge: Further Insights from MHD Simulations
NASA Technical Reports Server (NTRS)
Birn, J.; Hesse, M.
2015-01-01
Using a recent magnetohydrodynamic simulation of magnetotail dynamics, we further investigate the buildup and evolution of the substorm current wedge (SCW) resulting from flow bursts generated by near-tail reconnection. Each flow burst generates an individual current wedge, which includes the reduction of the cross-tail current and its diversion to region 1 (R1)-type field-aligned currents (earthward on the dawn side and tailward on the dusk side), connecting the tail with the ionosphere. Multiple flow bursts initially generate multiple SCW patterns, which at later times combine into a wider single SCW pattern. The standard SCW model is modified by the addition of several current loops, related to particular magnetic field changes: the increase of Bz in a local equatorial region (dipolarization), the decrease of |Bx| away from the equator (current disruption), and increases in |By| resulting from azimuthally deflected flows. The associated loop currents are found to be of similar magnitude, 0.1-0.3 MA. The combined effect requires the addition of region 2 (R2)-type currents closing in the near tail through dawnward currents but also connecting radially with the R1 currents. The current closure at the inner boundary, taken as a crude proxy for an idealized ionosphere, demonstrates westward currents as postulated in the original SCW picture, as well as north-south currents connecting R1- and R2-type currents, which are larger than the westward currents by a factor of almost 2. However, this result should be applied to the ionosphere with caution because of our neglect of finite resistance and Hall effects.
Dahl, D.A.; Appelhans, A.D.; Olson, J.E.
1997-09-09
A current measuring system is disclosed comprising a current measuring device having a first electrode at ground potential, and a second electrode; a current source having an offset potential of at least three hundred volts, the current source having an output electrode; and a capacitor having a first electrode electrically connected to the output electrode of the current source and having a second electrode electrically connected to the second electrode of the current measuring device. 4 figs.
Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane
2007-01-16
A current sensor is described that uses a plurality of magnetic field sensors positioned around a current carrying conductor. The sensor can be hinged to allow clamping to a conductor. The current sensor provides high measurement accuracy for both DC and AC currents, and is substantially immune to the effects of temperature, conductor position, nearby current carrying conductors and aging.
Characterization of plasma current quench during disruptions at HL-2A
NASA Astrophysics Data System (ADS)
Zhu, Jinxia; Zhang, Yipo; Dong, Yunbo; HL-2A Team
2017-05-01
The most essential physics assumptions for evaluating the electromagnetic forces exerted on plasma-facing components by disruption-induced eddy currents are the characteristics of the plasma current quench, including the quench rate and waveform. The characteristics of plasma current quenches at HL-2A have been analyzed during spontaneous disruptions. Both linear decay and exponential decay are found in the disruptions with the fastest current quenches. In the slow-quench case, however, the current quench proceeds in two stages: a first stage with exponential decay, followed by a second stage of rapid linear decay. Faster current quench rates correspond to faster plasma displacement. The parameter regimes of the current quench time and the current quench rate have been obtained from disruption statistics at HL-2A. No remarkable difference is found between the distributions obtained in the limiter and divertor configurations. These HL-2A data provide a basis for deriving design criteria for a large-sized machine during the current-decay phase of disruptions.
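As a notational sketch of the two waveform families described above (the abstract does not give the authors' exact definitions), the plasma current I_p can quench from its pre-disruption value I_0 either exponentially or linearly:

```latex
% Exponential decay with time constant \tau_E:
I_p(t) = I_0\, e^{-t/\tau_E}, \qquad \dot{I}_p = -I_p/\tau_E
% Linear decay over quench time \tau_L:
I_p(t) = I_0\,\bigl(1 - t/\tau_L\bigr), \qquad \dot{I}_p = -I_0/\tau_L
```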
Sustained and transient calcium currents in horizontal cells of the white bass retina.
Sullivan, J M; Lasater, E M
1992-01-01
Calcium currents were recorded from cultured horizontal cells (HCs) isolated from adult white bass retinas, using the whole-cell patch-clamp technique. Ca2+ currents were enhanced using 10 mM extracellular Ca2+, while Na+ and K+ currents were pharmacologically suppressed. Two components of the Ca2+ current, one transient, the other sustained, were found. The large transient component of the Ca2+ current, which has not been seen before in HCs, is similar, but not identical, to the T-type Ca2+ current described previously in a variety of preparations. The sustained component of the Ca2+ current is similar, but not identical, to the L-type current described in other preparations. FTX, a factor isolated from the venom of the funnel-web spider, Agelenopsis aperta, preferentially and irreversibly blocks the sustained component of the Ca2+ current at very dilute concentrations. The sustained component of the Ca2+ current inactivates slowly, over the course of 15-60 s, in some HCs. This inactivation of the sustained Ca2+ current, when present, is primarily voltage dependent rather than Ca2+ dependent. PMID:1371309
Method and apparatus for measuring low currents in capacitance devices
Kopp, M.K.; Manning, F.W.; Guerrant, G.C.
1986-06-04
A method and apparatus for measuring subnanoampere currents in capacitance devices is reported. The method is based on a comparison of the voltage developed across the capacitance device with that of a reference capacitor in which the current is adjusted by means of a variable current source to produce a stable voltage difference. The current-varying means of the variable current source is calibrated to provide a readout of the measured current. Current gain may be provided by using a reference capacitor which is larger than the device capacitance, with a corresponding increase in current supplied through the reference capacitor. The gain is then the ratio of the reference capacitance to the device capacitance. In one illustrated embodiment, the invention makes possible a new type of ionizing radiation dose-rate monitor, in which dose rate is measured by discharging a reference capacitor with a variable current source at the same rate that radiation is discharging an ionization chamber. The invention eliminates the high-megohm resistors and low-current ammeters used in low-current measuring instruments.
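The comparison principle reduces to one line: holding the voltage difference stable forces equal charging rates dV/dt = I/C on the two capacitors, so I_device = I_ref (C_device / C_ref). A hypothetical numerical sketch (the component values are illustrative, not from the patent):

```python
# Equal charging rates dV/dt = I/C on the device capacitance and the
# reference capacitor imply I_device = I_ref * (C_device / C_ref).

def device_current_amps(i_ref: float, c_device: float, c_ref: float) -> float:
    """Device current inferred from the calibrated reference-source current."""
    return i_ref * (c_device / c_ref)

# A reference capacitor 10,000x the device capacitance gives a current gain
# of 10,000: an easily measured 1 uA through the reference corresponds to a
# subnanoampere 0.1 nA through the device.
print(device_current_amps(1e-6, 10e-12, 100e-9))  # -> 1e-10 A
```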
Interhemispheric currents in the ring current region as seen by the Cluster spacecraft
NASA Astrophysics Data System (ADS)
Tenfjord, P.; Ostgaard, N.; Haaland, S.; Laundal, K.; Reistad, J. P.
2013-12-01
The existence of interhemispheric currents has been predicted by several authors, but their extent in the ring current has, to our knowledge, never been studied systematically using in-situ measurements. These currents have been suggested to be associated with observed asymmetries of the aurora. We perform a statistical study of current density and direction during ring current crossings using the Cluster spacecraft. We analyse the extent of the interhemispheric field-aligned currents for a wide range of solar wind conditions. Direct estimations of the equatorial current direction and density are achieved through the curlometer technique. The curlometer technique is based on Ampere's law and requires magnetic field measurements from all four spacecraft. The use of this method requires careful study of factors that limit the accuracy, such as tetrahedron shape and configuration. This significantly limits our dataset, but is a necessity for accurate current calculations. Our goal is to statistically investigate the occurrence of interhemispheric currents and to determine whether there are parameters or magnetospheric states on which the current magnitude and direction depend.
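As a rough illustration of the curlometer estimate: the four-point measurements determine the magnetic gradient tensor, whose antisymmetric part gives the curl, and Ampere's law (neglecting displacement current) then yields the current density. A minimal numpy sketch under these assumptions, not the mission's implementation:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def curlometer(positions, b_fields):
    """Estimate current density from simultaneous four-point magnetic field
    measurements via Ampere's law, J = curl(B)/mu0 (displacement current
    neglected). positions: (4, 3) array [m]; b_fields: (4, 3) array [T]."""
    dr = positions[1:] - positions[0]   # (3, 3) baseline vectors
    db = b_fields[1:] - b_fields[0]     # (3, 3) field differences
    # Linear field model dB = dr @ G, with G[i, j] = dB_j / dx_i
    grad_b, *_ = np.linalg.lstsq(dr, db, rcond=None)
    curl = np.array([grad_b[1, 2] - grad_b[2, 1],
                     grad_b[2, 0] - grad_b[0, 2],
                     grad_b[0, 1] - grad_b[1, 0]])
    return curl / MU0  # [A/m^2]

# Check: a uniform current density J = (0, 0, 1e-9) A/m^2 reproduces itself.
rng = np.random.default_rng(0)
pos = rng.normal(scale=1e6, size=(4, 3))  # ~1000 km scale tetrahedron
bfield = np.stack([MU0 * 1e-9 * np.array([-p[1], p[0], 0.0]) / 2 for p in pos])
print(curlometer(pos, bfield))            # ~[0, 0, 1e-9]
```

The estimate degrades as the tetrahedron flattens, which is why the careful screening of tetrahedron shape and configuration noted in the abstract is a necessity.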
Associating ground magnetometer observations with current or voltage generators
NASA Astrophysics Data System (ADS)
Hartinger, M. D.; Xu, Z.; Clauer, C. R.; Yu, Y.; Weimer, D. R.; Kim, H.; Pilipenko, V.; Welling, D. T.; Behlke, R.; Willer, A. N.
2017-07-01
A circuit analogy for magnetosphere-ionosphere current systems has two extremes for drivers of ionospheric currents: ionospheric electric fields/voltages constant while current/conductivity vary—the "voltage generator"—and current constant while electric field/conductivity vary—the "current generator." Statistical studies of ground magnetometer observations associated with dayside Transient High Latitude Current Systems (THLCS) driven by similar mechanisms find contradictory results using this paradigm: some studies associate THLCS with voltage generators, others with current generators. We argue that most of this contradiction arises from two assumptions used to interpret ground magnetometer observations: (1) measurements made at fixed position relative to the THLCS field-aligned current and (2) negligible auroral precipitation contributions to ionospheric conductivity. We use observations and simulations to illustrate how these two assumptions substantially alter expectations for magnetic perturbations associated with either a current or a voltage generator. Our results demonstrate that before interpreting ground magnetometer observations of THLCS in the context of current/voltage generators, the location of a ground magnetometer station relative to the THLCS field-aligned current and the location of any auroral zone conductivity enhancements need to be taken into account.
Two-dimensional relativistic space charge limited current flow in the drift space
NASA Astrophysics Data System (ADS)
Liu, Y. L.; Chen, S. H.; Koh, W. S.; Ang, L. K.
2014-04-01
Relativistic two-dimensional (2D) electrostatic (ES) formulations have been derived for studying the steady-state space-charge-limited (SCL) current flow of a finite width W in a drift space with a gap distance D. The theoretical analyses show that the 2D SCL current density, expressed in terms of the 1D SCL current density, monotonically increases with D/W, and the theory recovers the 1D classical Child-Langmuir law in the drift space under the approximation of uniform charge density in the transverse direction. A 2D static model has also been constructed to study the dynamical behaviors of the current flow when the current density exceeds the SCL current density, and the static theory for evaluating the transmitted current fraction and the minimum potential position has been verified by using 2D ES particle-in-cell simulation. The results show that the 2D SCL current density is mainly determined by geometrical effects, but the dynamical behaviors of the current flow beyond the SCL current density are mainly determined by the relativistic effect.
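For orientation, the classical 1D Child-Langmuir law that the 2D theory recovers can be written as follows; this is the standard nonrelativistic planar-gap form for gap voltage V and gap distance D, not the relativistic generalization derived in the paper:

```latex
% Classical (nonrelativistic) 1D Child-Langmuir space-charge-limited
% current density for electrons across a planar gap:
J_{\mathrm{CL}} = \frac{4\,\varepsilon_0}{9}\,
\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{D^{2}}
```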
Inhibitory effect of aniracetam on N-type calcium current in acutely isolated rat neuronal cells.
Koike, H; Saito, H; Matsuki, N
1993-04-01
Effects of aniracetam on whole-cell calcium currents were studied in acutely isolated neuronal cells from postnatal rat ventromedial hypothalamus. There were three types of inward calcium currents, one low-threshold transient current and two high-threshold sustained currents. The nicardipine sensitive L-type current was activated at -20 mV or more depolarized potentials, and the omega-conotoxin sensitive N-type current was recorded at more positive potentials than the L-type. Aniracetam inhibited the N-type current in a dose-dependent manner without affecting the other two types of calcium currents. The effect appeared soon after the addition and lasted for several minutes during washing. Since the N-type current is thought to regulate the release of transmitters, the inhibitory effect may contribute to the nootropic property of aniracetam by modifying the neurotransmission.
Ocean dynamics studies. [of current-wave interactions
NASA Technical Reports Server (NTRS)
1974-01-01
Both the theoretical and experimental investigations into current-wave interactions are discussed. The following three problems were studied: (1) the dispersion relation of a random gravity-capillary wave field; (2) the changes in the statistical properties of surface waves under the influence of currents; and (3) the interaction of capillary-gravity waves with nonuniform currents. Wave-current interaction was measured, and the feasibility of using such measurements for remote sensing of surface currents was considered. A laser probe was developed to measure the surface statistics, and the possibility of using current-wave interaction as a means of current measurement was demonstrated.
Correcting magnetic probe perturbations on current density measurements of current carrying plasmas.
Knoblauch, P; Raspa, V; Di Lorenzo, F; Lazarte, A; Clausse, A; Moreno, C
2010-09-01
A method to infer the current density distribution in the current sheath of a plasma focus discharge from a magnetic probe is formulated and then applied to experimental data obtained in a 1.1 kJ device. Distortions of the magnetic probe signal caused by current redistribution and by a time-dependent total discharge current are considered simultaneously, leading to an integral equation for the current density. Two distinct, easy-to-implement numerical procedures are given to solve this equation. Experimental results show the coexistence of at least two maxima in the current density structure of a nitrogen sheath.
Current in nanojunctions: Effects of reservoir coupling
NASA Astrophysics Data System (ADS)
Yadalam, Hari Kumar; Harbola, Upendra
2018-07-01
We study the effect of system-reservoir coupling on currents flowing through quantum junctions. We consider two simple double-quantum-dot configurations coupled to two external fermionic reservoirs and study the net current flowing between the two reservoirs. The net current is partitioned into currents carried by the eigenstates of the system and by the coherences between the eigenstates induced by coupling with the reservoirs. We find that the current carried by populations is always positive, whereas the current carried by coherences becomes negative for large couplings. This results in a non-monotonic dependence of the net current on the coupling strength. We find that in certain cases the net current can vanish at large couplings due to cancellation between the currents carried by the eigenstates and by the coherences. These results provide new insights into the non-trivial role of system-reservoir couplings in electron transport through quantum dot junctions. In the presence of weak Coulomb interactions, the net current as a function of system-reservoir coupling strength shows trends similar to the non-interacting case.
MgB2-based superconductors for fault current limiters
NASA Astrophysics Data System (ADS)
Sokolovsky, V.; Prikhna, T.; Meerovich, V.; Eisterer, M.; Goldacker, W.; Kozyrev, A.; Weber, H. W.; Shapovalov, A.; Sverdun, V.; Moshchil, V.
2017-02-01
A promising solution to the fault current problem in power systems is the application of fast-operating nonlinear superconducting fault current limiters (SFCLs) with the capability of rapidly increasing their impedance, thus limiting high fault currents. We report the results of experiments with models of inductive (transformer-type) SFCLs based on ring-shaped bulk MgB2 prepared under high quasi-hydrostatic pressure (2 GPa) and by a hot-pressing technique (30 MPa). The SFCLs were shown to meet the main requirements for fault current limiters: they possess low impedance in the nominal regime of the protected circuit and can rapidly increase their impedance, limiting both the transient and the steady-state fault currents. The study of the quenching currents of the MgB2 rings (the SFCL activation current) and of AC losses in the rings shows that the quenching current density and the critical current density determined from AC losses can be 10-20 times less than the critical current determined from magnetization experiments.
Toroidal current asymmetry in tokamak disruptions
NASA Astrophysics Data System (ADS)
Strauss, H. R.
2014-10-01
It was discovered on JET that disruptions were accompanied by toroidal asymmetry of the toroidal plasma current I_ϕ. It was found that the toroidal current asymmetry was proportional to the vertical current moment asymmetry, with positive sign for an upward vertical displacement event (VDE) and negative sign for a downward VDE. It was observed that greater displacement leads to greater measured I_ϕ asymmetry. Here, it is shown that this is essentially a kinematic effect produced by a VDE interacting with three-dimensional MHD perturbations. The relation of the toroidal current asymmetry to the vertical current moment is calculated analytically and is verified by numerical simulations. It is shown analytically that the toroidal variation of the toroidal plasma current is accompanied by an equal and opposite variation of the toroidal current flowing in a thin wall surrounding the plasma. These currents are connected by 3D halo current, which is π/2 radians out of phase with the n = 1 toroidal current variations.
Fault current limiter and alternating current circuit breaker
Boenig, Heinrich J.
1998-01-01
A solid-state circuit breaker and current limiter for a load served by an alternating current source having a source impedance, the solid-state circuit breaker and current limiter comprising a thyristor bridge interposed between the alternating current source and the load, the thyristor bridge having four thyristor legs and four nodes, with a first node connected to the alternating current source, and a second node connected to the load. A coil is connected from a third node to a fourth node, the coil having an impedance of a value calculated to limit the current flowing therethrough to a predetermined value. Control means are connected to the thyristor legs for limiting the alternating current flow to the load under fault conditions to a predetermined level, and for gating the thyristor bridge under fault conditions to quickly reduce alternating current flowing therethrough to zero and thereafter to maintain the thyristor bridge in an electrically open condition preventing the alternating current from flowing therethrough for a predetermined period of time.
Influence of internal current and pacing current on pacemaker longevity.
Schuchert, A; Kuck, K H
1994-01-01
The effects of lower pulse amplitude on battery current and pacemaker longevity were studied by comparing the new, small-sized VVI pacemaker, Minix 8341, with the former model, Pasys 8329. Battery current was telemetrically measured at 0.8, 1.6, 2.5, and 5.0 V pulse amplitude and 0.05, 0.25, 0.5, and 1.0 msec pulse duration. Internal current was assumed to be equal to the battery current at 0.8 V and 0.05 msec. Pacing current was calculated by subtracting internal current from battery current. The Minix pacemaker had a significantly lower battery current because of a lower internal current (Minix: 4.1 +/- 0.1 microA; Pasys: 16.1 +/- 0.1 microA); the pacing current of both units was similar. At 0.5 msec pulse duration, programming from 5.0 to 2.5 V pulse amplitude resulted in a greater relative reduction of battery current in the newer pacemaker (51% vs 25%). Projected longevity of each pacemaker was 7.9 years at 5.0 V and 0.5 msec. Programming from 5.0 to 2.5 V extended the projected longevity by 2.3 years (Pasys) and by 7.1 years (Minix). The longevity was only negligibly longer after programming to 1.6 V. Extension of pacemaker longevity can thus be achieved by programming to 2.5 V or less, provided the connected pacemaker needs only a low internal current for its circuitry.
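The current bookkeeping above is simple enough to restate as code; a sketch assuming a hypothetical battery current (only the internal currents come from the abstract):

```python
# Internal current is taken as the battery current at the 0.8 V / 0.05 msec
# setting; pacing current is battery current minus internal current.
# The 20 uA battery current below is a hypothetical example value.

INTERNAL_UA = {"Minix 8341": 4.1, "Pasys 8329": 16.1}  # measured [microA]

def pacing_current_ua(battery_ua: float, model: str) -> float:
    """Pacing current [microA] = battery current - internal current."""
    return battery_ua - INTERNAL_UA[model]

for model in INTERNAL_UA:
    print(model, pacing_current_ua(20.0, model))  # 15.9 and 3.9 microA
```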
Preliminary Experiment of Non-Inductive Plasma Current Startup in SUNIST Spherical Tokamak
NASA Astrophysics Data System (ADS)
He, Yexi; Zhang, Liang; Xie, Lifeng; Tang, Yi; Yang, Xuanzong; Feng, Chunhua; Fu, Hongjun
2006-01-01
Non-inductive plasma current startup is an important motivation in the SUNIST spherical tokamak. In the recent experiment, a 100 kW, 2.45 GHz magnetron microwave system has been used for ECR plasma current startup. Besides the toroidal field, a vertical field was applied to generate a preliminary toroidal plasma current without the action of the central solenoid. As evidence of plasma current startup driven by the vertical-field drift, the direction of the plasma current changes when the direction of the vertical field changes during the ECR startup discharge. We also observed a maximum plasma current by scanning the vertical field in both directions. Additionally, we used an electrode discharge to assist the ECR plasma current startup.
Plasma source for spacecraft potential control
NASA Technical Reports Server (NTRS)
Olsen, R. C.
1983-01-01
A stable electrical ground which enables the particle spectrometers to measure the low-energy particle populations was investigated, and the current required to neutralize the spacecraft was measured. In addition, the plasma source for potential control (PSPOC) prevents high charging events which could affect the spacecraft's electrical integrity. The plasma source must be able to emit a plasma current large enough to balance the sum of all other currents to the spacecraft. In ion thrusters, hollow cathodes provide several amperes of electron current to the discharge chamber. The PSPOC is capable of balancing the net negative currents found in eclipse charging events, producing 10 to 100 microamps of electron current. The largest current required is the ion current necessary to balance the total photoelectric current.
NASA Astrophysics Data System (ADS)
Zhou, Huai-Bei
This dissertation examines the dynamic response of a magnetoplasma to an external time-dependent current source. To achieve this goal, a new method combining analytic and numerical techniques was developed to study the dynamic response of a 3-D magnetoplasma to a time-dependent current source imposed across the magnetic field. The set of cold electron and/or ion plasma equations and Maxwell's equations is first solved analytically in (k, omega) space; inverse Laplace and 3-D complex Fast Fourier Transform (FFT) techniques are subsequently used to numerically transform the radiation fields and plasma currents from (k, omega) space to (r, t) space. The dynamic responses of the electron plasma and of the compensated two-component plasma to external current sources are studied separately. The results show that the electron plasma responds to a time-varying current source imposed across the magnetic field by exciting whistler/helicon waves and forming an expanding local current loop, induced by field-aligned plasma currents. The current loop consists of two anti-parallel field-aligned current channels concentrated at the ends of the imposed current and a cross-field current region connecting these channels. The latter is driven by an electron Hall drift. A compensated two-component plasma responds to the same current source as follows: (a) for slow time scales tau > Omega_i^(-1), it generates Alfven waves and forms a non-local current loop in which the ion polarization currents dominate the cross-field current; (b) for fast time scales tau < Omega_i^(-1), the dynamic response of the compensated two-component plasma is the same as that of the electron plasma. The characteristics of the current closure region are determined by the background plasma density, the magnetic field, and the time scale of the current source. This study has applications to a diverse range of space and solid-state plasma problems. These problems include current closure in emf-inducing tethered satellite systems (TSS), generation of ELF/VLF waves by ionospheric heating, current closure and quasineutrality in thin magnetopause transitions, and short electromagnetic pulse generation in solid-state plasmas. The cross-field current in TSS builds up on a time scale corresponding to the whistler waves and results in local current closure. Amplitude-modulated HF ionospheric heating generates ELF/VLF waves by forming a horizontal magnetic dipole. The dipole is formed by the current closure in the modified region. For thin transitions, the time-dependent cross-field polarization field at the magnetopause could be neutralized by the formation of field-aligned current loops that close by a cross-field electron Hall current. A moving current source in a solid-state plasma results in microwave emission if the speed of the source exceeds the local phase velocity of the helicon or Alfven waves. Detailed analysis of the above problems is presented in the thesis.
Magnetic configurations of the tilted current sheets in magnetotail
NASA Astrophysics Data System (ADS)
Shen, C.; Rong, Z. J.; Li, X.; Dunlop, M.; Liu, Z. X.; Malova, H. V.; Lucek, E.; Carr, C.
2008-11-01
In this research, the geometrical structures of tilted current sheets and tail flapping waves have been analysed based on multi-spacecraft measurements, and some features of the tilted current sheets have been made clear for the first time. The geometrical features of the tilted current sheet revealed in this investigation are as follows: (1) The magnetic field lines (MFLs) in the tilted current sheet are generally plane curves, and the osculating planes in which the MFLs lie are roughly perpendicular to the equatorial plane, while the normal of the tilted current sheet leans severely to the dawn or dusk side. (2) The tilted current sheet may become very thin; the half thickness of its neutral sheet is generally much less than the minimum radius of curvature of the MFLs. (3) In the neutral sheet, the field-aligned current density becomes very large and has a maximum value at the center of the current sheet. (4) In some cases, the current density is bifurcated, and the two humps of the current density often coincide with two peaks in the gradient of the magnetic field strength, indicating that the magnetic gradient drift current is possibly responsible for the formation of the two humps of the current density in some tilted current sheets. Tilted current sheets often appear along with tail current sheet flapping waves. It is found that, in the tail flapping current sheets, the minimum curvature radius of the MFLs in the current sheet is rather large, with values around 1 RE, while the neutral sheet may be very thin, with a half thickness of several tenths of RE. During the flapping waves the current sheet is tilted substantially, and the maximum tilt angle is generally larger than 45°. The phase velocities of these flapping waves are several tens of km/s, while their periods and wavelengths are several tens of minutes and several Earth radii, respectively. These tail flapping events generally last several hours and occur during quiet periods or periods of weak magnetospheric activity.
NASA Astrophysics Data System (ADS)
Tallouli, M.; Shyshkin, O.; Yamaguchi, S.
2017-07-01
The development of power transmission lines based on long-length high-temperature superconducting (HTS) tapes is a complicated and technically challenging task. A serious problem for transmission line operation could be HTS power cable damage due to over-current pulse conditions. To avoid cable damage in any urgent case, superconducting fault current limiter (SFCL) technology based on superconducting coils is required. A comprehensive understanding of the current density characteristics of HTS tapes in both cases, either after a pure over-current pulse or after an over-current pulse limited by an SFCL, is needed to restart or to continue the operation of the power transmission line. Moreover, the current density distribution along and across the HTS tape provides sufficient information about the quality of the tape performance in different current feeding regimes. In the present paper we examine a BSCCO HTS tape under two current feeding regimes. The first is 100 A feeding preceded by a 900 A over-current pulse; in this case no tape protection was used. The second scenario is similar to the first, but an SFCL is used to limit the over-current value. For both scenarios, after the pulse is gone and the current feeding is set at 100 A, we scan the magnetic field above the tape by means of a Hall probe sensor. The feeding is then turned off and the magnetic field scanning is repeated. Using an inverse-problem numerical solver, we calculate the corresponding direct and permanent current density distributions during the feeding and after switch-off. It is demonstrated that in the absence of the SFCL the current distribution is highly peaked at the tape center, while the current distribution in the experiment with the SFCL is similar to that observed under normal current feeding conditions. The current peaking in the first case is explained by the effect of an opposite electric field induced at the tape edges during the over-current pulse decay, and by degradation of superconductivity at the edges due to penetration of magnetic field into the superconducting core during the pulse.
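A minimal sketch of the kind of inverse problem described above: discretize the tape into parallel filaments, compute the field each filament produces at the Hall-probe height, and solve a least-squares system for the filament currents. The discretization and solver here are illustrative assumptions, not the authors' method (real, noisy scans would also need regularization):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def bz_kernel(x_scan, x_fil, h):
    """B_z at (x_scan, h) per ampere in infinite straight filaments at (x_fil, 0)."""
    dx = x_scan[:, None] - x_fil[None, :]
    return -MU0 * dx / (2.0 * np.pi * (dx**2 + h**2))

def invert_currents(x_scan, bz, x_fil, h):
    """Least-squares estimate of the filament currents [A] from a B_z scan."""
    A = bz_kernel(x_scan, x_fil, h)
    sol, *_ = np.linalg.lstsq(A, bz, rcond=None)
    return sol

# Round trip with a synthetic center-peaked profile on a 4 mm wide tape,
# scanned 0.5 mm above the surface:
x_fil = np.linspace(-2e-3, 2e-3, 21)
i_true = np.exp(-(x_fil / 1e-3) ** 2)   # current peaked at the tape center
x_scan = np.linspace(-6e-3, 6e-3, 201)
bz = bz_kernel(x_scan, x_fil, 0.5e-3) @ i_true
print(np.abs(invert_currents(x_scan, bz, x_fil, 0.5e-3) - i_true).max())  # ~0
```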
Surface currents associated with external kink modes in tokamak plasmas during a major disruption
NASA Astrophysics Data System (ADS)
Ng, C. S.; Bhattacharjee, A.
2017-10-01
The surface current on the plasma-vacuum interface during a disruption event involving a kink instability can play an important role in driving current into the vacuum vessel. However, there have been disagreements over the nature, or even the sign, of the surface current in recent theoretical calculations based on idealized step-function background plasma profiles. We revisit such calculations by replacing step-function profiles with more realistic profiles characterized by a strong but finite gradient along the radial direction. It is shown that the resulting surface current is no longer a delta-function current density, but a finite and smooth current density profile with an internal structure, concentrated within the region of strong plasma pressure gradient. Moreover, this current density profile has peaks of both signs, unlike the delta-function case, whose sign is either opposite to or the same as that of the plasma current. We show analytically and numerically that such a current density can be separated into two parts: one, called the convective current density, describes the transport of the background plasma density by the displacement; the part that remains is called the residual current density. It is argued that consideration of both types of current density is important and can resolve past controversies.
NASA Astrophysics Data System (ADS)
Sakaizawa, Ryosuke; Kawai, Takaya; Sato, Toru; Oyama, Hiroyuki; Tsumune, Daisuke; Tsubono, Takaki; Goto, Koichi
2018-03-01
The target seas of tidal-current models are usually semi-closed bays, minimally affected by ocean currents. For these models, tidal currents are simulated in computational domains with a spatial scale of a couple hundred kilometers or less, by setting tidal elevations at their open boundaries. However, when ocean currents cannot be ignored in the sea areas of interest, such as in open seas near coastlines, it is necessary to include ocean-current effects in these tidal-current models. In this study, we developed a numerical method to analyze tidal currents near coasts by incorporating pre-calculated ocean-current velocities. First, a large regional-scale simulation with a spatial scale of several thousand kilometers was conducted and temporal changes in the ocean-current velocity at each grid point were stored. Next, the spatially and temporally interpolated ocean-current velocity was incorporated as forcing into the cross terms of the convection term of a tidal-current model having computational domains with spatial scales of hundreds of kilometers or less. Then, we applied this method to the diffusion of dissolved CO2 in a sea area off Tomakomai, Japan, and compared the numerical results and measurements to validate the proposed method.
Apparatus and method for critical current measurements
Martin, Joe A.; Dye, Robert C.
1992-01-01
An apparatus for the measurement of the critical current of a superconductive sample, e.g., a clad superconductive sample, the apparatus including a conductive coil, a means for maintaining the coil in proximity to a superconductive sample, an electrical connection means for passing a low amplitude alternating current through the coil, a cooling means for maintaining the superconductive sample at a preselected temperature, a means for passing a current through the superconductive sample, and, a means for monitoring reactance of the coil, is disclosed, together with a process of measuring the critical current of a superconductive material, e.g., a clad superconductive material, by placing a superconductive material into the vicinity of the conductive coil of such an apparatus, cooling the superconductive material to a preselected temperature, passing a low amplitude alternating current through the coil, the alternating current capable of generating a magnetic field sufficient to penetrate, e.g., any cladding, and to induce eddy currents in the superconductive material, passing a steadily increasing current through the superconductive material, the current characterized as having a different frequency than the alternating current, and, monitoring the reactance of the coil with a phase sensitive detector as the current passed through the superconductive material is steadily increased whereby critical current of the superconductive material can be observed as the point whereat a component of impedance deviates.
Current-induced switching in a magnetic insulator
NASA Astrophysics Data System (ADS)
Avci, Can Onur; Quindeau, Andy; Pai, Chi-Feng; Mann, Maxwell; Caretta, Lucas; Tang, Astera S.; Onbasli, Mehmet C.; Ross, Caroline A.; Beach, Geoffrey S. D.
2017-03-01
The spin Hall effect in heavy metals converts charge current into pure spin current, which can be injected into an adjacent ferromagnet to exert a torque. This spin-orbit torque (SOT) has been widely used to manipulate the magnetization in metallic ferromagnets. In the case of magnetic insulators (MIs), although charge currents cannot flow, spin currents can propagate, but current-induced control of the magnetization in an MI has so far remained elusive. Here we demonstrate spin-current-induced switching of a perpendicularly magnetized thulium iron garnet film driven by charge current in a Pt overlayer. We estimate a relatively large spin-mixing conductance and damping-like SOT through spin Hall magnetoresistance and harmonic Hall measurements, respectively, indicating considerable spin transparency at the Pt/MI interface. We show that spin currents injected across this interface lead to deterministic magnetization reversal at low current densities, paving the way towards ultralow-dissipation spintronic devices based on MIs.
Transient sodium current at subthreshold voltages: activation by EPSP waveforms
Carter, Brett C.; Giessel, Andrew J.; Sabatini, Bernardo L.; Bean, Bruce P.
2012-01-01
Tetrodotoxin (TTX)-sensitive sodium channels carry large transient currents during action potentials and also "persistent" sodium current, a non-inactivating TTX-sensitive current present at subthreshold voltages. We examined gating of subthreshold sodium current in dissociated cerebellar Purkinje neurons and hippocampal CA1 neurons, studied at 37 °C with near-physiological ionic conditions. Unexpectedly, in both cell types small voltage steps at subthreshold voltages activated a substantial component of transient sodium current as well as persistent current. Subthreshold EPSP-like waveforms also activated a large component of transient sodium current, but IPSP-like waveforms engaged primarily persistent sodium current with only a small additional transient component. Activation of transient as well as persistent sodium current at subthreshold voltages produces amplification of EPSPs that is sensitive to the rate of depolarization and can help account for the dependence of spike threshold on depolarization rate, as previously observed in vivo. PMID:22998875
Evaluation of Ferrite Chip Beads as Surge Current Limiters in Circuits with Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2014-01-01
Limiting resistors are currently required to be connected in series with tantalum capacitors to reduce the risk of surge current failures. However, the application of limiting resistors substantially decreases the efficiency of power supply systems. An ideal surge current limiting device should have negligible resistance for DC currents and high resistance at the frequencies corresponding to transients in tantalum capacitors. This work evaluates the possibility of using chip ferrite beads (FBs) as such devices. Twelve types of small-size FBs from three manufacturers were evaluated for robustness under soldering stresses and at the high surge current spikes associated with transients in tantalum capacitors. Results show that FBs are capable of withstanding current pulses substantially greater than their specified current limits. However, due to a sharp decrease of impedance with current, FBs do not reduce surge currents to the required level that can be achieved with regular resistors.
Hawkes, Grant L.; Herring, James S.; Stoots, Carl M.; O'Brien, James E.
2013-03-05
Electrolytic/fuel cell bundles and systems including such bundles include an electrically conductive current collector in communication with an anode or a cathode of each of a plurality of cells. A cross-sectional area of the current collector may vary in a direction generally parallel to a general direction of current flow through the current collector. The current collector may include a porous monolithic structure. At least one cell of the plurality of cells may include a current collector that surrounds an outer electrode of the cell and has at least six substantially planar exterior surfaces. The planar surfaces may extend along a length of the cell, and may abut against a substantially planar surface of a current collector of an adjacent cell. Methods for generating electricity and for performing electrolysis include flowing current through a conductive current collector having a varying cross-sectional area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez-Gutierrez, Sulmer, E-mail: sulmer.a.fernandez.gutierrez@intel.com; Browning, Jim; Lin, Ming-Chieh
Phase control of a magnetron is studied via simulation using a combination of a continuous current source and a modulated current source. The addressable, modulated current source is turned ON and OFF at the magnetron operating frequency in order to control the electron injection and the spoke phase. Prior simulation work using a 2D model of a Rising Sun magnetron showed that the use of 100% modulated current controlled the magnetron phase and allowed for dynamic phase control. In this work, the minimum fraction of modulated current needed to achieve phase control is studied. The current fractions (modulated versus continuous) were varied from 10% modulated current to 100% modulated current to study the effects on phase control. Dynamic phase control, stability, and start-up time of the device were studied for all these cases, showing that with 10% modulated current and 90% continuous current a phase shift of 180° can be achieved, demonstrating dynamic phase control.
Dynamic characteristics of a 30-centimeter mercury ion thruster
NASA Technical Reports Server (NTRS)
Serafini, J. S.; Mantenieks, M. A.; Rawlin, V. K.
1975-01-01
The present work reports on measurements of the fluctuations in the beam current, discharge current, neutralizer keeper current, and discharge voltage of a 30-cm ion thruster made with 60 Hz laboratory-type power supplies. The intensities of the fluctuations (ratio of the root-mean-square magnitude to the time-average quantity) were found to depend significantly on the beam and magnetic baffle currents. The shape of the frequency spectra of the discharge plasma fluctuations was related to the beam and magnetic baffle currents. The predominant peaks of the beam and discharge current spectra occurred at frequencies less than 30 kilohertz. This discharge chamber resonance could be attributable to ion-acoustic wave phenomena. Cross-correlations of the discharge and beam currents indicated that the dependence on the magnetic baffle current was strong. The measurements revealed that the discharge current fluctuations directly contribute to the beam current fluctuations and that the power supply characteristics can modify these fluctuations.
A complete dc characterization of a constant-frequency, clamped-mode, series-resonant converter
NASA Technical Reports Server (NTRS)
Tsai, Fu-Sheng; Lee, Fred C.
1988-01-01
The dc behavior of a clamped-mode series-resonant converter is characterized systematically. Given a circuit operating condition, the converter's mode of operation is determined and various circuit parameters are calculated, such as average inductor current (load current), rms inductor current, peak capacitor voltage, rms switch currents, average diode currents, switch turn-on currents, and switch turn-off currents. Regions of operation are defined, and various circuit characteristics are derived to facilitate the converter design.
Temperature compensated and self-calibrated current sensor using reference current
Yakymyshyn, Christopher Paul [Seminole, FL; Brubaker, Michael Allen [Loveland, CO; Yakymyshyn, Pamela Jane [Seminole, FL
2008-01-22
A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference electrical current carried by a conductor positioned within the sensing window of the current sensor is used to correct variations in the output signal due to temperature variations and aging.
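A minimal sketch of the correction step this patent describes, assuming a simple multiplicative gain drift; the function names and the linearity assumption are illustrative, not from the patent:

```python
# Self-calibration against a known reference current carried by a conductor
# inside the sensing window: rescale the sensor gain so that the reference
# conductor reads its true value, correcting temperature and aging drift.

def calibrated_current(raw_reading: float,
                       raw_ref_reading: float,
                       i_ref_true_amps: float) -> float:
    """Rescale the sensor output so the reference conductor reads correctly."""
    gain_correction = i_ref_true_amps / raw_ref_reading
    return raw_reading * gain_correction

# If a 1.000 A reference reads 0.95 A at the current temperature, a main
# conductor reading of 123.5 A is corrected to 130.0 A.
print(calibrated_current(123.5, 0.95, 1.0))
```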
Fault current limiter with shield and adjacent cores
Darmann, Francis Anthony; Moriconi, Franco; Hodge, Eoin Patrick
2013-10-22
In a fault current limiter (FCL) of a saturated core type having at least one coil wound around a high permeability material, a method of suppressing the time derivative of the fault current at the zero current point includes the following step: utilizing an electromagnetic screen or shield around the AC coil to suppress the time derivative current levels during zero current conditions.
Recent and Future Enhancements in NDI for Aircraft Structures
2015-11-30
(Only indexing fragments of this abstract survive: efforts at AFRL to address technology shortfalls in NDI of aircraft structure include improved eddy current probes and improved eddy current instrumentation, as well as other methods; the governing document, currently in Revision C [8], divides the various inspection methods, such as eddy current and fluorescent ....)
Dynamic current-current susceptibility in three-dimensional Dirac and Weyl semimetals
NASA Astrophysics Data System (ADS)
Thakur, Anmol; Sadhukhan, Krishanu; Agarwal, Amit
2018-01-01
We study the linear response of doped three-dimensional Dirac and Weyl semimetals to vector potentials by calculating the wave-vector- and frequency-dependent current-current response function analytically. The longitudinal part of the dynamic current-current response function is then used to study the plasmon dispersion and the optical conductivity. The transverse response in the static limit yields the orbital magnetic susceptibility. In a Weyl semimetal, along with the current-current response function, all these quantities are significantly impacted by the presence of parallel electric and magnetic fields (a finite E·B term) and can be used to experimentally explore the chiral anomaly.
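For readers unfamiliar with the central object here, the current-current response function in standard Kubo linear-response form reads schematically as follows (conventions and prefactors vary across the literature):

```latex
% Retarded current-current correlator (schematic Kubo form):
\chi_{j_\alpha j_\beta}(\mathbf{q},\omega) =
-\frac{i}{\hbar V}\int_{0}^{\infty}\!dt\; e^{i\omega t}\,
\big\langle\,[\,j_\alpha(\mathbf{q},t),\; j_\beta(-\mathbf{q},0)\,]\,\big\rangle
```

Its longitudinal part yields the plasmon dispersion and the optical conductivity, while the transverse part in the static limit yields the orbital magnetic susceptibility, exactly the uses listed in the abstract.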
Influence of magnet eddy current on magnetization characteristics of variable flux memory machine
NASA Astrophysics Data System (ADS)
Yang, Hui; Lin, Heyun; Zhu, Z. Q.; Lyu, Shukang
2018-05-01
In this paper, the magnet eddy current characteristics of a newly developed variable flux memory machine (VFMM) are investigated. First, the machine structure, the non-linear hysteresis characteristics, and the eddy current modeling of the low-coercive-force magnet are described. The PM eddy current behaviors when applying demagnetizing current pulses are then unveiled and investigated. The mismatch in the required demagnetization currents between the cases with and without considering the magnet eddy current is identified. In addition, the influences of the magnet eddy current on the demagnetization effect of the VFMM are analyzed. Finally, a prototype is manufactured and tested to verify the theoretical analyses.
Net field-aligned currents observed by Triad
NASA Technical Reports Server (NTRS)
Sugiura, M.; Potemra, T. A.
1975-01-01
From the Triad magnetometer observation of a step-like level shift in the east-west component of the magnetic field at 800 km altitude, the existence of a net current flowing into or away from the ionosphere in a current layer was inferred. The current direction is toward the ionosphere on the morning side and away from it on the afternoon side. The field aligned currents observed by Triad are considered as being an important element in the electro-dynamical coupling between the distant magnetosphere and the ionosphere. The current density integrated over the thickness of the layer increases with increasing magnetic activity, but the relation between the current density and Kp in individual cases is not a simple linear relation. An extrapolation of the statistical relation to Kp = 0 indicates existence of a sheet current of order 0.1 amp/m even at extremely quiet times. During periods of higher magnetic activity an integrated current of approximately 1 amp/m and average current density of order 0.000001 amp/sq m are observed. The location and the latitudinal width of the field aligned current layer carrying the net current very roughly agree with those of the region of high electron intensities in the trapping boundary.
High accuracy switched-current circuits using an improved dynamic mirror
NASA Technical Reports Server (NTRS)
Zweigle, G.; Fiez, T.
1991-01-01
The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well established switched-capacitor circuit technique. High speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error, including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.
Zhang, X-L; Albers, K M; Gold, M S
2015-01-22
The goals of the present study were to determine (1) the properties of the nicotinic acetylcholine receptor (nAChR) currents in rat cutaneous dorsal root ganglion (DRG) neurons; (2) the impact of nAChR activation on the excitability of cutaneous DRG neurons; and (3) the impact of inflammation on the density and distribution of nAChR currents among cutaneous DRG neurons. Whole-cell patch-clamp techniques were used to study retrogradely labeled DRG neurons from naïve and complete Freund's adjuvant inflamed rats. Nicotine-evoked currents were detectable in ∼70% of the cutaneous DRG neurons, where only one of two current types, fast or slow currents based on rates of activation and inactivation, was present in each neuron. The biophysical and pharmacological properties of the fast current were consistent with nAChRs containing an α7 subunit while those of the slow current were consistent with nAChRs containing α3/β4 subunits. The majority of small diameter neurons with fast current were IB4- while the majority of small diameter neurons with slow current were IB4+. Preincubation with nicotine (1 μM) produced a transient (1 min) depolarization and increase in the excitability of neurons with fast current and a decrease in the amplitude of capsaicin-evoked current in neurons with slow current. Inflammation increased the current density of both slow and fast currents in small diameter neurons and increased the percentage of neurons with the fast current. With the relatively selective distribution of nAChR currents in putative nociceptive cutaneous DRG neurons, our results suggest that the role of these receptors in inflammatory hyperalgesia is likely to be complex and dependent on the concentration and timing of acetylcholine release in the periphery. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Sundholm, Dage; Berger, Raphael J F; Fliegl, Heike
2016-06-21
Magnetically induced current susceptibilities and current pathways have been calculated for molecules consisting of two pentalene groups annelated with a benzene (1) or naphthalene (2) moiety. Current strength susceptibilities have been obtained by numerically integrating separately the diatropic and paratropic contributions to the current flow passing planes through chosen bonds of the molecules. The current density calculations provide novel and unambiguous current pathways for the unusual molecules with annelated aromatic and antiaromatic hydrocarbon moieties. The calculations show that the benzene and naphthalene moieties annelated with two pentalene units as in molecules 1 and 2, respectively, are unexpectedly antiaromatic sustaining only a local paratropic ring current around the ring, whereas a weak diatropic current flows around the C-H moiety of the benzene ring. For 1 and 2, the individual five-membered rings of the pentalenes are antiaromatic and a slightly weaker semilocal paratropic current flows around the two pentalene rings. Molecules 1 and 2 do not sustain any net global ring current. The naphthalene moiety of the molecule consisting of a naphthalene annelated with two pentalene units (3) does not sustain any strong ring current that is typical for naphthalene. Instead, half of the diatropic current passing the naphthalene moiety forms a zig-zag pattern along the C-C bonds of the naphthalene moiety that are not shared with the pentalene moieties and one third of the current continues around the whole molecule partially cancelling the very strong paratropic semilocal ring current of the pentalenes. For molecule 3, the pentalene moieties and the individual five-membered rings of the pentalenes are more antiaromatic than for 1 and 2. The calculated current patterns elucidate why the compounds with formally [4n + 2] π-electrons have unusual aromatic properties violating the Hückel π-electron count rule. The current density calculations also provide valuable information for interpreting the measured ¹H NMR spectra.
Properties of the calcium-activated chloride current in heart.
Zygmunt, A C; Gibbons, W R
1992-03-01
We used the whole cell patch clamp technique to study transient outward currents of single rabbit atrial cells. A large transient current, IA, was blocked by 4-aminopyridine (4AP) and/or by depolarized holding potentials. After block of IA, a smaller transient current remained. It was completely blocked by nisoldipine, cadmium, ryanodine, or caffeine, which indicates that all of the 4AP-resistant current is activated by the calcium transient that causes contraction. Neither calcium-activated potassium current nor calcium-activated nonspecific cation current appeared to contribute to the 4AP-resistant transient current. The transient current disappeared when ECl was made equal to the pulse potential; it was present in potassium-free internal and external solutions. It was blocked by the anion transport blockers SITS and DIDS, and the reversal potential of instantaneous current-voltage relations varied with extracellular chloride as predicted for a chloride-selective conductance. We concluded that the 4AP-resistant transient outward current of atrial cells is produced by a calcium-activated chloride current like the current ICl(Ca) of ventricular cells (1991. Circulation Research. 68:424-437). ICl(Ca) in atrial cells demonstrated outward rectification, even when intracellular chloride concentration was higher than extracellular. When ICa was inactivated or allowed to recover from inactivation, amplitudes of ICl(Ca) and ICa were closely correlated. The results were consistent with the view that ICl(Ca) does not undergo independent inactivation. Tentatively, we propose that ICl(Ca) is transient because it is activated by an intracellular calcium transient. Lowering extracellular sodium increased the peak outward transient current. The current was insensitive to the choice of sodium substitute. Because a recently identified time-independent, adrenergically activated chloride current in heart is reduced in low sodium, these data suggest that the two chloride currents are produced by different populations of channels.
NASA Technical Reports Server (NTRS)
Black, Jr., William C. (Inventor); Hermann, Theodore M. (Inventor)
1998-01-01
A current determiner having an output at which representations of input currents are provided, having an input conductor for the input current and a current sensor supported on a substrate, electrically isolated from one another but with the sensor positioned in the magnetic fields arising about the input conductor due to any input currents. The sensor extends along the substrate in a direction primarily perpendicular to the extent of the input conductor and is formed of at least a pair of thin-film ferromagnetic layers separated by a non-magnetic conductive layer. The sensor can be electrically connected to electronic circuitry formed in the substrate, including a nonlinearity adaptation circuit to provide representations of the input currents of increased accuracy despite nonlinearities in the current sensor, and can include further current sensors in bridge circuits.
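As a rough illustration of what a nonlinearity adaptation stage accomplishes, the sketch below linearizes a hypothetical cubic sensor response in software by fitting the inverse response with a calibration polynomial. The response function and all numbers are assumptions for illustration only, not the patented circuit.

```python
import numpy as np

# Hypothetical software analog of nonlinearity adaptation: invert a mildly
# nonlinear sensor response with a calibration polynomial. The cubic
# response below is an assumed stand-in for a real magnetoresistive sensor.
def sensor(i_true):
    return 0.9 * i_true + 0.02 * i_true**3   # assumed nonlinear response

i_cal = np.linspace(-5.0, 5.0, 101)          # known calibration currents, A
v_cal = sensor(i_cal)                        # recorded sensor outputs
coeffs = np.polyfit(v_cal, i_cal, 5)         # fit the inverse response

i_est = np.polyval(coeffs, sensor(2.5))      # linearized reading
print(i_est)                                 # approximately 2.5
```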
Contribution For Arc Temperature Affected By Current Increment Ratio At Peak Current In Pulsed Arc
NASA Astrophysics Data System (ADS)
Kano, Ryota; Mitubori, Hironori; Iwao, Toru
2015-11-01
Tungsten Inert Gas (TIG) welding is a high-quality welding process. However, the parameters of pulsed arc welding are numerous and complicated, and if they are not appropriate, the welding pool becomes wide and shallow; the convection driving forces contribute to the welding pool shape. When the current waveform changes, as in pulsed high-frequency TIG welding, the arc temperature does not follow the change of the current instantaneously. In particular, calculated results for the arc temperature at the time the peak current is reached rest on these considerations, so an accurate measurement of the temperature at that time is required. Therefore, the objective of this research is to elucidate the contribution to arc temperature of the current increment ratio at peak current in a pulsed arc, in order to obtain detailed knowledge of the welding model of the pulsed arc. The temperature during the increase from the base current to the peak current was measured using spectroscopy. As a result, when the arc current increased from 100 A to 150 A over 120 ms, no transient response of the temperature occurred during the current rise; this was verified by direct measurement during the rise. The contribution to arc temperature of the current increment ratio at peak current in a pulsed arc was thereby elucidated, providing further knowledge of the welding model of the pulsed arc.
NASA Astrophysics Data System (ADS)
Gopal, Vishnu; Qiu, WeiCheng; Hu, Weida
2014-11-01
The current-voltage characteristics of long wavelength mercury cadmium telluride infrared detectors have been studied using a recently suggested method for modelling of illuminated photovoltaic detectors. Diodes fabricated on in-house grown arsenic and vacancy doped epitaxial layers were evaluated for their leakage currents. The thermal diffusion, generation-recombination (g-r), and ohmic currents were found to be the principal components of the diode current, besides a component of photocurrent due to illumination. In addition, both types of diodes exhibited an excess current component whose growth with the applied bias voltage did not match the expected growth of trap-assisted-tunnelling current. Instead, it was found to be best described by an exponential function of the type I_excess = I_r0 + K_1 exp(K_2 V), where I_r0, K_1, and K_2 are fitting parameters and V is the applied bias voltage. A study of the temperature dependence of the diode current components and the excess current provided useful clues about the origin of the excess current. It was found that the excess current in diodes fabricated on arsenic doped epitaxial layers has its origin in the source of ohmic shunt currents, whereas the source of excess current in diodes fabricated on vacancy doped epitaxial layers appeared to be avalanche multiplication of the photocurrent. The difference in the behaviour of the two types of diodes has been attributed to the difference in the quality of the epitaxial layers.
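For readers who want to reproduce this kind of decomposition, the sketch below fits the stated excess-current form I_excess = I_r0 + K_1 exp(K_2 V) with SciPy. The bias/current samples and starting guesses are synthetic placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the excess-current model I_excess = I_r0 + K1 * exp(K2 * V).
def excess(V, Ir0, K1, K2):
    return Ir0 + K1 * np.exp(K2 * V)

V = np.linspace(0.0, 0.5, 20)                      # applied bias, V (synthetic)
I = excess(V, 1e-8, 2e-9, 12.0)                    # synthetic "data", A
I += 1e-10 * np.random.default_rng(1).normal(size=V.size)

popt, pcov = curve_fit(excess, V, I, p0=(1e-8, 1e-9, 10.0))
Ir0, K1, K2 = popt
print(f"I_r0={Ir0:.2e} A, K1={K1:.2e} A, K2={K2:.2f} 1/V")
```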
Tong, Qiaoling; Chen, Chen; Zhang, Qiao; Zou, Xuecheng
2015-01-01
To realize accurate current control for a boost converter, a precise measurement of the inductor current is required to achieve high resolution current regulation. Current sensors are widely used to measure the inductor current. However, the current sensors and their processing circuits add significant hardware cost, delay and noise to the system, and they can also harm system reliability. Therefore, current sensorless control techniques can bring cost effective and reliable solutions for various boost converter applications. According to the derived accurate model, which contains a number of parasitics, the boost converter is a nonlinear system. An Extended Kalman Filter (EKF) is proposed for inductor current estimation and output voltage filtering. With this approach, the system can have the same advantages as sensored current control mode. To implement the EKF, the load value is necessary. However, the load may vary from time to time, which can lead to errors in the current estimate and the filtered output voltage. To solve this issue, a load variation effect elimination (LVEE) module is added. In addition, a predictive average current controller is used to regulate the current. Compared with a conventional voltage controlled system, the transient response is greatly improved, since it takes only two switching cycles for the current to reach its reference. Finally, experimental results are presented to verify the stable operation and output tracking capability for large-signal transients of the proposed algorithm. PMID:25928061
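A minimal sketch of the estimation idea follows, assuming an ideal average model of the boost converter (no parasitics, fixed known load) and illustrative component values and covariances; the paper's actual model and LVEE logic are not reproduced here.

```python
import numpy as np

# EKF estimating boost-converter inductor current from the measured output
# voltage alone. L, C, R, Vin, Ts, Q, Rm are hypothetical values.
L, C, R, Vin = 100e-6, 470e-6, 10.0, 12.0    # H, F, ohm, V (assumed)
Ts = 1e-5                                    # sample period, s
Q = np.diag([1e-4, 1e-4])                    # process noise (assumed)
Rm = np.array([[1e-3]])                      # measurement noise (assumed)
H = np.array([[0.0, 1.0]])                   # only vC is measured

def f(x, d):
    """Euler-discretized average model: x = [iL, vC], d = duty ratio."""
    iL, vC = x
    diL = (Vin - (1.0 - d) * vC) / L
    dvC = ((1.0 - d) * iL - vC / R) / C
    return x + Ts * np.array([diL, dvC])

def F_jac(d):
    """Jacobian of f with respect to the state."""
    return np.eye(2) + Ts * np.array([[0.0, -(1.0 - d) / L],
                                      [(1.0 - d) / C, -1.0 / (R * C)]])

def ekf_step(x, P, d, v_meas):
    # Predict with the nonlinear model, then update with the vC measurement.
    x_pred = f(x, d)
    F = F_jac(d)
    P_pred = F @ P @ F.T + Q
    y = np.array([v_meas]) - H @ x_pred      # innovation
    S = H @ P_pred @ H.T + Rm
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.0, Vin]), np.eye(2)
x, P = ekf_step(x, P, d=0.5, v_meas=23.9)
print(x)                                     # [estimated iL, filtered vC]
```

A real implementation would adapt the model when the load departs from its nominal value, which is the role the LVEE module plays in the paper.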
Pulse charging of lead-acid traction cells
NASA Technical Reports Server (NTRS)
Smithrick, J. J.
1980-01-01
Pulse charging, as a method of rapidly and efficiently charging 300 amp-hour lead-acid traction cells for an electric vehicle application, was investigated. A wide range of charge pulse current square waveforms were investigated and the results were compared to constant current charging at the time-averaged pulse current values. Representative pulse current waveforms were: (1) positive waveform: peak charge pulse current of 300 amperes (amps), discharge pulse current of zero amps, and a duty cycle of about 50%; (2) Romanov waveform: peak charge pulse current of 300 amps, peak discharge pulse current of 15 amps, and a duty cycle of 50%; and (3) McCulloch waveform: peak charge pulse current of 193 amps, peak discharge pulse current of about 575 amps, and a duty cycle of 94%. Experimental results indicate that on the basis of amp-hour efficiency, pulse charging offered no significant advantage as a method of rapidly charging 300 amp-hour lead-acid traction cells when compared to constant current charging at the time-averaged pulse current value. There were, however, some disadvantages of pulse charging, in particular a decrease in charge amp-hour and energy efficiencies and an increase in cell electrolyte temperature. The constant current charge method resulted in the best energy efficiency with no significant sacrifice of charge time or amp-hour output. Whether or not pulse charging offers an advantage over constant current charging with regard to the cell charge/discharge cycle life is unknown at this time.
DeMonte, Tim P; Wang, Dinghui; Ma, Weijing; Gao, Jia-Hong; Joy, Michael L G
2009-01-01
Current density imaging (CDI) is a magnetic resonance imaging (MRI) technique used to quantitatively measure current density vectors throughout the volume of an object/subject placed in the MRI system. Electrical current pulses are applied externally to the object/subject and are synchronized with the MRI sequence. In this work, CDI is used to measure average current density magnitude in the torso region of an in-vivo piglet for applied current pulse amplitudes ranging from 10 mA to 110 mA. The relationship between applied current amplitude and current density magnitude is linear in simple electronic elements such as wires and resistors; however, this relationship may not be linear in living tissue. An understanding of this relationship is useful for research in defibrillation, human electro-muscular incapacitation (e.g. TASER(R)) and other bioelectric stimulation devices. This work will show that the current amplitude to current density magnitude relationship is slightly nonlinear in living tissue in the range of 10 mA to 110 mA.
Transient analysis for alternating over-current characteristics of HTSC power transmission cable
NASA Astrophysics Data System (ADS)
Lim, S. H.; Hwang, S. D.
2006-10-01
In this paper, a transient analysis of the alternating over-current distribution in the case that an over-current is applied to a high-Tc superconducting (HTSC) power transmission cable was performed. The transient analysis of the alternating over-current characteristics of a multi-layer HTSC power transmission cable is required to estimate the redistribution of the over-current between its conducting layers and to protect the cable system from the over-current in case a quench happens in one or two layers of the HTSC power cable. For the transient analysis, the resistance generation of the conducting layers under the alternating over-current was reflected in an equivalent circuit, based on the resistance equation obtained by applying the discrete Fourier transform (DFT) to the voltage and current waveforms of the HTSC tape that comprises each layer of the HTSC power transmission cable. It was confirmed through numerical analysis of the equivalent circuit that after the current first redistributed from the outermost layer into the inner layers, a fast current redistribution between the inner layers developed as the amplitude of the alternating over-current increased.
Determination of eddy current response with magnetic measurements.
Jiang, Y Z; Tan, Y; Gao, Z; Nakamura, K; Liu, W B; Wang, S Z; Zhong, H; Wang, B B
2017-09-01
Accurate mutual inductances between magnetic diagnostics and poloidal field coils are an essential requirement for determining the poloidal flux for plasma equilibrium reconstruction. The mutual inductance calibration of the flux loops and magnetic probes requires time-varying coil currents, which also simultaneously drive eddy currents in electrically conducting structures. The eddy current-induced field appearing in the magnetic measurements can substantially increase the calibration error in the model if the eddy currents are neglected. In this paper, an expression of the magnetic diagnostic response to the coil currents is used to calibrate the mutual inductances, estimate the conductor time constant, and predict the eddy currents response. It is found that the eddy current effects in magnetic signals can be well-explained by the eddy current response determination. A set of experiments using a specially shaped saddle coil diagnostic are conducted to measure the SUNIST-like eddy current response and to examine the accuracy of this method. In shots that include plasmas, this approach can more accurately determine the plasma-related response in the magnetic signals by eliminating the field due to the eddy currents produced by the external field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawakami, S.; Ohno, N.; Shibata, Y.
2013-11-15
According to an early work [Y. Shibata et al., Nucl. Fusion 50, 025015 (2010)] on the behavior of the plasma current decay in the JT-60U disruptive discharges caused by the radiative collapse with a massive neon-gas-puff, the increase of the internal inductance mainly determined the current decay time of plasma current during the initial phase of current quench. To investigate what determines the increase of the internal inductance, we focus attention on the relationship between the electron temperature (or the resistivity) profile and the time evolution of the current density profile and carry out numerical calculations. As a result, we find the reason for the increase of the internal inductance: The current density profile at the start of the current quench is broader than the expected current density profile in the steady state, which is determined by the temperature (or resistivity) profile. The current density profile evolves into a peaked one and the internal inductance increases.
NASA Astrophysics Data System (ADS)
Poh, Gangkai; Slavin, James A.; Jia, Xianzhe; Raines, Jim M.; Imber, Suzanne M.; Sun, Wei-Jie; Gershman, Daniel J.; DiBraccio, Gina A.; Genestreti, Kevin J.; Smith, Andy W.
2017-08-01
We analyzed MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) magnetic field and plasma measurements taken during 319 crossings of Mercury's cross-tail current sheet. We found that the measured B_Z in the current sheet is higher on the dawnside than the duskside by a factor of ≈3 and the asymmetry decreases with downtail distance. This result is consistent with expectations based upon MHD stress balance. The magnetic fields threading the more stretched current sheet in the duskside have a higher plasma beta than those on the dawnside, where they are less stretched. This asymmetric behavior is confirmed by mean current sheet thickness being greatest on the dawnside. We propose that heavy planetary ion (e.g., Na+) enhancements in the duskside current sheet provide the most likely explanation for the dawn-dusk current sheet asymmetries. We also report the direct measurement of Mercury's substorm current wedge (SCW) formation and estimate the total current due to pileup of magnetic flux to be ≈11 kA. The conductance at the foot of the field lines required to close the SCW current is found to be ≈1.2 S, which is similar to earlier results derived from modeling of Mercury's Region 1 field-aligned currents. Hence, Mercury's regolith is sufficiently conductive for the current to flow radially then across the surface of Mercury's highly conductive iron core. Mercury appears to be closely coupled to its nightside magnetosphere by mass loading of upward flowing heavy planetary ions and electrodynamically by field-aligned currents that transfer momentum and energy to the nightside auroral oval crust and interior.
Transport and sedimentation in unconfined experimental dilute pyroclastic density currents
NASA Astrophysics Data System (ADS)
Ramirez, G.; Andrews, B. J.; Dennen, R. L.
2013-12-01
We present results from experiments conducted in a new facility that permits the study of large, unconfined, particle-laden density currents that are dynamically similar to natural dilute pyroclastic density currents (PDCs). Experiments were run in a sealed, air-filled tank measuring 8.5 m long by 6.1 m wide by 2.6 m tall. Currents were generated by feeding a mixture of heated particles (5 μm aluminum oxide, 25 μm talc, 27 μm walnut shell, 76 μm glass beads) down a chute at controlled rates to produce dilute, turbulent gravity currents. Comparison of experimental currents with natural PDCs shows good agreement between Froude, densimetric and thermal Richardson, and particle Stokes and settling numbers; experimental currents have lower Reynolds numbers than natural PDCs, but are fully turbulent. Currents were illuminated with 3 orthogonal laser sheets (650, 532, and 450 nm wavelengths) and recorded with an array of HD video cameras and a high speed camera (up to 3000 fps). Deposits were mapped using a grid of sedimentation traps. We observe distinct differences between ambient temperature and warm currents: warm currents have shorter run out distances, narrow map view distributions of currents and deposits, thicken with distance from the source, and lift off to form coignimbrite plumes; ambient temperature currents typically travel farther, spread out radially, do not thicken greatly with transport distance, and do not form coignimbrite plumes. Long duration currents (600 s compared to 30-100 s) oscillate laterally with time (e.g. transport to the right, then the left, and back); this oscillation happens prior to any interaction with the tank walls. Isopach maps of the deposits show predictable trends in sedimentation versus distance in response to eruption parameters (eruption rate, duration, temperature, and initial current mass), but all sedimentation curves can be fit with 2nd order polynomials (R² > 0.9). Proximal sedimentation is similar in comparable warm and ambient temperature currents, but distal sedimentation (beyond the current runout) increases in warm currents, reflecting deposition from coignimbrite plumes. We are currently developing analytical models to link the observed transport and sedimentation results.
HepML, an XML-based format for describing simulated data in high energy physics
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.
2010-10-01
In this paper we describe a HepML format and a corresponding C++ library developed for keeping complete description of parton level events in a unified and flexible form. HepML tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. A HepML block can be included into event files in the LHEF format. The structure of the HepML block is described by means of several XML Schemas. The Schemas define necessary information for the HepML block and how this information should be located within the block. The library libhepml is a C++ library intended for parsing and serialization of HepML tags, and representing the HepML block in computer memory. The library is an API for external software. For example, Matrix Element Monte Carlo event generators can use the library for preparing and writing a header of an LHEF file in the form of HepML tags. In turn, Showering and Hadronization event generators can parse the HepML header and get the information in the form of C++ classes. libhepml can be used in C++, C, and Fortran programs. All necessary parts of HepML have been prepared and we present the project to the HEP community.
Program summary:
Program title: libhepml
Catalogue identifier: AEGL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPLv3
No. of lines in distributed program, including test data, etc.: 138 866
No. of bytes in distributed program, including test data, etc.: 613 122
Distribution format: tar.gz
Programming language: C++, C
Computer: PCs and workstations
Operating system: Scientific Linux CERN 4/5, Ubuntu 9.10
RAM: 1 073 741 824 bytes (1 Gb)
Classification: 6.2, 11.1, 11.2
External routines: Xerces XML library (http://xerces.apache.org/xerces-c/), Expat XML Parser (http://expat.sourceforge.net/)
Nature of problem: Monte Carlo simulation in high energy physics is divided into several stages. Various programs exist for these stages. In this article we are interested in interfacing different Monte Carlo event generators via data files, in particular, Matrix Element (ME) generators and Showering and Hadronization (SH) generators. There is a widely accepted format for data files for such interfaces - Les Houches Event Format (LHEF). Although information kept in an LHEF file is enough for proper working of SH generators, it is insufficient for understanding how events in the LHEF file have been prepared and which physical model has been applied. In this paper we propose an extension of the format for keeping additional information available in generators. We propose to add a new information block, marked up with XML tags, to the LHEF file. This block describes events in the file in more detail. In particular, it stores information about a physical model, kinematical cuts, generator, etc. This helps to make LHEF files self-documented. Certainly, HepML can be applied in a more general context, not in LHEF files only.
Solution method: In order to overcome drawbacks of the original LHEF accord we propose to add a new information block of HepML tags. HepML is an XML-based markup language. We designed several XML Schemas for all tags in the language. Any HepML document should follow the rules of the Schemas. The language is equipped with a library for operation with HepML tags and documents. This C++ library, called libhepml, consists of classes for HepML objects, which represent a HepML document in computer memory, parsing classes, serialization classes, and some auxiliary classes.
Restrictions: The software is adapted for solving the problems described in the article. There are no additional restrictions.
Running time: Tests have been done on a computer with Intel(R) Core(TM)2 Solo, 1.4 GHz. Parsing of a HepML file: 6 ms (size of the HepML file is 12.5 Kb). Writing of a HepML block to file: 14 ms (file size 12.5 Kb). Merging of two HepML blocks and writing to file: 18 ms (file size 25.0 Kb).
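To make the idea concrete, here is a small illustration (in Python rather than the C++ libhepml API) of parsing a HepML-style XML header embedded in an event file. The tag and attribute names are invented for the example and are not the actual HepML Schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical HepML-style header block; real HepML tag names are defined
# by the project's XML Schemas and may differ from these invented ones.
header = """
<hepml>
  <samples>
    <generator name="MadGraph" version="4.4"/>
    <model name="SM"/>
    <cuts><cut object="jet" variable="pT" min="20.0"/></cuts>
  </samples>
</hepml>
"""

root = ET.fromstring(header)
gen = root.find("./samples/generator")
print(gen.get("name"), gen.get("version"))        # MadGraph 4.4
for cut in root.iterfind(".//cut"):
    print(cut.get("object"), cut.get("variable"), cut.get("min"))
```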
Becoming Dragon: a mixed reality durational performance in Second Life
NASA Astrophysics Data System (ADS)
Cárdenas, Micha; Head, Christopher; Margolis, Todd; Greco, Kael
2009-02-01
The goal for Becoming Dragon was to develop a working, immersive Mixed Reality system by using a motion capture system and head mounted display to control a character in Second Life - a Massively Multiplayer Online 3D environment - in order to examine a number of questions regarding identity, gender and the transformative potential of technology. This performance was accomplished through a collaboration between Micha Cardenas, the performer and technical director, Christopher Head, Kael Greco, Benjamin Lotan, Anna Storelli and Elle Mehrmand. The plan for this project was to model the performer's physical environment to enable them to live in the virtual environment for extended amounts of time, using an approach of Mixed Reality, where the physical world is mapped into the virtual. I remain critical of the concept of Mixed Reality, as it presents an idea of realities as totalities and as objective essences independent of interpretation through the symbolic order. Part of my goal with this project is to explore identity as a process of social feedback, in the sense that Donna Haraway describes "becoming with", as well as to explore the concept of Reality Spectrum that Augmentology.com discusses, thinking about states such as AFK (Away From Keyboard) that are in-between virtual and corporeal presence. Both of these ideas are ways of overcoming the dualisms of mind/body, real/virtual and self/other that have been a problematic part of thinking about technology for so long. Towards thinking beyond these binaries, Anna Munster offers a concept of enfolding the body and technology, building on Gilles Deleuze's notion of the baroque fold. She says "the superfold... opens up for us a twisted topology of code folding back upon itself without determinate start or end points: we now live in a time and space in which body and information are thoroughly imbricated." She elaborates on this notion of body and code as becoming with each other, saying "the incorporeal vectors of digital information draw out the capacities of our bodies to become other than matter conceived as a mere vessel for consciousness or a substrate for signal... we may also conceive of these experiences as a new territory made possible by the fact that our bodies are immanently open to these kinds of technically symbiotic transformations". A number of the technologies used in this performance were used in an attempt to blur the line between the actual and the digital, such as motion capture, live video streaming into Second Life and 3D fabrication of physical copies of Second Life avatars. The performance was developed using the following components: - An Emagin Z800 immersive head mounted display (HMD) allowed the performer to move around in the physical environment within Calit2 and still remain "in game". Head tracking and stereoscopic imagery help to provide a realistic feeling of immersion. We built on the University of Michigan 3D (UM3D) lab's stereoscopic patch for the Second Life client, updating it to work with the latest version of Second Life. - A motion tracking system. A Vicon MX40+ motion capture system was installed into the Visiting Artist Lab at CRCA, which served as the physical performance space, to allow real-time motion tracking data to be sent to a PC running Windows. Using this data, the plan was to map the physical motion in the real world back into game space, so that, for example, the performer could easily get to their food source or to the restroom.
We developed a C++ bridge that includes a parser for the Vicon real time data stream in order to communicate this to the Second Life server to produce changes in avatar and object positions based on real physical movement. The goal was to get complete body gestures into Second Life in near real time. - A Puredata patch called Lila, developed by Shahrokh Yadegadi of UCSD, was used to modulate the performer's voice, providing a voice system for chat in Second Life that was less gendered and less human.
Nonlinear spin current generation in noncentrosymmetric spin-orbit coupled systems
NASA Astrophysics Data System (ADS)
Hamamoto, Keita; Ezawa, Motohiko; Kim, Kun Woo; Morimoto, Takahiro; Nagaosa, Naoto
2017-06-01
Spin current plays a central role in spintronics. In particular, finding more efficient ways to generate spin current has been an important issue and has been studied actively. For example, representative methods of spin-current generation include spin-polarized current injections from ferromagnetic metals, the spin Hall effect, and the spin battery. Here, we theoretically propose a mechanism of spin-current generation based on nonlinear phenomena. By using Boltzmann transport theory, we show that a simple application of the electric field E induces spin current proportional to E2 in noncentrosymmetric spin-orbit coupled systems. We demonstrate that the nonlinear spin current of the proposed mechanism is supported in the surface state of three-dimensional topological insulators and two-dimensional semiconductors with the Rashba and/or Dresselhaus interaction. In the latter case, the angular dependence of the nonlinear spin current can be manipulated by the direction of the electric field and by the ratio of the Rashba and Dresselhaus interactions. We find that the magnitude of the spin current largely exceeds those in the previous methods for a reasonable magnitude of the electric field. Furthermore, we show that application of ac electric fields (e.g., terahertz light) leads to the rectifying effect of the spin current, where dc spin current is generated. These findings will pave a route to manipulate the spin current in noncentrosymmetric crystals.
NASA Astrophysics Data System (ADS)
Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin; Li, Huiyan; Che, Yanqiu
2017-06-01
Spike-frequency adaptation (SFA) mediated by various adaptation currents, such as the voltage-gated K+ current (I_M), the Ca2+-gated K+ current (I_AHP), or the Na+-activated K+ current (I_KNa), exists in many types of neurons, and has been shown to effectively shape their information transmission properties on slow timescales. Here we use conductance-based models to investigate how the activation of these three adaptation currents regulates the threshold voltage for action potential (AP) initiation during the course of SFA. It is observed that the spike threshold becomes depolarized and the rate of membrane depolarization (dV/dt) preceding an AP is reduced as the adaptation currents reduce the firing rate. This indicates that the presence of inhibitory adaptation currents enables the neuron to generate a dynamic threshold, inversely correlated with the preceding dV/dt, on timescales slower than the fast dynamics of AP generation. By analyzing the interactions of ionic currents at subthreshold potentials, we find that the activation of adaptation currents increases the outward level of the net membrane current prior to AP initiation, which antagonizes the inward Na+ current and results in a depolarized threshold and lower dV/dt from one AP to the next. Our simulations demonstrate that the threshold dynamics on slow timescales is a secondary effect caused by the activation of adaptation currents. These findings provide a biophysical interpretation of the relationship between adaptation currents and spike threshold.
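As a toy illustration of spike-frequency adaptation (a simple integrate-and-fire caricature, not the conductance-based models used in the study), the sketch below adds a spike-triggered adaptation current and shows inter-spike intervals lengthening; all parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

# Leaky integrate-and-fire neuron with a spike-triggered adaptation
# current w; w grows with each spike and decays slowly, so firing slows.
dt, T = 0.1, 500.0             # time step and duration, ms
C, gL, EL = 1.0, 0.1, -65.0    # capacitance, leak conductance, rest
Vth, Vreset = -50.0, -65.0     # threshold and reset, mV
tau_w, b = 100.0, 0.05         # adaptation time constant (ms) and increment
I_inj = 2.0                    # constant input current (assumed units)

V, w = EL, 0.0
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    V += dt * (-gL * (V - EL) - w + I_inj) / C
    w += dt * (-w / tau_w)
    if V >= Vth:               # spike: reset and increment adaptation
        spike_times.append(t)
        V = Vreset
        w += b

print(np.diff(spike_times))    # inter-spike intervals lengthen over time
```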
NASA Technical Reports Server (NTRS)
Cash, B.
1985-01-01
Simple technique developed for monitoring direct currents up to several hundred amperes and digitally displaying values directly in current units. Used to monitor current magnitudes beyond range of standard laboratory ammeters, which typically measure 10 to 20 amperes maximum. Technique applicable to any current-monitoring situation.
Theme: Staying Current--Horticulture.
ERIC Educational Resources Information Center
Shry, Carroll L., Jr.; And Others
1986-01-01
This theme issue on staying current in horticulture includes articles on sex equity in horticulture, Future Farmers of America, career opportunities in horticulture, staying current with your school district's needs, staying current in horticulture instruction, staying current with landscape trade associations, emphasizing the basics in vocational…
40 CFR 761.30 - Authorizations.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Current-limiting fuses or other equivalent technology must be used to detect sustained high current faults... fuses or other equivalent technology to avoid PCB Transformer ruptures from sustained high current... protection, such as current-limiting fuses or other equivalent technology, to detect sustained high current...
Reducing current reversal time in electric motor control
Bredemann, Michael V
2014-11-04
The time required to reverse current flow in an electric motor is reduced by exploiting inductive current that persists in the motor when power is temporarily removed. Energy associated with this inductive current is used to initiate reverse current flow in the motor.
Anode current density distribution in a cusped field thruster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Huan, E-mail: wuhuan58@qq.com; Liu, Hui, E-mail: hlying@gmail.com; Meng, Yingchao
2015-12-15
The cusped field thruster is a new electric propulsion device that is expected to have a non-uniform radial current density at the anode. To further study the anode current density distribution, a multi-annulus anode is designed to directly measure the anode current density for the first time. The anode current density decreases sharply at larger radii; the magnitude of collected current density at the center is far higher compared with the outer annuli. The anode current density non-uniformity does not demonstrate a significant change with varying working conditions.
Current scaling of radiated power for 40-mm diameter single wire arrays on Z
NASA Astrophysics Data System (ADS)
Nash, T. J.; Cuneo, M. E.; Spielman, R. B.; Chandler, G. A.; Leeper, R. J.; Seaman, J. F.; McGurn, J.; Lazier, S.; Torres, J.; Jobe, D.; Gilliland, T.; Nielsen, D.; Hawn, R.; Bailey, J. E.; Lake, P.; Carlson, A. L.; Seamen, H.; Moore, T.; Smelser, R.; Pyle, J.; Wagoner, T. C.; LePell, P. D.; Deeney, C.; Douglas, M. R.; McDaniel, D.; Struve, K.; Mazarakis, M.; Stygar, W. A.
2004-11-01
In order to estimate the radiated power that can be expected from a next-generation Z-pinch driver such as ZR at 28 MA, current-scaling experiments have been conducted on the 20 MA driver Z. We report on the current scaling of single 40-mm-diameter tungsten arrays of 240 wires with a fixed 110 ns implosion time. The wire diameter was decreased in proportion to the load current. Reducing the charge voltage on the Marx banks reduces the load current; on one shot, firing only three of the four levels of the Z machine further reduced the load current. The radiated energy scaled as the current squared, as expected, but the radiated power scaled as the current to the 3.52±0.42 power, due to increased x-ray pulse width at lower current. As the current is reduced, the rise time of the x-ray pulse increases, and at the lowest current value of 10.4 MA a shoulder appears on the leading edge of the x-ray pulse. In order to determine the nature of the plasma producing the leading edge of the x-ray pulse at low currents, further shots were taken with an on-axis aperture to view on-axis precursor plasma. This aperture appeared to perturb the pinch in a favorable manner, such that with the aperture in place there was no leading edge to the x-ray pulses at lower currents and the radiated power scaled as the current to the 2±0.75 power. For a full-current shot we will present x-ray images that show precursor plasma emitting on-axis 77 ns before the main x-ray burst.
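As a side note on method, scaling exponents like 3.52±0.42 are typically obtained by regressing log power on log current across shots. The sketch below shows the procedure on invented numbers, not the Z shot data.

```python
import numpy as np

# Extract a power-law exponent P ~ I**alpha by linear regression in
# log-log space. The (I, P) pairs below are made-up placeholders.
I = np.array([10.4, 13.1, 16.0, 19.6])    # load current, MA (hypothetical)
P = np.array([40.0, 95.0, 190.0, 385.0])  # radiated power, TW (hypothetical)

slope, intercept = np.polyfit(np.log(I), np.log(P), 1)
print(f"fitted exponent: {slope:.2f}")    # compare with 3.52 +/- 0.42
```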
NASA Astrophysics Data System (ADS)
Zhang, Zaiqin; Ma, Hui; Liu, Zhiyuan; Geng, Yingsan; Wang, Jianhua
2018-04-01
The influence of the applied axial magnetic field on the current density distribution in the arc column and electrodes has been intensively studied. However, previous results provide only a qualitative explanation, which cannot quantitatively explain recent experimental data on anode current density. The objective of this paper is to quantitatively determine the current constriction subjected to an axial magnetic field in high-current vacuum arcs according to the recent experimental data. A magnetohydrodynamic model is adopted to describe the high-current vacuum arc. The vacuum arc is in a diffuse arc mode with an arc current ranging from 6 kArms to 14 kArms and an axial magnetic field ranging from 20 mT to 110 mT. By comparison with the recent experimental measurements of the current density distribution on the anode, the modelling results show that there are two types of current constriction. On one hand, the current on the cathode shows a constriction, termed here the cathode-constriction. On the other hand, the current constricts in the arc column region, and this constriction is termed the column-constriction. The cathode boundary is of vital importance in a quantitative model, and an improved cathode constriction boundary is proposed. Under the improved boundary, the simulation results are in good agreement with the recent experimental data on the anode current density distribution. It is demonstrated that the current density distribution at the anode is sensitive to that at the cathode, so that measurements of the anode current density can be used, in combination with the vacuum arc model, to infer the cathode current density distribution.
The Extent to Which Dayside Reconnection Drives Field-Aligned Currents During Substorms
NASA Astrophysics Data System (ADS)
Forsyth, C.; Shortt, M. W.; Coxon, J.; Rae, J.; Freeman, M. P.; Kalmoni, N. M. E.; Jackman, C. M.; Anderson, B. J.
2016-12-01
Field-aligned currents, also known as Birkeland currents, are the agents by which energy and momentum are transferred to the ionosphere from the magnetosphere and solar wind. In order to understand this coupling, it is necessary to analyze the variations in these current systems with respect to the main energy sources of the solar wind and substorms. In this study, we perform a superposed epoch analysis of field-aligned currents determined by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) project with respect to substorm expansion phase onsets identified using the Substorm Onsets and Phases from Indices of the Electrojet (SOPHIE) technique. We examine the total upward and downward currents separately in the noon, dusk, dawn and midnight sectors. Our results show that the dusk and dawn currents are up to 66% linearly correlated with the dayside reconnection rate estimated from solar wind measurements, whereas the noon and midnight currents are not. The noon currents show little or no variation throughout the substorm cycle. The midnight currents follow the dusk currents up to 20 min before onset, after which the midnight current increases more rapidly and exponentially. At substorm onset, the exponential growth rate increases. While the midnight field-aligned currents grow exponentially after substorm onset, the auroral indices vary with a 1/6th power law. Overall, our results show that the growth and decay rates of the Region 1 and 2 current systems, which are strongest at dawn and dusk, are directly driven by the solar wind, whereas the growth and decay rates of the substorm current system, which is dominant at midnight, act independently of the upstream driver.
Contribution of Field Strength Gradients to the Net Vertical Current of Active Regions
NASA Astrophysics Data System (ADS)
Vemareddy, P.
2017-12-01
We examined the contribution of field strength gradients to the degree of net vertical current (NVC) neutralization in active regions (ARs). We used photospheric vector magnetic field observations of AR 11158 obtained by the Helioseismic and Magnetic Imager on board SDO and by Hinode. The vertical component of the electric current is decomposed into twist and shear terms. The NVC exhibits systematic evolution owing to the presence of the sheared polarity inversion line between rotating and shearing magnetic regions. We found that the sign of the shear current distribution is opposite, in the dominant fraction of pixels (60%–65%), to that of the twist current distribution, and its time profile bears no systematic trend. This result indicates that the gradient of magnetic field strength contributes a current of opposite sign, though smaller magnitude, to that contributed by the magnetic field direction in the vertical component of the current. Consequently, the net value of the shear current is negative in both polarity regions, which when added to the net twist current reduces the direct current value in the north (B_z > 0) polarity, resulting in a higher degree of NVC neutralization. We conjecture that the observed opposite signs of shear and twist currents are an indication, according to Parker, that the direct volume currents of flux tubes are canceled by their return currents, which are contributed by field strength gradients. Furthermore, with the increase of spatial resolution, we found higher values of the twist and shear current distributions. However, the increased resolution is mostly useful in resolving the field strength gradients, and therefore suggests a larger contribution from the shear current to the degree of NVC neutralization.
Influence of the substorm current wedge on the Dst index
NASA Astrophysics Data System (ADS)
Friedrich, Erena; Rostoker, Gordon; Connors, Martin G.; McPherron, R. L.
1999-03-01
One of the major questions confronting researchers studying the nature of the solar-terrestrial interaction centers around whether or not the substorm expansive phase has any causal effect on the growth of the storm time ring current. This question is often addressed by using the Dst index as a proxy for the storm time ring current and inspecting the main phase growth of Dst in the context of the substorm expansive phases which occur in the same time frame as the ring current growth. In the past it has been assumed that the magnetic effects of the substorm current wedge have little influence on the Dst index because the current wedge is an asymmetric current system, while Dst is supposed to reflect changes in the symmetric component of the ring current. In this paper we shall show that the substorm current wedge can have a significant effect on the present Dst index, primarily as a consequence of the fact that only four stations are presently used to formulate the index. Calculations are made assuming the instantaneous magnitude of the wedge current is constant at 1 MA. Hourly values of Dst may be as much as 50% smaller than those presented here because of variation of the wedge current over the hour. We shall show how the effect of the current wedge depends on the UT of the expansive phase onset, the angular extent of the current wedge, and the locale of the closure current in the magnetosphere. The fact that the substorm current wedge is a conjugate phenomenon has an important influence on the magnitude of the expansive phase effect in the Dst index.
Memory characteristics of ring-shaped ceramic superconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, A.; Hasunuma, M.; Sakaiya, S.
1989-03-01
For the practical application of ceramic superconductors, the authors investigated the residual magnetic field characteristics of ring-shaped ceramic superconductors in a Y-Ba-Cu-O system with high Tc. The residual magnetic field of a ring with asymmetric current paths, supplied by external currents, appeared when one of the branch currents was above the critical current. The residual magnetic field saturated when both branch currents exceeded the critical current of the ring and showed hysteresis-like characteristics. The saturated magnetic field is subject to the critical current of the ring. A superconducting ring with asymmetric current paths suggests a simple and quite new persistent-current type memory device.
Submesoscale cyclones in the Agulhas current
NASA Astrophysics Data System (ADS)
Krug, M.; Swart, S.; Gula, J.
2017-01-01
Gliders were deployed for the first time in the Agulhas Current region to investigate processes of interactions between western boundary currents and shelf waters. Continuous observations from the gliders in water depths of 100-1000 m and over a period of 1 month provide the first high-resolution observations of the Agulhas Current's inshore front. The observations collected in a nonmeandering Agulhas Current show the presence of submesoscale cyclonic eddies, generated at the inshore boundary of the Agulhas Current. The submesoscale cyclones are often associated with warm water plumes, which extend from their western edge and exhibit strong northeastward currents. These features are a result of shear instabilities and extract their energy from the mean Agulhas Current jet.
Jha, Kamal N.
1999-01-01
An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.
Three-dimensional structure of dilute pyroclastic density currents
NASA Astrophysics Data System (ADS)
Andrews, B. J.
2013-12-01
Unconfined experimental density currents dynamically similar to pyroclastic density currents (PDCs) suggest that cross-stream motions of the currents and air entrainment through currents' lateral margins strongly affect PDC behavior. Experiments are conducted within an air-filled tank 8.5 m long by 6.1 m wide by 2.6 m tall. Currents are generated by feeding heated powders down a chute into the tank at controlled rates to form dilute, particle-laden, turbulent gravity currents that are fed for 30 to 600 seconds. Powders include 5 μm aluminum oxide, 25 μm talc, 27 μm walnut, 76 μm glass beads and mixtures thereof. Experiments are scaled such that Froude, densimetric and thermal Richardson, particle Stokes and Settling numbers, and thermal to kinetic energy densities are all in agreement with dilute PDCs; experiments have lower Reynolds numbers than natural currents, but the experiments are fully turbulent, thus the large scale structures should be similar. The experiments are illuminated with 3 orthogonal laser sheets (650, 532, and 450 nm wavelengths) and recorded with an array of HD video cameras and a high speed camera (up to 3000 fps); this system provides synchronous observation of vertical streamwise and cross-stream planes, and a horizontal plane. Ambient temperature currents tend to spread out radially from the source and have long run out distances, whereas warmer currents tend to focus along narrow sectors and have shorter run outs. In addition, when warm currents lift off to form buoyant plumes, lateral spreading ceases. The behavior of short duration currents is dominated by the current head; as eruption duration increases, current transport direction tends to oscillate back and forth (this is particularly true for ambient temperature currents). Turbulent structures in the horizontal plane show air entrainment and advection downstream. Eddies illuminated by the vertical cross-stream laser sheet often show vigorous mixing along the current margins, particularly after the current head has passed. In some currents, the head can persist as a large, vertically oriented vortex long after the bulk of the current has lifted off to form a coignimbrite plume. These unconfined experiments show that three-dimensional structures can affect PDC behavior and suggest that our typical cross-sectional or 'cartoon' understanding of PDCs misses what may be very important parts of PDC dynamics.
Method of making super capacitor with fibers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, Joseph Collin; Kaschmitter, James
2016-08-23
An electrical cell apparatus includes a first current collector made of a multiplicity of fibers, a second current collector spaced from the first current collector; and a separator disposed between the first current collector and the second current collector. The fibers are contained in a foam.
Superconducting dc Current Limiting Vacuum Circuit Breaker
NASA Astrophysics Data System (ADS)
Alferov, D. F.; Akhmetgareev, M. R.; Budovskii, A. I.; Bunin, R. A.; Voloshin, I. F.; Degtyarenko, P. N.; Yevsin, D. V.; Ivanov, V. P.; Sidorov, V. A.; Fisher, L. M.; Tshai, E. V.
A circuit of a dc superconducting fault current limiter with a direct current circuit breaker for a nominal current of 300 A is proposed. It includes 2G high temperature superconducting (HTS) tapes and a high-speed dc vacuum circuit breaker. The test results of the current-limiting capacity and the recovery time of superconductivity after a current fault at voltages up to 3 kV are presented.
DE 1 observations of type 1 counterstreaming electrons and field-aligned currents
NASA Technical Reports Server (NTRS)
Lin, C. S.; Burch, J. L.; Barfield, J. N.; Sugiura, M.; Nielsen, E.
1984-01-01
Dynamics Explorer 1 satellite observations of plasma and magnetic fields during type one counterstreaming electron events are presented. Counterstreaming electrons are observed at high altitudes in the region of field-aligned current. The total current density computed from the plasma data in the 18-10,000 eV energy range is generally about 1-2 micro-A/sq m. For the downward current, low-energy electrons contribute more than 40 percent of the total plasma current density integrated above 18 eV. For the upward current, such electrons contribute less than 50 percent of that current density. Electron beams in the field-aligned direction are occasionally detected. The pitch angle distributions of counterstreaming electrons are generally enhanced at both small and large pitch angles. STARE simultaneous observations for one DE 1 pass indicated that the field-aligned current was closed through Pedersen currents in the ionosphere. The directions of the ionospheric current systems are consistent with the DE 1 observations at high altitudes.
Thin current sheets observation by MMS during a near-Earth's magnetotail reconnection event
NASA Astrophysics Data System (ADS)
Nakamura, R.; Varsani, A.; Nakamura, T.; Genestreti, K.; Plaschke, F.; Baumjohann, W.; Nagai, T.; Burch, J.; Cohen, I. J.; Ergun, R.; Fuselier, S. A.; Giles, B. L.; Le Contel, O.; Lindqvist, P. A.; Magnes, W.; Schwartz, S. J.; Strangeway, R. J.; Torbert, R. B.
2017-12-01
During summer 2017, the four spacecraft of the Magnetospheric Multiscale (MMS) mission traversed the nightside magnetotail current sheet at an apogee of 25 RE. They detected a number of flow reversal events suggestive of the passage of the reconnection current sheet. Thanks to the mission's unprecedentedly high time resolution and spacecraft separations well below the ion scales, the structure of thin current sheets is well resolved in both the plasma and field measurements. In this study we examine the detailed structure of thin current sheets during a flow reversal event from tailward to Earthward flow, when MMS crossed the center of the current sheet. We investigate the changes in the structure of the thin current sheet relative to the X-point based on multi-point analysis. We determine the motion and strength of the current sheet from curlometer calculations, comparing these with currents obtained from the particle data. The observed structures of these current sheets are also compared with simulations.
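The curlometer mentioned above estimates the current density from four-point magnetic field measurements via μ₀J = ∇×B under a linear-field assumption. A minimal sketch follows; the tetrahedron geometry and the Harris-sheet test field are made up for illustration and are not MMS data.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def curlometer(r, b):
    """Estimate J = (curl B)/mu0 from four-point measurements.

    r : (4, 3) spacecraft positions in meters (non-planar tetrahedron)
    b : (4, 3) magnetic field vectors in tesla
    Assumes B varies linearly across the tetrahedron (the usual
    curlometer approximation); returns J in A/m^2.
    """
    dr = r[1:] - r[0]             # (3, 3) position differences
    db = b[1:] - b[0]             # (3, 3) field differences
    # Linear model dB_i = G @ dr_i  =>  db = dr @ G.T
    GT = np.linalg.solve(dr, db)  # GT[k, j] = dB_j/dx_k
    G = GT.T                      # G[j, k] = dB_j/dx_k
    curl = np.array([G[2, 1] - G[1, 2],
                     G[0, 2] - G[2, 0],
                     G[1, 0] - G[0, 1]])
    return curl / MU0

# Synthetic check: a Harris-like sheet B_x(z) = B0*z/L carries J_y = B0/(mu0*L).
B0, L = 20e-9, 1000e3   # 20 nT gradient over 1000 km (made-up values)
r = np.array([[0, 0, 0], [10e3, 0, 0], [0, 10e3, 0], [0, 0, 10e3]], float)
b = np.array([[B0 * z / L, 0.0, 0.0] for _, _, z in r])
print(curlometer(r, b))  # ~[0, 1.6e-8, 0] A/m^2, i.e. J_y = B0/(mu0*L)
```

Comparing this estimate with the current computed from the particle moments is the consistency check described in the abstract.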
Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained with the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs with positive current alone. These unique current density maps are obtained by minimizing the stored-magnetic-energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
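At its core, such a map is an energy-minimizing current distribution under field constraints. A toy version of that optimization, posed as an equality-constrained quadratic program and solved through its KKT system, is sketched below; the matrices M, A and the target b are random placeholders, not a real inductance/field model.

```python
import numpy as np

# Toy minimum-stored-energy (MSE) current map:
#   minimize W = 0.5 * I^T M I   subject to   A @ I = b,
# where M is a symmetric positive-definite inductance matrix and A maps
# element currents to field values at target points (both stand-ins here).

rng = np.random.default_rng(0)
n_elem, n_target = 40, 5

Q = rng.standard_normal((n_elem, n_elem))
M = Q @ Q.T + n_elem * np.eye(n_elem)         # SPD stand-in for inductances
A = rng.standard_normal((n_target, n_elem))   # stand-in field-coupling matrix
b = np.full(n_target, 1.0)                    # desired field at target points

# KKT system of the equality-constrained quadratic program
K = np.block([[M, A.T],
              [A, np.zeros((n_target, n_target))]])
rhs = np.concatenate([np.zeros(n_elem), b])
sol = np.linalg.solve(K, rhs)
I = sol[:n_elem]                              # optimal current map

print("constraint residual:", np.linalg.norm(A @ I - b))
print("stored energy:", 0.5 * I @ M @ I)
```

Because the objective is convex and the constraints are linear, the KKT solution is the global optimum, which is what makes a map of this kind a trustworthy starting point for subsequent coil-arrangement searches.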
NASA Astrophysics Data System (ADS)
Niwa, Yoshimitsu; Matsuzaki, Jun; Yokokura, Kunio
A high-speed vacuum circuit breaker that forces the fault current to zero was investigated. The test circuit breaker consisted of a vacuum interrupter and a high-frequency current source. The vacuum interrupter, which had an axial-magnetic-field electrode and a disk-shaped electrode, was tested. The arcing period of the high-speed vacuum circuit breaker is much shorter than that of a conventional circuit breaker. The arc behavior of the test electrodes immediately after contact separation was observed with a high-speed video camcorder. The relation between the current waveform just before current zero and the interruption ability was investigated experimentally by varying the high-frequency current source. The results demonstrate the interruption ability and the arc behavior of the high-speed vacuum circuit breaker. High-current interruption was made possible by the low-current period just before current zero, even though the arcing time is short and the arc is concentrated.
Effects of electron pressure anisotropy on current sheet configuration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Artemyev, A. V., E-mail: aartemyev@igpp.ucla.edu; Angelopoulos, V.; Runov, A.
2016-09-15
Recent spacecraft observations in the Earth's magnetosphere have demonstrated that the magnetotail current sheet can be supported by currents of an anisotropic electron population. Strong electron currents are responsible for the formation of very thin (intense) current sheets that play a crucial role in the stability of the Earth's magnetotail. We explore the properties of such thin current sheets with hot isotropic ions and cold anisotropic electrons. Decoupling of the motions of ions and electrons results in the generation of a polarization electric field. The distribution of the corresponding scalar potential is derived from the electron pressure balance and the quasi-neutrality condition. We find that the electron pressure anisotropy is partially balanced by a field-aligned component of this polarization electric field. We propose a 2D model that describes a thin current sheet supported by currents of anisotropic electrons embedded in an ion-dominated current sheet. Current density profiles in our model agree well with THEMIS observations in the Earth's magnetotail.
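The balance described here can be made concrete. For inertialess electrons with a gyrotropic pressure tensor, the parallel momentum equation relates the polarization potential φ to the pressure anisotropy along the field-line coordinate s; a standard form (our reconstruction, not necessarily the authors' exact notation) is

```latex
e n \,\frac{\partial \varphi}{\partial s}
  = \frac{\partial p_{e\parallel}}{\partial s}
  - \left(p_{e\parallel} - p_{e\perp}\right)\frac{\partial \ln B}{\partial s},
```

so that for p_{e∥} ≠ p_{e⊥} the anisotropic (mirror-force) term is offset by a field-aligned electric field E_∥ = -∂φ/∂s, consistent with the partial balance reported above.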
AC/DC current ratio in a current superimposition variable flux reluctance machine
NASA Astrophysics Data System (ADS)
Kohara, Akira; Hirata, Katsuhiro; Niguchi, Noboru; Takahara, Kazuaki
2018-05-01
We have proposed a current superimposition variable flux reluctance machine for traction motors. The torque-speed characteristics of this machine can be controlled by increasing or decreasing the DC current. In this paper, we discuss the AC/DC current ratio in the current superimposition variable flux reluctance machine. The structure and control method are described, and the characteristics are computed using FEA for several AC/DC current ratios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Qinli; Li, Yufan; Chien, Chia-ling
Provided is an electric-current-controllable magnetic unit, including: a substrate; an electric-current channel disposed on the substrate, the electric-current channel including a composite heavy-metal multilayer comprising at least one heavy metal; a capping layer disposed over the electric-current channel; and at least one ferromagnetic layer disposed between the electric-current channel and the capping layer.
Apparatus for measuring high frequency currents
NASA Technical Reports Server (NTRS)
Hagmann, Mark J. (Inventor); Sutton, John F. (Inventor)
2003-01-01
An apparatus for measuring high frequency currents includes a non-ferrous core current probe that is coupled to a wide-band transimpedance amplifier. The current probe has a secondary winding with a winding resistance that is substantially smaller than the reactance of the winding. The sensitivity of the current probe is substantially flat over a wide band of frequencies. The apparatus is particularly useful for measuring exposure of humans to radio frequency currents.
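The flatness condition stated above can be phrased quantitatively. For a current transformer whose secondary (N turns, winding resistance R_w, inductance L_s) feeds a transimpedance stage with feedback resistor R_f, a common idealized analysis (an assumption here, not the patent's stated formula) gives

```latex
\frac{V_{\text{out}}}{I_p} \;\approx\; -\,\frac{R_f}{N},
\qquad \text{for } \omega \gg \omega_c \approx \frac{R_w}{L_s},
```

i.e. the transfer impedance is flat above the corner frequency ω_c. Because the transimpedance input presents a near-zero burden, ω_c is set by the ratio R_w/L_s alone, which is why making the winding resistance much smaller than the winding reactance yields a sensitivity that is flat over a wide band.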
NASA Technical Reports Server (NTRS)
Hruby, Vladimir (Inventor); Demmons, Nathaniel (Inventor); Ehrbar, Eric (Inventor); Pote, Bruce (Inventor); Rosenblad, Nathan (Inventor)
2014-01-01
An autonomous method for minimizing the magnitude of plasma discharge current oscillations in a Hall effect plasma device includes iteratively measuring plasma discharge current oscillations of the plasma device and iteratively adjusting the magnet current delivered to the plasma device in response to measured plasma discharge current oscillations to reduce the magnitude of the plasma discharge current oscillations.
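A minimal sketch of such a loop in Python follows, assuming a hypothetical device interface (set_magnet_current, measure_oscillation_rms) and a made-up quadratic plant model, since the patent describes the iteration but specifies no API.

```python
import random

# Hypothetical plant: oscillation RMS is smallest at some magnet current.
OPTIMUM = 4.2            # A, made-up optimum
_magnet_current = 3.0    # A, starting point

def set_magnet_current(amps: float) -> None:
    global _magnet_current
    _magnet_current = amps

def measure_oscillation_rms() -> float:
    # Fake measurement: RMS grows quadratically away from the optimum.
    return (_magnet_current - OPTIMUM) ** 2 + random.gauss(0, 0.001)

def minimize_discharge_oscillations(step=0.05, iters=200):
    """Perturb-and-observe: nudge the magnet current and keep only the
    changes that reduce the measured discharge-current oscillation RMS."""
    direction = +1.0
    best = measure_oscillation_rms()
    for _ in range(iters):
        old = _magnet_current
        set_magnet_current(old + direction * step)
        rms = measure_oscillation_rms()
        if rms < best:
            best = rms                   # accept the move
        else:
            set_magnet_current(old)      # revert and reverse direction
            direction = -direction
    return _magnet_current

print(minimize_discharge_oscillations())  # settles near OPTIMUM
```

Any hill-descending update rule would serve; the essential feature claimed is the closed measure/adjust loop on magnet current rather than a specific search strategy.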
On tide-induced Lagrangian residual current and residual transport: 1. Lagrangian residual current
Feng, Shizuo; Cheng, Ralph T.; Xi, Pangen
1986-01-01
Residual currents in tidal estuaries and coastal embayments have been recognized as fundamental factors that affect long-term transport processes. Previous studies have pointed out that it is more relevant to use a Lagrangian mean velocity than an Eulerian mean velocity to determine the movements of water masses. Under the weakly nonlinear approximation, the parameter k, which is the ratio of the net displacement of a labeled water mass in one tidal cycle to the tidal excursion, is assumed to be small. Solutions for tides, tidal currents, and residual currents have been considered for two-dimensional, barotropic estuaries and coastal seas. Particular attention has been paid to the distinction between the Lagrangian and Eulerian residual currents. When k is small, the first-order Lagrangian residual is shown to be the sum of the Eulerian residual current and the Stokes drift, as written below. The Lagrangian residual drift velocity, or the second-order Lagrangian residual current, has been shown to depend on the phase of the tidal current. The Lagrangian drift velocity is induced by nonlinear interactions between tides, tidal currents, and the first-order residual currents, and it traces an ellipse on a hodograph plane. Several examples are given to further demonstrate the unique properties of the Lagrangian residual current.
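In symbols, the first-order result is the classical decomposition (standard notation assumed here, not copied from the paper): the Lagrangian residual velocity is the Eulerian mean plus the Stokes drift,

```latex
\mathbf{u}_L = \mathbf{u}_E + \mathbf{u}_S + O(k^2),
\qquad
\mathbf{u}_S = \overline{\left(\int^{t}\mathbf{u}'\,\mathrm{d}t'\cdot\nabla\right)\mathbf{u}'},
```

where u' is the oscillatory tidal current and the overbar denotes an average over one tidal cycle.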
Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei
2015-04-10
Anodic TiO2 nanotubes have been studied extensively for many years. However, the growth kinetics still remains unclear. The systematic study of the current transient under constant anodizing voltage has not been mentioned in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, the theoretical expressions for the time dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic current and temperatures are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, it is the first time that the coefficient A of the exponential relationship (ionic current j(ion) = A exp(BE)) has been determined under various temperatures and voltages. And the results indicate that as temperature and voltage increase, ionic current and electronic current both increase. The temperature has a larger effect on electronic current than ionic current. These results can promote the research of kinetics from a qualitative to quantitative level.