2005-01-01
Interface Compatibility); the tool is written in OCaml [10], and the symbolic algorithms for interface compatibility and refinement are built on top ... automata for a fire detection and reporting system ... be encoded in the input language of the tool TIC. The refinement of sociable interfaces is discussed ... are closely related to the I/O Automata Language (IOA) of [11]. Interface models are games between Input and Output, and in the models, it is es...
NASA Astrophysics Data System (ADS)
Croft, William
2016-03-01
Arbib's computational comparative neuroprimatology [1] is a welcome model for cognitive linguists, that is, linguists who ground their models of language in human cognition and language use in social interaction. Arbib argues that language emerged via biological and cultural coevolution [1]; linguistic knowledge is represented by constructions, and semantic representations of linguistic constructions are grounded in embodied perceptual-motor schemas (the mirror system hypothesis). My comments offer some refinements from a linguistic point of view.
Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).
Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco
2013-10-01
In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes rigid standardisation inappropriate, but greater detail is required concerning workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded on international guidelines and refined following the clinical pathway adopted at local level by a specialised rehabilitation centre. The model describes the organisation of rehabilitation delivery and facilitates the monitoring of recovery during the process. Indeed, software was developed and tested to support clinicians in the digital administration of clinical scales. The model's flexibility assures easy updating as the process evolves.
A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs
2005-05-24
source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in ... from the difficulty of modeling computer programs—due to the complexity of programming languages as compared to hardware description languages—to ... intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in
A Pilot Study on Modeling of Diagnostic Criteria Using OWL and SWRL.
Hong, Na; Jiang, Guoqian; Pathak, Jyotishiman; Chute, Christopher G
2015-01-01
The objective of this study is to describe our efforts in a pilot study on modeling diagnostic criteria using a Semantic Web-based approach. We reused the basic framework of the ICD-11 content model and refined it into an operational model in the Web Ontology Language (OWL). The refinement is based on a bottom-up analysis method, in which we analyzed data elements (including value sets) in a collection (n=20) of randomly selected diagnostic criteria. We also performed a case study to formalize rule logic in the diagnostic criteria of metabolic syndrome using the Semantic Web Rule Language (SWRL). The results demonstrated that it is feasible to use OWL and SWRL to formalize the diagnostic criteria knowledge, and to execute the rules through reasoning.
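To make the kind of rule logic described above concrete, here is a minimal sketch using the owlready2 Python library (an assumed tool choice for illustration; the study's own OWL/SWRL artifacts are not reproduced here, and the class names, property name, and threshold are invented):

```python
from owlready2 import *

onto = get_ontology("http://example.org/diagnostic-criteria.owl")

with onto:
    class Patient(Thing): pass
    class AbdominalObesity(Patient): pass          # inferred diagnosis class
    class hasWaistCircumferenceCm(Patient >> float, FunctionalProperty): pass

    # SWRL rule: an illustrative (not clinically authoritative) criterion.
    rule = Imp()
    rule.set_as_rule(
        "Patient(?p), hasWaistCircumferenceCm(?p, ?w), "
        "greaterThan(?w, 102.0) -> AbdominalObesity(?p)")

p = Patient("patient1")
p.hasWaistCircumferenceCm = 110.0

# Classify with the Pellet reasoner (requires a Java runtime on the PATH).
sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)
print(AbdominalObesity in p.is_a)   # True if the rule fired
```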
A Rule-Based Policy-Level Model of Nonsuperpower Behavior in Strategic Conflicts.
1982-12-01
a mechanism. The human mind tends to work linearly and to focus implicitly on a few variables. Experience results in subconscious models with far ... which is slower. Alternatives to the current ROSIE implementation include reprogramming Scenario Agent in the C language (the language used for the Red ... perception, opportunity perception, opportunity response, and assertiveness. As rules are refined, maintenance and reprogramming of the model will be required.
Evaluation, Use, and Refinement of Knowledge Representations through Acquisition Modeling
ERIC Educational Resources Information Center
Pearl, Lisa
2017-01-01
Generative approaches to language have long recognized the natural link between theories of knowledge representation and theories of knowledge acquisition. The basic idea is that the knowledge representations provided by Universal Grammar enable children to acquire language as reliably as they do because these representations highlight the…
ERIC Educational Resources Information Center
Goodman, Kenneth S.; Goodman, Yetta M.
Research conducted to refine and perfect a theory and model of the reading process is presented in this report. Specifically, studies of the reading miscues of 96 students who were either speakers of English as a second language or of stable, rural dialects are detailed. Chapters deal with the following topics: methodology, the reading process,…
Refinement of Representation Theorems for Context-Free Languages
NASA Astrophysics Data System (ADS)
Fujioka, Kaoru
In this paper, we obtain refinements of representation theorems for context-free languages by using Dyck languages, insertion systems, strictly locally testable languages, and morphisms. For instance, we improve the Chomsky-Schützenberger representation theorem and show that each context-free language L can be represented in the form L = h (D ∩ R), where D is a Dyck language, R is a strictly 3-testable language, and h is a morphism. A similar representation for context-free languages can be obtained using insertion systems of weight (3, 0) and strictly 4-testable languages.
Transformation of Graphical ECA Policies into Executable PonderTalk Code
NASA Astrophysics Data System (ADS)
Romeikat, Raphael; Sinsel, Markus; Bauer, Bernhard
Rules are becoming more and more important in business modeling and systems engineering and are recognized as a high-level programming paradigm. For the effective development of rules it is desirable to start at a high level, e.g. with graphical rules, and to refine them later into code of a particular rule language for implementation purposes. A model-driven approach is presented in this paper to transform graphical rules into executable code in a fully automated way. The focus is on event-condition-action policies as a special rule type. These are modeled graphically and translated into the PonderTalk language. The approach may be extended to integrate other rule types and languages as well.
Specifying structural constraints of architectural patterns in the ARCHERY language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Alejandro; HASLab INESC TEC and Universidade do Minho, Campus de Gualtar, 4710-057 Braga; Barbosa, Luis S.
ARCHERY is an architectural description language for modelling and reasoning about distributed, heterogeneous and dynamically reconfigurable systems in terms of architectural patterns. The language supports the specification of architectures and their reconfiguration. This paper introduces a language extension for precisely describing the structural design decisions that pattern instances must respect in their (re)configurations. The extension is a propositional modal logic with recursion and nominals referencing components, i.e., a hybrid µ-calculus. Its expressiveness allows specifying safety and liveness constraints, as well as paths and cycles over structures. Refinements of classic architectural patterns are specified.
ERIC Educational Resources Information Center
Gao, Lianhong
2012-01-01
I conducted this study to provide insights toward deepening understanding of the association between culture and writing by building, assessing, and refining a conceptual model of second language writing. To do this, I examined culture and coherence, as well as the relationship between them, through a mixed methods research design. Coherence has been an…
Modeling Languages Refine Vehicle Design
NASA Technical Reports Server (NTRS)
2009-01-01
Cincinnati, Ohio's TechnoSoft Inc. is a leading provider of object-oriented modeling and simulation technology used for commercial and defense applications. With funding from Small Business Innovation Research (SBIR) contracts issued by Langley Research Center, the company continued development of its adaptive modeling language, or AML, originally created for the U.S. Air Force. TechnoSoft then created what is now known as its Integrated Design and Engineering Analysis Environment, or IDEA, which can be used to design a variety of vehicles and machinery. IDEA's customers include clients in green industries, such as designers of power plant exhaust filtration systems and wind turbines.
Formal language theory: refining the Chomsky hierarchy
Jäger, Gerhard; Rogers, James
2012-01-01
The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in the natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to the extant research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages). PMID:22688632
On two-point boundary correlations in the six-vertex model with domain wall boundary conditions
NASA Astrophysics Data System (ADS)
Colomo, F.; Pronko, A. G.
2005-05-01
The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.
Interactive natural language acquisition in a multi-modal recurrent neural architecture
NASA Astrophysics Data System (ADS)
Heinrich, Stefan; Wermter, Stefan
2018-01-01
For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we have not yet been able to understand the behavioural and mechanistic characteristics of natural language, or how mechanisms in the brain allow us to acquire and process it. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain, and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous time recurrent neural network in which parts have different leakage characteristics, and thus operate on multiple timescales, for every modality, with the higher-level nodes of all modalities associated into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
Tests of the E-Z Reader Model: Exploring the Interface between Cognition and Eye-Movement Control
ERIC Educational Resources Information Center
Pollatsek, Alexander; Reichle, Erik D.; Rayner, Keith
2006-01-01
This paper is simultaneously a test and refinement of the E-Z Reader model and an exploration of the interrelationship between visual and language processing and eye-movements in reading. Our modeling indicates that the assumption that words in text are processed serially by skilled readers is a viable and attractive hypothesis, as it accounts not…
The Use of a Block Diagram Simulation Language for Rapid Model Prototyping
NASA Technical Reports Server (NTRS)
Whitlow, Johnathan E.; Engrand, Peter
1996-01-01
The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models into PAT, the ability to predict flows and pressures in the orbiter section and to graphically display the results was accomplished.
Aphasia: Current Concepts in Theory and Practice
Tippett, Donna C.; Niparko, John K.; Hillis, Argye E.
2014-01-01
Recent advances in neuroimaging contribute new insights regarding brain-behavior relationships and expand understanding of the functional neuroanatomy of language. Modern concepts of the functional neuroanatomy of language invoke rich and complex models of language comprehension and expression, such as dual stream networks. Increasingly, aphasia is seen as a disruption of the cognitive processes underlying language. Rehabilitation of aphasia incorporates evidence-based and person-centered approaches. Novel techniques for delivering cortical brain stimulation to modulate cortical excitability, such as repetitive transcranial magnetic stimulation and transcranial direct current stimulation, are just beginning to be explored. In this review, we discuss the historical context of the foundations of neuroscientific approaches to language. We sample the emergent theoretical models of the neural substrates of language and of the cognitive processes underlying aphasia that contribute to more refined and nuanced concepts of language. Current concepts of aphasia rehabilitation are reviewed, including the promising role of cortical stimulation as an adjunct to behavioral therapy and changes in therapeutic approaches based on principles of neuroplasticity and evidence-based/person-centered practice to optimize functional outcomes. PMID:24904925
Modeling Coevolution between Language and Memory Capacity during Language Origin
Gong, Tao; Shuai, Lan
2015-01-01
Memory is essential to many cognitive tasks, including language. Apart from empirical studies of memory effects on language acquisition and use, there has been little evolutionary exploration of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocate a coevolution scenario between language and memory capacity, which incorporates the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution, which stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborates the biology-culture parallelism of language evolution, demonstrates the driving force of culturally-constituted factors in the natural selection of individual cognitive abilities, and suggests that the difference in degree of language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language. PMID:26544876
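The mechanics of such a model can be illustrated with a deliberately small toy simulation (written for this summary, not the authors' implementation; population size, meaning inventory, mutation scheme, and all numbers are invented): memory capacity is inherited with mutation, lexicons spread culturally, and selection acts on communicative success.

```python
import random

MEANINGS = range(50)                    # toy meaning inventory

class Agent:
    def __init__(self, capacity):
        self.capacity = capacity        # genetically transmitted trait
        self.lexicon = {}               # culturally transmitted meaning -> word

    def learn(self, meaning, word):
        # Memory capacity caps how much linguistic knowledge is stored.
        if meaning in self.lexicon or len(self.lexicon) < self.capacity:
            self.lexicon[meaning] = word

def communicate(speaker, hearer):
    m = random.choice(MEANINGS)
    if m not in speaker.lexicon:
        speaker.learn(m, f"w{m}x{random.randint(0, 9)}")  # coin a new word
        return False
    word = speaker.lexicon[m]
    success = hearer.lexicon.get(m) == word
    hearer.learn(m, word)               # hearer may adopt the speaker's form
    return success

pop = [Agent(random.randint(2, 8)) for _ in range(60)]
for gen in range(50):
    score = {id(a): 0 for a in pop}
    for _ in range(3000):               # iterated communications
        s, h = random.sample(pop, 2)
        score[id(s)] += communicate(s, h)
    pop.sort(key=lambda a: score[id(a)], reverse=True)
    parents = pop[:30]                  # selection on communicative success
    pop = [Agent(max(1, p.capacity + random.choice((-1, 0, 1))))
           for p in parents for _ in range(2)]   # inherited, mutated capacity
    if gen % 10 == 0:
        print(gen, sum(a.capacity for a in pop) / len(pop))
```

In this toy, average capacity drifts upward because agents with larger lexicons attempt more decodable utterances, mirroring in miniature the selection dynamic the abstract describes.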
Re-visiting the electrophysiology of language.
Obleser, Jonas
2015-09-01
This editorial accompanies a special issue of Brain and Language re-visiting old themes and new leads in the electrophysiology of language. The event-related potential (ERP), as a series of characteristic deflections ("components") over time and their distribution on the scalp, has been exploited by speech and language researchers for decades to find support for diverse psycholinguistic models. Fortunately, methodological and statistical advances have allowed human neuroscience to move beyond some of the limitations imposed when looking at the ERP only. Most importantly, we currently witness a refined and refreshed look at "event-related" (in the literal sense) brain activity that relates more closely to the actual neurobiology of speech and language processes. It is this imminent change in handling and interpreting electrophysiological data of speech and language experiments that this special issue intends to capture.
Technology and Second Language Learning
ERIC Educational Resources Information Center
Lin, Li Li
2009-01-01
Current technology provides new opportunities to increase the effectiveness of language learning and teaching. Incorporating well-organized and effective technology into second language learning and teaching for improving students' language proficiency has been refined by researchers and educators for many decades. Based on the rapidly changing…
75 FR 66761 - Agency Information Collection Activities: Final Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-29
... strengthen sanctions against Iran, the Act contains language prohibiting Ex-Im Bank from: Authoriz[ing] any... refiner that continues to: (A) Provide Iran with significant refined petroleum resources; (B) materially contribute to Iran's capability to import refined petroleum resources; or (C) allow Iran to maintain or...
Clustering Words to Match Conditions: An Algorithm for Stimuli Selection in Factorial Designs
ERIC Educational Resources Information Center
Guasch, Marc; Haro, Juan; Boada, Roger
2017-01-01
With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words into dozens of other confounding variables…
Multilingualism, Language Policy and Creative Writing in Kenya
ERIC Educational Resources Information Center
Mbithi, Esther K
2014-01-01
Language use and creative writing go hand in hand. In the process of exploring language, we also engage in the study of literature. An engagement with literature is, indeed, a continuing process of improving our capacity to use language and refining our sensibility to good language use. In Kenya, there are clearly discernible patterns of creative…
Languages of Grief: a model for understanding the expressions of the bereaved
Corless, Inge B.; Limbo, Rana; Bousso, Regina Szylit; Wrenn, Robert L.; Head, David; Lickiss, Norelle; Wass, Hannelore
2014-01-01
The aim of this work is to provide an overview of the key features of the expressions of grief. Grief is a response to loss or anticipated loss. Although universal, its oral and nonverbal expression varies across cultures and individuals. Loss is produced by an event perceived to be negative to varying degrees by the individuals involved and has the potential to trigger long-term changes in a person's cognitions and relationships. The languages used by the bereaved to express grief differ from the language used by professionals, creating dissonance between the two. Data were obtained from English language Medline and CINAHL databases, from professional and personal experiences, interviews with experts, and exploration of cemetery memorials. Blog websites and social networks provided additional materials for further refinement of the model. Content analysis of the materials and agreement by the authors as to the themes resulted in the development of the model. To bridge the gap between professional language and that used by the bereaved, a Languages of Grief model was developed consisting of four Modes of Expression, four Types of Language, plus three Contingent Factors. The Languages of Grief provides a framework for comprehending the grief of the individual, contributing to clinical understanding, and fruitful exploration by professionals in better understanding the use of languages by the bereaved. Attention to the Modes of Expression, Types of Language, and Contingent Factors provides the professional with a richer understanding of the grieving individual, a step in providing appropriate support to the bereaved. The Languages of Grief provides a framework for application to discrete occurrences with the goal of understanding grief from the perspective of the bereaved. PMID:25750773
Language Comprehension vs. Language Production: Age Effects on fMRI Activation
ERIC Educational Resources Information Center
Lidzba, Karen; Schwilling, Eleonore; Grodd, Wolfgang; Krageloh-Mann, Inge; Wilke, Marko
2011-01-01
Normal language acquisition is a process that unfolds with amazing speed primarily in the first years of life. However, the refinement of linguistic proficiency is an ongoing process, extending well into childhood and adolescence. An increase in lateralization and a more focussed productive language network have been suggested to be the neural…
ERIC Educational Resources Information Center
Zheng, Dongping
2012-01-01
This study provides concrete evidence of ecological, dialogical views of languaging within the dynamics of coordination and cooperation in a virtual world. Beginning level second language learners of Chinese engaged in cooperative activities designed to provide them opportunities to refine linguistic actions by way of caring for others, for the…
Compositional Abstraction and Refinement for Aspects (CARA)
2004-03-01
tight. (See The SAL Language Manual by Leonardo de Moura, Sam Owre, and N. Shankar, available as [9].) The heart of the SAL system is its language, also ... called SAL. The SAL language provides an attractive language for writing specifications, and it is also suitable as a target for translating ... key part of the SAL framework is a language for describing transition systems. This language serves as a specification language and as the target for
Oral Language: Expression of Thought.
ERIC Educational Resources Information Center
Anastasiow, Nicholas
A child's language reflects his thought processes and his level of development. Motor, emotional, and language development all have a direct relationship to the child's cognitive functioning--each follows the pattern of moving from gross and loosely differentiated states to refined and differentiated systems. Research in early childhood education…
Levine, Dani; Strother-Garcia, Kristina; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy
2016-02-01
Language development is a multifaceted, dynamic process involving the discovery of complex patterns and the refinement of native language competencies in the context of communicative interactions. This process is already advanced by the end of the first year of life for hearing children, but prelingually deaf children who initially lack a language model may miss critical experiences during this early window. The purpose of this review is twofold. First, we examine the published literature on language development during the first 12 months in typically developing children. Second, we use this literature to inform our understanding of the language outcomes of prelingually deaf children who receive cochlear implants (CIs), and therefore language input, either before or after the first year. During the first 12 months, typically developing infants exhibit advances in speech segmentation, word learning, syntax acquisition, and communication, both verbal and nonverbal. Infants and their caregivers co-construct a communication foundation during this time, supporting continued language growth. The language outcomes of hearing children are robustly predicted by their experiences and acquired competencies during the first year; yet these predictive links are absent among prelingually deaf infants lacking a language model (i.e., those without exposure to sign). For deaf infants who receive a CI, implantation timing is crucial. Children receiving CIs before 12 months frequently catch up with their typically developing peers, whereas those receiving CIs later do not. Explanations for the language difficulties of late-implanted children are discussed.
Temple, Michael W; Lehmann, Christoph U; Fabbri, Daniel
2016-01-01
Discharging patients from the Neonatal Intensive Care Unit (NICU) can be delayed for non-medical reasons, including the procurement of home medical equipment, parental education, and the need for children's services. We previously created a model to identify patients who will be medically ready for discharge in the subsequent 2-10 days. In this study we use Natural Language Processing to improve upon that model and discern why the model performed poorly on certain patients. We retrospectively examined the text of the Assessment and Plan section from daily progress notes of 4,693 patients (103,206 patient-days) from the NICU of a large, academic children's hospital. A matrix was constructed using words from NICU notes (single words and bigrams) to train a supervised machine learning algorithm to determine the most important words differentiating poorly performing patients from well performing patients in our original discharge prediction model. NLP using a bag of words (BOW) analysis revealed several cohorts that performed poorly in our original model. These included patients with surgical diagnoses, pulmonary hypertension, retinopathy of prematurity, and psychosocial issues. The BOW approach aided in cohort discovery and will allow further refinement of our original discharge prediction model. Adequately identifying patients discharged home on g-tube feeds alone could improve the AUC of our original model by 0.02. Additionally, this approach identified social issues as a major cause of delayed discharge. A BOW analysis provides a method to improve and refine our NICU discharge prediction model and could potentially avoid over 900 (0.9%) hospital days.
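A minimal sketch of this bag-of-words error analysis, assuming scikit-learn and using placeholder notes and labels rather than the study's data: unigrams and bigrams from progress notes train a classifier that separates poorly predicted from well predicted patient-days, and the largest coefficients surface candidate cohorts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder notes and labels; 1 = the original discharge model
# performed poorly on this patient-day.
notes = ["s/p gastrostomy tube placement, advancing feeds",
         "stable on room air, full po feeds, discharge planning",
         "pulmonary hypertension, sildenafil continued",
         "social work following, housing concerns"]
poor_fit = [1, 0, 1, 1]

vec = CountVectorizer(ngram_range=(1, 2))        # single words and bigrams
X = vec.fit_transform(notes)
clf = LogisticRegression().fit(X, poor_fit)

# Terms most indicative of a poorly predicted patient-day.
ranked = sorted(zip(clf.coef_[0], vec.get_feature_names_out()), reverse=True)
print(ranked[:10])
```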
Language Arts Resource Guide K-12.
ERIC Educational Resources Information Center
Pitman, John C.
One of a series designed to help districts refine and upgrade their current curricular offerings, this resource guide deals with the development of a unified K-12 language arts curriculum that combines the four major language arts components of listening, speaking, reading, and writing. Following a brief foreword and list of acknowledgments, the…
The Prediction of Success in Intensive Foreign Language Training.
ERIC Educational Resources Information Center
Carroll, John B.
After a review of the problem of predicting foreign language success, this booklet describes the development, refinement, and validation of a battery of psychological tests, some involving tape-recorded auditory stimuli, for predicting rate of progress in learning a foreign language. Although the battery was developed for more general application…
Deviation of Zipf's and Heaps' Laws in Human Languages with Limited Dictionary Sizes
Lü, Linyuan; Zhang, Zi-Ke; Zhou, Tao
2013-01-01
Zipf's law on word frequency and Heaps' law on the growth of distinct words are observed in the Indo-European language family, but they do not hold for languages like Chinese, Japanese and Korean. These languages consist of characters and have very limited dictionary sizes. Extensive experiments show that: (i) the character frequency distribution follows a power law with exponent close to one, at which the corresponding Zipf's exponent diverges; indeed, the character frequency decays exponentially in the Zipf's plot; (ii) the number of distinct characters grows with the text length in three stages: it grows linearly in the beginning, then turns to a logarithmic form, and eventually saturates. A theoretical model for the writing process is proposed, which embodies the rich-get-richer mechanism and the effects of limited dictionary size. Experiments, simulations and analytical solutions agree well with each other. This work refines the understanding of Zipf's and Heaps' laws in human language systems. PMID:23378896
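The two measurements behind these laws are easy to reproduce on any corpus; the sketch below (written for this summary, with the script reading itself as a stand-in corpus) computes rank-frequency pairs for a Zipf plot and vocabulary growth for a Heaps plot, over characters or words as appropriate.

```python
from collections import Counter

def zipf_ranks(units):
    # (rank, frequency) pairs for the rank-frequency (Zipf) plot.
    counts = Counter(units).most_common()
    return [(rank, freq) for rank, (_, freq) in enumerate(counts, 1)]

def heaps_curve(units, step=200):
    # (text length, distinct units seen) points for the Heaps plot.
    seen, curve = set(), []
    for i, u in enumerate(units, 1):
        seen.add(u)
        if i % step == 0:
            curve.append((i, len(seen)))
    return curve

# The script reads itself as a stand-in corpus; substitute real text.
# Iterate over the string for characters, or over text.split() for words.
text = open(__file__, encoding="utf-8").read()
print(zipf_ranks(text)[:5])
print(heaps_curve(text)[:5])
```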
Individual differences in online spoken word recognition: Implications for SLI
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
2012-01-01
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at the sensory and phonological levels, vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
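The role of the decay parameter can be illustrated schematically (this is not the TRACE implementation, and all numbers are invented): activation accumulates from bottom-up support and leaks at a rate set by decay, so faster decay yields weaker target activation and leaves more room for cohort and rhyme competitors.

```python
def activate(support, decay, steps=30, gain=0.3):
    """Activation trajectory of one lexical unit under leaky accumulation."""
    a, traj = 0.0, []
    for _ in range(steps):
        a = (1 - decay) * a + gain * support   # excitation minus decay
        traj.append(a)
    return traj

slow = activate(support=1.0, decay=0.1)   # settles near gain/decay = 3.0
fast = activate(support=1.0, decay=0.4)   # settles near 0.75
print(f"slow-decay asymptote ~ {slow[-1]:.2f}, fast-decay ~ {fast[-1]:.2f}")
```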
Morse, Anthony F; Cangelosi, Angelo
2017-02-01
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al., 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
Bayesian molecular design with a chemical language model
NASA Astrophysics Data System (ADS)
Ikebata, Hisaki; Hongo, Kenta; Isomura, Tetsu; Maezono, Ryo; Yoshida, Ryo
2017-04-01
The aim of computational molecular design is the identification of promising hypothetical molecules with a predefined set of desired properties. We address the issue of accelerating materials discovery with state-of-the-art machine learning techniques. The method involves two different types of prediction: the forward and the backward prediction. The objective of the forward prediction is to create a set of machine learning models for various properties of a given molecule. Inverting the trained forward models through Bayes' law, we derive a posterior distribution for the backward prediction, which is conditioned on a desired property requirement. Exploring high-probability regions of the posterior with a sequential Monte Carlo technique, molecules that exhibit the desired properties can be computationally created. One major difficulty in the computational creation of molecules is excluding chemically unfavorable structures. To circumvent this issue, we derive a chemical language model that acquires commonly occurring patterns of chemical fragments through natural language processing of ASCII strings of existing compounds, which follow the SMILES chemical language notation. In the backward prediction, the trained language model is used to refine chemical strings such that the properties of the resulting structures fall within the desired property region while chemically unfavorable structures are successfully removed. The present method is demonstrated through the design of small organic molecules with property requirements on the HOMO-LUMO gap and internal energy. The R package iqspr is available at the CRAN repository.
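As a rough, much-simplified stand-in for such a chemical language model, the sketch below trains a character n-gram model on a handful of placeholder SMILES strings and samples new candidate strings; the paper's model is more sophisticated, and a real pipeline would filter chemically invalid outputs (e.g. with RDKit).

```python
import random
from collections import Counter, defaultdict

train = ["CCO", "CC(=O)O", "c1ccccc1", "CC(C)O", "CCN(CC)CC"]  # placeholders

N = 3                                   # character trigram model
counts = defaultdict(Counter)
for s in train:
    padded = "^" * (N - 1) + s + "$"    # start/stop markers
    for i in range(len(padded) - N + 1):
        counts[padded[i:i + N - 1]][padded[i + N - 1]] += 1

def sample(max_len=40):
    ctx, out = "^" * (N - 1), []
    while len(out) < max_len:
        nxt = counts.get(ctx)
        if not nxt:
            break
        c = random.choices(list(nxt), weights=nxt.values())[0]
        if c == "$":                    # stop symbol
            break
        out.append(c)
        ctx = (ctx + c)[-(N - 1):]
    return "".join(out)

# Candidate strings; a real pipeline keeps only chemically valid ones.
print([sample() for _ in range(5)])
```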
MAVEN-SA: Model-Based Automated Visualization for Enhanced Situation Awareness
2005-11-01
methods. But historically, as arts evolve, these how-to methods become systematized and codified (e.g. the development and refinement of color theory) ... schema (as necessary); 3. Draw inferences from new knowledge to support the decision-making process ... Visual language theory suggests that humans process ... informed by theories of learning. Over the years, many types of software have been developed to support student learning. The various types of
NASA Astrophysics Data System (ADS)
Selvy, Brian M.; Claver, Charles; Willman, Beth; Petravick, Don; Johnson, Margaret; Reil, Kevin; Marshall, Stuart; Thomas, Sandrine; Lotz, Paul; Schumacher, German; Lim, Kian-Tat; Jenness, Tim; Jacoby, Suzanne; Emmons, Ben; Axelrod, Tim
2016-08-01
We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST's Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders' needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they result in self-consistent outcomes, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and the Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language, including a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to the organizational entities tasked with carrying them out; and 4) the organization structure required to successfully execute the operational survey. Our approach allows for continual refinement, utilizing the systems engineering spiral method to expose finer levels of detail as necessary. For example, the bottom-up, Use Case-driven approach will be deployed in the future to develop the detailed work procedures required to successfully execute each operational activity.
Using hybridization networks to retrace the evolution of Indo-European languages.
Willems, Matthieu; Lord, Etienne; Laforest, Louise; Labelle, Gilbert; Lapointe, François-Joseph; Di Sciullo, Anna Maria; Makarenkov, Vladimir
2016-09-06
Curious parallels between the processes of species and language evolution have been observed by many researchers. Retracing the evolution of Indo-European (IE) languages remains one of the most intriguing intellectual challenges in historical linguistics. Most IE language studies use the traditional phylogenetic tree model to represent the evolution of natural languages, thus not taking into account reticulate evolutionary events, such as language hybridization and word borrowing, which can be associated with species hybridization and horizontal gene transfer, respectively. More recently, implicit evolutionary networks, such as split graphs and minimal lateral networks, have been used to account for reticulate evolution in linguistics. The striking parallels between the evolution of species and natural languages allowed us to apply three computational biology methods for the reconstruction of phylogenetic networks to model the evolution of IE languages. We show how the transfer of methods between the two disciplines can be achieved, making the necessary methodological adaptations. Considering basic vocabulary data from the well-known Dyen lexical database, which contains word forms in 84 IE languages for the meanings of a 200-meaning Swadesh list, we adapt a recently developed computational biology algorithm for building explicit hybridization networks to study the evolution of IE languages, and compare our findings to the results provided by the split graph and galled network methods. We conclude that explicit phylogenetic networks can be successfully used to identify donors and recipients of lexical material as well as the degree of influence of each donor language on the corresponding recipient languages. We show that our algorithm is well suited to detect reticulate relationships among languages, and present some historical and linguistic justification for the results obtained. Our findings could be further refined if relevant syntactic, phonological and morphological data could be analyzed along with the available lexical data.
Introduction to the special issue: parsimony and redundancy in models of language.
Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian
2013-09-01
One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or the evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
Structural Validation of Nursing Terminologies
Hardiker, Nicholas R.; Rector, Alan L.
2001-01-01
Objective: The purpose of the study is twofold: 1) to explore the applicability of combinatorial terminologies as the basis for building enumerated classifications, and 2) to investigate the usefulness of formal terminological systems for performing such classification and for assisting in the refinement of both combinatorial terminologies and enumerated classifications. Design: A formal model of the beta version of the International Classification for Nursing Practice (ICNP) was constructed in the compositional terminological language GRAIL (GALEN Representation and Integration Language). Terms drawn from the North American Nursing Diagnosis Association Taxonomy I (NANDA taxonomy) were mapped into the model and classified automatically using GALEN technology. Measurements: The resulting generated hierarchy was compared with the NANDA taxonomy to assess coverage and accuracy of classification. Results: In terms of coverage, in this study ICNP was able to capture 77 percent of NANDA terms using concepts drawn from five of its eight axes. Three axes—Body Site, Topology, and Frequency—were not needed. In terms of accuracy, where hierarchic relationships existed in the generated hierarchy or the NANDA taxonomy, or both, 6 were identical, 19 existed in the generated hierarchy alone (2 of these were considered suitable for incorporation into the NANDA taxonomy and 17 were considered inaccurate), and 23 appeared in the NANDA taxonomy alone (8 of these were considered suitable for incorporation into ICNP, 9 were considered inaccurate, and 6 reflected different, equally valid perspectives). Sixty terms appeared at the top level, with no indenting, in both the generated hierarchy and the NANDA taxonomy. Conclusions: With appropriate refinement, combinatorial terminologies such as ICNP have the potential to provide a useful foundation for representing enumerated classifications such as NANDA. Technologies such as GALEN make possible the process of building automatically enumerated classifications while providing a useful means of validating and refining both combinatorial terminologies and enumerated classifications. PMID:11320066
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2013-02-01
This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed, conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
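The flavour of such parameter-driven rules can be suggested with a small sketch (in Python rather than ArchiCAD's GDL, with invented proportion values rather than the paper's pattern-book rules): one parameter change regenerates every window opening on the façade.

```python
from dataclasses import dataclass

@dataclass
class Facade:
    width: float                 # overall facade width (m)
    height: float                # overall facade height (m)
    storeys: int
    bays: int                    # window bays per storey
    window_ratio: float = 0.45   # window width / bay width (invented value)

    def windows(self):
        bay_w = self.width / self.bays
        floor_h = self.height / self.storeys
        w, h = bay_w * self.window_ratio, floor_h * 0.55
        for s in range(self.storeys):
            for b in range(self.bays):
                x = b * bay_w + (bay_w - w) / 2    # centred in its bay
                y = s * floor_h + floor_h * 0.25   # sill offset per storey
                yield round(x, 2), round(y, 2), round(w, 2), round(h, 2)

# One parameter change (e.g. bays=5) regenerates the whole configuration.
for rect in Facade(width=12, height=9, storeys=3, bays=4).windows():
    print(rect)   # (x, y, width, height) of each window opening
```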
ERIC Educational Resources Information Center
Iverson, Jana M.
2010-01-01
During the first eighteen months of life, infants acquire and refine a whole set of new motor skills that significantly change the ways in which the body moves in and interacts with the environment. In this review article, I argue that motor acquisitions provide infants with an opportunity to practice skills relevant to language acquisition before…
Refining the Use of the Web (and Web Search) as a Language Teaching and Learning Resource
ERIC Educational Resources Information Center
Wu, Shaoqun; Franken, Margaret; Witten, Ian H.
2009-01-01
The web is a potentially useful corpus for language study because it provides examples of language that are contextualized and authentic, and is large and easily searchable. However, web contents are heterogeneous in the extreme, uncontrolled and hence "dirty," and exhibit features different from the written and spoken texts in other linguistic…
Semi-Automated Methods for Refining a Domain-Specific Terminology Base
2011-02-01
only as a resource for written and oral translation, but also for Natural Language Processing (NLP) applications, text retrieval, document indexing, and other knowledge management tasks. The objective of this ... The National
Measures of lexical distance between languages
NASA Astrophysics Data System (ADS)
Petroni, Filippo; Serva, Maurizio
2010-06-01
The idea of measuring distance between languages seems to have its roots in the work of the French explorer Dumont D'Urville (1832) [13]. He collected comparative word lists for various languages during his voyages aboard the Astrolabe from 1826 to 1829 and, in his work concerning the geographical division of the Pacific, proposed a method for measuring the degree of relation among languages. The method used by modern glottochronology, developed by Morris Swadesh in the 1950s, measures distances from the percentage of shared cognates, which are words with a common historical origin. Recently, we proposed a new automated method which uses the normalized Levenshtein distance among words with the same meaning and averages over the words contained in a list. Another group of scholars, Bakker et al. (2009) [8] and Holman et al. (2008) [9], proposed a refined version of our definition, including a second normalization. In this paper we compare the information content of our definition with the refined version in order to decide which of the two can be applied with greater success to resolve relationships among languages.
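The automated method described above is straightforward to sketch (the word lists here are tiny placeholders; the authors average over 200-meaning Swadesh lists): each same-meaning pair contributes a Levenshtein distance normalized by the longer word's length, and the language distance is the average.

```python
def lev(a, b):
    """Levenshtein edit distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def language_distance(words_a, words_b):
    # Normalize each same-meaning pair by the longer word, then average.
    d = [lev(a, b) / max(len(a), len(b)) for a, b in zip(words_a, words_b)]
    return sum(d) / len(d)

italian = ["acqua", "cane", "notte", "sole"]   # placeholder 4-item lists
spanish = ["agua", "perro", "noche", "sol"]
print(f"{language_distance(italian, spanish):.3f}")
```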
CITE NLM: Natural-Language Searching in an Online Catalog.
ERIC Educational Resources Information Center
Doszkocs, Tamas E.
1983-01-01
The National Library of Medicine's Current Information Transfer in English public access online catalog offers unique subject search capabilities--natural-language query input, automatic medical subject headings display, closest match search strategy, ranked document output, dynamic end user feedback for search refinement. References, description…
STRCMACS: An extensive set of Macros for structured programming in OS/360 assembly language
NASA Technical Reports Server (NTRS)
Barth, C. W.
1974-01-01
Two techniques are discussed that have been most often referred to as structured programming. One is that of programming with high level control structures (such as the if and while) replacing the branch instruction (goto-less programming); the other is the process of developing a program by progressively refining descriptions of components in terms of more primitive components (called stepwise refinement or top-down programming). In addition to discussing what these techniques are, it is shown why their use is advised and how both can be implemented in OS assembly language by the use of a special macro instruction package.
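Both techniques are language-independent; as a rough illustration (in Python rather than OS/360 assembly, so purely a sketch of the ideas, not of the macro package), structured control flow replaces branch instructions, and the top-level routine is written first in terms of components that are refined afterwards.

```python
# Illustrative only: the two techniques shown in Python rather than assembler.
# Top-down: the top-level routine is written first in terms of components
# that are not yet refined.
def process_records(records):
    for record in records:            # structured loop instead of a branch/goto
        if is_valid(record):          # structured selection (an IF-macro analogue)
            emit(transform(record))

# Stepwise refinement: each component is later defined in terms of primitives.
def is_valid(record):  return bool(record.strip())
def transform(record): return record.upper()
def emit(record):      print(record)

process_records(["alpha", "", "beta"])
```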
Musical emotions: Functions, origins, evolution
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid
2010-03-01
Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in psychological and neuroimaging research are reviewed.
Interacting domain-specific languages with biological problem solving environments
NASA Astrophysics Data System (ADS)
Cickovski, Trevor M.
Iteratively developing a biological model and verifying results with lab observations has become standard practice in computational biology. This process is currently facilitated by biological Problem Solving Environments (PSEs), multi-tiered and modular software frameworks which traditionally consist of two layers: a computational layer written in a high-level language using design patterns, and a user interface layer which hides its details. Although PSEs have proven effective, they still impose communication overhead between biologists refining their models through repeated comparison with experimental observations in vitro or in vivo, and programmers actually implementing model extensions and modifications within the computational layer. I illustrate the use of biological Domain-Specific Languages (DSLs) as a middle-level PSE tier to ameliorate this problem by providing experimentalists with the ability to iteratively test and develop their models with a higher degree of expressive power than a graphical interface offers, without requiring general-purpose programming knowledge. I develop two radically different biological DSLs: XML-based BIOLOGO models biological morphogenesis using a cell-centered stochastic cellular automaton and translates into C++ modules for the object-oriented PSE COMPUCELL3D, and MDLab provides a set of high-level Python libraries for running molecular dynamics simulations, using wrapped functionality from the C++ PSE PROTOMOL. I describe each language in detail, including its role within the larger PSE, its expressibility in terms of representable phenomena, and observations from users of the languages. Moreover, I use these studies to draw general conclusions about biological DSL development, including dependencies upon the goals of the corresponding PSE, strategies, and tradeoffs.
Silva, Wanderson Roberto; Costa, David; Pimenta, Filipa; Maroco, João; Campos, Juliana Alvares Duarte Bonini
2016-07-21
The objectives of this study were to develop a unified Portuguese-language version, for use in Brazil and Portugal, of the Body Shape Questionnaire (BSQ) and to estimate its validity, reliability, and internal consistency in Brazilian and Portuguese female university students. Confirmatory factor analysis was performed using both the original (34-item) and shortened (8-item) versions. The model's fit was assessed with χ²/df, CFI, NFI, and RMSEA. Concurrent and convergent validity were assessed. Reliability was estimated through internal consistency and composite reliability (α). Transnational invariance of the BSQ was tested using multi-group analysis. The original 34-item model was refined to achieve a better fit and adequate validity and reliability. The shortened model was stable in both independent samples and in transnational samples (Brazil and Portugal). The use of this unified version is recommended for the assessment of body shape concerns in both Brazilian and Portuguese college students.
Effects of linguistic experience on early levels of perceptual tone processing
NASA Astrophysics Data System (ADS)
Huang, Tsan; Johnson, Keith
2005-04-01
This study investigated the phenomenon of language-specificity in Mandarin Chinese tone perception. The main question was whether linguistic experience affects the earliest levels of perceptual processing of tones. Chinese and American English listeners participated in four perception experiments, which involved short inter-stimulus intervals (300 ms or 100 ms) and an AX discrimination or AX degree-of-difference rating task. Three experiments used natural speech monosyllabic tone stimuli and one experiment used time-varying sinusoidal simulations of Mandarin tones. AE listeners showed psychoacoustic listening in all experiments, paying much attention to onset and offset pitch. Chinese listeners showed language-specific patterns in all experiments to various degrees, where tonal neutralization rules reduced perceptual distance between two otherwise contrastive tones for Chinese listeners. Since these experiments employed procedures hypothesized to tap the auditory trace mode [Pisoni, Percept. Psychophys. 13, 253-260 (1973)], the language-specificity found in this study seems to support the proposal of an auditory cortical map [Guenther et al., J. Acoust. Soc. Am. 23, 213-221 (1999)]. But the model needs refining to account for different degrees of language-specificity, which are better handled by Johnson's (2004, TLS 03:26-41) lexical distance model, although the latter model is too rigid in assuming that linguistic experience does not affect low-level perceptual tasks such as AX discrimination with short ISIs.
1992-12-01
…describing how. 5. EDDA. EDDA is an attempt to add mathematical formalism to SADT. Because it is based on SADT, it cannot easily represent any other design methodology. EDDA has two forms: G-EDDA, the standard graphical version of SADT, and S-EDDA, a textual language that partially represents the… EDDA only supports the SADT methodology and is too limited in scope to be useful in our research. SAMM lacks the semantic richness of…
The application of NASCAD as a NASTRAN pre- and post-processor
NASA Technical Reports Server (NTRS)
Peltzman, Alan N.
1987-01-01
The NASA Computer Aided Design (NASCAD) graphics package provides an effective way to interactively create, view, and refine analytic data models. NASCAD's macro language, combined with its powerful 3-D geometric data base, allows the user important flexibility and speed in constructing his model. This flexibility has the added benefit of enabling the user to keep pace with any new NASTRAN developments. NASCAD allows models to be conveniently viewed and plotted to best advantage in both pre- and post-process phases of development, providing useful visual feedback to the analysis process. NASCAD, used as a graphics complement to NASTRAN, can play a valuable role in the process of finite element modeling.
Cheng, Bing; Zhang, Yang
2015-01-01
The present study investigated how syllable structure differences between the first language (L1) and the second language (L2) affect L2 consonant perception and production at syllable-initial and syllable-final positions. The participants were Mandarin-speaking college students who studied English as a second language. Monosyllabic English words were used in the perception test. Production was recorded from each Chinese subject and rated for accentedness by two native speakers of English. Consistent with previous studies, significant positional asymmetry effects were found across speech sound categories in terms of voicing, place of articulation, and manner of articulation. Furthermore, significant correlations between perception and accentedness ratings were found at the syllable onset position but not for the coda. Many exceptions were also found, which could not be solely accounted for by differences in L1–L2 syllabic structures. The results show a strong effect of language experience at the syllable level, which joins forces with acoustic, phonetic, and phonemic properties of individual consonants in influencing positional asymmetry in both domains of L2 segmental perception and production. The complexities and exceptions call for further systematic studies on the interactions between syllable structure universals and native language interference with refined theoretical models to specify the links between perception and production in second language acquisition. PMID:26635699
Poll, Gerard H; Miller, Carol A; Mainela-Arnold, Elina; Adams, Katharine Donnelly; Misra, Maya; Park, Ji Sook
2013-01-01
More limited working memory capacity and slower processing for language and cognitive tasks are characteristics of many children with language difficulties. Individual differences in processing speed have not consistently been found to predict language ability or severity of language impairment. There are conflicting views on whether working memory and processing speed are integrated or separable abilities. To evaluate four models for the relations of individual differences in children's processing speed and working memory capacity in sentence imitation. The models considered whether working memory and processing speed are integrated or separable, as well as the effect of the number of operations required per sentence. The role of working memory as a mediator of the effect of processing speed on sentence imitation was also evaluated. Forty-six children with varied language and reading abilities imitated sentences. Working memory was measured with the Competing Language Processing Task (CLPT), and processing speed was measured with a composite of truth-value judgment and rapid automatized naming tasks. Mixed-effects ordinal regression models evaluated the CLPT and processing speed as predictors of sentence imitation item scores. A single mediator model evaluated working memory as a mediator of the effect of processing speed on sentence imitation total scores. Working memory was a reliable predictor of sentence imitation accuracy, but processing speed predicted sentence imitation only as a component of a processing speed by number of operations interaction. Processing speed predicted working memory capacity, and there was evidence that working memory acted as a mediator of the effect of processing speed on sentence imitation accuracy. The findings support a refined view of working memory and processing speed as separable factors in children's sentence imitation performance. Processing speed does not independently explain sentence imitation accuracy for all sentence types, but contributes when the task requires more mental operations. Processing speed also has an indirect effect on sentence imitation by contributing to working memory capacity. © 2013 Royal College of Speech and Language Therapists.
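The mediation logic can be made concrete. The sketch below is not the authors' mixed-effects ordinal regression; it is a generic single-mediator, product-of-coefficients analysis on synthetic data (all variable names and effect sizes are invented), assuming numpy and statsmodels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
speed = rng.normal(size=n)                                 # processing speed (X)
wm = 0.5 * speed + rng.normal(size=n)                      # working memory (M), synthetic
imitation = 0.6 * wm + 0.1 * speed + rng.normal(size=n)    # sentence imitation (Y)

# Path a: X -> M
a_fit = sm.OLS(wm, sm.add_constant(speed)).fit()
# Path b: M -> Y, controlling for X
b_fit = sm.OLS(imitation, sm.add_constant(np.column_stack([speed, wm]))).fit()

a, b = a_fit.params[1], b_fit.params[2]
indirect = a * b                                           # mediated (indirect) effect
sobel_se = np.sqrt(a**2 * b_fit.bse[2]**2 + b**2 * a_fit.bse[1]**2)
print(f"indirect effect = {indirect:.3f}, Sobel z = {indirect / sobel_se:.2f}")
```

A nonzero indirect effect alongside a weak direct path is the pattern the abstract describes: speed contributes to imitation largely through its contribution to working memory capacity.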
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
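As a software point of reference for what the hardware computes, the sketch below estimates a global projective model (homography) with RANSAC using OpenCV on synthetic correspondences. The paper's four-submodel decomposition and fixed-point design are not reproduced here; this only shows the model-plus-outlier-rejection step.

```python
import cv2
import numpy as np

# Synthetic correspondences: points in the input image and their positions in
# the reference image under a known projective transform, plus a few gross
# outliers standing in for false matches.
H_true = np.array([[1.0,  0.02,  5.0],
                   [0.01, 1.0,  -3.0],
                   [1e-4, 2e-4,  1.0]])
src = np.random.default_rng(1).uniform(0, 500, size=(60, 2)).astype(np.float32)
src_h = np.hstack([src, np.ones((60, 1), np.float32)])
dst = src_h @ H_true.T
dst = (dst[:, :2] / dst[:, 2:]).astype(np.float32)
dst[:5] += 80                       # corrupt five matches (outliers)

# Estimate the projective model with RANSAC; `mask` flags the inliers,
# i.e. the false matches are rejected rather than fitted.
H_est, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print(np.round(H_est, 3))
print(f"inliers kept: {int(mask.sum())}/60")
```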
Modifiers: Increasing Richness and Nuance of Design Pattern Languages
NASA Astrophysics Data System (ADS)
Kolfschoten, Gwendolyn L.; Briggs, Robert O.; Lukosch, Stephan
One of the challenges when establishing and maintaining a pattern language is to balance richness with simplicity. On the one hand, designers need a variety of useful design patterns to increase the speed of their design efforts and to reduce design risk. On the other hand, the greater the variety of design patterns in a language, the higher the cognitive load to remember and select among them. One solution to this problem is the concept of a modifier design pattern, a design pattern for pattern languages. A modifier pattern is a named, documented variation that can be applied to some set of other design patterns. They create similar, useful changes and refinements to the solutions derived from any pattern to which they are applied. The modifier concept, described in this paper emerged in a relatively new design pattern language for collaborative work practices in which the design patterns are called thinkLets. When analyzing the thinkLet pattern language, we found that many of the patterns we knew were variations and refinements of other patterns. However, we also found patterns in these variations; we found variations that could be applied to different patterns, with similar effects. We document these variations as modifiers. In this paper, we introduce the concept of modifier design patterns and illustrate the use of modifiers with two case studies.
ERIC Educational Resources Information Center
Tse, Shek Kam; Ip, Olivia King Ming; Tan, Wei Xiong; Ko, Hwa-Wei
2012-01-01
An overview is presented of a three-year project aimed at helping Chinese language teachers in Taiwan refine ways that Chinese, an ideographic language that differs markedly from alphabetic English, is taught in primary schools. Guided by university staff in Taiwan, Hong Kong University and a Taiwanese non-government social enterprise, 20…
ERIC Educational Resources Information Center
Gomez, Kimberley; Gomez, Louis M.; Rodela, Katherine C.; Horton, Emily S.; Cunningham, Jahneille; Ambrocio, Rocio
2015-01-01
Three community college faculty members used improvement science techniques to design, develop, and refine contextualized developmental mathematics lessons, where language and literacy pedagogy and related supports figured prominently in these instructional materials. This article reports on the role that their design experiences played in…
Advances in natural language processing.
Hirschberg, Julia; Manning, Christopher D
2015-07-17
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.
Developing embodied cognition: insights from children’s concepts and language processing
Wellsby, Michele; Pexman, Penny M.
2014-01-01
Over the past decade, theories of embodied cognition have become increasingly influential with research demonstrating that sensorimotor experiences are involved in cognitive processing; however, this embodied research has primarily focused on adult cognition. The notion that sensorimotor experience is important for acquiring conceptual knowledge is not a novel concept for developmental researchers, and yet theories of embodied cognition often do not fully integrate developmental findings. We propose that in order for an embodied cognition perspective to be refined and advanced as a lifelong theory of cognition, it is important to consider what can be learned from research with children. In this paper, we focus on development of concepts and language processing, and examine the importance of children's embodied experiences for these aspects of cognition in particular. Following this review, we outline what we see as important developmental issues that need to be addressed in order to determine the extent to which language and conceptual knowledge are embodied and to refine theories of embodied cognition. PMID:24904513
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and of programs that use features, such as pointers, structures, and object-orientation, that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
ERIC Educational Resources Information Center
Stafford, Catherine A.
2013-01-01
Vygotskian sociocultural theory of mind holds that language mediates thought. According to the theory, speech does not merely put completed thought into words; rather, it is a tool to refine thought as it evolves in real time. This study investigated from a sociocultural theory of mind perspective how nine beginning learners of Latin used private…
ERIC Educational Resources Information Center
Ogle, Donna; Correa-Kovtun, Amy
2010-01-01
Increasing numbers of English-language learners and the challenge of supporting their learning in social studies and science brought together a group of urban literacy coaches and university faculty. This article describes the development and refinement of a partner reading routine, Partner Reading and Content, Too (PRC2). Partners with similar…
ERIC Educational Resources Information Center
Sapey-Triomphe, Laurie-Anne; Moulin, Annie; Sonié, Sandrine; Schmitz, Christina
2018-01-01
Sensory sensitivity peculiarities represent an important characteristic of Autism Spectrum Disorders (ASD). We first validated a French language version of the Glasgow Sensory Questionnaire (GSQ) (Robertson and Simmons in "J Autism Dev Disord" 43(4):775-784, 2013). The GSQ score was strongly positively correlated with the Autism-Spectrum…
ERIC Educational Resources Information Center
Saito, Kazuya; van Poeteren, Kim
2012-01-01
A questionnaire study was conducted to examine how 120 highly experienced EFL (English as a foreign language) teachers in Japan adjust their pronunciation in order to facilitate and refine their students' learning skills to approach mutual intelligibility in second language (L2) classrooms (i.e. "pronunciation-specific teacher talk").…
Exploring model based engineering for large telescopes: getting started with descriptive models
NASA Astrophysics Data System (ADS)
Karban, R.; Zamparelli, M.; Bauvir, B.; Koehler, B.; Noethe, L.; Balestra, A.
2008-07-01
Large telescopes pose a continuous challenge to systems engineering due to their complexity in terms of requirements, operational modes, long duty lifetime, interfaces and number of components. A multitude of decisions must be taken throughout the life cycle of a new system, and a prime means of coping with complexity and uncertainty is using models as one decision aid. The potential of descriptive models based on the OMG Systems Modeling Language (OMG SysML™) is examined in different areas: building a comprehensive model serves as the basis for subsequent activities of soliciting and reviewing requirements, analysis and design alike. Furthermore, a model is an effective communication instrument against the misinterpretation pitfalls which are typical of cross-disciplinary activities when using natural language only or free-format diagrams. Modeling the essential characteristics of the system, such as its interfaces, structure and behavior, is an important system-level concern that is addressed. Also shown is how to use a model as an analysis tool to describe the relationships among disturbances, opto-mechanical effects and control decisions and to refine the control use cases. Considerations on the scalability of the model structure and organization, its impact on the development process, the relation to document-centric structures, style and usage guidelines and the required tool chain are presented.
Iverson, Jana M
2010-03-01
During the first eighteen months of life, infants acquire and refine a whole set of new motor skills that significantly change the ways in which the body moves in and interacts with the environment. In this review article, I argue that motor acquisitions provide infants with an opportunity to practice skills relevant to language acquisition before they are needed for that purpose; and that the emergence of new motor skills changes infants' experience with objects and people in ways that are relevant for both general communicative development and the acquisition of language. Implications of this perspective for current views of co-occurring language and motor impairments and for methodology in the field of child language research are also considered.
Iverson, Jana M.
2010-01-01
During the first eighteen months of life, infants acquire and refine a whole set of new motor skills that significantly change the ways in which the body moves in and interacts with the environment. In this review article, I argue that motor acquisitions provide infants with an opportunity to practice skills relevant to language acquisition before they are needed for that purpose; and that the emergence of new motor skills changes infants’ experience with objects and people in ways that are relevant for both general communicative development and the acquisition of language. Implications of this perspective for current views of co-occurring language and motor impairments and for methodology in the field of child language research are also considered. PMID:20096145
Hill, Annie J.; Breslin, Hugh M.
2016-01-01
Asynchronous telerehabilitation, in which computer-based interventions are remotely monitored and adapted offline, is an emerging service delivery model in the rehabilitation of communication disorders. The asynchronous nature of this model may hold a benefit over its synchronous counterpart by eliminating scheduling issues and thus improving efficiency in a healthcare landscape of constrained resource allocation. The design of asynchronous telerehabilitation platforms should therefore ensure efficiency and flexibility. The authors have been engaged in a program of research to develop and evaluate an asynchronous telerehabilitation platform for use in speech-language pathology. eSALT is a novel asynchronous telerehabilitation platform in which clinicians design and individualize therapy tasks for transfer to a client's mobile device. An inbuilt telerehabilitation module allows for remote monitoring and updating of tasks. This paper introduces eSALT and reports outcomes from a usability study that considered the needs of two end-user groups, people with aphasia and clinicians, in the ongoing refinement of eSALT. In the study, participants with aphasia were paired with clinicians who used eSALT to design and customize therapy tasks. After training on the mobile device the participants engaged in therapy at home for a period of 3 weeks, while clinicians remotely monitored and updated tasks. Following the home trial, participants and clinicians engaged in semi-structured interviews and completed surveys on the usability of eSALT and their satisfaction with the platform. Content analysis of data involving five participants and three clinicians revealed a number of usability themes including ease of use, user support, satisfaction, limitations, and potential improvements. These findings were translated into a number of refinements of the eSALT platform, including the development of a client interface for use on the Apple iPad®, greater variety in feedback options for both the participant and clinician, automatic transfer of results to the clinician, and expansion of the task template list. This research highlights the importance of including end-users in the process of technology refinement, in order to ensure effective and efficient use of the technology. Future directions for research are discussed, including clinical trials in which the effectiveness of and adherence to intervention protocols using asynchronous telerehabilitation are examined. PMID:28066211
PCA method for automated detection of mispronounced words
NASA Astrophysics Data System (ADS)
Ge, Zhenhao; Sharma, Sudhendu R.; Smith, Mark J. T.
2011-06-01
This paper presents a method for detecting mispronunciations with the aim of improving Computer Assisted Language Learning (CALL) tools used by foreign language learners. The algorithm is based on Principal Component Analysis (PCA). It is hierarchical, with each successive step refining the estimate to classify the test word as being either mispronounced or correct. Preprocessing before detection, such as normalization and time-scale modification, is implemented to guarantee uniformity of the feature vectors input to the detection system. The performance using various features, including spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs), is compared and evaluated. Best results were obtained using MFCCs, achieving up to 99% accuracy in word verification and 93% in native/non-native classification. Compared with Hidden Markov Models (HMMs), which are used pervasively in recognition applications, this particular approach is computationally efficient and effective when training data is limited.
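A rough software sketch of this kind of pipeline is shown below, assuming librosa and scikit-learn: fixed-length MFCC vectors are projected onto a PCA subspace fitted on correctly pronounced tokens, and the reconstruction error serves as a mispronunciation score. This is an illustrative reduction, not the authors' exact hierarchical algorithm, and the file names are placeholders.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA

def mfcc_vector(path, sr=16000, n_mfcc=13, n_frames=40):
    """Fixed-length MFCC feature vector: load, compute MFCCs, resample frames."""
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # (n_mfcc, T)
    idx = np.linspace(0, m.shape[1] - 1, n_frames).astype(int)
    return m[:, idx].flatten()                                # crude time normalization

# Fit a PCA subspace on correctly pronounced examples of one target word
# (file names are hypothetical placeholders).
train = np.stack([mfcc_vector(f"correct_{i}.wav") for i in range(20)])
pca = PCA(n_components=5).fit(train)

def mispronunciation_score(path):
    """Reconstruction error: distance of a test token from the 'correct' subspace."""
    v = mfcc_vector(path).reshape(1, -1)
    recon = pca.inverse_transform(pca.transform(v))
    return float(np.linalg.norm(v - recon))

print(mispronunciation_score("test_token.wav"))  # higher = more likely mispronounced
```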
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-10-28
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring by design scalability, interoperability and correctness of component cooperation.
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-01-01
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring by design scalability, interoperability and correctness of component cooperation. PMID:27801829
Phonetic diversity, statistical learning, and acquisition of phonology.
Pierrehumbert, Janet B
2003-01-01
In learning to perceive and produce speech, children master complex language-specific patterns. Daunting language-specific variation is found both in the segmental domain and in the domain of prosody and intonation. This article reviews the challenges posed by results in phonetic typology and sociolinguistics for the theory of language acquisition. It argues that categories are initiated bottom-up from statistical modes in use of the phonetic space, and sketches how exemplar theory can be used to model the updating of categories once they are initiated. It also argues that bottom-up initiation of categories is successful thanks to the perception-production loop operating in the speech community. The behavior of this loop means that the superficial statistical properties of speech available to the infant indirectly reflect the contrastiveness and discriminability of categories in the adult grammar. The article also argues that the developing system is refined using internal feedback from type statistics over the lexicon, once the lexicon is well-developed. The application of type statistics to a system initiated with surface statistics does not cause a fundamental reorganization of the system. Instead, it exploits confluences across levels of representation which characterize human language and make bootstrapping possible.
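The claim that categories are initiated bottom-up from statistical modes can be illustrated with a toy computation: fit Gaussian mixtures of increasing size to synthetic two-dimensional "phonetic" data and let a model-selection criterion pick the number of categories. This is an illustration of the idea only, not the article's exemplar model; scikit-learn is assumed and the data are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic two-dimensional "phonetic space" (e.g., F1/F2-like values) with
# three usage modes standing in for three vowel categories.
modes = np.array([[300, 2300], [500, 1500], [700, 1100]])
tokens = np.vstack([m + rng.normal(scale=60, size=(200, 2)) for m in modes])

# Bottom-up category initiation: fit mixtures of increasing size and keep the
# one preferred by BIC, i.e. let the modes in the data fix the categories.
fits = [GaussianMixture(n_components=k, random_state=0).fit(tokens)
        for k in range(1, 6)]
best = min(fits, key=lambda g: g.bic(tokens))
print(f"categories found: {best.n_components}")
print(np.round(best.means_))
```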
ERIC Educational Resources Information Center
Laffin, Diana
2012-01-01
Diana Laffin writes about historical language and explores how understanding different historians' use of language can help sixth form students refine and deepen both their understanding of the discipline of history and their abilities to practise the discipline in their own writing. What does close study of the textual habits and practices of…
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
Multiscale Simulations of Magnetic Island Coalescence
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2010-01-01
We describe a new interactive parallel Adaptive Mesh Refinement (AMR) framework written in the Python programming language. This new framework, PyAMR, hides the details of parallel AMR data structures and algorithms (e.g., domain decomposition, grid partitioning, and inter-process communication), allowing the user to focus on the development of algorithms for advancing the solution of a system of partial differential equations on a single uniform mesh. We demonstrate the use of PyAMR by simulating the pairwise coalescence of magnetic islands using the resistive Hall MHD equations. Techniques for coupling different physics models on different levels of the AMR grid hierarchy are discussed.
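The abstract does not give PyAMR's API, so the driver calls below are hypothetical (shown only as comments); the runnable part is the kind of single-uniform-mesh kernel the user would supply, with the framework handling decomposition, refinement, and communication.

```python
# Hypothetical sketch of the usage pattern the abstract describes: the user
# writes an update for one uniform patch, and a framework like PyAMR (the
# `pyamr` names below are made up) applies it across the AMR hierarchy.
import numpy as np

def advance_patch(u, dt, dx):
    """User code: one explicit diffusion step on a single uniform patch."""
    un = u.copy()
    un[1:-1] += dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

# With a framework like the one described, the driver might look like:
#   hierarchy = pyamr.Hierarchy(domain=(0.0, 1.0), base_cells=128, max_levels=3)
#   hierarchy.set_refinement_criterion(lambda u: np.abs(np.gradient(u)) > 0.1)
#   for step in range(100):
#       hierarchy.advance(advance_patch, dt=1e-5, dx=hierarchy.dx)

u = np.zeros(128); u[60:68] = 1.0            # standalone demo of the patch update
for _ in range(100):
    u = advance_patch(u, dt=1e-5, dx=1 / 128)
print(f"peak after diffusion: {u.max():.3f}")
```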
The Cerebellum: Adaptive Prediction for Movement and Cognition
Sokolov, Arseny A.; Miall, R. Chris; Ivry, Richard B.
2017-01-01
Over the past 30 years, cumulative evidence has indicated that cerebellar function extends beyond sensorimotor control. This view has emerged from studies of neuroanatomy, neuroimaging, neuropsychology and brain stimulation, with the results implicating the cerebellum in domains as diverse as attention, language, executive function and social cognition. Although the literature provides sophisticated models of how the cerebellum helps refine movements, it remains unclear how the core mechanisms of these models can be applied when considering a broader conceptualization of cerebellar function. In light of recent multidisciplinary findings, we consider two key concepts that have been suggested as general computational principles of cerebellar function, prediction and error-based learning, examining how these might be relevant in the operation of cognitive cerebro-cerebellar loops. PMID:28385461
Language Lateralization Shifts with Learning by Adults
Plante, Elena; Almryde, Kyle; Patterson, Dianne K.; Vance, Christopher J.; Asbjørnsen, Arve E.
2014-01-01
For the majority of the population, language is a left hemisphere lateralized function. During childhood, a pattern of increasing left lateralization for language has been described in brain imaging studies, suggesting this trait develops. This development could reflect change due to brain maturation or change due to skill acquisition, given that children acquire and refine language skills as they mature. We test the possibility that skill acquisition, independent of age-associated maturation can result in shifts in language lateralization in classic language cortex. We imaged adults exposed to unfamiliar language during three successive fMRI scans. Participants were then asked to identify specific words embedded in Norwegian sentences. Exposure to these sentences, relative to complex tones, resulted in consistent activation in the left and right superior temporal gyrus. Activation in this region became increasingly left lateralized with repeated exposure to the unfamiliar language. These results demonstrate that shifts in lateralization can be produced in the short-term within a learning context, independent of maturation. PMID:25285756
A learning apprentice for software parts composition
NASA Technical Reports Server (NTRS)
Allen, Bradley P.; Holtzman, Peter L.
1987-01-01
An overview of the knowledge acquisition component of the Bauhaus, a prototype computer aided software engineering (CASE) workstation for the development of domain-specific automatic programming systems (D-SAPS) is given. D-SAPS use domain knowledge in the refinement of a description of an application program into a compilable implementation. The approach to the construction of D-SAPS was to automate the process of refining a description of a program, expressed in an object-oriented domain language, into a configuration of software parts that implement the behavior of the domain objects.
Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network
2012-01-01
Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. Conclusions Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
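The paper's evaluation scripts are in MATLAB; as an illustration only, a similar flux-balance check could be run in Python with the cobrapy package. The SBML file name and the oxygen-exchange reaction identifier below are assumptions to verify against the release actually downloaded from http://yeast.sf.net/.

```python
# A minimal sketch, using cobrapy rather than the paper's MATLAB scripts, of
# the kind of check those scripts perform: load the stoichiometrically
# constrained model from SBML and run flux balance analysis for growth.
import cobra

model = cobra.io.read_sbml_model("yeast_5.01_model.xml")    # file name is an assumption
solution = model.optimize()                                 # FBA: maximize growth
print(f"predicted aerobic growth rate: {solution.objective_value:.4f} /h")

# Simulating anaerobic growth: close the oxygen exchange reaction.
# ('r_1992' is the oxygen exchange id in some Yeast consensus releases;
# treat it as an assumption and look it up in the model you load.)
with model:
    model.reactions.get_by_id("r_1992").lower_bound = 0.0
    print(f"predicted anaerobic growth rate: {model.optimize().objective_value:.4f} /h")
```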
Ontology driven modeling for the knowledge of genetic susceptibility to disease.
Lin, Yu; Sakamoto, Norihiro
2009-05-12
For machine-aided exploration of the relationships between genetic factors and complex diseases, a well-structured conceptual framework of the background knowledge is needed. However, because of the complexity of determining a genetic susceptibility factor, there is no formalization of the knowledge of genetic susceptibility to disease, which makes interoperability between systems impossible. Thus, the ontology modeling language OWL was used for formalization in this paper. After introducing the Semantic Web and the OWL language promoted by the W3C, we applied text mining technology combined with competency questions to specify the classes of the ontology. Then, an N-ary pattern was adopted to describe the relationships among these defined classes. Based on earlier work on OGSF-DM (Ontology of Genetic Susceptibility Factors to Diabetes Mellitus), we formalized the definition of "Genetic Susceptibility", "Genetic Susceptibility Factor" and other classes using the OWL-DL modeling language, and a reasoner automatically performed the classification of the class "Genetic Susceptibility Factor". Ontology-driven modeling is used to formalize the knowledge of genetic susceptibility to complex diseases. More importantly, when a class has been completely formalized in an ontology, OWL reasoning can automatically compute the classification of the class, in our case the class of "Genetic Susceptibility Factors". As more types of genetic susceptibility factors are obtained from laboratory research, the ontologies will continually need to be refined, and many new classes must be taken into account to harmonize with them. Applying the ontologies to Semantic Web development remains future work.
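As an illustration of the modeling step (not the actual OGSF-DM ontology), the sketch below uses the owlready2 package to define an OWL-DL class whose membership a reasoner can compute automatically, which is the kind of classification the paper describes. Class and property names follow the abstract loosely and are invented.

```python
from owlready2 import get_ontology, Thing, ObjectProperty, sync_reasoner

onto = get_ontology("http://example.org/genetic-susceptibility.owl")

with onto:
    class Disease(Thing): pass
    class GeneticFactor(Thing): pass

    class confers_susceptibility_to(ObjectProperty):
        domain = [GeneticFactor]
        range = [Disease]

    # A defined (equivalent) class: any genetic factor that confers
    # susceptibility to some disease counts as a susceptibility factor.
    class GeneticSusceptibilityFactor(GeneticFactor):
        equivalent_to = [GeneticFactor & confers_susceptibility_to.some(Disease)]

    diabetes = Disease("DiabetesMellitus")
    tcf7l2 = GeneticFactor("TCF7L2_variant")        # illustrative individual
    tcf7l2.confers_susceptibility_to = [diabetes]

sync_reasoner()   # runs the bundled reasoner (requires Java); classifies tcf7l2
print(tcf7l2.is_a)  # now includes GeneticSusceptibilityFactor
```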
Mira, José J; Navarro, Isabel M; Guilabert, Mercedes; Poblete, Rodrigo; Franco, Astolfo L; Jiménez, Pilar; Aquino, Margarita; Fernández-Trujillo, Francisco J; Lorenzo, Susana; Vitaller, Julián; de Valle, Yohana Díaz; Aibar, Carlos; Aranaz, Jesús M; De Pedro, José A
2015-08-01
To design and validate a questionnaire for assessing attitudes and knowledge about patient safety using a sample of medical and nursing students undergoing clinical training in Spain and four countries in Latin America. In this cross-sectional study, a literature review was carried out and a total of 786 medical and nursing students were surveyed at eight universities from five countries (Chile, Colombia, El Salvador, Guatemala, and Spain) to develop and refine a Spanish-language questionnaire on knowledge and attitudes about patient safety. The scope of the questionnaire was based on five dimensions (factors) presented in studies related to patient safety culture found in PubMed and Scopus. Based on the five factors, 25 candidate items were developed. Composite reliability indexes and Cronbach's alpha statistics were estimated for each factor, and confirmatory factor analysis was conducted to assess validity. After a pilot test, the questionnaire was refined using confirmatory models, maximum-likelihood estimation, and the variance-covariance matrix (as input). Multiple linear regression models were used to confirm external validity, considering variables related to patient safety culture as dependent variables and the five factors as independent variables. The final instrument was a structured five-point Likert self-administered survey (the "Latino Student Patient Safety Questionnaire") consisting of 21 items grouped into five factors. Composite reliability indexes (Cronbach's alpha statistic) calculated for the five factors were about 0.7 or higher. The results of the multiple linear regression analyses indicated good model fit (goodness-of-fit index: 0.9). Item-total correlations were higher than 0.3 in all cases. The convergent-discriminant validity was adequate. The questionnaire designed and validated in this study assesses nursing and medical students' attitudes and knowledge about patient safety. This instrument could be used to indirectly evaluate whether or not students in health disciplines are acquiring, and are thus likely to put into practice, the professional skills currently considered most appropriate for patient safety.
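The reliability statistics the study reports are simple to compute. The sketch below, on synthetic five-point Likert data, implements Cronbach's alpha and corrected item-total correlations, the two item-level criteria mentioned (alpha near 0.7 or higher, item-total correlations above 0.3); only numpy is assumed.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items."""
    totals = items.sum(axis=1, keepdims=True)
    rest = totals - items                       # exclude the item itself
    return np.array([np.corrcoef(items[:, j], rest[:, j])[0, 1]
                     for j in range(items.shape[1])])

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))              # one underlying factor
scores = np.clip(np.rint(3 + latent + rng.normal(size=(300, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
print("item-total r:", np.round(corrected_item_total(scores), 2))
```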
1985-05-01
Teachers interested in helping students to become more effective learners should be aware of strategies which can be embedded in curricula and taught to students with only modest extra effort. Teachers can expand their instructional role to include a variety of learning strategies which can be used with specific types of language tasks. Future research should be directed to refining strategy training approaches, and determining…
Automatic Thread-Level Parallelization in the Chombo AMR Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christen, Matthias; Keen, Noel; Ligocki, Terry
2011-05-26
The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already-used target language for automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to an auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.
Object-Oriented Scientific Programming with Fortran 90
NASA Technical Reports Server (NTRS)
Norton, C.
1998-01-01
Fortran 90 is a modern language that introduces many important new features beneficial for scientific programming. We discuss our experiences in plasma particle simulation and unstructured adaptive mesh refinement on supercomputers, illustrating the features of Fortran 90 that support the object-oriented methodology.
On macromolecular refinement at subatomic resolution withinteratomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ∼1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
On macromolecular refinement at subatomic resolution with interatomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-01-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035
On macromolecular refinement at subatomic resolution with interatomic scatterers.
Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2007-11-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
The Cerebellum: Adaptive Prediction for Movement and Cognition.
Sokolov, Arseny A; Miall, R Chris; Ivry, Richard B
2017-05-01
Over the past 30 years, cumulative evidence has indicated that cerebellar function extends beyond sensorimotor control. This view has emerged from studies of neuroanatomy, neuroimaging, neuropsychology, and brain stimulation, with the results implicating the cerebellum in domains as diverse as attention, language, executive function, and social cognition. Although the literature provides sophisticated models of how the cerebellum helps refine movements, it remains unclear how the core mechanisms of these models can be applied when considering a broader conceptualization of cerebellar function. In light of recent multidisciplinary findings, we examine how two key concepts that have been suggested as general computational principles of cerebellar function, prediction and error-based learning, might be relevant in the operation of cognitive cerebro-cerebellar loops. Copyright © 2017 Elsevier Ltd. All rights reserved.
Huang, Chih-Ling; Cheng, Chung-Ping; Huang, Hui-Wen
2013-10-01
The purpose of this study was to develop a scale to measure the social smoking motives of adult male smokers using a Chinese social context. Three phases were conducted between February 2006 and May 2009. First, the initial instrument development was guided by a literature review, interviews with smokers, and item analysis. Second, the validity and reliability of the refined scale were tested. The factor structures of the Social Smoking Measures (SSM-12) scale were validated. The final scale consists of 12 items. Two factors that account for 49.2% of the variance emerged from the exploratory factor analysis. Cronbach's alpha was .88, and test-retest reliability was .82. The results of the confirmatory factor analysis indicated that the SSM model comprised two correlated factors. Field testing revealed the SSM-12 to be a reliable and valid Chinese-language instrument to measure social smoking motives, which can be used to guide nursing interventions that support culturally and socially appropriate smoking cessation programs.
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
K-Means Subject Matter Expert Refined Topic Model Methodology
2017-01-01
[Report front matter: U.S. Army TRADOC Analysis Center-Monterey; authors Theodore T. Allen and Zhenhuan…; contract W9124N-15-P-0022; topic model estimation via k-means.]
Refining Pragmatically-Appropriate Oral Communication via Computer-Simulated Conversations
ERIC Educational Resources Information Center
Sydorenko, Tetyana; Daurio, Phoebe; Thorne, Steven L.
2018-01-01
To address the problem of limited opportunities for practicing second language speaking in interaction, especially delicate interactions requiring pragmatic competence, we describe computer simulations designed for the oral practice of extended pragmatic routines and report on the affordances of such simulations for learning pragmatically…
CE-SAM: a conversational interface for ISR mission support
NASA Astrophysics Data System (ADS)
Pizzocaro, Diego; Parizas, Christos; Preece, Alun; Braines, Dave; Mott, David; Bakdash, Jonathan Z.
2013-05-01
There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interactions, we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly-increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex, because assets may be "owned" by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE), a form of controlled natural language that is both human-readable and machine processable (i.e. can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface using natural language, which the system converts to CE for feeding back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.
Valentine, Sarah E; Borba, Christina P C; Dixon, Louise; Vaewsorn, Adin S; Guajardo, Julia Gallegos; Resick, Patricia A; Wiltsey Stirman, Shannon; Marques, Luana
2017-03-01
As part of a larger implementation trial for cognitive processing therapy (CPT) for posttraumatic stress disorder (PTSD) in a community health center, we used formative evaluation to assess relations between iterative cultural adaption (for Spanish-speaking clients) and implementation outcomes (appropriateness and acceptability) for CPT. Qualitative data for the current study were gathered through multiple sources (providers: N = 6; clients: N = 22), including CPT therapy sessions, provider fieldnotes, weekly consultation team meetings, and researcher fieldnotes. Findings from conventional and directed content analysis of the data informed refinements to the CPT manual. Data-driven refinements included adaptations related to cultural context (i.e., language, regional variation in wording), urban context (e.g., crime/violence), and literacy level. Qualitative findings suggest improved appropriateness and acceptability of CPT for Spanish-speaking clients. Our study reinforces the need for dual application of cultural adaptation and implementation science to address the PTSD treatment needs of Spanish-speaking clients. © 2016 Wiley Periodicals, Inc.
Neuroimaging correlates of language network impairment and reorganization in temporal lobe epilepsy
Balter, S.; Lin, G.; Leyden, K.M.; Paul, B.M.; McDonald, C.R.
2016-01-01
Advanced, noninvasive imaging has revolutionized our understanding of language networks in the brain and is reshaping our approach to the presurgical evaluation of patients with epilepsy. Functional magnetic resonance imaging (fMRI) has had the greatest impact, unveiling the complexity of language organization and reorganization in patients with epilepsy both pre- and postoperatively, while volumetric MRI and diffusion tensor imaging have led to a greater appreciation of structural and microstructural correlates of language dysfunction in different epilepsy syndromes. In this article, we review recent literature describing how unimodal and multimodal imaging has advanced our knowledge of language networks and their plasticity in epilepsy, with a focus on the most frequently studied epilepsy syndrome in adults, temporal lobe epilepsy (TLE). We also describe how new analytic techniques (i.e., graph theory) are leading to a refined characterization of abnormal brain connectivity, and how subject-specific imaging profiles combined with clinical data may enhance the prediction of both seizure and language outcomes following surgical interventions. PMID:27393391
Model based systems engineering for astronomical projects
NASA Astrophysics Data System (ADS)
Karban, R.; Andolfato, L.; Bristow, P.; Chiozzi, G.; Esselborn, M.; Schilling, M.; Schmid, C.; Sommer, H.; Zamparelli, M.
2014-08-01
Model Based Systems Engineering (MBSE) is an emerging field of systems engineering for which the System Modeling Language (SysML) is a key enabler for descriptive, prescriptive and predictive models. This paper surveys some of the capabilities, expectations and peculiarities of tool-assisted MBSE experienced in real-life astronomical projects. The examples range in depth and scope across a wide spectrum of applications (for example documentation, requirements, analysis, trade studies) and purposes (addressing a particular development need, or accompanying a project throughout many, if not all, of its lifecycle phases, fostering reuse and minimizing ambiguity). From the beginnings of the Active Phasing Experiment, through VLT instrumentation, VLTI infrastructure, and the Telescope Control System for the E-ELT, to Wavefront Control for the E-ELT, we show how stepwise refinements of tools, processes and methods have provided tangible benefits to customary systems engineering activities like requirement flow-down, design trade studies, interface definition, and validation, by means of a variety of approaches (like Model Checking, Simulation, Model Transformation) and methodologies (like OOSEM, State Analysis).
Real-space refinement in PHENIX for cryo-EM and crystallography
Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.; ...
2018-06-01
This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.
Development of structured ICD-10 and its application to computer-assisted ICD coding.
Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko
2010-01-01
This paper presents: (1) a framework of formal representation of ICD10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology to use formally described ICD10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD10. Then we expanded the structured ICD10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model to describe the formal representation was refined repeatedly. The resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD11 revision.
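Where the abstract above describes matching structured disease concepts to formally represented ICD categories, a toy sketch can make the idea concrete. The following Python fragment is purely illustrative: the attribute names and code values are hypothetical, and it stands in for neither the S-ICD10 model nor its 74 semantic link types.

```python
# Illustrative sketch only: a toy matcher in the spirit of structured ICD
# coding, where both ICD categories and input disease names are described
# by semantic attributes (all names and codes here are hypothetical).

# Each ICD category is represented by a code and a set of attribute
# constraints (e.g., anatomical site, morphology).
ICD_CATEGORIES = {
    "J18": {"site": "lung", "morphology": "inflammation"},
    "J45": {"site": "airway", "morphology": "obstruction"},
}

def code_disease(attributes: dict) -> list[str]:
    """Return ICD codes whose constraints are all satisfied by the input."""
    matches = []
    for code, constraints in ICD_CATEGORIES.items():
        if all(attributes.get(k) == v for k, v in constraints.items()):
            matches.append(code)
    return matches

# A structured representation of "pneumonia" produced upstream
# (e.g., by mapping a standard disease name onto semantic attributes).
print(code_disease({"site": "lung", "morphology": "inflammation"}))  # ['J18']
```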
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.
Gambashidze, Nikoloz; Hammer, Antje; Brösterhaus, Mareen; Manser, Tanja
2017-11-09
To study the psychometric characteristics of the German version of the Hospital Survey on Patient Safety Culture and to compare its dimensionality with other language versions in order to understand the instrument's potential for cross-national studies. Cross-sectional multicentre study to establish psychometric properties of the German version of the survey instrument. 73 units from 37 departments of two German university hospitals. Clinical personnel (n=995 responses, response rate 39.6%). Psychometric properties (e.g., model fit, internal consistency, construct validity) of the instrument and comparison of dimensionality across different language translations. The instrument demonstrated acceptable to good internal consistency (Cronbach's alpha 0.64-0.88). Confirmatory factor analysis of the original 12-factor model resulted in marginally satisfactory model fit (root mean square error of approximation (RMSEA)=0.05; standardised root mean residual (SRMR)=0.05; comparative fit index (CFI)=0.90; goodness of fit index (GFI)=0.88; Tucker-Lewis Index (TLI)=0.88). Exploratory factor analysis resulted in an alternative eight-factor model with good model fit (RMSEA=0.05; SRMR=0.05; CFI=0.95; GFI=0.91; TLI=0.94), good internal consistency (Cronbach's alpha 0.73-0.87) and construct validity. Analysis of the dimensionality compared with models from 10 other language versions revealed eight dimensions with relatively stable composition and appearance across different versions and four dimensions requiring further improvement. The German version of the Hospital Survey on Patient Safety Culture demonstrated satisfactory psychometric properties for use in German hospitals. However, our comparison of instrument dimensionality across different language versions indicates limitations concerning cross-national studies. Results of this study can be considered in interpreting findings across national contexts, in further refinement of the instrument for cross-national studies and in better understanding the various facets and dimensions of patient safety culture. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
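Since Cronbach's alpha is the internal-consistency statistic quoted throughout this record, a minimal sketch of its computation may be useful; the toy data below are randomly generated, not survey responses.

```python
# A minimal sketch of Cronbach's alpha, the internal-consistency statistic
# reported above: alpha = k/(k-1) * (1 - sum(item variances)/var(total score)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 5 respondents answering a 4-item dimension.
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 1))
data = base + rng.normal(scale=0.5, size=(5, 4))  # items share a common factor
print(round(cronbach_alpha(data), 2))
```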
Orthographic and Phonological Neighborhood Databases across Multiple Languages.
Marian, Viorica
2017-01-01
The increased globalization of science and technology and the growing number of bilinguals and multilinguals in the world have made research with multiple languages a mainstay for scholars who study human function and especially those who focus on language, cognition, and the brain. Such research can benefit from large-scale databases and online resources that describe and measure lexical, phonological, orthographic, and semantic information. The present paper discusses currently-available resources and underscores the need for tools that enable measurements both within and across multiple languages. A general review of language databases is followed by a targeted introduction to databases of orthographic and phonological neighborhoods. A specific focus on CLEARPOND illustrates how databases can be used to assess and compare neighborhood information across languages, to develop research materials, and to provide insight into broad questions about language. As an example of how using large-scale databases can answer questions about language, a closer look at neighborhood effects on lexical access reveals that not only orthographic, but also phonological neighborhoods can influence visual lexical access both within and across languages. We conclude that capitalizing upon large-scale linguistic databases can advance, refine, and accelerate scientific discoveries about the human linguistic capacity.
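As a concrete illustration of the neighborhood measures such databases provide, the sketch below counts substitution-only orthographic neighbors (Coltheart's N) over a tiny invented lexicon; CLEARPOND itself precomputes richer measures over large cross-linguistic lexicons.

```python
# A toy illustration of orthographic neighborhood density: words reachable
# by substituting exactly one letter. The tiny lexicon is purely illustrative.
def is_substitution_neighbor(w1: str, w2: str) -> bool:
    return (len(w1) == len(w2)
            and sum(a != b for a, b in zip(w1, w2)) == 1)

def neighborhood(word: str, lexicon: set[str]) -> set[str]:
    return {w for w in lexicon if is_substitution_neighbor(word, w)}

LEXICON = {"cat", "cot", "cut", "car", "bat", "dog", "cart"}
print(sorted(neighborhood("cat", LEXICON)))  # ['bat', 'car', 'cot', 'cut']
```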
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Brunger, Axel T; Das, Debanu; Deacon, Ashley M; Grant, Joanna; Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Levitt, Michael; Schröder, Gunnar F
2012-04-01
Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence.
Roles in Literacy Learning: A New Perspective.
ERIC Educational Resources Information Center
Tovey, Duane R., Ed.; Kerber, James E., Ed.
Refining and better understanding the roles parents, teachers, administrators, and researchers play in helping children learn to process written language is the focus of this book. Part 1 considers the role of the parents and includes the following articles: "Learning to Read: It Starts in the Home" (David B. Doake); "Let's Read…
Automobile Maintenance. Reading and Language Activities.
ERIC Educational Resources Information Center
Kessman, William A.
Designed primarily for special needs students in a vocational program in automobile maintenance, this book was written to refine the basic skills of following directions, reading comprehension, vocabulary building, spelling, word usage, and word recognition, while relating these skills to some of the tasks a beginning student in the program must…
An Affirmative Approach to Vocabulary Development.
ERIC Educational Resources Information Center
Shioji, Jean
Methods for second language vocabulary development in the intermediate and advanced level English classroom are described. Rather than require students to memorize lists of words, the teacher should give students a better understanding of the process of vocabulary development by showing them how to refine their use of new lexical items and implant…
Impact of Culture on Breast Cancer Screening in Chinese American Women
2006-09-01
developed and refined based on previous findings of cultural and language barriers to breast cancer screening in Chinese women. In Year 2, two hundred and fifty Chinese women aged 50 and older in the Washington, DC area completed a telephone interview regarding their previous screening experience
Beginning Reading and Writing. Language and Literacy Series.
ERIC Educational Resources Information Center
Strickland, Dorothy S., Ed.; Morrow, Lesley Mandel, Ed.
In this essay collection, scholars in the area of early literacy provide concrete strategies for achieving excellence in literacy instruction. The collection presents current, research-based information on the advances and refinements in the area of emerging literacy and the early stages of formal instruction in reading and writing. Following a…
Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases
NASA Astrophysics Data System (ADS)
Vilhelmsen, T. N.; Christensen, S.
2009-12-01
In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as a comparative analysis of simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as "true" values). As expected, for all test cases the application of local grid refinement resulted in more accurate results than when using the globally coarse model. A significant advantage of utilizing MODFLOW-LGR was that it allows increased numbers of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than when using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand, because the difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, the runtime for the locally refined model was sometimes much longer than for the globally refined model, even when the closure criteria were relaxed compared to the globally refined model. These results contradict those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes. For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that applying MODFLOW-LGR requires thorough, case-specific consideration of the choice of correction method. Reference: Mehl, S. and Hill, M.C. (2005). "MODFLOW-2005, the U.S. Geological Survey modular ground-water model - documentation of shared node local grid refinement (LGR) and the boundary flow and head (BFH) package." U.S. Geological Survey Techniques and Methods 6-A12.
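A minimal sketch of the one-way TMR coupling discussed above (a child grid whose boundary heads are interpolated from a parent solution, with no feedback, unlike the shared-node method) may help. The toy Laplace solver, grid sizes, and boundary values below are illustrative assumptions, not MODFLOW.

```python
# Toy telescopic mesh refinement: a coarse regional head solution supplies
# fixed boundary heads for a locally refined child grid. Steady-state 2-D
# Laplace equation solved by Jacobi iteration.
import numpy as np

def solve_laplace(h, fixed_mask, iters=5000):
    """Jacobi-iterate heads h wherever fixed_mask is False (interior)."""
    for _ in range(iters):
        interior = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0)
                           + np.roll(h, 1, 1) + np.roll(h, -1, 1))
        h = np.where(fixed_mask, h, interior)
    return h

# Coarse regional model: fixed heads on the west (10 m) and east (0 m) edges.
coarse = np.zeros((11, 11)); coarse[:, 0] = 10.0
fixed = np.zeros_like(coarse, dtype=bool); fixed[:, 0] = fixed[:, -1] = True
coarse = solve_laplace(coarse, fixed)

# Child model: 3x refinement of coarse cells spanning coordinates [4, 7].
n = 3 * 3 + 1
child = np.zeros((n, n)); cfixed = np.zeros_like(child, dtype=bool)
cfixed[0, :] = cfixed[-1, :] = cfixed[:, 0] = cfixed[:, -1] = True
# Interpolate parent heads bilinearly onto the child boundary (one-way TMR).
xs = np.linspace(4, 7, n)
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        if cfixed[i, j]:
            x0, y0 = int(x), int(y)
            x1, y1 = min(x0 + 1, 10), min(y0 + 1, 10)
            fx, fy = x - x0, y - y0
            child[i, j] = ((1-fx)*(1-fy)*coarse[x0, y0] + fx*(1-fy)*coarse[x1, y0]
                           + (1-fx)*fy*coarse[x0, y1] + fx*fy*coarse[x1, y1])
print(solve_laplace(child, cfixed)[n // 2, n // 2])  # head at child center
```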
Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei
2015-12-28
Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is essential. To extend its range of application and improve its performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy, using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields an enhanced performance for producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Similar to other unbounded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks need to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.
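The parameter screening strategy itself reduces to a grid search against experimental targets. The sketch below is a hedged illustration: `evaluate_model` is a made-up analytic surrogate standing in for the real MD free-energy calculations, and the target values are only indicative.

```python
# Sketch of a simple parameter screen: scan dummy-atom parameters and keep
# the set that best reproduces experimental targets (solvation free energy
# and ion-oxygen distance). All numbers are illustrative.
import itertools

EXP_DG_SOLV = -437.4   # kcal/mol; illustrative experimental target
EXP_R_MG_O = 2.09      # Angstrom; illustrative target ion-oxygen distance

def evaluate_model(sigma, epsilon):
    """Fake analytic surrogate for the real MD calculations (illustration only)."""
    dg = -500.0 + 30.0 * sigma - 5.0 * epsilon
    r = 1.2 * sigma
    return dg, r

def screen(sigmas, epsilons):
    """Keep the (sigma, epsilon) pair minimizing deviation from both targets."""
    best, best_err = None, float("inf")
    for s, e in itertools.product(sigmas, epsilons):
        dg, r = evaluate_model(s, e)
        err = ((dg - EXP_DG_SOLV) / EXP_DG_SOLV) ** 2 \
            + ((r - EXP_R_MG_O) / EXP_R_MG_O) ** 2
        if err < best_err:
            best, best_err = (s, e), err
    return best

print(screen(sigmas=[1.5, 1.7, 1.9], epsilons=[0.5, 1.0, 2.0]))
```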
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.
2016-01-01
Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model) and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n= 935) and testing (n= 634) subsets. Results We iteratively refined the NLP algorithm in the training set including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
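A schematic of the rule/keyword-style ascertainment and evaluation described above might look as follows; the keyword patterns, section names, and notes are invented for illustration and are not the validated algorithm.

```python
# A minimal sketch of keyword-based NLP ascertainment: scan selected
# sections of clinical notes for PAD terms, then score predictions against
# a gold standard (here, the role played by ABI results in the study).
import re

PAD_PATTERNS = [r"\bperipheral arterial disease\b", r"\bPAD\b",
                r"\bclaudication\b", r"\bankle.brachial index\b"]
SECTIONS = ("impression", "diagnosis")  # note sections the rules search

def is_pad_case(note: dict) -> bool:
    text = " ".join(note.get(s, "") for s in SECTIONS)
    return any(re.search(p, text, re.IGNORECASE) for p in PAD_PATTERNS)

def metrics(notes, gold):
    pred = [is_pad_case(n) for n in notes]
    tp = sum(p and g for p, g in zip(pred, gold))
    tn = sum(not p and not g for p, g in zip(pred, gold))
    fp = sum(p and not g for p, g in zip(pred, gold))
    fn = sum(not p and g for p, g in zip(pred, gold))
    return {"accuracy": (tp + tn) / len(gold),
            "ppv": tp / (tp + fp) if tp + fp else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan")}

notes = [{"impression": "severe claudication, abnormal ABI"},
         {"diagnosis": "essential hypertension"}]
print(metrics(notes, gold=[True, False]))
```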
Improving the accuracy of macromolecular structure refinement at 7 Å resolution.
Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F
2012-06-06
In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.
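For readers unfamiliar with DEN, the sketch below shows the core idea in toy form: harmonic pair restraints whose equilibrium distances slowly deform between the reference model and the current coordinates. Parameter names follow common descriptions of DEN; the values and coordinates are illustrative.

```python
# Toy deformable elastic network (DEN) restraints: E = w * sum (d - d0)^2,
# with equilibrium distances d0 relaxing toward a blend of the current and
# reference geometries (gamma blends the two; kappa sets the relaxation rate).
import numpy as np

def den_energy(coords, pairs, d0, weight=1.0):
    """Sum of harmonic pair restraints over the restrained atom pairs."""
    d = np.linalg.norm(coords[pairs[:, 0]] - coords[pairs[:, 1]], axis=1)
    return weight * np.sum((d - d0) ** 2)

def update_d0(d0, d_current, d_reference, gamma=0.5, kappa=0.1):
    """Deform equilibrium distances: d0 <- (1-kappa)*d0 + kappa*target."""
    target = gamma * d_current + (1.0 - gamma) * d_reference
    return d0 + kappa * (target - d0)

# Toy example: three atoms, two restrained pairs.
coords = np.array([[0.0, 0, 0], [1.6, 0, 0], [1.6, 1.4, 0]])
pairs = np.array([[0, 1], [1, 2]])
d_ref = np.array([1.5, 1.5])          # distances in the starting model
d0 = d_ref.copy()
d_cur = np.linalg.norm(coords[pairs[:, 0]] - coords[pairs[:, 1]], axis=1)
d0 = update_d0(d0, d_cur, d_ref)
print(den_energy(coords, pairs, d0))
```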
Ever since language and learning: afterthoughts on the Piaget-Chomsky debate.
Piattelli-Palmarini, M
1994-01-01
The central arguments and counter-arguments presented by several participants during the debate between Piaget and Chomsky at the Royaumont Abbey in October 1975 are here reconstructed in a particularly concise chronological and "logical" sequence. Once the essential points of this important exchange are thus clearly laid out, it is easy to see that recent developments in generative grammar, as well as new data on language acquisition, especially the acquisition of pronouns by the congenitally deaf child, corroborate the "language specificity" thesis defended by Chomsky. By the same token these data and these new theoretical refinements refute the Piagetian hypothesis that language is constructed upon abstractions from sensorimotor schemata. Moreover, in the light of modern evolutionary theory, Piaget's basic assumptions on the biological roots of cognition, language and learning turn out to be unfounded. In hindsight, all this accrues to the validity of Fodor's seemingly "paradoxical" argument against "learning" as a transition from "less" powerful to "more" powerful conceptual systems.
ERIC Educational Resources Information Center
Kazepides, Tasos
2012-01-01
The purpose of this paper is to show that genuine dialogue is a refined human achievement and probably the most valid criterion on the basis of which we can evaluate educational or social policy and practice. The paper explores the prerequisites of dialogue in the language games, the common certainties, the rules of logic and the variety of common…
Using Language Positively: How to Encourage Negotiation in the Classroom
ERIC Educational Resources Information Center
Schoerning, Emily; Hand, Brian
2013-01-01
The importance of argument in science teaching is a hot topic. Educators are told "doing science" doesn't just involve following a scientific method; it involves the restructuring and refinement of ideas through negotiation and critique with other people. Understanding the importance of argument is one thing, but even for the…
DEVELOPMENT OF EXPERIMENTAL AUDIOVISUAL DEVICES AND MATERIALS FOR BEGINNING READERS.
ERIC Educational Resources Information Center
GIBSON, CHRISTINE M.; RICHARDS, I.A.
This study tested the arrangement of an interrelated program of procedures that can mutually generate and nurture the learning process for beginning reading. Close, systematic observations of people of varying ages were made. The materials had been designed, field tested, and refined by a language research group at the Harvard Graduate School of…
Treatment of Children with Speech Oral Placement Disorders (OPDs): A Paradigm Emerges
ERIC Educational Resources Information Center
Bahr, Diane; Rosenfeld-Johnson, Sara
2010-01-01
Epidemiological research was used to develop the Speech Disorders Classification System (SDCS). The SDCS is an important speech diagnostic paradigm in the field of speech-language pathology. This paradigm could be expanded and refined to also address treatment while meeting the standards of evidence-based practice. The article assists that process…
Compact modalities for forward-error correction
NASA Astrophysics Data System (ADS)
Fang, Dejian
2013-10-01
Hash tables [1] must work. In fact, few leading analysts would disagree with the refinement of thin clients. In our research, we disprove not only that the infamous read-write algorithm for the exploration of object-oriented languages by W. White et al. is NP-complete, but that the same is true for the lookaside buffer.
Psychometric Properties of an Arabic Version of the Depression Anxiety Stress Scales (DASS)
ERIC Educational Resources Information Center
Moussa, Miriam Taouk; Lovibond, Peter; Laube, Roy; Megahead, Hamido A.
2017-01-01
Objective: To translate and evaluate the psychometric properties of an Arabic-language version of the Depression Anxiety Stress Scales (DASS). Method: The items were translated, back translated, refined, and tested in an Australian immigrant sample (N = 220). Results: Confirmatory factor analysis showed that the Arabic DASS discriminates between…
Why Choose French? Boys' and Girls' Attitudes at the Option Stage.
ERIC Educational Resources Information Center
Powell, Robert; Littlewood, Peter
1983-01-01
A survey of secondary students' attitudes toward French as a second language in two schools analyzed how students' choice or rejection of the subject was related to career choice, sex, social class, travel in France, and success in French. A secondary intent was to refine attitude study techniques. (MSE)
Some Aspects of Language Development in Middle Childhood.
ERIC Educational Resources Information Center
Hoar, Nancy
The middle childhood years are a period of refinement of the semantics and syntax acquired in the early years, of substantial metalinguistic development, and of subtle changes in actual processing strategies. In a study undertaken to determine how these three factors interact, children aged 6 to 11 were asked to produce and recognize paraphrases.…
Coding and Comprehension in Skilled Reading and Implications for Reading Instruction.
ERIC Educational Resources Information Center
Perfetti, Charles A.; Lesgold, Alan M.
A view of skilled reading is suggested that emphasizes an intimate connection between coding and comprehension. It is suggested that skilled comprehension depends on a highly refined facility for generating and manipulating language codes, especially at the phonetic/articulatory level. The argument is developed that decoding expertise should be a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongsheng; Lavender, Curt
2015-05-08
Improving yield strength and asymmetry is critical to expanding applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low-symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering can improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second-phase grain refinement model with a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second-phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approaches unity with decreasing grain size, driven by increasing precipitate volume fraction or decreasing precipitate size.
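As a back-of-envelope illustration of the precipitate-to-strength chain, the sketch below combines two standard relations, Zener pinning and Hall-Petch. These closed forms are assumptions for illustration; the paper itself uses the stochastic grain refinement model and the φ-model.

```python
# Illustrative link between precipitates, grain size, and yield strength:
# Zener pinning, d = 4r/(3f), and Hall-Petch, sigma_y = sigma_0 + k/sqrt(d).
# The coefficients below are invented, not fitted to any magnesium alloy.
import math

def zener_grain_size(r_um: float, f: float) -> float:
    """Limiting grain size (um) pinned by precipitates of radius r, fraction f."""
    return 4.0 * r_um / (3.0 * f)

def hall_petch(d_um: float, sigma0=50.0, k=280.0) -> float:
    """Yield strength in MPa; sigma0 (MPa) and k (MPa*um^0.5) are illustrative."""
    return sigma0 + k / math.sqrt(d_um)

for f in (0.01, 0.02, 0.04):             # increasing precipitate volume fraction
    d = zener_grain_size(r_um=0.1, f=f)  # fixed precipitate size
    print(f"f={f:.2f}: d={d:5.1f} um, sigma_y={hall_petch(d):6.1f} MPa")
```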
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
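The R factors quoted above have a simple definition; the sketch below computes R_work and R_free on toy amplitudes, omitting the overall scale factor used in practice.

```python
# Crystallographic R factor: R = sum |F_obs - F_calc| / sum F_obs, with
# R_free computed on a held-out set of reflections excluded from refinement.
# Amplitudes below are toy numbers, not real diffraction data.
import numpy as np

def r_factor(f_obs: np.ndarray, f_calc: np.ndarray) -> float:
    return np.abs(f_obs - f_calc).sum() / f_obs.sum()

rng = np.random.default_rng(1)
f_obs = rng.uniform(10, 100, size=1000)
f_calc = f_obs * rng.normal(1.0, 0.15, size=1000)  # imperfect model
work = np.arange(1000) % 20 != 0                   # ~5% flagged as the free set
print("R_work =", round(r_factor(f_obs[work], f_calc[work]), 3))
print("R_free =", round(r_factor(f_obs[~work], f_calc[~work]), 3))
```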
Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao
2017-10-03
Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s⁻¹. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. Researchers have pursued it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction is inherently computationally complex. To overcome this, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms that learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of evaluating criteria for choosing nonterminals. Our algorithm refines the grammar based on a chosen nonterminal in each step. Since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.
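The core statistic any such EM-based induction computes is an expected rule count over a parse forest. The toy fragment below runs EM over a two-sentence forest with hypothetical rules; the paper's graphical EM and incremental refinement are considerably more sophisticated.

```python
# Toy EM over parse forests: E-step weights each candidate parse by its
# probability under the current rule probabilities; M-step renormalizes
# expected rule counts per left-hand side. Rules here are invented.
from collections import Counter

# Two sentences: the first has a single parse, the second is ambiguous.
forests = [
    [Counter({"S->NP VP": 1, "VP->V NP": 1, "NP->N": 2})],
    [Counter({"S->NP VP": 1, "VP->V NP": 1, "NP->N": 2}),
     Counter({"S->NP VP": 1, "VP->V NP PP": 1, "NP->N": 2})],
]
probs = {"S->NP VP": 1.0, "VP->V NP": 0.5, "VP->V NP PP": 0.5, "NP->N": 1.0}

for _ in range(20):
    expected = Counter()
    for forest in forests:
        # E-step: posterior of each parse given current rule probabilities.
        scores = []
        for parse in forest:
            s = 1.0
            for rule, cnt in parse.items():
                s *= probs[rule] ** cnt
            scores.append(s)
        z = sum(scores)
        for s, parse in zip(scores, forest):
            for rule, cnt in parse.items():
                expected[rule] += (s / z) * cnt
    # M-step: renormalize expected counts over rules sharing a left-hand side.
    for lhs in {r.split("->")[0] for r in probs}:
        total = sum(c for r, c in expected.items() if r.startswith(lhs + "->"))
        for r in [r for r in probs if r.startswith(lhs + "->")]:
            probs[r] = expected[r] / total

print({r: round(p, 3) for r, p in probs.items()})  # VP->V NP converges toward 1
```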
ISPE: A knowledge-based system for fluidization studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
ERIC Educational Resources Information Center
Pittman, Pamela Kay
2015-01-01
Teaching is an ever-evolving profession, one in which teachers must stay abreast of recent research and trends to continually deepen their knowledge and refine their skills. Therefore, teachers need high quality professional learning opportunities to help them master the content they teach and strengthen their teaching skills. Professional…
Valuing Exercises for the Middle School. Resource Monograph No. 11.
ERIC Educational Resources Information Center
Casteel, J. Doyle; And Others
One of the major goals of the middle school is to help students gain and refine skills in the area of values clarification. One way of securing such value clarification is to plan and assign value sheets--carefully planned and written activities designed to elicit value clarification patterns of language usage from students. Six different formats…
A Nongraded Phase Elective Senior High English Curriculum.
ERIC Educational Resources Information Center
South Bend Community School Corp., IN.
The course content in this nongraded phase elective curriculum is classified into Phase 1, designed for students who find reading, writing, and speaking difficult, Phase 2 for students who need to improve and refine basic skills at a somewhat slower pace, Phase 3 for those who have an average command of basic language skills and want to advance at…
Teacher/Mentor: A Dialogue for Collaborative Learning. The Practitioner Inquiry Series.
ERIC Educational Resources Information Center
Graham, Peg, Ed.; Hudson-Ross, Sally, Ed.; Adkins, Chandra, Ed.; McWhorter, Patti, Ed.; Stewart, Jennifer McDuffie, Ed.
Using the subjects of language arts and English in the secondary school setting, this collection of essays should inspire the development and refinement of teacher education at all levels and in all subject areas. The essays are written in the voices of public school teachers and grounded in everyday theory and practice of faculty in public…
An Automated Pronunciation - Hearing Instruction Aid: Refinements and Applications
ERIC Educational Resources Information Center
Smith, Donald L.
1975-01-01
Describes an automated device for teaching and testing the pronunciation of a language to a non-native speaker without the assistance of a trained teacher. This device was developed at the Research Institute of Logopedics and Phoniatrics at the University of Tokyo. Available from SIGLASH, c/o ACM, P.O. Box 12105, Church St. Station, New York, NY…
Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP
NASA Technical Reports Server (NTRS)
Norton, C.; Balsara, D.
1999-01-01
Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.
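As a tiny flavor of the refinement criterion at the heart of AMR, the sketch below splits 1-D cells where a solution changes too steeply; real structured-AMR frameworks, like the Fortran 90/OpenMP one described, manage whole grid hierarchies in parallel.

```python
# Minimal AMR-style refinement: recursively split 1-D cells whose local
# solution jump exceeds a tolerance, down to a maximum refinement level.
import math

def refine(cells, f, tol=0.5, max_level=8):
    """cells: list of (x_left, x_right); split cells with steep gradients."""
    out = []
    for (a, b) in cells:
        if abs(f(b) - f(a)) > tol and (b - a) > 2.0 ** -max_level:
            m = 0.5 * (a + b)
            out += refine([(a, m), (m, b)], f, tol, max_level)
        else:
            out.append((a, b))
    return out

# A steep front near x = 0.5 attracts most of the refined cells.
mesh = refine([(0.0, 1.0)], f=lambda x: math.tanh(40 * (x - 0.5)))
print(len(mesh), "cells; smallest width:", min(b - a for a, b in mesh))
```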
Readability--The Situation Today. Reading Education Report No. 70.
ERIC Educational Resources Information Center
Davison, Alice
Defining readability in both the narrow sense of formula use and refinement and the broader sense of the processing and comprehension of language, this paper argues for the need for more research focusing on readability as a way of improving the match between reader and text. Following a brief introduction, the paper reviews current readability…
Guiberson, Mark; Rodríguez, Barbara L
2010-08-01
To describe the concurrent validity and classification accuracy of 2 Spanish parent surveys of language development, the Spanish Ages and Stages Questionnaire (ASQ; Squires, Potter, & Bricker, 1999) and the Pilot Inventario-III (Pilot INV-III; Guiberson, 2008a). Forty-eight Spanish-speaking parents of preschool-age children participated. Twenty-two children had expressive language delays, and 26 had typical language development. The parents completed the Spanish ASQ and the Pilot INV-III at home, and the Preschool Language Scale, Fourth Edition: Spanish Edition (PLS-4 Spanish; Zimmerman, Steiner, & Pond, 2002) was administered to the children at preschool centers. The Spanish ASQ and Pilot INV-III were significantly correlated with the PLS-4 Spanish, establishing concurrent validity. On both surveys, children with expressive language delays scored significantly lower than children with typical development. The Spanish ASQ demonstrated unacceptably low sensitivity (59%) and good specificity (92%), while the Pilot INV-III demonstrated fair sensitivity (82%) and specificity (81%). Likelihood ratios and posttest probability revealed that the Pilot INV-III may assist in detection of expressive language delays, but viewed alone it is insufficient to make an unconditional screening determination. Results suggest that Spanish parent surveys hold promise for screening language delay in Spanish-speaking preschool children; however, further refinement of these tools is needed.
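The likelihood-ratio reasoning mentioned above follows directly from Bayes' rule on odds. The sketch below uses the 82%/81% sensitivity and specificity reported for the Pilot INV-III; the 30% pre-test probability is an illustrative assumption.

```python
# Screening statistics: positive likelihood ratio LR+ = sens/(1 - spec),
# and post-test probability via odds: post_odds = pre_odds * LR+.
def lr_positive(sens: float, spec: float) -> float:
    return sens / (1.0 - spec)

def posttest_prob(pretest: float, lr: float) -> float:
    odds = pretest / (1.0 - pretest)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

sens, spec = 0.82, 0.81                       # Pilot INV-III, from the abstract
lr = lr_positive(sens, spec)                  # about 4.3
print(f"LR+ = {lr:.1f}")
print(f"post-test prob = {posttest_prob(0.30, lr):.2f}")  # from 30% pre-test
```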
NASA Astrophysics Data System (ADS)
Boo, G.; Fabrikant, S. I.; Leyk, S.
2015-08-01
In spatial epidemiology, disease incidence and demographic data are commonly summarized within larger regions such as administrative units because of privacy concerns. As a consequence, analyses using these aggregated data are subject to the Modifiable Areal Unit Problem (MAUP) as the geographical manifestation of ecological fallacy. In this study, we create small-area disease estimates through dasymetric refinement and investigate the effects on predictive epidemiological models. We perform a binary dasymetric refinement of municipality-aggregated dog tumor incidence counts in Switzerland for the year 2008 using residential land as a limiting ancillary variable. This refinement is expected to improve the quality of spatial data originally aggregated within arbitrary administrative units by deconstructing them into discontinuous subregions that better reflect the underlying population distribution. To shed light on the effects of this refinement, we compare a predictive statistical model that uses unrefined administrative units with one that uses dasymetrically refined spatial units. Model diagnostics and spatial distributions of model residuals are assessed to evaluate the model performances in different regions. In particular, we explore changes in the spatial autocorrelation of the model residuals due to spatial refinement of the enumeration units in a selected mountainous region, where the rugged topography induces great shifts of the analytical units, i.e., residential land. Such spatial data quality refinement results in a more realistic estimation of the population distribution within administrative units, and thus, in a more accurate modeling of dog tumor incidence patterns. Our results emphasize the benefits of implementing a dasymetric modeling framework in veterinary spatial epidemiology.
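A minimal sketch of binary dasymetric refinement may clarify the mechanics: unit-level counts are reallocated only to cells classified as residential land. The arrays below are toy stand-ins for real raster data.

```python
# Binary dasymetric refinement: counts aggregated by administrative unit
# are redistributed uniformly over that unit's residential cells only.
import numpy as np

units = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2]])              # administrative unit per cell
residential = np.array([[1, 0, 1, 1],
                        [0, 0, 1, 0]], bool)  # ancillary land-cover mask
counts = {1: 8.0, 2: 9.0}                     # incidence counts per unit

refined = np.zeros(units.shape)
for unit_id, total in counts.items():
    mask = (units == unit_id) & residential   # restrict to residential land
    refined[mask] = total / mask.sum()        # uniform within those cells
print(refined)
```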
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2007-11-01
The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained by refining the description of the turbulent velocity field of the carrier fluid to include a difference between the time scales of the strain-rate and rotation-rate correlations. The refined model results in better agreement with direct numerical simulations for aerosol particles.
Li, Jia; Xu, Zhenming; Zhou, Yaohe
2008-05-30
Traditionally, the metal mixture from waste printed circuit boards (PCBs) was sent to a smelting plant for refining into pure copper. Some valuable metals (aluminum, zinc and tin) present at low content in PCBs were lost during smelting. A new method using a roll-type electrostatic separator (RES) to recover low-content metals from waste PCBs is presented in this study. A theoretical model, established by computing the electric field and analyzing the forces on the particles, was implemented as a program in the MATLAB language. The program was designed to simulate the process of separating mixed metal particles. Electrical, material and mechanical factors were analyzed to optimize the operating parameters of the separator. The experimental results of separating copper and aluminum particles by RES agreed well with the computer simulation results. The model can be used to simulate the separation of other metal (tin, zinc, etc.) particles in the process of recycling waste PCBs by RES.
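To give a flavor of the trajectory computation, the sketch below integrates the motion of a charged particle after it leaves the roll, under gravity plus an assumed uniform electric force; the charge, field, and geometry values are invented, and the paper's model computes the field and particle forces in far more detail.

```python
# Simplified post-detachment trajectory: a charged particle falling under
# gravity with a horizontal electric deflection; its landing position
# decides which collection bin it reaches. All values are illustrative.
import math

def trajectory(m, q, Ex, v0, angle_deg, dt=1e-4):
    """Integrate x,y motion until the particle falls 0.3 m below release."""
    a = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    while y > -0.3:
        vx += (q * Ex / m) * dt          # horizontal electric deflection
        vy += -9.81 * dt                 # gravity
        x += vx * dt
        y += vy * dt
    return x

rho = 8960.0                              # copper density, kg/m^3
m = rho * (4/3) * math.pi * (0.5e-3)**3   # 0.5 mm radius sphere
x_land = trajectory(m, q=1e-10, Ex=3e5, v0=1.0, angle_deg=0.0)
print(f"lands {x_land:.3f} m from the release point")
```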
Barrès, Victor; Simons, Arthur; Arbib, Michael
2013-01-01
Our previous work developed Synthetic Brain Imaging to link neural and schema network models of cognition and behavior to PET and fMRI studies of brain function. We here extend this approach to Synthetic Event-Related Potentials (Synthetic ERP). Although the method is of general applicability, we focus on ERP correlates of language processing in the human brain. The method has two components: Phase 1: To generate cortical electro-magnetic source activity from neural or schema network models; and Phase 2: To generate known neurolinguistic ERP data (ERP scalp voltage topographies and waveforms) from putative cortical source distributions and activities within a realistic anatomical model of the human brain and head. To illustrate the challenges of Phase 2 of the methodology, spatiotemporal information from Friederici's 2002 model of auditory language comprehension was used to define cortical regions and time courses of activation for implementation within a forward model of ERP data. The cortical regions from the 2002 model were modeled using atlas-based masks overlaid on the MNI high definition single subject cortical mesh. The electromagnetic contribution of each region was modeled using current dipoles whose position and orientation were constrained by the cortical geometry. In linking neural network computation via EEG forward modeling to empirical results in neurolinguistics, we emphasize the need for neural network models to link their architecture to geometrically sound models of the cortical surface, and the need for conceptual models to refine and adopt brain-atlas based approaches to allow precise brain anchoring of their modules. The detailed analysis of Phase 2 sets the stage for a brief introduction to Phase 1 of the program, including the case for a schema-theoretic approach to language production and perception presented in detail elsewhere. Unlike Dynamic Causal Modeling (DCM) and Bojak's mean field model, Synthetic ERP builds on models of networks that mediate the relation between the brain's inputs, outputs, and internal states in executing a specific task. The neural networks used for Synthetic ERP must include neuroanatomically realistic placement and orientation of the cortical pyramidal neurons. These constraints pose exciting challenges for future work in neural network modeling that is applicable to systems and cognitive neuroscience. Copyright © 2012 Elsevier Ltd. All rights reserved.
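Phase 2 reduces, at its core, to a linear forward model: scalp potentials as a lead-field mixture of dipole source activities, V(t) = L j(t). The sketch below uses a random placeholder lead field instead of one computed from a realistic head model; the source time courses are invented Gaussians.

```python
# Schematic EEG forward model: V = L @ j, where L is the lead-field (gain)
# matrix determined by head geometry and dipole placement. L is random here,
# standing in for one computed from a realistic anatomical model.
import numpy as np

rng = np.random.default_rng(42)
n_electrodes, n_dipoles, n_times = 64, 3, 200
L = rng.normal(size=(n_electrodes, n_dipoles))      # placeholder lead field

# Source time courses: region activations staggered in time, as a
# processing model might posit for successive language stages.
t = np.linspace(0.0, 0.6, n_times)                   # seconds post-stimulus
j = np.stack([np.exp(-0.5 * ((t - mu) / 0.05) ** 2)  # Gaussian activations
              for mu in (0.15, 0.40, 0.60)])

V = L @ j                                            # electrodes x time ERP
print(V.shape, "peak at electrode 0:", t[np.argmax(np.abs(V[0]))])
```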
Natural language acquisition in large scale neural semantic networks
NASA Astrophysics Data System (ADS)
Ealey, Douglas
This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well as, or better than, more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.
Computer simulation of refining process of a high consistency disc refiner based on CFD
NASA Astrophysics Data System (ADS)
Wang, Ping; Yang, Jianwei; Wang, Jiahui
2017-08-01
To reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high-consistency disc refiner. The pulp in the refiner was first assumed to be a uniform Newtonian fluid in a turbulent state, described by the k-ɛ turbulence model; the 3-D model of the disc refiner was then meshed and its boundary conditions were set; the flow was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, and hence to calculate the refining power, while streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bar and groove, which is of great significance for reducing experimental cost and cycle time.
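The step from simulated plate torque to refining power is simple enough to show inline; the sketch below, with illustrative numbers that are not values from the paper, computes net power as P = Tω and converts it to a specific refining energy per oven-dry tonne of fibre.

```python
import math

def refining_power(torque_nm, rpm):
    """Net refining power from plate torque: P = T * omega."""
    omega = 2.0 * math.pi * rpm / 60.0
    return torque_nm * omega

def specific_energy(power_w, pulp_flow_kg_s, consistency):
    """Specific refining energy in kWh per oven-dry tonne of fibre."""
    dry_fibre_t_per_h = pulp_flow_kg_s * consistency * 3.6  # kg/s -> t/h
    return power_w / 1000.0 / dry_fibre_t_per_h             # kW / (t/h) = kWh/t

# Hypothetical values, not from the paper
P = refining_power(torque_nm=1.2e4, rpm=1500)
print(f"power {P/1e6:.2f} MW,",
      f"{specific_energy(P, pulp_flow_kg_s=5.0, consistency=0.35):.0f} kWh/odt")
```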
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-08-18
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. Docking methods that employ low-resolution energy functions are popular because of their computational efficiency, but low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.
Building validation tools for knowledge-based systems
NASA Technical Reports Server (NTRS)
Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.
1987-01-01
The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.
Refinements in the Los Alamos model of the prompt fission neutron spectrum
Madland, D. G.; Kahler, A. C.
2017-01-01
This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982, as well as to a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.
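For readers who want a feel for the quantity being refined, the sketch below evaluates a simple Watt spectrum, a common closed-form stand-in for the prompt fission neutron spectrum. This is not the Los Alamos model itself, which derives the spectrum from fission-fragment physics rather than a two-parameter shape; the parameters here are values widely quoted for thermal-neutron-induced fission of U-235 and serve only as an illustration.

```python
import numpy as np

def watt(E, a=0.988, b=2.249):
    """Watt spectrum shape exp(-E/a)*sinh(sqrt(b*E)); a, b in MeV.
    Illustrative stand-in only; the Los Alamos model integrates over
    fragment excitation energies and level densities instead."""
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

E = np.linspace(1e-3, 15.0, 3000)      # outgoing neutron energy, MeV
chi = watt(E)
chi /= np.trapz(chi, E)                # normalise to unit integral
Ebar = np.trapz(E * chi, E)
print(f"mean prompt neutron energy ~ {Ebar:.2f} MeV")
```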
jInv: A Modular and Scalable Framework for Electromagnetic Inverse Problems
NASA Astrophysics Data System (ADS)
Belliveau, P. T.; Haber, E.
2016-12-01
Inversion is a key tool in the interpretation of geophysical electromagnetic (EM) data. Three-dimensional (3D) EM inversion is very computationally expensive and practical software for inverting large 3D EM surveys must be able to take advantage of high performance computing (HPC) resources. It has traditionally been difficult to achieve those goals in a high level dynamic programming environment that allows rapid development and testing of new algorithms, which is important in a research setting. With those goals in mind, we have developed jInv, a framework for PDE constrained parameter estimation problems. jInv provides optimization and regularization routines, a framework for user defined forward problems, and interfaces to several direct and iterative solvers for sparse linear systems. The forward modeling framework provides finite volume discretizations of differential operators on rectangular tensor product meshes and tetrahedral unstructured meshes that can be used to easily construct forward modeling and sensitivity routines for forward problems described by partial differential equations. jInv is written in the emerging programming language Julia. Julia is a dynamic language targeted at the computational science community with a focus on high performance and native support for parallel programming. We have developed frequency and time-domain EM forward modeling and sensitivity routines for jInv. We will illustrate its capabilities and performance with two synthetic time-domain EM inversion examples. First, in airborne surveys, which use many sources, we achieve distributed memory parallelism by decoupling the forward and inverse meshes and performing forward modeling for each source on small, locally refined meshes. Secondly, we invert grounded source time-domain data from a gradient array style induced polarization survey using a novel time-stepping technique that allows us to compute data from different time-steps in parallel. These examples both show that it is possible to invert large scale 3D time-domain EM datasets within a modular, extensible framework written in a high-level, easy to use programming language.
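jInv itself is written in Julia, but the regularized Gauss-Newton iteration at the core of such PDE-constrained parameter estimation is compact enough to sketch in Python, with a toy linear forward operator standing in for the Maxwell solve; the operator, data, and trade-off parameter below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "forward problem" d = A m; a real EM inversion would replace A
# with a Maxwell forward solve and its sensitivity (Jacobian) routines.
n_data, n_model = 40, 60
A = rng.normal(size=(n_data, n_model))
m_true = np.zeros(n_model)
m_true[25:35] = 1.0
d_obs = A @ m_true + 0.01 * rng.normal(size=n_data)

L = np.eye(n_model)   # regularization operator (identity -> smallest model)
beta = 1.0            # regularization trade-off parameter
m = np.zeros(n_model)

for it in range(3):   # Gauss-Newton; one step solves this linear toy exactly
    r = A @ m - d_obs                          # data residual
    grad = A.T @ r + beta * (L.T @ L @ m)
    H = A.T @ A + beta * (L.T @ L)             # Gauss-Newton Hessian approx.
    m -= np.linalg.solve(H, grad)
    print(f"iter {it}: misfit {0.5 * np.sum((A @ m - d_obs) ** 2):.4f}")
```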
Structure refinement of membrane proteins via molecular dynamics simulations.
Dutagaci, Bercem; Heo, Lim; Feig, Michael
2018-07-01
A refinement protocol based on physics-based techniques established for water-soluble proteins is tested on membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar for sampling in the presence of lipid bilayers and in aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.
Prins, Martin H; Marrel, Alexia; Carita, Paulo; Anderson, David; Bousser, Marie-Germaine; Crijns, Harry; Consoli, Silla; Arnould, Benoit
2009-01-01
Background: The side effects and burden of anticoagulant treatments may contribute to poor compliance and consequently to treatment failure. A specific questionnaire is necessary to assess patients' needs and their perceptions of anticoagulant treatment. Methods: A conceptual model of expectation of and satisfaction with anticoagulant treatment was designed by an advisory board and used to guide patient (n = 31) and clinician (n = 17) interviews in French, US English and Dutch. Patients had either atrial fibrillation (AF), deep venous thrombosis (DVT), or pulmonary embolism (PE). Following the interviews, three PACT-Q language versions were developed simultaneously and further pilot-tested by 19 patients. Linguistic validations were performed for additional language versions. Results: Initial concepts were developed to cover three areas of interest: 'Treatment', 'Disease and Complications' and 'Information about disease and anticoagulant treatment'. After the clinician and patient interviews, the concepts were refined into four domains and 17 concepts; test versions of the PACT-Q were then created simultaneously in three languages, each containing 27 items grouped into four domains: "Treatment Expectations" (7 items), "Convenience" (11 items), "Burden of Disease and Treatment" (2 items) and "Anticoagulant Treatment Satisfaction" (7 items). No item was deleted or added after pilot testing, as patients found the PACT-Q easy to understand and appropriate in length in all languages. The PACT-Q was divided into two parts: the first part measures expectations and the second measures convenience, burden and treatment satisfaction, for evaluation prior to and after anticoagulant treatment, respectively. Eleven additional language versions were linguistically validated. Conclusion: The PACT-Q has been rigorously developed and linguistically validated. It is available in 14 languages for use with thromboembolic patients, including AF, PE and DVT patients. Its validation and psychometric properties have been tested and are presented in a separate manuscript. PMID:19196486
ERIC Educational Resources Information Center
Malt, Barbara C.; White, Anne; Ameel, Eef; Storms, Gert
2016-01-01
Much has been said about children's strategies for mapping elements of meaning to words in toddlerhood. However, children continue to refine word meanings and patterns of word use into middle childhood and beyond, even for common words appearing in early vocabulary. We address where children past toddlerhood diverge from adults and where they more…
Assessment of a Refined Short Acculturation Scale for Latino Preteens in Rural Colorado.
ERIC Educational Resources Information Center
Serrano, Elena; Anderson, Jennifer
2003-01-01
The Short Acculturation Scale for Hispanic Youth (SASH-Y) was used to assess acculturation among 137 fourth- and fifth-grade children in rural southern Colorado, including 11 Mexican, 33 Mexican American, and 93 Euro-American children. The SASH-Y, especially questions related to language use, was found to be robust with a young, rural Latino…
Academic Writing in Context: Implications and Applications. Papers in Honour of Tony Dudley-Evans.
ERIC Educational Resources Information Center
Hewings, Martin, Ed.
The papers in this volume were collected to honor T. Dudley-Evans on his retirement from the University of Birmingham. They explore a number of themes of current interest to those engaged in English language teaching and academic writing. The papers are: (1) Introduction (Martin Hewings); (2) Distance and Refined Selves: Educational Tensions in…
Clinician-Oriented Access to Data - C.O.A.D.: A Natural Language Interface to a VA DHCP Database
Levy, Christine; Rogers, Elizabeth
1995-01-01
Hospitals collect enormous amounts of data related to the ongoing care of patients. Unfortunately, a clinician's access to the data is limited by the complexities of the database structure and/or the programming skills required to access the database. The COAD project attempts to bridge the gap between the clinical user's need for specific information from the database and the wealth of data residing in the hospital information system. The project design includes a natural language interface to data contained in a VA DHCP database. We have developed a prototype which links natural language software to certain DHCP data elements, including patient demographics, prescriptions, diagnoses, laboratory data, and provider information. English queries can be typed into the system, and answers to the questions are returned. Future work includes refinement of the natural language/DHCP connections to enable more sophisticated queries, and optimization of the system to reduce response time to user questions.
Model of Silicon Refining During Tapping: Removal of Ca, Al, and Other Selected Element Groups
NASA Astrophysics Data System (ADS)
Olsen, Jan Erik; Kero, Ida T.; Engh, Thorvald A.; Tranell, Gabriella
2017-04-01
A mathematical model for the industrial refining of silicon alloys has been developed for the so-called oxidative ladle refining process. It is a lumped (zero-dimensional) model, based on the mass balances of metal, slag, and gas in the ladle, developed to operate with relatively short computational times for the sake of industrial relevance. The model accounts for a semi-continuous process which includes both the tapping and post-tapping refining stages. It predicts the concentrations of Ca, Al, and trace elements, most notably the alkali metals, alkaline earth metals, and rare earth metals. The predictive power of the model depends on the quality of the model coefficients, the kinetic coefficient, τ, and the equilibrium partition coefficient, L, for a given element. A sensitivity analysis indicates that the model results are most sensitive to L. The model has been compared to industrial measurement data and found to be able to qualitatively, and to some extent quantitatively, predict the data. The model is very well suited for the alkali and alkaline earth metals, which respond relatively quickly to the refining process. It is less well suited for elements such as the lanthanides and Al, which are refined more slowly. A major challenge for predicting the behavior of the rare earth metals is that reliable thermodynamic data for true equilibrium conditions relevant to the industrial process are not typically available in the literature.
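The lumped structure of such a model can be illustrated with a one-element sketch: the melt concentration relaxes at rate 1/τ toward the equilibrium fixed by the partition coefficient L and the slag-to-metal mass ratio. The functional form and every number below are illustrative assumptions, not the published model, which couples tapping, slag growth, and many elements simultaneously.

```python
import numpy as np

def ladle_refining(c0, tau, L, slag_to_metal, t_end=60.0, dt=0.5):
    """Lumped first-order refining of one element: the melt concentration
    relaxes toward the equilibrium set by the slag/metal partition
    coefficient L and the slag-to-metal mass ratio (simple mass balance)."""
    t = np.arange(0.0, t_end + dt, dt)
    c = np.empty_like(t)
    c[0] = c0
    c_eq = c0 / (1.0 + L * slag_to_metal)   # equilibrium melt concentration
    for i in range(1, len(t)):
        c[i] = c[i - 1] - dt * (c[i - 1] - c_eq) / tau
    return t, c

# Fast-responding alkali-like element vs a slowly refined lanthanide-like one
for name, tau, L in [("alkali-like", 5.0, 50.0), ("lanthanide-like", 60.0, 5.0)]:
    t, c = ladle_refining(c0=100.0, tau=tau, L=L, slag_to_metal=0.05)
    print(f"{name}: {c[-1]:.1f} ppm left after {t[-1]:.0f} min")
```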
Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions
Mehl, S.; Hill, M.C.
2010-01-01
This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
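A minimal sketch of the traditional conductance calculation examined here makes the grid dependence easy to see: the per-cell conductance follows directly from the cell's share of the reach, so refining the grid changes the Cauchy-boundary values even though the physical streambed is unchanged. Parameter values are invented.

```python
def streambed_conductance(k_bed, bed_thickness, stream_width, reach_length):
    """Traditional Cauchy-boundary conductance of the stream reach crossing
    one finite-difference cell: C = K * (W * L) / b."""
    return k_bed * stream_width * reach_length / bed_thickness

def leakage(c, h_stream, h_aquifer):
    """Stream-aquifer exchange through the Cauchy boundary: Q = C * (hs - ha)."""
    return c * (h_stream - h_aquifer)

# The same physical reach split over finer cells: per-cell conductance shrinks
# with reach length, but the head difference each cell sees changes too, which
# is why a K value calibrated on a coarse grid need not transfer to a fine one.
for ncells in (1, 4, 16):
    c = streambed_conductance(k_bed=0.1, bed_thickness=0.5,
                              stream_width=5.0, reach_length=100.0 / ncells)
    print(f"{ncells:2d} cells: per-cell C = {c:8.2f} m^2/d")
```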
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree, or may leave the test set out completely.
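The free-kick idea can be sketched with a toy one-dimensional "crystal": compute structure factors from a model, displace the coordinates randomly, and use the resulting phase shifts as the model-error estimate in place of a held-out test set. The sketch below illustrates only the kicking step, with invented coordinates and kick size, not the full ML target.

```python
import numpy as np

rng = np.random.default_rng(1)

def structure_factors(x, h):
    """Unit-weight structure factors of point atoms at fractional
    coordinates x for Miller indices h: F(h) = sum_j exp(2*pi*i*h*x_j)."""
    return np.exp(2j * np.pi * np.outer(h, x)).sum(axis=1)

x_model = rng.random(50)                 # toy 1-D "model" coordinates
h = np.arange(1, 200)                    # reflections
F = structure_factors(x_model, h)

# Free-kick step: displace the model randomly and take the phase shift of the
# kicked model against the unperturbed one as the model-error estimate,
# instead of setting diffraction data aside as a cross-validation test set.
sigma_kick = 0.01                        # kick size (fractional coordinates)
phase_err = []
for _ in range(20):
    F_kick = structure_factors(x_model + sigma_kick * rng.normal(size=50), h)
    dphi = np.angle(F_kick * np.conj(F))  # per-reflection phase difference
    phase_err.append(np.abs(dphi).mean())
print(f"mean |phase shift| from kicks: {np.mean(phase_err):.3f} rad")
```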
Stosic, Dejan; Fagard, Benjamin; Sarda, Laure; Colin, Camille
2015-09-01
Fictive motion (FM) characterizes the use of dynamic expressions to describe static scenes. This phenomenon is crucial in terms of cognitive motivations for language use; several explanations have been proposed to account for it, among which mental simulation (Talmy in Toward a cognitive semantics, vol 1. MIT Press, Cambridge, 2000) and visual scanning (Matlock in Studies in linguistic motivation. Mouton de Gruyter, Berlin and New York, pp 221-248, 2004a). The aims of this paper were to test these competing explanations and identify language-specific constraints. To do this, we compared the linguistic strategies for expressing several types of static configurations in four languages, French, Italian, German and Serbian, with an experimental set-up (59 participants). The experiment yielded significant differences for motion-affordance versus no motion-affordance, for all four languages. Significant differences between languages included mean frequency of FM expressions. In order to refine the picture, and more specifically to disentangle the respective roles of language-specific conventions and language-independent (i.e. possibly cognitive) motivations, we completed our study with a corpus approach (besides the four initial languages, we added English and Polish). The corpus study showed low frequency of FM across languages, but a higher frequency and translation ratio for some FM types--among which those best accounted for by enactive perception. The importance of enactive perception could thus explain both the universality of FM and the fact that language-specific conventions appear mainly in very specific contexts--the ones furthest from enaction.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
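The splitting step is easy to sketch: cluster the state variables offline using their snapshot rows, then split a basis vector into children with disjoint support that sum exactly to the parent, so refinement can only enlarge the reduced space. The tiny k-means and all data below are illustrative stand-ins for the paper's recursive tree construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(rows, k, iters=50):
    """Tiny k-means on rows (one row of snapshot data per state variable)."""
    centers = rows[rng.choice(len(rows), k, replace=False)]
    for _ in range(iters):
        d = ((rows[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = rows[labels == j].mean(0)
    return labels

n_state, n_snap = 200, 30
snapshots = rng.normal(size=(n_state, n_snap))
phi = rng.normal(size=n_state)           # one reduced-basis vector

labels = kmeans(snapshots, k=2)          # offline clustering of state dofs
children = [np.where(labels == j, phi, 0.0) for j in range(2)]

# The children exactly reconstruct the parent, so splitting only adds
# resolution: the span of the children contains the original basis vector.
assert np.allclose(children[0] + children[1], phi)
print("split sizes:", [int((c != 0).sum()) for c in children])
```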
Benson, Nicholas F; Kranzler, John H; Floyd, Randy G
2016-10-01
Prior research examining relations between cognitive ability and academic achievement has been based on different theoretical models, has employed both latent variables and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
REFMAC5 for the refinement of macromolecular crystal structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murshudov, Garib N., E-mail: garib@ysbl.york.ac.uk; Skubák, Pavol; Lebedev, Andrey A.
The general principles behind the macromolecular crystal structure refinement program REFMAC5 are described. This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, ‘jelly-body’ restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography.
A hybrid-system model of the coagulation cascade: simulation, sensitivity, and validation.
Makin, Joseph G; Narayanan, Srini
2013-10-01
The process of human blood clotting involves a complex interaction of continuous-time/continuous-state processes and discrete-event/discrete-state phenomena, where the former comprise the various chemical rate equations and the latter comprise both threshold-limited behaviors and binary states (presence/absence of a chemical). Whereas previous blood-clotting models used only continuous dynamics and perforce addressed only portions of the coagulation cascade, we capture both continuous and discrete aspects by modeling it as a hybrid dynamical system. The model was implemented as a hybrid Petri net, a graphical modeling language that extends ordinary Petri nets to cover continuous quantities and continuous-time flows. The primary focus is simulation: (1) fidelity to the clinical data in terms of clotting-factor concentrations and elapsed time; (2) reproduction of known clotting pathologies; and (3) fine-grained predictions which may be used to refine clinical understanding of blood clotting. Next we examine sensitivity to rate-constant perturbation. Finally, we propose a method for titrating between reliance on the model and on prior clinical knowledge. For simplicity, we confine these last two analyses to a critical purely-continuous subsystem of the model.
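A minimal hybrid loop in the spirit of the model, with invented species and rates, shows how the continuous rate equations and a threshold-limited binary state interleave; the actual model is a hybrid Petri net covering the full cascade rather than this two-equation caricature.

```python
# Minimal hybrid dynamics: a precursor is converted to an active factor by an
# "enzyme" that switches on only after a trigger species crosses a threshold.
# Species names, rates and the threshold are invented for illustration.
dt, t_end = 0.01, 20.0
k_trigger, k_convert, threshold = 0.4, 1.5, 0.5

trigger, precursor, active = 0.0, 1.0, 0.0
enzyme_on = False                       # discrete (binary) state
t = 0.0
while t < t_end:
    # discrete-event layer: threshold-limited switching
    if not enzyme_on and trigger >= threshold:
        enzyme_on = True
        print(f"enzyme switched on at t = {t:.2f}")
    # continuous layer: forward-Euler step of the rate equations
    d_trigger = k_trigger * (1.0 - trigger)
    d_conv = k_convert * precursor if enzyme_on else 0.0
    trigger += dt * d_trigger
    precursor -= dt * d_conv
    active += dt * d_conv
    t += dt
print(f"final active fraction: {active:.3f}")
```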
NASA Astrophysics Data System (ADS)
Hoebelheinrich, N. J.; Lynnes, C.; West, P.; Ferritto, M.
2014-12-01
Two problems common to many geoscience domains are the difficulties in finding tools to work with a given dataset collection and, conversely, the difficulties in finding data for a known tool. A collaborative team from the Earth Science Information Partnership (ESIP) has gotten together to design and create a web service, called ToolMatch, to address these problems. The team began their efforts by defining an initial, relatively simple conceptual model that addressed the two use cases briefly described above. The conceptual model is expressed as an ontology using OWL (Web Ontology Language) and DCterms (Dublin Core Terms), and utilizing standard ontologies such as DOAP (Description of a Project), FOAF (Friend of a Friend), SKOS (Simple Knowledge Organization System) and DCAT (Data Catalog Vocabulary). The ToolMatch service will be taking advantage of various Semantic Web and Web standards, such as OpenSearch, RESTful web services, SWRL (Semantic Web Rule Language) and SPARQL (SPARQL Protocol and RDF Query Language). The first version of the ToolMatch service was deployed in early fall 2014. While more complete testing is required, a number of communities besides ESIP member organizations have expressed interest in collaborating to create, test and use the service and incorporate it into their own web pages, tools and/or services, including the USGS Data Catalog service, DataONE, the Deep Carbon Observatory, the Virtual Solar Terrestrial Observatory (VSTO), and the U.S. Global Change Research Program. In this session, presenters will discuss the inception and development of the ToolMatch service, the collaborative process used to design, refine, and test the service, and future plans for the service.
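A toy sketch of the first use case with rdflib shows the flavour of the approach: assert tool and dataset descriptions as triples, then ask SPARQL which tools can read a given dataset's format. The namespace and property names below are invented stand-ins, not the actual ToolMatch ontology, which builds on DOAP, FOAF, SKOS and DCAT.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Invented namespace and terms standing in for the ToolMatch ontology
TM = Namespace("http://example.org/toolmatch#")

g = Graph()
g.bind("tm", TM)
g.add((TM.Panoply, RDF.type, TM.Tool))
g.add((TM.Panoply, TM.readsFormat, Literal("NetCDF")))
g.add((TM.MODIS_L3, RDF.type, TM.Dataset))
g.add((TM.MODIS_L3, TM.distributedAs, Literal("NetCDF")))

# Use case 1: which tools can work with a given dataset's format?
q = """
SELECT ?tool WHERE {
  ?tool a tm:Tool ; tm:readsFormat ?fmt .
  tm:MODIS_L3 tm:distributedAs ?fmt .
}
"""
for row in g.query(q, initNs={"tm": TM}):
    print("matching tool:", row.tool)
```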
Machine learning to parse breast pathology reports in Chinese.
Tang, Rong; Ouyang, Lizhi; Li, Clara; He, Yue; Griffin, Molly; Taghian, Alphonse; Smith, Barbara; Yala, Adam; Barzilay, Regina; Hughes, Kevin
2018-06-01
Large structured databases of pathology findings are valuable in deriving new clinical insights. However, they are labor-intensive to create and generally require manual annotation. There has been some work in the bioinformatics community to support automating this work via machine learning in English. Our contribution is to provide an automated approach to construct such structured databases in Chinese, and to set the stage for extraction from other languages. We collected 2104 de-identified Chinese benign and malignant breast pathology reports from Hunan Cancer Hospital. Physicians with native Chinese proficiency reviewed the reports and annotated a variety of binary and numerical pathologic entities. After excluding 78 cases with a bilateral lesion in the same report, 1216 cases were used as a training set for the algorithm, which was then refined by 405 development cases. The natural language processing algorithm was tested by using the remaining 405 cases to evaluate the machine learning outcome. The model was used to extract 13 binary entities and 8 numerical entities. When compared to physicians with native Chinese proficiency, the model showed a per-entity accuracy from 91 to 100% for all common diagnoses on the test set. The overall accuracy of binary entities was 98% and of numerical entities was 95%. In a per-report evaluation for binary entities with more than 100 training cases, 85% of all the testing reports were completely correct and 11% had an error in 1 out of 22 entities. We have demonstrated that Chinese breast pathology reports can be automatically parsed into structured data using standard machine learning approaches. The results of our study demonstrate that techniques effective in parsing English reports can be scaled to other languages.
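A minimal stand-in for one binary entity extractor, using character n-grams (which sidestep Chinese word segmentation) and a linear classifier, is sketched below. The four toy snippets and the label are invented, and this is a generic illustration of the approach, not a claim about the study's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented miniature corpus: report snippets labelled for one binary entity
# (e.g. vascular tumor thrombus present/absent). The real training set held
# 1216 annotated reports; character n-grams avoid explicit word segmentation.
reports = ["可见脉管内癌栓", "未见明确脉管内癌栓", "脉管内见癌栓形成", "未见癌栓"]
labels = [1, 0, 1, 0]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
clf.fit(reports, labels)
print(clf.predict(["脉管内可见癌栓"]))   # expect the 'present' class
```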
ISPE: A knowledge-based system for fluidization studies. 1990 Annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all "specified goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
Using stakeholder engagement to develop a patient-centered pediatric asthma intervention.
Shelef, Deborah Q; Rand, Cynthia; Streisand, Randi; Horn, Ivor B; Yadav, Kabir; Stewart, Lisa; Fousheé, Naja; Waters, Damian; Teach, Stephen J
2016-12-01
Stakeholder engagement has the potential to develop research interventions that are responsive to patient and provider preferences. This approach contrasts with traditional models of clinical research in which researchers determine the study's design. This article describes the effect of stakeholder engagement on the design of a randomized trial of an intervention designed to improve child asthma outcomes by reducing parental stress. The study team developed and implemented a stakeholder engagement process that provided iterative feedback regarding the study design, patient-centered outcomes, and intervention. Stakeholder engagement incorporated the perspectives of parents of children with asthma; local providers of community-based medical, legal, and social services; and national experts in asthma research methodology and implementation. Through a year-long process of multidimensional stakeholder engagement, the research team successfully refined and implemented a patient-centered study protocol. Key stakeholder contributions included selection of patient-centered outcome measures, refinement of intervention content and format, and language framing the study in a culturally appropriate manner. Stakeholder engagement was a useful framework for developing an intervention that was acceptable and relevant to our target population. This approach might have unique benefits in underserved populations, leading to sustainable improvement in health outcomes and reduced disparities. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Gardner, David C.; And Others
Volume 1 of the final report on Project HIRE reports the design, development, field-testing, and refining of self-instructional packages to teach entry level technical vocabulary to learning handicapped students mainstreamed in vocational programs. Volume 2, a management handbook, reports the methods and findings concerning development of…
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality... achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allow... in recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation
Refining mass formulas for astrophysical applications: A Bayesian neural network approach
NASA Astrophysics Data System (ADS)
Utama, R.; Piekarewicz, J.
2017-10-01
Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role both in informing theoretical models and in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r-process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
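The two-pronged paradigm is easy to sketch: take a bare model's residuals against (here synthetic) "experimental" masses and fit them with a small neural network. The plain least-squares net below only illustrates the "model plus learned correction" structure; the paper's network is Bayesian, so its parameters carry posterior distributions and its predictions carry uncertainties, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: (Z, N) inputs and residuals between "experiment"
# and a bare mass model. All values are invented for illustration.
Z = rng.integers(20, 100, 500)
N = rng.integers(20, 150, 500)
X = np.c_[Z, N] / 100.0
resid = 0.5 * np.sin(Z / 30.0) * np.cos(N / 40.0) + 0.05 * rng.normal(size=500)

# One hidden layer, trained by plain gradient descent on squared error
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.2
for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)             # hidden layer
    pred = (H @ W2 + b2).ravel()
    err = pred - resid
    gH = (err[:, None] @ W2.T) * (1 - H ** 2)   # backprop through tanh
    W2 -= lr * (H.T @ err[:, None]) / len(X); b2 -= lr * err.mean()
    W1 -= lr * (X.T @ gH) / len(X);           b1 -= lr * gH.mean(0)

rms = np.sqrt(np.mean((pred - resid) ** 2))
print(f"rms of the correction's remaining residual: {rms:.3f} MeV (synthetic)")
```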
Earth Science Markup Language: Transitioning From Design to Application
NASA Technical Reports Server (NTRS)
Moe, Karen; Graves, Sara; Ramachandran, Rahul
2002-01-01
The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.
The scheme machine: A case study in progress in design derivation at system levels
NASA Technical Reports Server (NTRS)
Johnson, Steven D.
1995-01-01
The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980s, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large-scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language-oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection. It consists of four tightly coupled processes (processor, collector, allocator, memory) with a very non-trivial synchronization relationship. Finally, there are deep issues of representation for the run-time objects of a symbolic processing language. The research centers on verification through integrated formal reasoning systems, but is also involved with modeling and prototyping environments. Since the derivation algebra is based on an executable modeling language, there is an opportunity to incorporate design animation in the design process. We are looking for ways to move smoothly and incrementally from executable specifications into hardware realization. For example, we can run the garbage collector specification, a Scheme program, directly against the physical memory prototype, and similarly, the instruction processor model against the heap implementation.
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Jewett, M. E.; Kronauer, R. E.; Brown, E. N. (Principal Investigator)
1998-01-01
In 1990, Kronauer proposed a mathematical model of the effects of light on the human circadian pacemaker. Although this model predicted many general features of the response of the human circadian pacemaker to light exposure, additional data now available enable us to refine the original model. We first refined the original model by incorporating the results of a dose response curve to light into the model's predicted relationship between light intensity and the strength of the drive onto the pacemaker. Data from three bright light phase resetting experiments were then used to refine the amplitude recovery characteristics of the model. Finally, the model was tested and further refined using data from an extensive phase resetting experiment in which a 3-cycle bright light stimulus was presented against a background of dim light. In order to describe the results of the four resetting experiments, the following major refinements to the original model were necessary: (i) the relationship between light intensity (I) and drive onto the pacemaker was reduced from I^(1/3) to I^0.23 for light levels between 150 and 10,000 lux; (ii) the van der Pol oscillator from the original model was replaced with a higher-order limit cycle oscillator so that amplitude recovery is slower near the singularity and faster near the limit cycle; (iii) a direct effect of light on circadian period (τx) was incorporated into the model such that as I increases, τx decreases, which is in accordance with "Aschoff's rule". This refined model generates the following testable predictions: it should be difficult to enhance normal circadian amplitude via bright light; near the critical point of a type 0 phase response curve (PRC) the slope should be steeper than it is in a type 1 PRC; and circadian period measured during forced desynchrony should be directly affected by ambient light intensity.
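The refined ingredients can be caricatured in a few lines: a van der Pol-type limit-cycle oscillator driven by a light term that scales as I^0.23. The equations and every coefficient below are a generic illustration assumed for the sketch, not the published refined model, whose limit cycle is of higher order and whose period depends directly on light intensity.

```python
import numpy as np

def light_drive(I, I0=9500.0, strength=0.05):
    """Drive onto the pacemaker scaling as I**0.23, the refined
    intensity-response exponent; I0 and strength are invented here."""
    return strength * (max(I, 0.0) / I0) ** 0.23

def step(x, xc, B, dt, mu=0.13, tau_x=24.2):
    """One Euler step of a generic van der Pol-type oscillator with light
    drive B; the published refinement uses a higher-order limit cycle."""
    q = 24.0 / tau_x                      # sets the intrinsic period
    dx = np.pi / 12.0 * (xc + B)
    dxc = np.pi / 12.0 * (mu * (xc - 4.0 * xc ** 3 / 3.0) - x * q ** 2)
    return x + dt * dx, xc + dt * dxc

x, xc, dt = 1.0, 0.0, 0.05                # state and step size (hours)
for hour in np.arange(0.0, 72.0, dt):
    I = 10000.0 if 8.0 <= hour % 24.0 < 16.0 else 0.0  # daily bright-light pulse
    x, xc = step(x, xc, light_drive(I), dt)
print(f"state after 72 h: x = {x:.2f}, xc = {xc:.2f}")
```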
DOE Office of Scientific and Technical Information (OSTI.GOV)
Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.
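The RPC form itself is compact: each image coordinate is a ratio of polynomials in normalized ground coordinates. The sketch below uses a shortened 7-term basis and invented coefficients for readability; operational RPCs use 20-term cubic polynomials, and refinement of the kind described here would absorb the estimated range delay and azimuth timing corrections into the coefficients.

```python
import numpy as np

def rpc_ratio(num, den, lat, lon, h):
    """Evaluate one RPC image coordinate as a ratio of polynomials in
    normalized (lat, lon, h). Real RPCs use 20-term cubic polynomials;
    this shortened 7-term basis keeps the sketch readable."""
    basis = np.array([1.0, lat, lon, h, lat * lon, lat * h, lon * h])
    return (basis @ num) / (basis @ den)

# Invented coefficients for illustration only
num_row = np.array([0.01, -1.02, 0.00, 0.05, 0.001, 0.0, 0.0])
den_row = np.array([1.00, 0.00, 0.00, 0.01, 0.0, 0.0, 0.0])
row = rpc_ratio(num_row, den_row, lat=0.2, lon=-0.1, h=0.05)
print(f"normalized image row: {row:.4f}")
```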
Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method
Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan
2012-01-01
The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low-resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested the constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid-body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint-free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. For the eight proteins tested, with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and possibly applicable for high-throughput protein structure refinement. PMID:22260550
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the 3-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been quite slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy-biasing methods, Structure-Based Model (SBM) and Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, in the refinement test of two CASP10 targets using the PCST-EBM method, it is indicated that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results justify the crucial position of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
The ideomotor recycling theory for tool use, language, and foresight.
Badets, Arnaud; Osiurak, François
2017-02-01
The present theoretical framework highlights a common action-perception mechanism for tool use, spoken language, and foresight capacity. On the one hand, it has been suggested that human language and the capacity to envision the future (i.e. foresight) have, from an evolutionary viewpoint, developed mutually along with the pressure of tool use. This co-evolution has afforded humans an evident survival advantage in the animal kingdom because language can help to refine the representation of future scenarios, which in turn can help to encourage or discourage engagement in appropriate and efficient behaviours. On the other hand, recent assumptions regarding the evolution of the brain have capitalized on the concept of "neuronal recycling". In the domain of cognitive neuroscience, neuronal recycling means that during evolution, some neuronal areas and cognitive functions have been recycled to manage new environmental and social constraints. In the present article, we propose that the co-evolution of tool use, language, and foresight represents a suitable example of such functional recycling through a well-defined common action-perception mechanism, i.e. the ideomotor mechanism. This ideomotor account is discussed in light of different future ontogenetic and phylogenetic perspectives.
Mehl, Steffen W.; Hill, Mary C.
2007-01-01
This report documents the addition of the multiple-refined-areas capability to the shared-node Local Grid Refinement (LGR) and Boundary Flow and Head (BFH) Packages of MODFLOW-2005, the U.S. Geological Survey modular, three-dimensional, finite-difference ground-water flow model. LGR now provides the capability to simulate ground-water flow by using one or more block-shaped, higher-resolution local grids (child models) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. The ability to have multiple, nonoverlapping areas of refinement is important in situations where there is more than one area of concern within a regional model. In this circumstance, LGR can be used to simulate these distinct areas with higher-resolution grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. The BFH Package can be used to simulate these situations by using either the parent or child models independently.
Key cognitive preconditions for the evolution of language.
Donald, Merlin
2017-02-01
Languages are socially constructed systems of expression, generated interactively in social networks, which can be assimilated by the individual brain as it develops. Languages co-evolved with culture, reflecting the changing complexity of human culture as it acquired the properties of a distributed cognitive system. Two key preconditions set the stage for the evolution of such cultures: a very general ability to rehearse and refine skills (evident early in hominin evolution in toolmaking), and the emergence of material culture as an external (to the brain) memory record that could retain and accumulate knowledge across generations. The ability to practice and rehearse skill provided immediate survival-related benefits in that it expanded the physical powers of early hominins, but the same adaptation also provided the imaginative substrate for a system of "mimetic" expression, such as found in ritual and pantomime, and in proto-words, which performed an expressive function somewhat like the home signs of deaf non-signers. The hominid brain continued to adapt to the increasing importance and complexity of culture as human interactions with material culture became more complex; above all, this entailed a gradual expansion in the integrative systems of the brain, especially those involved in the metacognitive supervision of self-performances. This supported a style of embodied mimetic imagination that improved the coordination of shared activities such as fire tending, but also in rituals and reciprocal mimetic games. The time-depth of this mimetic adaptation, and its role in both the construction and acquisition of languages, explains the importance of mimetic expression in the media, religion, and politics. Spoken language evolved out of voco-mimesis, and emerged long after the more basic abilities needed to refine skill and share intentions, probably coinciding with the common ancestor of sapient humans. Self-monitoring and self-supervised practice were necessary preconditions for lexical invention, and as these abilities evolved further, communicative skills extended to more abstract and complex aspects of the communication environments (that is, the "cognitive ecologies") being generated by human groups. The hominin brain adapted continuously to the need to assimilate language and its many cognitive byproducts by expanding many of its higher integrative systems, a process that seems to have accelerated and peaked in the past half million years.
Which Melodic Universals Emerge from Repeated Signaling Games? A Note on Lumaca and Baggio (2017) ‡.
Ravignani, Andrea; Verhoef, Tessa
2018-01-01
Music is a peculiar human behavior, yet we still know little as to why and how music emerged. For centuries, the study of music has been the sole prerogative of the humanities. Lately, however, music is being increasingly investigated by psychologists, neuroscientists, biologists, and computer scientists. One approach to studying the origins of music is to empirically test hypotheses about the mechanisms behind this structured behavior. Recent lab experiments show how musical rhythm and melody can emerge via the process of cultural transmission. In particular, Lumaca and Baggio (2017) tested the emergence of a sound system at the boundary between music and language. In this study, participants were given random pairs of signal-meanings; when participants negotiated their meaning and played a "game of telephone" with them, these pairs became more structured and systematic. Over time, the small biases introduced in each artificial transmission step accumulated, displaying quantitative trends, including the emergence, over the course of artificial human generations, of features resembling properties of language and music. In this Note, we highlight the importance of Lumaca and Baggio's experiment, place it in the broader literature on the evolution of language and music, and suggest refinements for future experiments. We conclude that, while psychological evidence for the emergence of proto-musical features is accumulating, complementary work is needed: Mathematical modeling and computer simulations should be used to test the internal consistency of experimentally generated hypotheses and to make new predictions.
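A toy transmission-chain simulation in the spirit of such experiments (parameters and the discretization bias are invented for illustration; this is not Lumaca and Baggio's design) shows how a weak per-generation bias plus perceptual noise can accumulate into discrete, music-like structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit(signal, noise=0.3, bias=0.4):
    """One 'generation': perceive the signal with noise, then regularize it
    weakly toward a discrete 12-tone grid (a stand-in for a cognitive bias)."""
    perceived = signal + rng.normal(0.0, noise, size=signal.shape)
    return (1 - bias) * perceived + bias * np.round(perceived)

signal = rng.uniform(0.0, 12.0, size=5)   # random initial 'melody' (semitones)
for generation in range(30):
    signal = transmit(signal)

# After many generations the pitches tend to cluster near integer (discrete)
# values far more than the random starting melody did.
print(np.round(signal, 2), np.abs(signal - np.round(signal)).max())
```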
Refinement of protein termini in template-based modeling using conformational space annealing.
Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung
2011-09-01
The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable or superior to the best server methods tested in CASP8. The success of the current approach suggests that a similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.
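A skeleton of conformational space annealing on a toy two-term "energy" (the two functions below stand in for the physics-based and knowledge-based potentials; the weight, move set and cutoff schedule are invented for illustration and are not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the two energy classes, combined with a fixed weight w.
def e_physics(x):   return np.sum(x**2)                  # smooth funnel term
def e_knowledge(x): return np.sum(np.cos(3.0 * x))       # rugged statistical term
def energy(x, w=0.5): return e_physics(x) + w * e_knowledge(x)

bank = rng.uniform(-3, 3, size=(20, 2))                  # bank of conformations
E = np.array([energy(x) for x in bank])
d_cut = 2.0                                              # annealed distance cutoff

for step in range(2000):
    i, j = rng.choice(len(bank), 2, replace=False)
    trial = 0.5 * (bank[i] + bank[j]) + rng.normal(0, 0.2, 2)  # crossover + perturb
    e_t = energy(trial)
    dist = np.linalg.norm(bank - trial, axis=1)
    k = np.argmin(dist)
    if dist[k] < d_cut:                  # trial resembles bank member k:
        if e_t < E[k]:                   # replace it only if the trial is better
            bank[k], E[k] = trial, e_t
    elif e_t < E.max():                  # otherwise displace the worst member
        k = np.argmax(E)
        bank[k], E[k] = trial, e_t
    d_cut = max(0.1, d_cut * 0.999)      # anneal the cutoff: diversify, then refine

print("best energy:", E.min(), "at", bank[np.argmin(E)])
```

Shrinking the distance cutoff is what distinguishes CSA from plain genetic search: early on, diverse low-energy basins are kept apart; later, the bank is allowed to concentrate in the best basins.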
Variability of Protein Structure Models from Electron Microscopy.
Monroe, Lyman; Terashi, Genki; Kihara, Daisuke
2017-04-04
An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much structures can be refined, using a dataset of 49 maps with accompanying structure models. The extent of structure modification, as well as the disagreement between the refined models produced by the two computational methods, scaled inversely with the global and local map resolutions. A general quantitative estimate of structural deviations at particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited and the refined models is due to the lack of structural information present in EM maps, and thus these models must be used with caution in further applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
X-ray structure determination at low resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunger, Axel T., E-mail: brunger@stanford.edu; Department of Molecular and Cellular Physiology, Stanford University; Department of Neurology and Neurological Sciences, Stanford University
2009-02-01
Refinement is meaningful even at 4 Å or lower, but with present methodologies it should start from high-resolution crystal structures whenever possible. As an example of structure determination in the 3.5–4.5 Å resolution range, crystal structures of the ATPase p97/VCP, consisting of an N-terminal domain followed by a tandem pair of ATPase domains (D1 and D2), are discussed. The structures were originally solved by molecular replacement with the high-resolution structure of the N-D1 fragment of p97/VCP, whereas the D2 domain was manually built using its homology to the D1 domain as a guide. The structure of the D2 domain alone was subsequently solved at 3 Å resolution. The refined model of D2 and the high-resolution structure of the N-D1 fragment were then used as starting models for re-refinement against the low-resolution diffraction data for full-length p97. The re-refined full-length models showed significant improvement in both secondary structure and R values. The free R values dropped by as much as 5% compared with the original structure refinements, indicating that refinement is meaningful at low resolution and that there is information in the diffraction data even at ∼4 Å resolution that objectively assesses the quality of the model. It is concluded that de novo model building is problematic at low resolution and refinement should start from high-resolution crystal structures whenever possible.
Koparde, Vishal N.; Scarsdale, J. Neel; Kellogg, Glen E.
2011-01-01
Background The quality of X-ray crystallographic models for biomacromolecules refined from data obtained at high-resolution is assured by the data itself. However, at low-resolution, >3.0 Å, additional information is supplied by a forcefield coupled with an associated refinement protocol. These resulting structures are often of lower quality and thus unsuitable for downstream activities like structure-based drug discovery. Methodology An X-ray crystallography refinement protocol that enhances standard methodology by incorporating energy terms from the HINT (Hydropathic INTeractions) empirical forcefield is described. This protocol was tested by refining synthetic low-resolution structural data derived from 25 diverse high-resolution structures, and referencing the resulting models to these structures. The models were also evaluated with global structural quality metrics, e.g., Ramachandran score and MolProbity clashscore. Three additional structures, for which only low-resolution data are available, were also re-refined with this methodology. Results The enhanced refinement protocol is most beneficial for reflection data at resolutions of 3.0 Å or worse. At the low-resolution limit, ≥4.0 Å, the new protocol generated models with Cα positions that have RMSDs that are 0.18 Å more similar to the reference high-resolution structure, Ramachandran scores improved by 13%, and clashscores improved by 51%, all in comparison to models generated with the standard refinement protocol. The hydropathic forcefield terms are at least as effective as Coulombic electrostatic terms in maintaining polar interaction networks, and significantly more effective in maintaining hydrophobic networks, as synthetic resolution is decremented. Even at resolutions ≥4.0 Å, these latter networks are generally native-like, as measured with a hydropathic interactions scoring tool. PMID:21246043
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the 'best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
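Schematically, such Bayesian refinement maximizes a posterior that factorizes into an experimental likelihood and restraint priors; in a generic notation (chosen for this note, not REFMAC5's exact functional form):

$$
\hat{\theta} \;=\; \arg\min_{\theta}\Big[-\log P(\mathbf{F}_{\mathrm{obs}}\mid\theta)\;-\;\log P_{\mathrm{chem}}(\theta)\;-\;\log P_{\mathrm{ext}}(\theta)\Big]
$$

where the parameter vector θ collects the atomic coordinates, B factors, scale, solvent and twin parameters; the first term scores agreement with the observed amplitudes or intensities; and the last two terms encode the chemical-restraint library and any external restraints (homologous structures, secondary-structure knowledge, NMR-derived data).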
Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.
McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J
2018-04-01
Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.
Network Policy Languages: A Survey and a New Approach
2000-08-01
go, throwing a community into economic chaos. These events could result from discontinued funding from Silicon Valley investors who became aware of...prevented. C. POLICY HIERARCHIES FOR DISTRIBUTED SYSTEMS MANAGEMENT In [23], Moffett and Sloman form a policy hierarchy by refining general high-level...management behavior of a system, without coding the behavior into the manager agents. Lupu and Sloman focus on techniques and tool support for off-line
Milius, Robert P; Heuer, Michael; Valiga, Daniel; Doroschak, Kathryn J; Kennedy, Caleb J; Bolon, Yung-Tsi; Schneider, Joel; Pollack, Jane; Kim, Hwa Ran; Cereb, Nezih; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin
2015-12-01
We present an electronic format for exchanging data for HLA and KIR genotyping with extensions for next-generation sequencing (NGS). This format addresses NGS data exchange by refining the Histoimmunogenetics Markup Language (HML) to conform to the proposed Minimum Information for Reporting Immunogenomic NGS Genotyping (MIRING) reporting guidelines (miring.immunogenomics.org). Our refinements of HML include two major additions. First, NGS is supported by new XML structures to capture additional NGS data and metadata required to produce a genotyping result, including analysis-dependent (dynamic) and method-dependent (static) components. A full genotype, consensus sequence, and the surrounding metadata are included directly, while the raw sequence reads and platform documentation are externally referenced. Second, genotype ambiguity is fully represented by integrating Genotype List Strings, which use a hierarchical set of delimiters to represent allele and genotype ambiguity in a complete and accurate fashion. HML also continues to enable the transmission of legacy methods (e.g. site-specific oligonucleotide, sequence-specific priming, and Sequence Based Typing (SBT)), adding features such as allowing multiple group-specific sequencing primers, and fully leveraging techniques that combine multiple methods to obtain a single result, such as SBT integrated with NGS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
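A sketch of parsing by the hierarchical delimiters of a Genotype List String (the precedence used here, `^`, `|`, `+`, `~`, `/` from outermost to innermost, follows the published GL String convention; the alleles in the example are arbitrary):

```python
# Recursive split of a Genotype List (GL) String by its delimiter hierarchy,
# outermost to innermost: '^' separates loci, '|' separates alternative
# genotypes, '+' separates the two gene copies, '~' joins phased (in cis)
# alleles, '/' separates ambiguous allele assignments.
DELIMS = ["^", "|", "+", "~", "/"]

def parse_gl(s, level=0):
    if level == len(DELIMS):
        return s                                  # a single allele name
    parts = s.split(DELIMS[level])
    if len(parts) == 1:
        return parse_gl(s, level + 1)             # delimiter absent: descend
    return {DELIMS[level]: [parse_gl(p, level + 1) for p in parts]}

# Hypothetical two-locus genotype with an ambiguous allele on one copy:
print(parse_gl("HLA-A*01:01+HLA-A*02:01/HLA-A*02:02^HLA-B*07:02+HLA-B*08:01"))
```

Because each delimiter level nests strictly inside the previous one, the resulting tree represents allele and genotype ambiguity completely, which is what lets HML carry an ambiguous NGS result without loss.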
Mehl, S.; Hill, M.C.
2004-01-01
This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
Cognitive search model and a new query paradigm
NASA Astrophysics Data System (ADS)
Xu, Zhonghui
2001-06-01
This paper proposes a cognitive model in which people begin to search pictures by using semantic content and find the right picture by judging whether its visual content is a proper visualization of the semantics desired. It is essential that human search is not just a process of matching computation on visual features but rather a process of visualization of the semantic content known. For people to search electronic images in the way they manually do in the model, we suggest that querying be a semantic-driven process like design. A query-by-design paradigm is proposed, in the sense that what you design is what you find. Unlike query-by-example, query-by-design allows users to specify the semantic content through an iterative and incremental interaction process, so that a retrieval can start with association and identification of the given semantic content and be refined as further visual cues become available. An experimental image retrieval system, Kuafu, has been under development using the query-by-design paradigm, and an iconic language is adopted.
Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K
2013-04-01
When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å. Copyright © 2013 Elsevier Inc. All rights reserved.
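A numerical sketch of the density calculation described: a Gaussian-sum form factor attenuated by a B-factor-like envelope and a hard low-pass filter, then spherically Fourier-transformed to a radial density profile (the coefficients, B value and cutoff below are illustrative, not element-specific values from RSRef):

```python
import numpy as np

# Illustrative two-Gaussian form factor f(s); a real refinement would use
# tabulated element coefficients. s is spatial frequency (1/Angstrom).
a, b = np.array([2.0, 1.0]), np.array([15.0, 40.0])
def form_factor(s):
    return sum(ai * np.exp(-bi * s**2) for ai, bi in zip(a, b))

def envelope(s, B=150.0, s_cut=1/4.5):
    """Gaussian microscope envelope (B-factor-like) times a hard low-pass
    filter at the nominal map resolution (here 4.5 Angstrom)."""
    return np.exp(-B * s**2 / 4.0) * (s <= s_cut)

# Spherical (radial) Fourier transform:
#   rho(r) = integral of 4 pi s^2 f(s) sinc(2 s r) ds
s = np.linspace(1e-6, 1.0, 4000)
ds = s[1] - s[0]
fs = form_factor(s) * envelope(s)
for r in (0.5, 1.0, 2.0, 3.0):
    kernel = np.sinc(2.0 * s * r)          # numpy sinc(x) = sin(pi x)/(pi x)
    rho = np.sum(4.0 * np.pi * s**2 * fs * kernel) * ds
    print(f"r = {r:3.1f} A   rho = {rho:8.4f}")
```

Because the envelope B and the cutoff enter the calculated density directly, agreement between calculated and experimental maps can be optimized with respect to these imaging parameters, which is the idea the abstract describes.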
The PDB_REDO server for macromolecular structure model optimization.
Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis
2014-07-01
The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
A requirements specification for a software design support system
NASA Technical Reports Server (NTRS)
Noonan, Robert E.
1988-01-01
Most existing software design systems (SDSS) support the use of only a single design methodology. A good SDSS should support a wide variety of design methods and languages, including structured design, object-oriented design, and finite state machines. It might seem that a multiparadigm SDSS would be expensive in both time and money to construct. However, it is proposed instead that an extensible SDSS be constructed that directly implements only minimal database and graphical facilities. In particular, it should not directly implement tools to facilitate language definition and analysis. It is believed that such a system could be rapidly developed and put into limited production use, with the experience gained used to refine and evolve the system over time.
NASA Astrophysics Data System (ADS)
Dennis, L.; Roesler, E. L.; Guba, O.; Hillman, B. R.; McChesney, M.
2016-12-01
The Atmospheric Radiation Measurement (ARM) climate research facility has three sites located on the North Slope of Alaska (NSA): Barrow, Oliktok, and Atqasuk. These sites, in combination with one other at Toolik Lake, have the potential to become a "megasite" which would combine observational data and high-resolution modeling to produce high-resolution data products for the climate community. Such a data product requires high-resolution modeling over the area of the megasite. We present three variable-resolution atmospheric general circulation model (AGCM) configurations as potential alternatives to stand-alone high-resolution regional models. Each configuration is based on a global cubed-sphere grid with an effective resolution of 1 degree, with a refinement in resolution down to 1/8 degree over an area surrounding the ARM megasite. The three grids vary in the size of the refined area, with 13k, 9k, and 7k elements. SquadGen, NCL, and GIMP are used to create the grids. Grids vary based upon the selection of areas of refinement which capture climate and weather processes that may affect a proposed NSA megasite. A smaller area of high resolution may not fully resolve climate and weather processes before they reach the NSA; however, grids with smaller areas of refinement have a significantly reduced computational cost compared with grids with larger areas of refinement. The optimal size and shape of the area of refinement for a variable-resolution model at the NSA is investigated.
Hoard, C.J.
2010-01-01
The U.S. Geological Survey is evaluating water availability and use within the Great Lakes Basin. This is a pilot effort to develop new techniques and methods to aid in the assessment of water availability. As part of the pilot program, a regional groundwater-flow model for the Lake Michigan Basin was developed using SEAWAT-2000. The regional model was used as a framework for assessing local-scale water availability through grid-refinement techniques. Two grid-refinement techniques, telescopic mesh refinement and local grid refinement, were used to illustrate the capability of the regional model to evaluate local-scale problems. An intermediate model was developed in central Michigan spanning an area of 454 square miles (mi2) using telescopic mesh refinement. Within the intermediate model, a smaller local model covering an area of 21.7 mi2 was developed and simulated using local grid refinement. Recharge was distributed in space and time using daily output from a modified Thornthwaite-Mather soil-water-balance method. The soil-water-balance method derived recharge estimates from temperature and precipitation data output from an atmosphere-ocean coupled general-circulation model. The particular atmosphere-ocean coupled general-circulation model used simulated climate change caused by high global greenhouse-gas emissions to the atmosphere. The surface-water network simulated in the regional model was refined and simulated using a streamflow-routing package for MODFLOW. The refined models were used to demonstrate streamflow depletion and potential climate change using five scenarios. The streamflow-depletion scenarios include (1) natural conditions (no pumping), (2) a pumping well near a stream, with the well screened in surficial glacial deposits, (3) a pumping well near a stream, with the well screened in deeper glacial deposits, and (4) a pumping well near a stream, with the well open to a deep bedrock aquifer. Results indicated that 59 and 50 percent of the water pumped originated from the stream for the shallow glacial and deep bedrock pumping scenarios, respectively. The difference in streamflow reduction between the shallow and deep pumping scenarios was compensated for in the deep well by deriving more water from regional sources. The climate-change scenario simulated only natural conditions (no pumping stress) for 1991-2044. Streamflows were calculated for the simulated period and indicated that recharge generally increased from the start of the simulation until approximately 2017, and decreased from then to the end of the simulation. Streamflow was highly correlated with recharge, so the lowest streamflows occurred in the later stress periods of the model, when recharge was lowest.
REFINEMENT OF A MODEL TO PREDICT THE PERMEATION OF PROTECTIVE CLOTHING MATERIALS
A prototype of a predictive model for estimating chemical permeation through protective clothing materials was refined and tested. The model applies Fickian diffusion theory and predicts permeation rates and cumulative permeation as a function of time for five materials: butyl rub...
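Fickian permeation models of this kind typically build on the classical constant-source membrane solution (Crank); the report's material-specific parameter values are not reproduced here:

$$
\frac{Q(t)}{\ell\,c_{1}} \;=\; \frac{D\,t}{\ell^{2}} \;-\; \frac{1}{6} \;-\; \frac{2}{\pi^{2}}\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n^{2}}\exp\!\left(-\frac{D\,n^{2}\pi^{2}\,t}{\ell^{2}}\right)
$$

where Q(t) is the cumulative permeation per unit area, D the diffusion coefficient, ℓ the material thickness and c₁ the concentration at the challenge-side surface. The steady-state permeation rate is D c₁/ℓ and the breakthrough lag time is ℓ²/(6D), which is why thickness and diffusivity dominate such predictions.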
Current concepts in adult aphasia.
McNeil, M R
1984-01-01
This paper provides a review of recent research from the areas of speech and language pathology, cognitive psychology, psycholinguistics, neurology, and rehabilitation medicine, which is used to refine and extend current definitions of aphasia. Evidence is presented from these diverse disciplines which supports a multimodality, performance-based, verbal and non-verbal, cortical and subcortical, and cognitively multidimensional view of aphasia. Current practice in the assessment and treatment of adult aphasia is also summarized.
Macromolecular refinement by model morphing using non-atomic parameterizations.
Cowtan, Kevin; Agirre, Jon
2018-02-01
Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.
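A toy of the non-atomic parameterization idea: a coarse grid of shift parameters is upsampled to per-voxel displacements and applied to the map, so a handful of parameters moves whole density regions without touching any atoms (the scipy calls are real; the array sizes and values are arbitrary):

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

rng = np.random.default_rng(2)
density = rng.random((32, 32))                 # stand-in 2-D 'electron density'

# Non-atomic parameterization: one (dy, dx) shift per 8x8 region (a 4x4
# control grid), i.e. 32 parameters instead of per-atom coordinates.
shifts = rng.normal(0.0, 1.5, size=(2, 4, 4))

# Upsample the coarse shift field to per-voxel displacements, then warp the map.
dense = np.stack([zoom(shifts[k], 8, order=1) for k in range(2)])
yy, xx = np.mgrid[0:32, 0:32].astype(float)
warped = map_coordinates(density, [yy + dense[0], xx + dense[1]],
                         order=1, mode="nearest")

print(density.shape, warped.shape)             # the map moved; no atoms touched
```

Enlarging the control-grid spacing coarsens the parameterization, which is how such a method can be matched to the resolution of the data without stereochemical restraints.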
Coloured Petri Net Refinement Specification and Correctness Proof with Coq
NASA Technical Reports Server (NTRS)
Choppy, Christine; Mayero, Micaela; Petrucci, Laure
2009-01-01
In this work, we address the formalisation in Coq of the refinement of symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models, and of their type refinement, in Coq. Then the Coq proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.
2016-01-01
Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538
Early Bimodal Stimulation Benefits Language Acquisition for Children With Cochlear Implants.
Moberly, Aaron C; Lowenstein, Joanna H; Nittrouer, Susan
2016-01-01
Adding a low-frequency acoustic signal to the cochlear implant (CI) signal (i.e., bimodal stimulation) for a period of time early in life improves language acquisition. Children must acquire sensitivity to the phonemic units of language to develop most language-related skills, including expressive vocabulary, working memory, and reading. Acquiring sensitivity to phonemic structure depends largely on having refined spectral (frequency) representations available in the signal, which does not happen with CIs alone. Combining the low-frequency acoustic signal available through hearing aids with the CI signal can enhance signal quality. A period with this bimodal stimulation has been shown to improve language skills in very young children. This study examined whether these benefits persist into childhood. Data were examined for 48 children with CIs implanted under age 3 years, participating in a longitudinal study. All children wore hearing aids before receiving a CI, but upon receiving a first CI, 24 children had at least 1 year of bimodal stimulation (Bimodal group), and 24 children had only electric stimulation subsequent to implantation (CI-only group). Measures of phonemic awareness were obtained at second and fourth grades, along with measures of expressive vocabulary, working memory, and reading. Children in the Bimodal group generally performed better on measures of phonemic awareness, and that advantage was reflected in other language measures. Having even a brief period of time early in life with combined electric-acoustic input provides benefits to language learning into childhood, likely because of the enhancement in spectral representations provided.
Correcting pervasive errors in RNA crystallography through enumerative structure prediction.
Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju
2013-01-01
Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.
Hirshfeld atom refinement for modelling strong hydrogen bonds.
Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon
2014-09-01
High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.
The solvent component of macromolecular crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weichenberger, Christian X.; Afonine, Pavel V.; Kantardjieff, Katherine
2015-04-30
On average, the mother liquor or solvent and its constituents occupy about 50% of a macromolecular crystal. Ordered as well as disordered solvent components need to be accurately accounted for in modelling and refinement, often with considerable complexity. The mother liquor from which a biomolecular crystal is grown will contain water, buffer molecules, native ligands and cofactors, crystallization precipitants and additives, various metal ions, and often small-molecule ligands or inhibitors. On average, about half the volume of a biomolecular crystal consists of this mother liquor, whose components form the disordered bulk solvent. Its scattering contributions can be exploited in initial phasing and must be included in crystal structure refinement as a bulk-solvent model. Concomitantly, distinct electron density originating from ordered solvent components must be correctly identified and represented as part of the atomic crystal structure model. Herein are reviewed (i) probabilistic bulk-solvent content estimates, (ii) the use of bulk-solvent density modification in phase improvement, (iii) bulk-solvent models and refinement of bulk-solvent contributions and (iv) modelling and validation of ordered solvent constituents. A brief summary is provided of current tools for bulk-solvent analysis and refinement, as well as of modelling, refinement and analysis of ordered solvent components, including small-molecule ligands.
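The flat (mask-based) bulk-solvent contribution reviewed here is commonly written in a form like the following (a standard expression, with typical refined values of roughly k_sol ≈ 0.35 e/Å³ and B_sol ≈ 46 Å²; the paper's notation may differ):

$$
\mathbf{F}_{\mathrm{model}}(\mathbf{h}) \;=\; k\left[\mathbf{F}_{\mathrm{calc}}(\mathbf{h}) \;+\; k_{\mathrm{sol}}\, e^{-B_{\mathrm{sol}} s^{2}/4}\,\mathbf{F}_{\mathrm{mask}}(\mathbf{h})\right]
$$

where F_mask are structure factors computed from the bulk-solvent mask, s is the reciprocal-space resolution of reflection h, and k is an overall scale. The exponential damping confines the solvent contribution to low resolution, which is why neglecting it mainly degrades the low-resolution fit.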
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
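In a notation chosen for this note (not quoted from the paper), the GLA replaces the constant ratio of refined to crude model by a linear expansion about the current design point x₀:

$$
\beta(\mathbf{x}) \;=\; \beta_{0} + \nabla\beta\big|_{\mathbf{x}_{0}}\cdot(\mathbf{x}-\mathbf{x}_{0}),\qquad \tilde{f}(\mathbf{x}) \;=\; \beta(\mathbf{x})\,f_{\mathrm{crude}}(\mathbf{x})
$$

with β₀ = f_refined(x₀)/f_crude(x₀), and the components of ∇β fixed by matching the derivatives of the refined model at x₀, that is, ∂β/∂xᵢ = [∂f_refined/∂xᵢ − β₀ ∂f_crude/∂xᵢ]/f_crude(x₀). The approximation is therefore exact in both value and gradient at x₀, yet costs only one refined-model evaluation per design point.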
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madland, D. G.; Kahler, A. C.
This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982. They are also due to a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.
Baldwin, Austin K.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher
2013-01-01
In 2008, the U.S. Geological Survey and the Milwaukee Metropolitan Sewerage District initiated a study to develop regression models to estimate real-time concentrations and loads of chloride, suspended solids, phosphorus, and bacteria in streams near Milwaukee, Wisconsin. To collect monitoring data for calibration of models, water-quality sensors and automated samplers were installed at six sites in the Menomonee River drainage basin. The sensors continuously measured four potential explanatory variables: water temperature, specific conductance, dissolved oxygen, and turbidity. Discrete water-quality samples were collected and analyzed for five response variables: chloride, total suspended solids, total phosphorus, Escherichia coli bacteria, and fecal coliform bacteria. Using the first year of data, regression models were developed to continuously estimate the response variables on the basis of the continuously measured explanatory variables. Those models were published in a previous report. In this report, those models are refined using 2 years of additional data, and the relative improvement in model predictability is discussed. In addition, a set of regression models is presented for a new site in the Menomonee River Basin, Underwood Creek at Wauwatosa. The refined models use the same explanatory variables as the original models. The chloride models all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity. Total suspended solids and total phosphorus models used turbidity as the only explanatory variable, and bacteria models used water temperature and turbidity as explanatory variables. An analysis of covariance (ANCOVA), used to compare the coefficients in the original models to those in the refined models calibrated using all of the data, showed that only 3 of the 25 original models changed significantly. Root-mean-squared errors (RMSEs) calculated for both the original and refined models using the entire dataset showed a median improvement in RMSE of 2.1 percent, with a range of 0.0–13.9 percent. Therefore, most of the original models did almost as well at estimating concentrations during the validation period (October 2009–September 2011) as the refined models, which were calibrated using those data. Application of these refined models can produce continuously estimated concentrations of chloride, total suspended solids, total phosphorus, E. coli bacteria, and fecal coliform bacteria that may assist managers in quantifying the effects of land-use changes and improvement projects, establishing total maximum daily loads, and enabling better-informed decision making in the future.
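A sketch of the surrogate-regression idea on synthetic data: a log-log fit of suspended solids against turbidity with a Duan smearing correction for the back-transform (the coefficients and data below are invented, not the USGS models):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calibration data: discrete TSS samples vs. continuous turbidity.
turb = rng.uniform(2, 400, 120)                                  # turbidity (FNU)
tss = 1.8 * turb**0.95 * np.exp(rng.normal(0, 0.3, turb.size))   # TSS (mg/L)

# Log-log ordinary least squares, the usual form for such surrogate models.
X = np.column_stack([np.ones(turb.size), np.log(turb)])
beta, *_ = np.linalg.lstsq(X, np.log(tss), rcond=None)
resid = np.log(tss) - X @ beta

# Duan smearing factor corrects the bias from back-transforming log estimates.
smear = np.mean(np.exp(resid))

def estimate_tss(turbidity):
    return smear * np.exp(beta[0] + beta[1] * np.log(turbidity))

print(estimate_tss(np.array([10.0, 100.0])))   # continuous real-time estimates
```

Once calibrated, such a model turns every sensor reading into an estimated concentration, which is what allows continuous load computation between discrete samples.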
Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.
2016-01-01
A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221
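A minimal random-walk Metropolis sketch of the approach, fitting a single Gaussian peak whose noise scales with intensity so that the heteroskedastic error structure enters the likelihood (all parameter values are invented; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-3, 3, 200)

def peak(theta, x):                       # Gaussian peak: height, center, width
    h, c, w = theta
    return h * np.exp(-0.5 * ((x - c) / w) ** 2)

truth = np.array([10.0, 0.3, 0.5])
y = peak(truth, x) + rng.normal(0, 0.05 + 0.05 * peak(truth, x))  # heteroskedastic

def log_post(theta):
    if theta[0] <= 0 or theta[2] <= 0:
        return -np.inf                    # flat prior on physical parameters
    mu = peak(theta, x)
    sigma = 0.05 + 0.05 * mu              # error model scales with intensity
    return -0.5 * np.sum(((y - mu) / sigma) ** 2 + 2 * np.log(sigma))

theta = np.array([8.0, 0.0, 1.0])
lp = log_post(theta)
samples = []
for step in range(20000):                 # random-walk Metropolis sampler
    prop = theta + rng.normal(0, [0.05, 0.005, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])            # discard burn-in
print("mean:", post.mean(axis=0), "std:", post.std(axis=0))
```

The posterior standard deviations are the uncertainty estimates; unlike least-squares standard errors, they remain honest when the error variance is modeled rather than assumed constant.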
2011-01-01
Background Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews on a new cohort of participants. Here we present the development of the final version in which the content and format is refined through cognitive interviews. Methods We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on this data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience. Conclusions We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409
A Conceptual Model of Career Development to Enhance Academic Motivation
ERIC Educational Resources Information Center
Collins, Nancy Creighton
2010-01-01
The purpose of this study was to develop, refine, and validate a conceptual model of career development to enhance the academic motivation of community college students. To achieve this end, a straw model was built from the theoretical and empirical research literature. The model was then refined and validated through three rounds of a Delphi…
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and to resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on an initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
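For the common Tikhonov-regularized Gauss-Newton setting with an identity regularization operator, the linearized model resolution matrix and its low-rank surrogate via a truncated SVD of the Jacobian, J ≈ U_k S_k V_kᵀ, take the following form (a standard result; the paper's exact regularization operator may differ):

$$
\mathbf{R} \;=\; \left(\mathbf{J}^{\top}\mathbf{J} + \lambda\mathbf{I}\right)^{-1}\mathbf{J}^{\top}\mathbf{J} \;\approx\; \mathbf{V}_{k}\,\operatorname{diag}\!\left(\frac{s_{i}^{2}}{s_{i}^{2}+\lambda}\right)\mathbf{V}_{k}^{\top}
$$

Diagonal entries of R near one flag well-resolved cells, where fine discretization is justified; entries near zero flag poorly constrained regions that can remain coarse, which is the criterion such resolution-driven initial mesh design exploits.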
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are kinetically treated whereas electrons are considered as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid subsequently entering a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
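A sketch of the splitting rule at a coarse-to-fine boundary: for refinement rate r, each coarse macro-particle becomes r³ children that inherit the parent velocity and share its statistical weight, so charge, mass and momentum are conserved exactly (all values below are illustrative; this is not the authors' code):

```python
import numpy as np

def split_particle(pos, vel, weight, cell_size, r=2):
    """Split one coarse macro-particle entering a region refined by rate r
    (r**3 children in 3-D). Children keep the parent velocity, so the
    velocity distribution function is preserved without averaging, and
    they share the weight equally, conserving charge, mass and momentum."""
    n = r ** 3
    offsets = (np.indices((r, r, r)).reshape(3, -1).T + 0.5) / r - 0.5
    children_pos = pos + offsets * cell_size        # spread over the sub-cells
    children_vel = np.tile(vel, (n, 1))
    children_w = np.full(n, weight / n)
    return children_pos, children_vel, children_w

p, v, w = split_particle(np.zeros(3), np.array([400e3, 0.0, 0.0]), 1e24,
                         cell_size=100e3, r=2)
print(p.shape, v.shape, w.sum())                    # (8, 3) (8, 3) 1e24
```

Keeping the children's velocities identical to the parent's is the design choice the abstract emphasizes: no averaged velocities are ever computed, so the sampled distribution function is unchanged by refinement.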
Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok
2017-03-01
Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc.
Ducrot, Virginie; Ashauer, Roman; Bednarska, Agnieszka J; Hinarejos, Silvia; Thorbek, Pernille; Weyman, Gabriel
2016-01-01
Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision. Finally, we compared the "classical" risk assessment approach with the model-based approach. These comparisons showed that TK and TK-TD models can bring more realism to the risk assessment through the possibility to study realistic exposure scenarios and to simulate relevant mechanisms of effects (including delayed toxicity and recovery). Noticeably, using TK-TD models is currently the most relevant way to directly connect realistic exposure patterns to effects. We conclude with recommendations on how to properly use TK and TK-TD model in acute risk assessment for vertebrates. © 2015 SETAC.
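A minimal TK-TD sketch in the GUTS stochastic-death style, of the kind such refinements rely on: one-compartment toxicokinetics driven by a pulsed exposure, with a hazard that accrues only above an internal threshold (all rate constants below are invented for illustration):

```python
import numpy as np

# One-compartment toxicokinetics with stochastic-death toxicodynamics (a
# GUTS-SD-style model); every parameter value here is invented for the sketch.
k_in, k_out = 0.8, 0.4            # uptake / elimination rate constants (1/d)
z, k_kill, h_b = 2.0, 0.3, 0.01   # threshold, killing rate, background hazard

dt, days = 0.01, 20.0
t = np.arange(0.0, days, dt)
c_water = np.where((t > 2) & (t < 4), 10.0, 0.0)   # pulsed exposure scenario

c_int, hazard = 0.0, 0.0
survival = np.empty_like(t)
for i, cw in enumerate(c_water):
    c_int += dt * (k_in * cw - k_out * c_int)             # TK: internal conc.
    hazard += dt * (k_kill * max(0.0, c_int - z) + h_b)   # TD: cumulative hazard
    survival[i] = np.exp(-hazard)

print(f"predicted survival after the pulse: {survival[-1]:.3f}")
```

Because the internal concentration lags the external pulse, effects can continue after exposure ends and recovery emerges naturally, which is exactly the added realism over fixed-duration standard-test endpoints discussed above.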
Language games: Advanced R & R packages: Book Review
Hraber, Peter Thomas
2016-03-23
Readers who wrangle answers from data by extended refinement of available computational tools have many options and resources available. Inevitably, they will develop their own methods tailored to the problem at hand. Two new books have recently been published, each of which is a useful addition to the library of a scientist who programs with data. The two books reviewed are both written by H. Wickham. The titles are "Advanced R" and "R Packages", both published in 2015.
Massive Joint Multinational Exercise Planning to Solve Army Warfighting Challenges
2016-06-10
and military sustainment occurs for various reasons, such as physical distance between offices, or a lack of institutional knowledge about Army...this thesis. Thank you to the entire library staff. A final thank you to LTC Toni Sabo for her expert review of the final paper. Your knowledge of the... English language reminded me how much I need to continue to refine and hone my skills. Thank you for your support and leadership in our staff group
Jung, Chai Young; Choi, Jong-Ye; Jeong, Seong Jik; Cho, Kyunghee; Koo, Yong Duk; Bae, Jin Hee; Kim, Sukil
2016-05-16
Arden Syntax is a Health Level Seven International (HL7) standard language that is used for representing medical knowledge as logic statements. Arden Syntax Markup Language (ArdenML) is a new representation of Arden Syntax based on XML. Compilers are required to execute medical logic modules (MLMs) in the hospital environment; however, ArdenML may also take the place of the compiler. The purpose of this study is to demonstrate that MLMs, encoded in ArdenML, can be transformed into a commercial rule engine format through an XSLT stylesheet and made executable in a target system. The target rule engine selected was Blaze Advisor. We developed an XSLT stylesheet to transform MLMs in ArdenML into Structured Rules Language (SRL) in Blaze Advisor, through a comparison of syntax between the two languages. The stylesheet was then refined recursively, by building and applying rules collected from the billing and coding guidelines of the Korean health insurance service. Two nurse coders collected and verified the rules, and two information technology (IT) specialists encoded the MLMs and built the XSLT stylesheet. Finally, the stylesheet was validated by importing the MLMs into Blaze Advisor and applying them to claims data. The language comparison revealed that Blaze Advisor requires the declaration of variables with explicit types, whereas we used both integer and real numbers for numeric types in ArdenML; we designed the XSLT stylesheet to resolve this difference. IF-THEN statements and assignment statements in ArdenML become rules in Blaze Advisor. In addition, we maintained the order of rule execution in the transformed rules, and added two small programs to support variable declarations and action statements. A total of 1489 rules were reviewed during this study, of which 324 rules were collected. We removed duplicate rules and encoded 241 unique MLMs in ArdenML, which were successfully transformed into SRL and imported into Blaze Advisor via the XSLT stylesheet. When applied to 73,841 outpatients' insurance claims data, the review result was the same as that of the legacy system. We have demonstrated that ArdenML can replace a compiler for transforming MLMs into a commercial rule engine format. While the proposed XSLT stylesheet requires refinement for general use, we anticipate that the development of further XSLT stylesheets will support various rule engines. Copyright © 2016 Elsevier B.V. All rights reserved.
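As a rough illustration of the statement-to-rule mapping described here, the sketch below runs a toy XSLT transform in Python with lxml. The XML element names and the output rule syntax are invented for the example; the real ArdenML schema and Blaze Advisor's SRL both differ.

```python
from lxml import etree

# Hypothetical element names -- the real ArdenML schema differs.
mlm = etree.XML(b"""
<mlm>
  <if_then>
    <condition>claim.patient_age &gt;= 65</condition>
    <action>apply_senior_copay_rule</action>
  </if_then>
</mlm>""")

# One template: every IF-THEN statement in the MLM becomes one rule
# definition, mirroring the paper's mapping of ArdenML statements onto rules.
transform = etree.XSLT(etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="if_then">rule GeneratedRule {
    if <xsl:value-of select="condition"/> then <xsl:value-of select="action"/>();
}
</xsl:template>
</xsl:stylesheet>"""))

print(str(transform(mlm)))
```

The real stylesheet additionally has to emit typed variable declarations and preserve execution order, which is exactly where the two helper programs mentioned in the abstract come in.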
Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere
NASA Astrophysics Data System (ADS)
Chen, Zhiyuan; Morita, Kazuki
2018-03-01
We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical-grade silicon refining process. This model was based on the hypothesis of reversible reactions. The reaction rate coefficient retains the same form when the reactions are instead treated as irreversible, but an error in the terminal boron concentration can then be introduced. Experimental data from published studies were used to develop a model that fits the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species, and the prediction was confirmed by thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
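One plausible reading of the stated rate law is the product form k = c · p(H2O)² · √p(H2); the sketch below encodes that reading with a made-up proportionality constant, purely to show how the two partial pressures enter.

```python
import numpy as np

def rate_coefficient(p_h2o, p_h2, c=1.0e-3):
    """Assumed rate law: k scales with p(H2O)^2 and with sqrt(p(H2)).

    The functional form is one reading of the abstract and the constant c
    is invented; the paper fits its model to published refining data.
    """
    return c * p_h2o**2 * np.sqrt(p_h2)

# Doubling the water vapour pressure should quadruple k, while quadrupling
# the hydrogen pressure should only double it.
print(rate_coefficient(0.04, 0.96) / rate_coefficient(0.02, 0.96))  # ~4.0
print(rate_coefficient(0.02, 0.96) / rate_coefficient(0.02, 0.24))  # ~2.0
```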
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
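As a rough illustration of the kind of component metadata the WRC ontology is meant to make explicit, the snippet below builds a tiny RDF graph with rdflib. The namespace IRI and all term names (hasModelingObjective, producesOutput, and so on) are invented for the example, not the ontology's actual vocabulary.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace and terms -- stand-ins for the WRC vocabulary.
WRC = Namespace("http://example.org/wrc#")
g = Graph()
g.bind("wrc", WRC)

# A model component with a declared modeling objective and an output
# exchange item: the metadata that makes coupling decisions checkable.
g.add((WRC.ModelComponent, RDF.type, RDFS.Class))
g.add((WRC.SnowmeltComponent, RDF.type, WRC.ModelComponent))
g.add((WRC.SnowmeltComponent, WRC.hasModelingObjective,
       Literal("simulate snowmelt runoff")))
g.add((WRC.SnowmeltComponent, WRC.producesOutput, WRC.MeltwaterFlux))

# Which components can supply MeltwaterFlux to a downstream routing model?
for comp in g.subjects(WRC.producesOutput, WRC.MeltwaterFlux):
    print(comp)
```

Queries of this kind are what enables the three uses named in the abstract: finding relevant components, checking that a proposed coupling is semantically proper, and mapping concepts across frameworks.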
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of poor performance in modeling high-density measured BRDF data with the five-parameter semi-empirical model, a refined statistical BRDF model suitable for modeling multiple classes of space target materials was proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The performance of the refined model is verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen. This also demonstrates the refined model's ability to characterize materials.
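The inversion step (a global optimizer fitting six parameters to measured BRDF data) can be sketched generically. The BRDF expression below is a toy diffuse-plus-specular stand-in, not the paper's refined Torrance-Sparrow form, and scipy's differential evolution stands in for the genetic algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

def brdf_model(theta_i, theta_r, params):
    """Toy six-parameter BRDF (diffuse term + rough-specular lobe).
    Illustrative only; the paper's refined form is not reproduced here."""
    kd, ks, m, f0, c1, c2 = params
    half = (theta_i + theta_r) / 2.0
    specular = ks * np.exp(-np.tan(half)**2 / m**2) / np.cos(theta_r)
    fresnel = f0 + (1.0 - f0) * (1.0 - np.cos(theta_i))**5
    return kd + specular * fresnel * (1.0 + c1 * np.cos(half) + c2 * half)

# Synthetic "measurements" from known parameters, then global inversion.
rng = np.random.default_rng(0)
true_p = [0.3, 0.8, 0.25, 0.05, 0.1, -0.05]
ti = rng.uniform(0.1, 1.2, 200)
tr = rng.uniform(0.1, 1.2, 200)
data = brdf_model(ti, tr, true_p) * (1.0 + 0.01 * rng.standard_normal(200))

def loss(p):
    return np.mean((brdf_model(ti, tr, p) - data)**2)

bounds = [(0, 1), (0, 2), (0.01, 1), (0, 1), (-1, 1), (-1, 1)]
fit = differential_evolution(loss, bounds, seed=1, maxiter=200)
print(np.round(fit.x, 3))
```

A population-based optimizer is a sensible choice here because the six-parameter fit is non-convex and gradient-free methods avoid the need for analytic derivatives of the lobe terms.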
Refining the structure and content of clinical genomic reports.
Dorschner, Michael O; Amendola, Laura M; Shirts, Brian H; Kiedrowski, Lesli; Salama, Joseph; Gordon, Adam S; Fullerton, Stephanie M; Tarczy-Hornoch, Peter; Byers, Peter H; Jarvik, Gail P
2014-03-01
To effectively articulate the results of exome and genome sequencing we refined the structure and content of molecular test reports. To communicate results of a randomized control trial aimed at the evaluation of exome sequencing for clinical medicine, we developed a structured narrative report. With feedback from genetics and non-genetics professionals, we developed separate indication-specific and incidental findings reports. Standard test report elements were supplemented with research study-specific language, which highlighted the limitations of exome sequencing and provided detailed, structured results, and interpretations. The report format we developed to communicate research results can easily be transformed for clinical use by removal of research-specific statements and disclaimers. The development of clinical reports for exome sequencing has shown that accurate and open communication between the clinician and laboratory is ideally an ongoing process to address the increasing complexity of molecular genetic testing. © 2014 Wiley Periodicals, Inc.
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
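The counterexample-driven refinement loop at the heart of this approach can be shown in miniature. The sketch below applies the same idea not to assumption automata but to the simplest possible setting: a reachability check on a toy transition system with a state-partition abstraction, refined whenever an abstract counterexample turns out to be spurious.

```python
from collections import deque

# Toy CEGAR loop: over-approximate with a state partition, check the
# quotient graph, replay abstract counterexamples concretely, and split
# the block where the replay fails.
edges = {0: [1], 1: [2], 2: [0], 3: [4], 4: [5], 5: []}
init, bad = 0, 5                     # bad is concretely unreachable here

def abstract_path(partition):
    """BFS on the quotient graph from init's block to bad's block."""
    def block_of(s):
        return next(i for i, blk in enumerate(partition) if s in blk)
    start, goal = block_of(init), block_of(bad)
    prev, queue = {start: None}, deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u); u = prev[u]
            return path[::-1]
        for v in range(len(partition)):
            has_edge = any(t in partition[v] for s in partition[u] for t in edges[s])
            if v not in prev and has_edge:
                prev[v] = u; queue.append(v)
    return None

partition = [set(edges) - {bad}, {bad}]          # coarsest initial abstraction
while True:
    path = abstract_path(partition)
    if path is None:
        print("verified: bad state unreachable"); break
    S, failed = {init}, None                     # replay the abstract trace
    for i in range(1, len(path)):
        S = {t for s in S for t in edges[s]} & partition[path[i]]
        if not S:
            failed = i; break
    if failed is None:
        print("real counterexample found"); break
    blk = partition[path[failed - 1]]            # spurious: split failing block
    keep = {s for s in blk if set(edges[s]) & partition[path[failed]]}
    partition[path[failed - 1]] = keep
    partition.append(blk - keep)
    print("refined abstraction to", len(partition), "blocks")
```

The paper's method additionally refines the interface alphabets and keeps the abstractions nondeterministic, but the verify/replay/split control flow is the same.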
Symmetry breaking in tensor models
NASA Astrophysics Data System (ADS)
Benedetti, Dario; Gurau, Razvan
2015-11-01
In this paper we analyze a quartic tensor model with one interaction for a tensor of arbitrary rank. This model has a critical point where a continuous limit of infinitely refined random geometries is reached. We show that the critical point corresponds to a phase transition in the tensor model associated to a breaking of the unitary symmetry. We analyze the model in the two phases and prove that, in a double scaling limit, the symmetric phase corresponds to a theory of infinitely refined random surfaces, while the broken phase corresponds to a theory of infinitely refined random nodal surfaces. At leading order in the double scaling limit planar surfaces dominate in the symmetric phase, and planar nodal surfaces dominate in the broken phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrnstein, Aaron R.
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model, with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine-scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested, despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
Predictive Software Cost Model Study. Volume I. Final Technical Report.
1980-06-01
development phase to identify computer resources necessary to support computer programs after transfer of program management responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search
AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT
A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...
Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B
NASA Technical Reports Server (NTRS)
Yeganefard, Sanaz; Butler, Michael; Rezazadeh, Abdolbaghi
2010-01-01
Recently a set of guidelines, or cookbook, has been developed for the modelling and refinement of control problems in Event-B. The Event-B formal method is used for system-level modelling by defining the states of a system and the events which act on these states. It also supports refinement of models. This cookbook is intended to systematize the process of modelling and refining a control-problem system by distinguishing environment, controller and command phenomena. Our main objective in this paper is to investigate and evaluate the usefulness and effectiveness of this cookbook by following it throughout the formal modelling of the cruise control system found in cars. The outcomes are identifying the benefits of the cookbook and giving guidance to its future users.
Application of the Refined Zigzag Theory to the Modeling of Delaminations in Laminated Composites
NASA Technical Reports Server (NTRS)
Groh, Rainer M. J.; Weaver, Paul M.; Tessler, Alexander
2015-01-01
The Refined Zigzag Theory is applied to the modeling of delaminations in laminated composites. The commonly used cohesive zone approach is adapted for use within a continuum mechanics model, and then used to predict the onset and propagation of delamination in five cross-ply composite beams. The resin-rich area between individual composite plies is modeled explicitly using thin, discrete layers with isotropic material properties. A damage model is applied to these resin-rich layers to enable tracking of delamination propagation. The displacement jump across the damaged interfacial resin layer is captured using the zigzag function of the Refined Zigzag Theory. The overall model predicts the initiation of delamination to within 8% compared to experimental results and the load drop after propagation is represented accurately.
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Validating neural-network refinements of nuclear mass models
NASA Astrophysics Data System (ADS)
Utama, R.; Piekarewicz, J.
2018-01-01
Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlining the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r -process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms≃400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
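The "bare model plus learned correction" structure is simple to sketch. Below, a toy smooth mass formula plays the role of the bare model, and a small feed-forward network (scikit-learn's MLPRegressor, a point estimate rather than the Bayesian network used in the paper, so no uncertainties) is trained on the residuals; all data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins throughout: a liquid-drop-like trend as the "bare"
# model and a hidden smooth correction as the physics the network learns.
rng = np.random.default_rng(0)
Z = rng.integers(20, 100, 400).astype(float)
N = rng.integers(20, 150, 400).astype(float)

def bare_model(Z, N):
    A = Z + N
    return -15.6 * A + 17.2 * A**(2 / 3) + 0.71 * Z**2 / A**(1 / 3)

truth = bare_model(Z, N) + 3.0 * np.sin(Z / 15.0) * np.cos(N / 20.0)
measured = truth + rng.normal(0.0, 0.1, Z.size)      # "experimental" masses

X = np.column_stack([Z, N]) / 100.0                  # crude input scaling
net = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(X, measured - bare_model(Z, N))              # learn only the residual

refined = bare_model(Z, N) + net.predict(X)
for name, pred in (("bare", bare_model(Z, N)), ("refined", refined)):
    rms = float(np.sqrt(np.mean((pred - truth) ** 2)))
    print(f"{name:8s} rms deviation: {rms:.3f}")
```

Learning only the residual keeps the physics of the underlying model intact and asks the network for a small, smooth correction, which is the feature that makes the BNN refinement robust in the paper's validation against AME2016.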
Microstructures and Grain Refinement of Additive-Manufactured Ti-xW Alloys
NASA Astrophysics Data System (ADS)
Mendoza, Michael Y.; Samimi, Peyman; Brice, David A.; Martin, Brian W.; Rolchigo, Matt R.; LeSar, Richard; Collins, Peter C.
2017-07-01
It is necessary to better understand the composition-processing-microstructure relationships that exist for materials produced by additive manufacturing. To this end, Laser Engineered Net Shaping (LENS™), a type of additive manufacturing, was used to produce a compositionally graded titanium binary model alloy system (a Ti-xW specimen, 0 ≤ x ≤ 30 wt pct), so that relationships could be made between composition, processing, and the prior beta grain size. Importantly, the thermophysical properties of Ti-xW, specifically its supercooling parameter (P) and growth restriction factor (Q), are such that grain refinement is expected, and it was observed. The systematic, combinatorial study of this binary system provides an opportunity to assess the mechanisms by which grain refinement occurs in Ti-based alloys in general, and in additive manufacturing in particular. The operating mechanisms that govern the relationship between composition and grain size are interpreted using a model originally developed for aluminum and magnesium alloys and subsequently applied to titanium alloys. The prior beta grain sizes observed, and the interpretation of their correlations, indicate that tungsten is a good grain refiner and that such models are valid for explaining the grain-refinement process. By extension, other binary elements or higher-order alloy systems with similar thermophysical properties should exhibit similar grain refinement.
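The two quantities named here have compact, commonly used definitions (the paper's exact convention may differ); a tiny sketch with illustrative, made-up values:

```python
# Common definitions in the grain-refinement literature: for liquidus
# slope m, solute content c0 and partition coefficient k,
#   Q = m * c0 * (k - 1)       growth restriction factor
#   P = m * c0 * (k - 1) / k   supercooling parameter
def growth_restriction(m, c0, k):
    return m * c0 * (k - 1.0)

def supercooling(m, c0, k):
    return m * c0 * (k - 1.0) / k

# Illustrative values only -- not measured Ti-W thermophysical data.
m, k = 2.5, 2.0
for c0 in (5.0, 10.0, 20.0):   # wt pct solute
    print(f"c0={c0:5.1f}  Q={growth_restriction(m, c0, k):6.1f}  "
          f"P={supercooling(m, c0, k):6.1f}")
```

Both quantities grow linearly with solute content, which is why a compositionally graded specimen is such an efficient way to map the grain-size response in a single build.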
3Drefine: an interactive web server for efficient protein structure refinement
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-01-01
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement have suffered from certain drawbacks, for example deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite-differencing error generated across low- and high-resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Composing, Analyzing and Validating Software Models
NASA Astrophysics Data System (ADS)
Sheldon, Frederick T.
1998-10-01
This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
Requirements' Role in Mobilizing and Enabling Design Conversation
NASA Astrophysics Data System (ADS)
Bergman, Mark
Requirements play a critical role in a design conversation of systems and products. Product and system design exists at the crossroads of problems, solutions and requirements. Requirements contextualize problems and solutions, pointing the way to feasible outcomes. These are captured with models and detailed specifications. Still, stakeholders need to be able to understand one another using shared design representations in order to mobilize bias and transform knowledge towards legitimized, desired results. Many modern modeling languages, including UML, as well as detailed, logic-based specifications are beyond the comprehension of key stakeholders. Hence, they inhibit, rather than promote, design conversation. Improved design boundary objects (DBO), especially design requirements boundary objects (DRBO), need to be created and refined to improve the communications between principals. Four key features of design boundary objects that improve and promote design conversation are discussed in detail. A systems analysis and design case study is presented which demonstrates these features in action. It describes how a small team of analysts worked with key stakeholders to mobilize and guide a complex system design discussion towards an unexpected, yet desired outcome within a short time frame.
Black, D F; Vachha, B; Mian, A; Faro, S H; Maheshwari, M; Sair, H I; Petrella, J R; Pillai, J J; Welker, K
2017-10-01
Functional MR imaging is increasingly being used for presurgical language assessment in the treatment of patients with brain tumors, epilepsy, vascular malformations, and other conditions. The inherent complexity of fMRI, which includes numerous processing steps and selective analyses, is compounded by institution-unique approaches to patient training, paradigm choice, and an eclectic array of postprocessing options from various vendors. Consequently, institutions perform fMRI in such markedly different manners that data sharing, comparison, and generalization of results are difficult. The American Society of Functional Neuroradiology proposes widespread adoption of common fMRI language paradigms as the first step in countering this lost opportunity to advance our knowledge and improve patient care. A taskforce of American Society of Functional Neuroradiology members from multiple institutions used a broad literature review, member polls, and expert opinion to converge on 2 sets of standard language paradigms that strike a balance between ease of application and clinical usefulness. The taskforce generated an adult language paradigm algorithm for presurgical language assessment including the following tasks: Sentence Completion, Silent Word Generation, Rhyming, Object Naming, and/or Passive Story Listening. The pediatric algorithm includes the following tasks: Sentence Completion, Rhyming, Antonym Generation, or Passive Story Listening. Convergence of fMRI language paradigms across institutions offers the first step in providing a "Rosetta Stone" that provides a common reference point with which to compare and contrast the usefulness and reliability of fMRI data. From this common language task battery, future refinements and improvements are anticipated, particularly as objective measures of reliability become available. Some commonality of practice is a necessary first step to develop a foundation on which to improve the clinical utility of this field. © 2017 by American Journal of Neuroradiology.
MAIN software for density averaging, model building, structure refinement and validation
Turk, Dušan
2013-01-01
MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458
Simulation of the shallow groundwater-flow system near the Hayward Airport, Sawyer County, Wisconsin
Hunt, Randall J.; Juckem, Paul F.; Dunning, Charles P.
2010-01-01
There are concerns that removal and trimming of vegetation during expansion of the Hayward Airport in Sawyer County, Wisconsin, could appreciably change the character of a nearby cold-water stream and its adjacent environs. In cooperation with the Wisconsin Department of Transportation, a two-dimensional, steady-state groundwater-flow model of the shallow groundwater-flow system near the Hayward Airport was refined from a regional model of the area. The parameter-estimation code PEST was used to obtain a best fit of the model to additional field data collected in February 2007 as part of this study. The additional data were collected during an extended period of low runoff and consisted of water levels and streamflows near the Hayward Airport. Refinements to the regional model included one additional hydraulic-conductivity zone for the airport area, and three additional parameters for streambed resistance in a northern tributary to the Namekagon River and in the main stem of the Namekagon River. In the refined Hayward Airport area model, the calibrated hydraulic conductivity was 11.2 feet per day, which is within the 7.9 to 58.2 feet per day range reported for the regional glacial and sandstone aquifer, and is consistent with a silty soil texture for the area. The calibrated refined model had a best fit of 8.6 days for the streambed resistance of the Namekagon River and between 0.6 and 1.6 days for the northern tributary stream. The previously reported regional groundwater-recharge rate of 10.1 inches per year was adjusted during calibration of the refined model in order to match streamflows measured during the period of extended low runoff; this resulted in an optimal groundwater-recharge rate of 7.1 inches per year during this period. The refined model was then used to simulate the capture zone of the northern tributary to the Namekagon River.
Refined open intersection numbers and the Kontsevich-Penner matrix model
NASA Astrophysics Data System (ADS)
Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.
2017-03-01
A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. An evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.
Liao, Sheng-hui; Zhu, Xing-hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi
2016-01-01
The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The values of equivalent stress at the implant-bone interface in the refined model increased compared with those of the simplified model, while strain decreased. The distributions of stress and strain were more uniform in the refined model with trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of the trabecular bone microstructure has a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers. PMID:27403424
Hydrogen ADPs with Cu Kα data? Invariom and Hirshfeld atom modelling of fluconazole.
Orben, Claudia M; Dittrich, Birger
2014-06-01
For the structure of fluconazole [systematic name: 2-(2,4-difluorophenyl)-1,3-bis(1H-1,2,4-triazol-1-yl)propan-2-ol] monohydrate, C13H12F2N6O·H2O, a case study on different model refinements is reported, based on single-crystal X-ray diffraction data measured at 100 K with Cu Kα radiation to a resolution of sin θ/λ of 0.6 Å⁻¹. The structure, anisotropic displacement parameters (ADPs) and figures of merit from the independent atom model are compared to 'invariom' and 'Hirshfeld atom' refinements. Changing from a spherical to an aspherical atom model lowers the figures of merit and improves both the accuracy and the precision of the geometrical parameters. Differences between results from the two aspherical-atom refinements are small. However, a refinement of ADPs for H atoms is only possible with the Hirshfeld atom density model. It gives meaningful results even at a resolution of 0.6 Å⁻¹, but requires good low-order data.
The Civil Rights Act of 1991: Affirmative Action, Disparate Impact, and Employment Quotas?
1991-01-01
personal worth and making equal treatment and equal opportunity matters of simple fairness." But change, especially in the areas where it can be said... employer's legitimate business goals." Some of the articles quote the Court's language out of context and state the analysis has been realigned... This clearly is a misstatement of the Court's refinement and is the result of viewing the case from a purely plaintiff-oriented,
Natural Language Processing: A Tutorial.
1986-08-01
most specific. For example, the net in Figure 34 shows that: a dog is an animal, a Schnauzer is a type of dog, and Bert is a Schnauzer... specifically, is true (by default) of the concept below it on the hierarchy. Thus, since a dog is an animal and a Schnauzer is a dog, a Schnauzer is an animal (and Bert, because he is a Schnauzer, is a dog, and therefore is an animal, etc). A further refinement of
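The default-inheritance behaviour sketched in this fragment is easy to reproduce. Below is a minimal present-day Python rendering of the tutorial's IS-A example; the node and property names come from the fragment, while the lookup logic is the standard inheritance walk, not the tutorial's own code.

```python
# A minimal IS-A network with default inheritance, in the spirit of the
# tutorial's Figure 34 (Bert -> Schnauzer -> dog -> animal).
isa = {"bert": "schnauzer", "schnauzer": "dog", "dog": "animal"}
properties = {"animal": {"breathes": True}, "dog": {"barks": True}}

def lookup(node, prop):
    """Walk up the IS-A hierarchy until the property is found."""
    while node is not None:
        if prop in properties.get(node, {}):
            return properties[node][prop]
        node = isa.get(node)   # climb to the more general concept
    return None

print(lookup("bert", "barks"))     # True: inherited from "dog"
print(lookup("bert", "breathes"))  # True: inherited from "animal"
```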
Harte, P.T.; Mack, Thomas J.
1992-01-01
Hydrogeologic data collected since 1990 were assessed and a ground-water-flow model was refined in this study of the Milford-Souhegan glacial-drift aquifer in Milford, New Hampshire. The hydrogeologic data collected were used to refine estimates of hydraulic conductivity and saturated thickness of the aquifer, which were previously calculated during 1988-90. In October 1990, water levels were measured at 124 wells and piezometers, and at 45 stream-seepage sites on the main stem of the Souhegan River and on small tributary streams overlying the aquifer, to improve understanding of ground-water-flow patterns and stream-seepage gains and losses. Refinement of the ground-water-flow model included a reduction in the number of active cells in layer 2 in the central part of the aquifer, a revision of simulated hydraulic conductivity in the model layers representing the aquifer, incorporation of a new block-centered finite-difference ground-water-flow model, and incorporation of a new solution algorithm and solver (a preconditioned conjugate-gradient algorithm). Refinements to the model resulted in decreases in the difference between calculated and measured heads at 22 wells. The distribution of gains and losses of stream seepage calculated in simulation with the refined model is similar to that calculated in the previous model simulation. The contributing area to the Savage well, under average pumping conditions, decreased by 0.021 square miles from the area calculated in the previous model simulation. The small difference in the contributing recharge area indicates that the additional data did not substantially alter the simulated results and that the conceptual framework of the previous model is accurate.
NASA Astrophysics Data System (ADS)
Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun
2018-04-01
A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of the overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as a function of process variables. A microscopic and macroscopic rate calculation methodology (micro-kinetics and macro-kinetics) was developed to estimate the total refining contributed by the recirculating metal droplets passing through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion. Mathematical models for the size distribution of initial droplets, the kinetics of simultaneous refining of elements, the residence time in the emulsion, and the dynamic change of interfacial area were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by the emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on an oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top-blowing converter, and the simulated values of metal and slag composition were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content of the slag and the kinetics of Mn and P in a BOF process.
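The macro-kinetic summation step is easy to illustrate: sample a droplet population, apply first-order refining per droplet over its residence time, and average over mass. Every functional form and number below is an illustrative assumption, not the fitted model from the paper.

```python
import numpy as np

# Sketch of the macro-kinetics step: total refining by the emulsion is the
# mass-weighted sum of per-droplet refining over the droplet population.
rng = np.random.default_rng(1)
n = 10_000
d = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)  # droplet diameter, mm
tau = 0.05 * d**2          # residence time, min (assumed to grow with d^2)
k = 2.0 / d                # mass-transfer coefficient, 1/min (~ area/volume)

C0, Ceq = 0.10, 0.01       # wt pct P in metal and slag-metal equilibrium value
C_return = Ceq + (C0 - Ceq) * np.exp(-k * tau)  # first-order refining per droplet

mass = d**3                                      # droplet mass ~ d^3
removed = np.sum(mass * (C0 - C_return)) / np.sum(mass)
print(f"average P removed per pass through the emulsion: {removed:.4f} wt pct")
```

In the full model the droplet generation rate, the residence-time law and the interfacial-area dynamics all vary with blowing conditions, which is what couples the emulsion refining to the rest of the multi-zone calculation.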
Re-refinement from deposited X-ray data can deliver improved models for most PDB entries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joosten, Robbie P.; Womack, Thomas; Vriend, Gert, E-mail: vriend@cmbi.ru.nl
2009-02-01
An evaluation of validation and real-space intervention possibilities for improving existing automated (re-)refinement methods. The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.
Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M
2014-05-01
Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Ahlfeld, David P.; Baker, Kristine M.; Barlow, Paul M.
2009-01-01
This report describes the Groundwater-Management (GWM) Process for MODFLOW-2005, the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model. GWM can solve a broad range of groundwater-management problems by combined use of simulation- and optimization-modeling techniques. These problems include limiting groundwater-level declines or streamflow depletions, managing groundwater withdrawals, and conjunctively using groundwater and surface-water resources. GWM was initially released for the 2000 version of MODFLOW. Several modifications and enhancements have been made to GWM since its initial release to increase the scope of the program's capabilities and to improve its operation and reporting of results. The new code, which is called GWM-2005, also was designed to support the local grid refinement capability of MODFLOW-2005. Local grid refinement allows for the simulation of one or more higher resolution local grids (referred to as child models) within a coarser grid parent model. Local grid refinement is often needed to improve simulation accuracy in regions where hydraulic gradients change substantially over short distances or in areas requiring detailed representation of aquifer heterogeneity. GWM-2005 can be used to formulate and solve groundwater-management problems that include components in both parent and child models. Although local grid refinement increases simulation accuracy, it can also substantially increase simulation run times.
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum as well as active device simulations that model charge transport and Maxwell's equations will be presented.
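The control flow of adaptive h-refinement is compact enough to sketch. The fragment below refines a 1D mesh around a sharp feature using a deliberately crude error indicator; production FEM/FV codes use rigorous a posteriori estimates and hierarchical elements instead.

```python
import numpy as np

# Minimal h-refinement loop: estimate a per-cell error from the local
# variation of a field, split the worst cells, repeat.
f = lambda x: np.tanh(50.0 * (x - 0.5))      # sharp feature at x = 0.5

cells = list(np.linspace(0.0, 1.0, 11))      # cell edges
for sweep in range(5):
    pairs = list(zip(cells[:-1], cells[1:]))
    errors = [abs(f(b) - f(a)) for a, b in pairs]   # crude indicator: jump across cell
    tol = 0.25 * max(errors)
    refined = [cells[0]]
    for (a, b), e in zip(pairs, errors):
        if e > tol:
            refined.append(0.5 * (a + b))    # h-refinement: bisect the cell
        refined.append(b)
    cells = refined
    print(f"sweep {sweep}: {len(cells) - 1} cells")
```

Running this concentrates cells around x = 0.5 while leaving the smooth regions coarse, which is exactly the memory/accuracy trade the abstract describes for multi-scale device simulations.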
Language extinction and linguistic fronts
Isern, Neus; Fort, Joaquim
2014-01-01
Language diversity has become greatly endangered in the past centuries owing to processes of language shift from indigenous languages to other languages that are seen as socially and economically more advantageous, resulting in the death or doom of minority languages. In this paper, we define a new language competition model that can describe the historical decline of minority languages in competition with more advantageous languages. We then implement this non-spatial model as an interaction term in a reaction–diffusion system to model the evolution of the two competing languages. We use the results to estimate the speed at which the more advantageous language spreads geographically, resulting in the shrinkage of the area of dominance of the minority language. We compare the results from our model with the observed retreat in the area of influence of the Welsh language in the UK, obtaining a good agreement between the model and the observed data. PMID:24598207
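For readers who want to see the mechanics, here is a minimal sketch of a two-language reaction-diffusion competition in one dimension and of tracking the resulting front. The interaction term and all parameter values are illustrative stand-ins, not the model calibrated to the Welsh data.

```python
import numpy as np

# 1D two-language competition with diffusion; track where language B's
# share drops below one half as its front advances.
L_km, nx, dt, D, s = 100.0, 400, 0.01, 0.5, 0.2  # domain, grid, step (yr), km^2/yr, shift rate
dx = L_km / nx
a = np.ones(nx); b = np.zeros(nx)
a[:20], b[:20] = 0.0, 1.0                        # language B holds the left edge

def laplacian(u):
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

fronts = []
for step in range(8001):
    shift = s * a * b                            # A-speakers adopting B on contact
    a += dt * (D * laplacian(a) - shift)
    b += dt * (D * laplacian(b) + shift)
    for u in (a, b):                             # crude no-flux boundaries
        u[0], u[-1] = u[1], u[-2]
    if step % 2000 == 0:
        fronts.append(dx * np.argmax(b < 0.5))
print("front position (km) every 20 yr:", np.round(fronts, 1))
```

The roughly constant spacing of the printed front positions reflects the constant front speed that Fisher-type reaction-diffusion systems predict, which is the quantity the paper compares against the observed retreat of Welsh.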
TLS from fundamentals to practice
Urzhumtsev, Alexandre; Afonine, Pavel V.; Adams, Paul D.
2014-01-01
The Translation-Libration-Screw-rotation (TLS) model of rigid-body harmonic displacements introduced in crystallography by Schomaker & Trueblood (1968) is now a routine tool in macromolecular studies and is a feature of most modern crystallographic structure refinement packages. In this review we consider a number of simple examples that illustrate important features of the TLS model. Based on these examples, simplified formulae are given for several special cases that may occur in structure modeling and refinement. The derivation of general TLS formulae from basic principles is also provided. This manuscript describes the principles of TLS modeling, as well as some select algorithmic details for practical application. An extensive list of references to applications of TLS in macromolecular crystallography refinement is provided. PMID:25249713
Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie
2016-01-01
Language is characterized by both ecological properties and social properties, and competition is the basic form of language evolution. The rise and decline of one language is a result of competition between languages. Moreover, this rise and decline directly influences the diversity of human culture. Mathematical and computational modeling of language competition has been a popular topic in the fields of linguistics, mathematics, computer science, ecology, and other disciplines. Currently, there are several problems in the research on language competition modeling. First, comprehensive mathematical analysis is absent in most studies of language competition models. Second, most language competition models are based on the assumption that one language in the model is stronger than the other; these studies tend to ignore cases where there is a balance of power in the competition. The competition between two well-matched languages is more practical, because it can facilitate the co-development of the two languages. Third, in many studies the evolution inevitably ends with the weaker language going extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and the basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made of the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
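The kind of equilibrium analysis described can be illustrated on a standard competitive Lotka-Volterra system. A minimal sketch with illustrative coefficients, not the paper's fitted values: solve for the coexistence equilibrium, then check the sign of the Jacobian eigenvalues there:

```python
# Stability of the coexistence equilibrium of a two-language
# competitive Lotka-Volterra model (all coefficients are toy values).
import numpy as np

r1, r2 = 1.0, 0.9            # growth rates of two well-matched languages
a11, a12 = 1.0, 0.5          # intra- and inter-language competition
a21, a22 = 0.5, 1.0

# Interior equilibrium: a11*x + a12*y = r1 and a21*x + a22*y = r2.
A = np.array([[a11, a12], [a21, a22]])
xs, ys = np.linalg.solve(A, [r1, r2])

# Jacobian of (x(r1-a11x-a12y), y(r2-a21x-a22y)) evaluated there.
J = np.array([[-a11 * xs, -a12 * xs],
              [-a21 * ys, -a22 * ys]])
eig = np.linalg.eigvals(J)
print("equilibrium:", round(xs, 3), round(ys, 3), "eigenvalues:", eig)
print("stable coexistence" if np.all(eig.real < 0) else "unstable")
```

With weak cross-competition (a12*a21 < a11*a22), both eigenvalues have negative real parts and the two languages coexist, which is the regime the paper argues is realistic for well-matched languages.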
Quantifying the driving factors for language shift in a bilingual region.
Prochazka, Katharina; Vogl, Gero
2017-04-25
Many of the world's around 6,000 languages are in danger of disappearing as people give up use of a minority language in favor of the majority language in a process called language shift. Language shift can be monitored on a large scale through the use of mathematical models by way of differential equations, for example, reaction-diffusion equations. Here, we use a different approach: we propose a model for language dynamics based on the principles of cellular automata/agent-based modeling and combine it with very detailed empirical data. Our model makes it possible to follow language dynamics over space and time, whereas existing models based on differential equations average over space and consequently provide no information on local changes in language use. Additionally, cellular automata models can be used even in cases where models based on differential equations are not applicable, for example, in situations where one language has become dispersed and retreated to language islands. Using data from a bilingual region in Austria, we show that the most important factor in determining the spread and retreat of a language is the interaction with speakers of the same language. External factors like bilingual schools or parish language have only a minor influence.
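A minimal sketch of the cellular-automaton idea: each cell's language at the next step depends mainly on the share of same-language speakers among its neighbours, with a small status bias. The grid size, bias value and update rule below are illustrative assumptions, not the calibrated model:

```python
# Toy cellular automaton for language shift on a toroidal grid;
# 1 = minority language, 0 = majority language.
import numpy as np

rng = np.random.default_rng(0)
N, steps = 100, 50
grid = (rng.random((N, N)) < 0.3).astype(int)

def neighbour_share(g):
    """Fraction of the 8 neighbours speaking language 1 (toroidal)."""
    s = sum(np.roll(np.roll(g, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return s / 8.0

for _ in range(steps):
    share = neighbour_share(grid)
    # Probability of speaking the minority language next step rises with
    # its local share; 0.45 vs 0.55 encodes a small status disadvantage.
    p = 0.45 * share / (0.45 * share + 0.55 * (1.0 - share) + 1e-12)
    grid = (rng.random((N, N)) < p).astype(int)

print("minority share after", steps, "steps:", grid.mean())
```

Because the update is local, the simulation naturally produces the dispersed "language islands" mentioned above, a situation differential-equation models cannot represent.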
Building Excellence in Project Execution: Integrated Project Management
2015-04-30
...Systems Center Pacific (SSC Pacific) is addressing this challenge by adopting and refining the CMMI Model and building the tenets of integrated project management (IPM) into project planning and execution... successfully managing stakeholder expectations and meeting requirements. Under the Capability Maturity Model Integration (CMMI), IPM is defined as...
ERIC Educational Resources Information Center
Lee, Chia-Jung; Kim, ChanMin
2014-01-01
This study presents a refined technological pedagogical content knowledge (also known as TPACK) based instructional design model, which was revised using findings from the implementation study of a prior model. The refined model was applied in a technology integration course with 38 preservice teachers. A case study approach was used in this…
NASA Astrophysics Data System (ADS)
Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos
2013-05-01
In this paper, we present a dynamic refinement algorithm for the Smoothed Particle Hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated by using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure into two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that using the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
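A minimal sketch of the square-pattern split described above, assuming four daughter particles; the separation and smoothing-length ratios (eps, alpha) are placeholder values, whereas the paper derives optimized ones:

```python
# Replace one SPH particle by 4 mass-conserving daughters on a square
# pattern centered at the parent position.
import numpy as np

def refine_particle(pos, mass, h, eps=0.5, alpha=0.6):
    """pos: (2,) parent position; mass: parent mass; h: smoothing length.
    eps: daughter separation / h; alpha: daughter smoothing length / h."""
    offsets = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / 2.0
    daughters = pos + offsets
    return daughters, np.full(4, mass / 4.0), np.full(4, alpha * h)

pos, m, h = np.array([0.0, 0.0]), 1.0, 0.1
dpos, dm, dh = refine_particle(pos, m, h)
print(dpos)
print("total daughter mass:", dm.sum(), "(parent mass conserved)")
```

The quantities tuned in the paper correspond to choosing eps and alpha so that the kernel-gradient error introduced by the split stays small.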
Re-refinement from deposited X-ray data can deliver improved models for most PDB entries.
Joosten, Robbie P; Womack, Thomas; Vriend, Gert; Bricogne, Gérard
2009-02-01
The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.
A refined methodology for modeling volume quantification performance in CT
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Wilson, Joshua; Samei, Ehsan
2014-03-01
The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
Increasing the Cryogenic Toughness of Steels
NASA Technical Reports Server (NTRS)
Rush, H. F.
1986-01-01
Grain-refining heat treatments increase toughness without substantial strength loss. Five alloys were selected for study, all at or near the technological limit. Results showed clearly that the grain sizes of these alloys are refined by such heat treatments and that grain refinement results in a large improvement in toughness without substantial loss in strength. The best improvements were seen in HP-9-4-20 steel, at the low-strength end of the technological limit, and in Maraging 200, at the high-strength end. These alloys, in the grain-refined condition, are considered for model applications in high-Reynolds-number cryogenic wind tunnels.
3Drefine: an interactive web server for efficient protein structure refinement.
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-07-08
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen-bonding network combined with atomic-level energy minimization of the optimized model, using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Acute, subchronic, and developmental toxicological properties of lubricating oil base stocks.
Dalbey, Walden E; McKee, Richard H; Goyak, Katy Olsavsky; Biles, Robert W; Murray, Jay; White, Russell
2014-01-01
Lubricating oil base stocks (LOBs) are substances used in the manufacture of finished lubricants and greases. They are produced from residue remaining after atmospheric distillation of crude oil that is subsequently fractionated by vacuum distillation and additional refining steps. Initial LOB streams that have been produced by vacuum distillation but not further refined may contain polycyclic aromatic compounds (PACs) and may present carcinogenic hazards. In modern refineries, LOBs are further refined by multistep processes including solvent extraction and/or hydrogen treatment to reduce the levels of PACs and other undesirable constituents. Thus, mildly (insufficiently) refined LOBs are potentially more hazardous than more severely (sufficiently) refined LOBs. This article discusses the evaluation of LOBs using statistical models based on content of PACs; these models indicate that insufficiently refined LOBs (potentially carcinogenic LOBs) can also produce systemic and developmental effects with repeated dermal exposure. Experimental data were also obtained in ten 13-week dermal studies in rats, eight 4-week dermal studies in rabbits, and seven dermal developmental toxicity studies with sufficiently refined LOBs (noncarcinogenic and commonly marketed) in which no observed adverse effect levels for systemic toxicity and developmental toxicity were 1000 to 2000 mg/kg/d with dermal exposures, typically the highest dose tested. Results in both oral and inhalation developmental toxicity studies were similar. This absence of toxicologically relevant findings was consistent with lower PAC content of sufficiently refined LOBs. Based on data on reproductive organs with repeated dosing and parameters in developmental toxicity studies, sufficiently refined LOBs are likely to have little, if any, effect on reproductive parameters.
No grammatical gender effect on affective ratings: evidence from Italian and German languages.
Montefinese, Maria; Ambrosini, Ettore; Roivainen, Eka
2018-06-06
In this study, we tested the linguistic relativity hypothesis by studying the effect of grammatical gender (feminine vs. masculine) on affective judgments of conceptual representation in Italian and German. In particular, we examined the within- and cross-language grammatical gender effect and its interaction with participants' demographic characteristics (such as the raters' age and sex) on semantic differential scales (affective ratings of valence, arousal and dominance) in Italian and German speakers. We selected the stimuli and the relative affective measures from Italian and German adaptations of the ANEW (Affective Norms for English Words). Bayesian and frequentist analyses yielded evidence for the absence of within- and cross-language effects of grammatical gender and of sex- and age-dependent interactions. These results suggest that grammatical gender does not affect judgments of affective features of semantic representation in Italian and German speakers, since an overt coding of word grammar is not required. Although further research is recommended to better delineate the impact of grammatical gender on properties of semantic representation, these results have implications for any strong view of the linguistic relativity hypothesis.
[Evolution, emotion, language and consciousness in postrationalist psychotherapy].
De Pascale, Adele
2011-01-01
A complex-systems, process-oriented approach to psychology and psychopathology, in other words a constructivist postrationalist cognitive approach, stresses the close interdependency among processes such as evolution, emotion, language and consciousness. During evolution, emotions, whose biological roots we share with the higher primates, became specialized and refined. Along this process, an increasingly abstract way of scaffolding the enormous quantity of data a brain can manage became necessary. Cognitive abilities, rooted in the emotional quality of experience, allowed, over phylogenetic development, increasingly complex patterns of reflexivity, up to the necessary ability of recognizing others' intentions and, consequently, of lying. Language, an abstract ability useful for scaffolding growing experiential data and probably arising from the development of motor skills, brings at the same time the possibility, for a human knowing system, of self-consciousness: to achieve this, the system must detach from itself, that is, experience a deep sense of loneliness. Thus the progressive development of cognitive skills is linked to the possibility of lying and of self-deception, as well as to the acquisition of advanced levels of self-consciousness.
Improving virtual screening of G protein-coupled receptors via ligand-directed modeling
Simms, John; Christopoulos, Arthur; Wootten, Denise
2017-01-01
G protein-coupled receptors (GPCRs) play crucial roles in cell physiology and pathophysiology. There is increasing interest in using structural information for virtual screening (VS) of libraries and for structure-based drug design to identify novel agonist or antagonist leads. However, the sparse availability of experimentally determined GPCR/ligand complex structures with diverse ligands impedes the application of structure-based drug design (SBDD) programs directed to identifying new molecules with a select pharmacology. In this study, we apply ligand-directed modeling (LDM) to available GPCR X-ray structures to improve VS performance and selectivity towards molecules of specific pharmacological profile. The described method refines a GPCR binding pocket conformation using a single known ligand for that GPCR. The LDM method is a computationally efficient, iterative workflow consisting of protein sampling and ligand docking. We developed an extensive benchmark comparing LDM-refined binding pockets to GPCR X-ray crystal structures across seven different GPCRs bound to a range of ligands of different chemotypes and pharmacological profiles. LDM-refined models showed improvement in VS performance over the original X-ray crystal structures in 21 out of 24 cases. In all cases, the LDM-refined models had superior performance in enriching for the chemotype of the refinement ligand. This likely contributes to the LDM success in all cases of inhibitor-bound to agonist-bound binding pocket refinement, a key task for GPCR SBDD programs. Indeed, agonist ligands are required for a plethora of GPCRs for therapeutic intervention; however, GPCR X-ray structures are mostly restricted to their inactive inhibitor-bound state. PMID:29131821
PDB_REDO: automated re-refinement of X-ray structure models in the PDB.
Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert
2009-06-01
Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.
NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.
Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam
2014-01-01
Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data using a single platform for sharing is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and focus of the study. Phytochemical and bioassay data are also available from many open sources in various standards and customised formats. The objective of this work was to design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the Not only Structured Query Language (NoSQL) databases. Although it does not enforce a schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data formats from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supported a different set of fields for each document. It also allowed storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
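The schema-flexible idea, documents from different surveys carrying different field sets in a single collection, can be sketched with plain Python dictionaries. The species, fields and values below are illustrative; in the actual system such documents would be stored in MongoDB (for example via pymongo) rather than an in-memory list:

```python
# Two "documents" with different field sets coexisting in one collection,
# mimicking MongoDB's schemaless storage of heterogeneous survey records.
plants = [
    {"species": "Centella asiatica", "local_name": "Manimuni",
     "uses": ["wound healing"], "survey": {"district": "Cachar", "year": 2012}},
    {"species": "Justicia adhatoda", "uses": ["cough"],
     "phytochemicals": ["vasicine"], "bioassay": {"target": "antitussive"}},
]

def find(coll, **criteria):
    """Match documents containing all of the given key/value pairs."""
    return [d for d in coll
            if all(d.get(k) == v for k, v in criteria.items())]

print(find(plants, species="Justicia adhatoda"))
```

A query ignores fields a document lacks, which is exactly the property the paper exploits to accommodate disparate ethnomedicinal, phytochemical and bioassay records.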
Structural Health Monitoring of Large Structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.
1994-01-01
This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.
Fernando, Chrisantha; Valijärvi, Riitta-Liisa; Goldstein, Richard A
2010-02-01
Why and how have languages died out? We have devised a mathematical model to help us understand how languages go extinct. We use the model to ask whether language extinction can be prevented in the future and why it may have occurred in the past. A growing number of mathematical models of language dynamics have been developed to study the conditions for language coexistence and death, yet their phenomenological approach compromises their ability to influence language revitalization policy. In contrast, here we model the mechanisms underlying language competition and look at how these mechanisms are influenced by specific language revitalization interventions, namely, private interventions to raise the status of the language and thus promote language learning at home, public interventions to increase the use of the minority language, and explicit teaching of the minority language in schools. Our model reveals that it is possible to preserve a minority language but that continued long-term interventions will likely be necessary. We identify the parameters that determine which interventions work best under certain linguistic and societal circumstances. In this way the efficacy of interventions of various types can be identified and predicted. Although there are qualitative arguments for these parameter values (e.g., the responsiveness of children to learning a language as a function of the proportion of conversations heard in that language, the relative importance of conversations heard in the family and elsewhere, and the amplification of spoken to heard conversations of the high-status language because of the media), extensive quantitative data are lacking in this field. We propose a way to measure these parameters, allowing our model, as well as others models in the field, to be validated.
A methodology for quadrilateral finite element mesh coarsening
Staten, Matthew L.; Benzley, Steven; Scott, Michael
2008-03-27
High fidelity finite element modeling of continuum mechanics problems often requires using all quadrilateral or all hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all quadrilateral and all hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.
PDB_REDO: constructive validation, more than just looking for errors.
Joosten, Robbie P; Joosten, Krista; Murshudov, Garib N; Perrakis, Anastassis
2012-04-01
Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R(free) and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise 'static' structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets.
Text mining a self-report back-translation.
Blanch, Angel; Aluja, Anton
2016-06-01
There are several recommendations about the routine to undertake when back-translating self-report instruments in cross-cultural research. However, text mining methods have been generally ignored within this field. This work describes an innovative text mining application used to adapt a personality questionnaire to 12 different languages. The method is divided into 3 stages: a descriptive analysis of the available back-translated instrument versions, a dissimilarity assessment between the source-language instrument and the 12 back-translations, and an item-level assessment of meaning equivalence. The suggested method contributes to improving the back-translation process of self-report instruments for cross-cultural research in 2 significant intertwined ways. First, it defines a systematic approach to the back-translation issue, allowing for a more orderly and informed evaluation concerning the equivalence of different versions of the same instrument in different languages. Second, it provides more accurate instrument back-translations, which has direct implications for the reliability and validity of the instrument's test scores when used in different cultures/languages. In addition, this procedure can be extended to the back-translation of self-reports measuring psychological constructs in clinical assessment. Future research could refine the suggested methodology and use additional available text mining tools. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
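One concrete way to implement the second stage (dissimilarity between the source-language instrument and each back-translation) is a bag-of-words cosine dissimilarity. A minimal sketch with toy sentences rather than the questionnaire's actual items; the study's text mining tools may well use different measures:

```python
# Cosine dissimilarity between a source item and candidate
# back-translations, computed over word-count vectors.
from collections import Counter
import math

def cosine_dissimilarity(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

source = "I often feel nervous in social situations"
backs = ["I often feel nervous in social situations",
         "I frequently get anxious around other people"]
for b in backs:
    print(f"{cosine_dissimilarity(source, b):.2f}  {b}")
```

Items whose back-translations score high on dissimilarity would then be flagged for the item-level review of meaning equivalence.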
A Comparison and Evaluation of Real-Time Software Systems Modeling Languages
NASA Technical Reports Server (NTRS)
Evensen, Kenneth D.; Weiss, Kathryn Anne
2010-01-01
A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inverse algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can efficiently compute the electromagnetic fields and parametric sensitivities for multiple sources. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained by using the inexact conjugate gradient method. Synthetic test inversions are presented.
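The regularized Gauss-Newton update at the core of such an inversion can be shown on a toy problem. In this sketch a direct solve of the damped normal equations stands in for the inexact conjugate-gradient step, and a simple exponential replaces the CSEM finite-element forward solver:

```python
# Toy regularized (Tikhonov-damped) Gauss-Newton iteration.
import numpy as np

t = np.linspace(0.0, 3.0, 30)

def F(m):            # forward model: amplitude m[0], decay rate m[1]
    return m[0] * np.exp(-m[1] * t)

def J(m):            # Jacobian: the parametric sensitivities
    return np.column_stack([np.exp(-m[1] * t),
                            -m[0] * t * np.exp(-m[1] * t)])

rng = np.random.default_rng(1)
d_obs = F([2.0, 0.7]) + 0.02 * rng.standard_normal(t.size)

m, lam = np.array([1.0, 0.2]), 1e-2   # starting model, damping weight
for _ in range(10):
    r = d_obs - F(m)
    Jm = J(m)
    # Damped normal equations: (J^T J + lam I) dm = J^T r
    dm = np.linalg.solve(Jm.T @ Jm + lam * np.eye(2), Jm.T @ r)
    m += dm
    if np.linalg.norm(dm) < 1e-6:
        break
print("recovered model:", m)
```

In the real inversion the Jacobian is never formed densely; the inexact conjugate gradient method needs only Jacobian-vector products, which the adaptive finite element solver supplies per source.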
Multicriteria framework for selecting a process modelling language
NASA Astrophysics Data System (ADS)
Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel
2016-01-01
The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and the lack of guidelines for evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
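A minimal sketch of the final ranking step, using simple additive weighting as a stand-in for whatever MCDA method the framework adopts; the criteria, weights and scores below are invented for illustration (SEQUAL would supply the actual evaluation criteria):

```python
# Rank candidate process modelling languages by a weighted sum of
# criterion scores (all numbers are illustrative).
criteria_weights = {"expressiveness": 0.40, "comprehensibility": 0.35,
                    "tool support": 0.25}
scores = {
    "BPMN":   {"expressiveness": 8, "comprehensibility": 7, "tool support": 9},
    "EPC":    {"expressiveness": 6, "comprehensibility": 8, "tool support": 6},
    "UML AD": {"expressiveness": 7, "comprehensibility": 6, "tool support": 8},
}

ranking = sorted(
    ((sum(w * scores[lang][c] for c, w in criteria_weights.items()), lang)
     for lang in scores),
    reverse=True)
for total, lang in ranking:
    print(f"{lang:8s} {total:.2f}")
```

The weights encode the purposes of modelling, which is why the paper stresses selecting a language in accordance with those purposes rather than by a fixed league table.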
Kumar, Avishek; Campitelli, Paul; Thorpe, M F; Ozkan, S Banu
2015-12-01
The most successful protein structure prediction methods to date have been template-based modeling (TBM) or homology modeling, which predicts protein structure based on experimental structures. These high accuracy predictions sometimes retain structural errors due to incorrect templates or a lack of accurate templates in the case of low sequence similarity, making these structures inadequate in drug-design studies or molecular dynamics simulations. We have developed a new physics-based approach to the protein refinement problem by mimicking the mechanism of chaperones that rehabilitate misfolded proteins. The template structure is unfolded by selectively (targeted) pulling on different portions of the protein using the geometric based technique FRODA, and then refolded using hierarchically restrained replica exchange molecular dynamics simulations (hr-REMD). FRODA unfolding is used to create a diverse set of topologies for surveying near native-like structures from a template and to provide a set of persistent contacts to be employed during re-folding. We have tested our approach on 13 previous CASP targets and observed that this method of folding an ensemble of partially unfolded structures, through the hierarchical addition of contact restraints (that is, first local and then nonlocal interactions), leads to a refolding of the structure along with refinement in most cases (12/13). Although this approach yields refined models through advancement in sampling, the task of blind selection of the best refined models still needs to be solved. Overall, the method can be useful for improved sampling for low resolution models where certain portions of the structure are incorrectly modeled. © 2015 Wiley Periodicals, Inc.
Across language families: Genome diversity mirrors linguistic variation within Europe
Longobardi, Giuseppe; Ghirotto, Silvia; Guardiano, Cristina; Tassi, Francesca; Benazzo, Andrea; Ceolin, Andrea
2015-01-01
Objectives: The notion that patterns of linguistic and biological variation may cast light on each other and on population histories dates back to Darwin's times; yet, turning this intuition into a proper research program has met with serious methodological difficulties, especially affecting language comparisons. This article takes advantage of two new tools of comparative linguistics: a refined list of Indo‐European cognate words, and a novel method of language comparison estimating linguistic diversity from a universal inventory of grammatical polymorphisms, and hence enabling comparison even across different families. We corroborated the method and used it to compare patterns of linguistic and genomic variation in Europe. Materials and Methods: Two sets of linguistic distances, lexical and syntactic, were inferred from these data and compared with measures of geographic and genomic distance through a series of matrix correlation tests. Linguistic and genomic trees were also estimated and compared. A method (Treemix) was used to infer migration episodes after the main population splits. Results: We observed significant correlations between genomic and linguistic diversity, the latter inferred from data on both Indo‐European and non‐Indo‐European languages. Contrary to previous observations, on the European scale, language proved a better predictor of genomic differences than geography. Inferred episodes of genetic admixture following the main population splits found convincing correlates also in the linguistic realm. Discussion: These results pave the ground for previously unfeasible cross‐disciplinary analyses at the worldwide scale, encompassing populations of distant language families. Am J Phys Anthropol 157:630–640, 2015. © 2015 Wiley Periodicals, Inc. PMID:26059462
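Matrix correlation tests of the kind mentioned are typically Mantel-style permutation tests. A minimal sketch on synthetic distance matrices (the study's actual matrices were lexical, syntactic, genomic and geographic distances):

```python
# Mantel test: correlate the upper triangles of two distance matrices
# and assess significance by jointly permuting rows and columns.
import numpy as np

rng = np.random.default_rng(2)
n = 12
pts = rng.random((n, 2))
D_gen = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
D_lang = D_gen + 0.3 * rng.random((n, n))
D_lang = (D_lang + D_lang.T) / 2.0          # keep the matrix symmetric

iu = np.triu_indices(n, k=1)
def mantel_r(A, B):
    return np.corrcoef(A[iu], B[iu])[0, 1]

r_obs, perms, count = mantel_r(D_gen, D_lang), 9999, 0
for _ in range(perms):
    p = rng.permutation(n)
    if mantel_r(D_gen, D_lang[np.ix_(p, p)]) >= r_obs:
        count += 1
print(f"Mantel r = {r_obs:.2f}, p = {(count + 1) / (perms + 1):.4f}")
```

Permuting population labels rather than individual cells preserves the internal structure of each matrix, which is what makes the test valid for distance data.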
Bilingual Language Switching: Production vs. Recognition
Mosca, Michela; de Bot, Kees
2017-01-01
This study aims at assessing how bilinguals select words in the appropriate language in production and recognition while minimizing interference from the non-appropriate language. Two prominent models are considered which assume that when one language is in use, the other is suppressed. The Inhibitory Control (IC) model suggests that, in both production and recognition, the amount of inhibition on the non-target language is greater for the stronger compared to the weaker language. In contrast, the Bilingual Interactive Activation (BIA) model proposes that, in language recognition, the amount of inhibition on the weaker language is stronger than otherwise. To investigate whether bilingual language production and recognition can be accounted for by a single model of bilingual processing, we tested a group of native speakers of Dutch (L1), advanced speakers of English (L2) in a bilingual recognition and production task. Specifically, language switching costs were measured while participants performed a lexical decision (recognition) and a picture naming (production) task involving language switching. Results suggest that while in language recognition the amount of inhibition applied to the non-appropriate language increases along with its dominance as predicted by the IC model, in production the amount of inhibition applied to the non-relevant language is not related to language dominance, but rather it may be modulated by speakers' unconscious strategies to foster the weaker language. This difference indicates that bilingual language recognition and production might rely on different processing mechanisms and cannot be accounted within one of the existing models of bilingual language processing. PMID:28638361
Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre
2018-06-08
TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B 24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
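The analytical route from TLS matrices to per-atom ADPs can be sketched directly. This assumes the standard Schomaker-Trueblood relation U = T + A L A^T + A S + S^T A^T, where A is the matrix satisfying A λ = λ × r for an atom at position r relative to the group origin; the matrix values below are toys, not refined ones:

```python
# Anisotropic ADP of one atom from T, L, S matrices of its rigid group.
import numpy as np

def adp_from_tls(T, L, S, r):
    """U = T + A L A^T + A S + S^T A^T, with A @ lam = cross(lam, r)."""
    x, y, z = r
    A = np.array([[0.0,   z,  -y],
                  [-z,  0.0,   x],
                  [ y,   -x, 0.0]])
    return T + A @ L @ A.T + A @ S + S.T @ A.T

T = 0.02 * np.eye(3)                 # translation part (isotropic toy, Å²)
L = np.diag([1e-3, 2e-3, 1e-3])      # libration part (rad²)
S = np.zeros((3, 3))                 # screw correlation (toy: none)
U = adp_from_tls(T, L, S, np.array([5.0, 0.0, 0.0]))
print(np.round(U, 4))                # ADPs grow with distance from the axes
```

Comparing such analytically derived ADPs against those sampled numerically from explicit concerted motions is the consistency check the paper describes.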
Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps.
Singharoy, Abhishek; Teo, Ivan; McGreevy, Ryan; Stone, John E; Zhao, Jianhua; Schulten, Klaus
2016-07-07
Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolutions, which ends with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services.
Goodrich, J Marc; Lonigan, Christopher J
2017-08-01
According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying proficiency model for the early literacy skills of Spanish-speaking language-minority children using confirmatory factor analysis. Eight hundred fifty-eight Spanish-speaking language-minority preschoolers (mean age = 60.83 months, 50.2% female) participated in this study. Results indicated that bifactor models that consisted of language-independent as well as language-specific early literacy factors provided the best fits to the data for children's phonological awareness and print knowledge skills. Correlated factors models that only included skills specific to Spanish and English provided the best fits to the data for children's oral language skills. Children's language-independent early literacy skills were significantly related across constructs and to language-specific aspects of early literacy. Language-specific aspects of early literacy skills were significantly related within but not across languages. These findings suggest that language-minority preschoolers have a common underlying proficiency for code-related skills but not language-related skills that may allow them to transfer knowledge across languages.
Model-based high-throughput design of ion exchange protein chromatography.
Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo
2016-08-12
This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate it has the ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.
Stepwise construction of a metabolic network in Event-B: The heat shock response.
Sanwal, Usman; Petre, Luigia; Petre, Ion
2017-12-01
There is high interest in constructing large, detailed computational models for biological processes. This is often done by putting together existing submodels and adding to them extra details/knowledge. The result of such approaches is usually a model that can only answer questions on a very specific level of detail, and thus, ultimately, is of limited use. We focus instead on an approach to systematically add details to a model, with formal verification of its consistency at each step. In this way, one obtains a set of reusable models, at different levels of abstraction, to be used for different purposes depending on the question to address. We demonstrate this approach using Event-B, a computational framework introduced to develop formal specifications of distributed software systems. We first describe how to model generic metabolic networks in Event-B. Then, we apply this method for modeling the biological heat shock response in eukaryotic cells, using Event-B refinement techniques. The advantage of using Event-B lies in having refinement as an intrinsic feature; this provides as a final result not only a correct model, but a chain of models automatically linked by refinement, each of which is provably correct and reusable. This is a proof of concept that refinement in Event-B is suitable for biomodeling, serving as a means of mastering biological complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Refining As-cast β-Ti Grains Through ZrN Inoculation
NASA Astrophysics Data System (ADS)
Qiu, Dong; Zhang, Duyao; Easton, Mark A.; St John, David H.; Gibson, Mark A.
2018-03-01
The columnar-to-equiaxed transition and remarkable refinement of β-Ti grains occurred in an as-cast Ti-13Mo alloy when a new grain refiner, ZrN, was inoculated at a nitrogen level as low as 0.4 wt pct. The grain refining effect is attributed to in situ-formed TiN particles that provide active nucleation sites and solute Zr that promotes constitutional supercooling. Reproducible orientation relationships were identified between the TiN nucleants and the β-Ti matrix, and are well explained by the edge-to-edge matching model.
On the temperature dependence of H-U_iso in the riding hydrogen model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lübben, Jens; Volkmann, Christian; Grabowsky, Simon
The temperature dependence of hydrogen U_iso and parent U_eq in the riding hydrogen model is investigated by neutron diffraction, aspherical-atom refinements and QM/MM and MO/MO cluster calculations. Fixed values of 1.2 or 1.5 appear to be underestimated, especially at temperatures below 100 K. The temperature dependence of H-U_iso in N-acetyl-L-4-hydroxyproline monohydrate is investigated. Imposing a constant temperature-independent multiplier of 1.2 or 1.5 for the riding hydrogen model is found to be inaccurate, and severely underestimates H-U_iso below 100 K. Neutron diffraction data at temperatures of 9, 150, 200 and 250 K provide benchmark results for this study. X-ray diffraction data to high resolution, collected at temperatures of 9, 30, 50, 75, 100, 150, 200 and 250 K (synchrotron and home source), reproduce neutron results only when evaluated by aspherical-atom refinement models, since these take into account bonding and lone-pair electron density; both invariom and Hirshfeld-atom refinement models enable a more precise determination of the magnitude of H-atom displacements than independent-atom model refinements. Experimental efforts are complemented by computing displacement parameters following the TLS+ONIOM approach. A satisfactory agreement between all approaches is found.
NASA Technical Reports Server (NTRS)
Whorton, M. S.
1998-01-01
Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of the difficulty of obtaining accurate models for flexible space structures. Achieving sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and to tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading off between nominal performance and robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.
Initiating technical refinements in high-level golfers: Evidence for contradictory procedures.
Carson, Howie J; Collins, Dave; Richards, Jim
2016-01-01
When developing motor skills there are several outcomes available to an athlete depending on their skill status and needs. Whereas the skill acquisition and performance literature is abundant, an under-researched outcome relates to the refinement of already acquired and well-established skills. Contrary to current recommendations for athletes to employ an external focus of attention and a representative practice design, Carson and Collins' (2011) [Refining and regaining skills in fixation/diversification stage performers: The Five-A Model. International Review of Sport and Exercise Psychology, 4, 146-167. doi: 10.1080/1750984x.2011.613682 ] Five-A Model requires an initial narrowed internal focus on the technical aspect needing refinement: the implication being that environments which limit external sources of information would be beneficial to achieving this task. Therefore, the purpose of this paper was to (1) provide a literature-based explanation for why techniques counter to current recommendations may be (temporarily) appropriate within the skill refinement process and (2) provide empirical evidence for such efficacy. Kinematic data and self-perception reports are provided from high-level golfers attempting to consciously initiate technical refinements while executing shots onto a driving range and into a close proximity net (i.e. with limited knowledge of results). It was hypothesised that greater control over intended refinements would occur when environmental stimuli were reduced in the most unrepresentative practice condition (i.e. hitting into a net). Results confirmed this, as evidenced by reduced intra-individual movement variability for all participants' individual refinements, despite little or no difference in mental effort reported. This research offers coaches guidance when working with performers who may find conscious recall difficult during the skill refinement process.
NASA Astrophysics Data System (ADS)
Kiram, J. J.; Sulaiman, J.; Swanto, S.; Din, W. A.
2015-10-01
This study aims to construct a mathematical model of the relationship between a student's language learning strategy usage and English language proficiency. Fifty-six pre-university students of University Malaysia Sabah participated in this study. A self-report questionnaire called the Strategy Inventory for Language Learning was administered to them to measure their language learning strategy preferences before they sat for the Malaysian University English Test (MUET), the results of which were used to measure their English language proficiency. The model was fitted by multiple linear regression, with variable selection performed by stepwise regression. We subjected the resulting model to various assessments, including the global F-test, root mean square error and R-squared. The model obtained suggests that not all language learning strategies should be included in the model in an attempt to predict language proficiency.
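The procedure described (stepwise variable selection followed by global F-test, RMSE and R-squared checks) can be sketched as follows. The SILL category names and the synthetic data are hypothetical stand-ins, and the paper's exact selection criteria may differ:

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, names, p_enter=0.05):
    """Forward selection: repeatedly add the predictor with the smallest
    p-value until no candidate falls below the entry threshold."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = []
        for j in remaining:
            cols = selected + [j]
            model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals.append((model.pvalues[-1], j))   # p-value of the candidate
        best_p, best_j = min(pvals)
        if best_p > p_enter:
            break
        selected.append(best_j)
        remaining.remove(best_j)
    final = sm.OLS(y, sm.add_constant(X[:, selected])).fit()
    return [names[j] for j in selected], final

# Hypothetical data: 56 students, 6 SILL strategy-category scores vs MUET band.
rng = np.random.default_rng(0)
X = rng.normal(3.0, 0.6, size=(56, 6))
y = 1.5 + 0.8 * X[:, 1] + 0.5 * X[:, 4] + rng.normal(0, 0.4, 56)
names = ["memory", "cognitive", "compensation", "metacognitive", "affective", "social"]
kept, model = forward_stepwise(X, y, names)
print("selected strategies:", kept)
print(f"global F p-value = {model.f_pvalue:.3g}, R^2 = {model.rsquared:.3f}, "
      f"RMSE = {np.sqrt(model.mse_resid):.3f}")
```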
KoBaMIN: a knowledge-based minimization web server for protein structure refinement.
Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav
2012-07-01
The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.
NASA Astrophysics Data System (ADS)
Shim, Moonsoo; Choi, Ho-Gil; Choi, Jeong-Hun; Yi, Kyung-Woo; Lee, Jong-Hyeon
2017-08-01
The purification of a LiCl-KCl salt mixture was carried out by a zone-refining process. To improve the throughput of zone refining, three heaters were installed in the zone refiner. The zone-refining method was used to grow pure LiCl-KCl salt ingots from a LiCl-KCl-CsCl-SrCl2 salt mixture. The main investigated parameters were the heater speed and the number of passes. From each zone-refined salt ingot, samples were collected axially along the salt ingot and the concentrations of Sr and Cs were determined. Experimental results show that the Sr and Cs concentrations at the initial region of the ingot were low and increased to a maximum at the final freezing region of the salt ingot. Concentration results of the zone-refined salt were compared with theoretical results furnished by the proposed model to validate its predictions. The keff values for Sr and Cs were 0.55 and 0.47, respectively. The correlation between the salt composition and separation behavior was also investigated. The keff values of the Sr in LiCl-KCl-SrCl2 and the Cs in LiCl-KCl-CsCl were found to be 0.53 and 0.44, respectively, by fitting the experimental data into the proposed model.
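The single-pass behaviour described (low impurity concentration at the initial region, rising toward the final freezing region for keff < 1) follows the classical Pfann zone-refining relation, sketched below with the Sr and Cs keff values reported in the abstract. The paper's full model (multiple heaters and passes) is more elaborate; this is the textbook single-pass form:

```python
import numpy as np

# Single-pass zone-refining profile (Pfann's equation): with effective
# distribution coefficient keff, initial concentration C0 and zone length l,
#   C(x)/C0 = 1 - (1 - keff) * exp(-keff * x / l),
# valid up to the final zone, where normal freezing takes over.
def single_pass_profile(x, keff, C0=1.0, zone_length=1.0):
    return C0 * (1.0 - (1.0 - keff) * np.exp(-keff * x / zone_length))

x = np.linspace(0.0, 8.0, 9)                        # axial position in zone lengths
for keff, label in [(0.55, "Sr"), (0.47, "Cs")]:    # values reported in the abstract
    c = single_pass_profile(x, keff)
    print(label, np.round(c, 3))
# Impurities with keff < 1 are rejected by the solid, so concentrations start
# low and rise toward the last-to-freeze end, as observed experimentally.
```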
Improved ligand geometries in crystallographic refinement using AFITT in PHENIX
Janowski, Pawel A.; Moriarty, Nigel W.; Kelley, Brian P.; ...
2016-08-31
Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX-AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX-AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein-ligand PDB structures are presented. Refinements using PHENIX-AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. Finally, for the data presented, PHENIX-AFITT refinements result in more chemically accurate models for small-molecule ligands.
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
Principles of parametric estimation in modeling language competition.
Zhang, Menghan; Gong, Tao
2013-06-11
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
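A minimal numerical sketch of a Lotka-Volterra-type competition system for two languages follows. The growth rates, carrying capacities and competition coefficients are invented for illustration; in the paper these quantities are computed from census and survey data via the proposed diffusion and inheritance principles:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two competing languages with speaker populations x and y, in a
# Lotka-Volterra competition system. All coefficients below are hypothetical.
r1, r2 = 0.03, 0.02     # intrinsic growth rates
K1, K2 = 1.0, 1.0       # carrying capacities (normalized)
a12, a21 = 1.1, 0.7     # competition impacts of y on x and of x on y

def rhs(t, z):
    x, y = z
    dx = r1 * x * (1 - (x + a12 * y) / K1)
    dy = r2 * y * (1 - (y + a21 * x) / K2)
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 500.0), [0.4, 0.4])
x_end, y_end = sol.y[:, -1]
print(f"long-run shares: x = {x_end:.3f}, y = {y_end:.3f}")
# With a12 > 1 > a21, language y outcompetes x, mirroring the decline dynamics
# that such models are fitted to with population-census and survey data.
```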
Kwok, Ezra; Gopaluni, Bhushan; Kizhakkedathu, Jayachandran N.
2013-01-01
Molecular dynamics (MD) simulations results are herein incorporated into an electrostatic model used to determine the structure of an effective polymer-based antidote to the anticoagulant fondaparinux. In silico data for the polymer or its cationic binding groups has not, up to now, been available, and experimental data on the structure of the polymer-fondaparinux complex is extremely limited. Consequently, the task of optimizing the polymer structure is a daunting challenge. MD simulations provided a means to gain microscopic information on the interactions of the binding groups and fondaparinux that would have otherwise been inaccessible. This was used to refine the electrostatic model and improve the quantitative model predictions of binding affinity. Once refined, the model provided guidelines to improve electrostatic forces between candidate polymers and fondaparinux in order to increase association rate constants. PMID:27006916
Bellows flow-induced vibrations
NASA Technical Reports Server (NTRS)
Tygielski, P. J.; Smyly, H. M.; Gerlach, C. R.
1983-01-01
The bellows flow-excitation mechanism and the results of a comprehensive test program are summarized. The analytical model for predicting bellows flow-induced stress is refined. The model includes the effects of an upstream elbow, arbitrary geometry, and multiple plies. A refined computer code for predicting flow-induced stress is described which allows life prediction if a material S-N diagram is available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanratty, M.P.; Liber, K.
1994-12-31
The Littoral Ecosystem Risk Assessment Model (LERAM) is a bioenergetic ecosystem effects model. It links single-species toxicity data to a bioenergetic model of the trophic structure of an ecosystem in order to simulate community- and ecosystem-level effects of chemical stressors. LERAM was used in 1992 to simulate the ecological effects of diflubenzuron. When compared to the results from a littoral enclosure study, the model exaggerated the cascading of effects through the trophic levels of the littoral ecosystem. It was hypothesized that this could be corrected by making minor changes in the representation of the littoral food web. Two refinements of the model were therefore performed: (1) the plankton and macroinvertebrate model populations (e.g., predatory Copepoda, herbivorous Insecta, green phytoplankton) were changed to better represent the habitat and feeding preferences of the endemic taxa; and (2) the method for modeling the microbial degradation of detritus (and the resulting nutrient remineralization) was changed from simulating bacterial populations to simulating bacterial function. Model predictions of the ecological effects of 4-nonylphenol were made before and after these refinements. Both sets of predictions were then compared to the results from a littoral enclosure study of the ecological effects of 4-nonylphenol. The changes in the LERAM predictions were then used to determine the success of the refinements, to guide future research, and to further define LERAM's domain of application.
Simplified and refined structural modeling for economical flutter analysis and design
NASA Technical Reports Server (NTRS)
Ricketts, R. H.; Sobieszczanski, J.
1977-01-01
A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of the RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodic verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.
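One standard numerical device for making a simplified model's stiffness and mass equivalent to substructures of a refined model is static (Guyan) condensation, sketched below on a tiny spring chain. The report's actual judgmental and numerical operations are not specified here, so treat this purely as an illustration of the condensation step:

```python
import numpy as np

def guyan_reduce(K, M, master_idx):
    """Static (Guyan) condensation of stiffness K and mass M onto master DOFs.
    Partition DOFs into masters (kept) and slaves (condensed out):
        K_red = Kmm - Kms Kss^{-1} Ksm, with the congruent reduction of M."""
    n = K.shape[0]
    m = np.asarray(master_idx)
    s = np.setdiff1d(np.arange(n), m)
    Kss_inv_Ksm = np.linalg.solve(K[np.ix_(s, s)], K[np.ix_(s, m)])
    # Transformation u = T u_m with T = [I; -Kss^{-1} Ksm]
    T = np.vstack([np.eye(len(m)), -Kss_inv_Ksm])
    idx = np.concatenate([m, s])                 # reorder DOFs: masters first
    Kp, Mp = K[np.ix_(idx, idx)], M[np.ix_(idx, idx)]
    return T.T @ Kp @ T, T.T @ Mp @ T

# Tiny 3-DOF spring chain; keep DOFs 0 and 2 as masters.
K = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
M = np.eye(3)
K_red, M_red = guyan_reduce(K, M, [0, 2])
print(K_red)   # 2x2 condensed stiffness, exact for static response
```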
One technique for refining the global Earth gravity models
NASA Astrophysics Data System (ADS)
Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.
2017-01-01
The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth, with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.
GSFC Ada programming guidelines
NASA Technical Reports Server (NTRS)
Roy, Daniel M.; Nelson, Robert W.
1986-01-01
A significant Ada effort has been under way at Goddard for the last two years. To ease the center's transition toward Ada (notably for future space station projects), a cooperative effort of half a dozen companies and NASA personnel was started in 1985 to produce programming standards and guidelines for the Ada language. The great richness of the Ada language and the need of programmers for good style examples make Ada programming guidelines an important tool to smooth the Ada transition. Because of the natural divergence of technical opinions, the great diversity of our government and private organizations and the novelty of the Ada technology, the creation of an Ada programming guidelines document is a difficult and time consuming task. It is also a vital one. Steps must now be taken to ensure that the guide is refined in an organized but timely manner to reflect the growing level of expertise of the Ada community.
Crosson, Bruce; Benefield, Hope; Cato, M Allison; Sadek, Joseph R; Moore, Anna Bacon; Wierenga, Christina E; Gopinath, Kaundinya; Soltysik, David; Bauer, Russell M; Auerbach, Edward J; Gökçay, Didem; Leonard, Christiana M; Briggs, Richard W
2003-11-01
fMRI was used to determine the frontal, basal ganglia, and thalamic structures engaged by three facets of language generation: lexical status of generated items, the use of semantic vs. phonological information during language generation, and rate of generation. During fMRI, 21 neurologically normal subjects performed four tasks: generation of nonsense syllables given beginning and ending consonant blends, generation of words given a rhyming word, generation of words given a semantic category at a fast rate (matched to the rate of nonsense syllable generation), and generation of words given a semantic category at a slow rate (matched to the rate of generating of rhyming words). Components of a left pre-SMA-dorsal caudate nucleus-ventral anterior thalamic loop were active during word generation from rhyming or category cues but not during nonsense syllable generation. Findings indicate that this loop is involved in retrieving words from pre-existing lexical stores. Relatively diffuse activity in the right basal ganglia (caudate nucleus and putamen) also was found during word-generation tasks but not during nonsense syllable generation. Given the relative absence of right frontal activity during the word generation tasks, we suggest that the right basal ganglia activity serves to suppress right frontal activity, preventing right frontal structures from interfering with language production. Current findings establish roles for the left and the right basal ganglia in word generation. Hypotheses are discussed for future research to help refine our understanding of basal ganglia functions in language generation.
Mirror neurons, language, and embodied cognition.
Perlovsky, Leonid I; Ilin, Roman
2013-05-01
Basic mechanisms of the mind, cognition, language, its semantic and emotional mechanisms are modeled using dynamic logic (DL). This cognitively and mathematically motivated model leads to a dual-model hypothesis of language and cognition. The paper emphasizes that abstract cognition cannot evolve without language. The developed model is consistent with a joint emergence of language and cognition from a mirror neuron system. The dual language-cognition model leads to the dual mental hierarchy. The nature of cognition embodiment in the hierarchy is analyzed. Future theoretical and experimental research is discussed. Published by Elsevier Ltd.
Mehl, Steffen W.; Hill, Mary C.
2011-01-01
This report documents modifications to the Streamflow-Routing Package (SFR2) to route streamflow through grids constructed using the multiple-refined-areas capability of shared node Local Grid Refinement (LGR) of MODFLOW-2005. MODFLOW-2005 is the U.S. Geological Survey modular, three-dimensional, finite-difference groundwater-flow model. LGR provides the capability to simulate groundwater flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. Compatibility with SFR2 allows for streamflow routing across grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems.
Neuropeptide Signaling Networks and Brain Circuit Plasticity.
McClard, Cynthia K; Arenkiel, Benjamin R
2018-01-01
The brain is a remarkable network of circuits dedicated to sensory integration, perception, and response. The computational power of the brain is estimated to dwarf that of most modern supercomputers, but perhaps its most fascinating capability is to structurally refine itself in response to experience. In the language of computers, the brain is loaded with programs that encode when and how to alter its own hardware. This programmed "plasticity" is a critical mechanism by which the brain shapes behavior to adapt to changing environments. The expansive array of molecular commands that help execute this programming is beginning to emerge. Notably, several neuropeptide transmitters, previously best characterized for their roles in hypothalamic endocrine regulation, have increasingly been recognized for mediating activity-dependent refinement of local brain circuits. Here, we discuss recent discoveries that reveal how local signaling by corticotropin-releasing hormone reshapes mouse olfactory bulb circuits in response to activity and further explore how other local neuropeptide networks may function toward similar ends.
[Pragmatics in autism spectrum disorder: recent developments].
Kissine, Mikhail; Clin, Elise; de Villiers, Jessica
2016-10-01
Autism spectrum disorder (ASD) is characterized by primary pragmatic difficulties, out of step with verbal and non-verbal developmental level. This selective survey paper addresses three recent domains of research on pragmatic functions in autism. First, we provide an up-to-date discussion of how lack of sensitivity to social cues impacts early acquisition of words. Second, we review recent findings on the comprehension of non-literal language, pointing to a more refined clinical reality. Third, we describe recent developments in the study of conversation skills in autism. © 2016 médecine/sciences – Inserm.
A Multidimensional Curriculum Model for Heritage or International Language Instruction.
ERIC Educational Resources Information Center
Lazaruk, Wally
1993-01-01
Describes the Multidimensional Curriculum Model for developing a language curriculum and suggests a generic approach to selecting and sequencing learning objectives. Alberta Education used this model to design a new French-as-a-Second-Language program. The experience/communication, culture, language, and general language components at the beginning,…
A Mathematical Model for Railway Control Systems
NASA Technical Reports Server (NTRS)
Hoover, D. N.
1996-01-01
We present a general method for modeling safety aspects of railway control systems. Using our modeling method, one can progressively refine an abstract railway safety model, successively adding layers of detail about how a real system actually operates, while maintaining a safety property that refines the original abstract safety property. This method supports a top-down approach to specification of railway control systems and to proof of a variety of safety-related properties. We demonstrate our method by proving safety of the classical block control system.
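As an illustration of the kind of abstract model that such a method starts from (not the paper's formalism), here is a minimal block-control sketch whose safety invariant, namely that no two trains share a block, is exactly the property a refinement must preserve while adding detail such as signals and interlocking:

```python
# Minimal sketch (hypothetical, not the paper's specification language) of the
# classical block-control safety property: a train may enter a block only if
# no other train currently occupies it.
from dataclasses import dataclass, field

@dataclass
class BlockControl:
    n_blocks: int
    occupancy: dict = field(default_factory=dict)   # train -> block index

    def request_move(self, train: str, block: int) -> bool:
        """Grant movement authority only when the target block is free."""
        if block in self.occupancy.values():
            return False                            # refuse: block occupied
        self.occupancy[train] = block
        return True

    def safe(self) -> bool:
        """Safety invariant: no two trains share a block."""
        blocks = list(self.occupancy.values())
        return len(blocks) == len(set(blocks))

ctl = BlockControl(n_blocks=5)
assert ctl.request_move("A", 1) and ctl.request_move("B", 2)
assert not ctl.request_move("B", 1)    # refused, so the invariant holds
assert ctl.safe()
```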
Modeling of the Coupling of Microstructure and Macrosegregation in a Direct Chill Cast Al-Cu Billet
NASA Astrophysics Data System (ADS)
Heyvaert, Laurent; Bedel, Marie; Založnik, Miha; Combeau, Hervé
2017-10-01
The macroscopic multiphase flow and the growth of the solidification microstructures in the mushy zone of a direct chill (DC) casting are closely coupled. These couplings are the key to the understanding of the formation of the macrosegregation and of the non-uniform microstructure of the casting. In the present paper we use a multiphase and multiscale model to provide a fully coupled picture of the links between macrosegregation and microstructure in a DC cast billet. The model describes nucleation from inoculant particles and growth of dendritic and globular equiaxed crystal grains, fully coupled with macroscopic transport phenomena: fluid flow induced by natural convection and solidification shrinkage, heat, mass, and solute mass transport, motion of free-floating equiaxed grains, and of grain refiner particles. We compare our simulations to experiments on grain-refined and non-grain-refined industrial size billets from literature. We show that a transition between dendritic and globular grain morphology triggered by the grain refinement is the key to the explanation of the differences between the macrosegregation patterns in the two billets. We further show that the grain size and morphology are strongly affected by the macroscopic transport of free-floating equiaxed grains and of grain refiner particles.
Component Models for Semantic Web Languages
NASA Astrophysics Data System (ADS)
Henriksson, Jakob; Aßmann, Uwe
Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.
Modelling language evolution: Examples and predictions
NASA Astrophysics Data System (ADS)
Gong, Tao; Shuai, Lan; Zhang, Menghan
2014-06-01
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
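The refinement criterion itself is simple to sketch. Below is a hedged quadtree-style illustration in which cells are split wherever the local temperature gradient exceeds a threshold, so elements concentrate near sharp boundaries; the paper's actual octree implementation and refinement criteria are more sophisticated:

```python
import numpy as np

# Minimal quadtree-style AMR sketch (details assumed): refine any cell whose
# temperature-gradient magnitude exceeds a threshold, as is typical for
# resolving thermal boundary layers and chemical boundaries.
def refine(cells, temp_field, threshold=0.1, max_level=6):
    """cells: list of (x, y, half_width, level); returns the refined list."""
    out = []
    for (x, y, h, lvl) in cells:
        # Estimate the local gradient from the supplied field function.
        gx = (temp_field(x + h, y) - temp_field(x - h, y)) / (2 * h)
        gy = (temp_field(x, y + h) - temp_field(x, y - h)) / (2 * h)
        if np.hypot(gx, gy) > threshold and lvl < max_level:
            q = h / 2   # split into four children
            out += [(x - q, y - q, q, lvl + 1), (x + q, y - q, q, lvl + 1),
                    (x - q, y + q, q, lvl + 1), (x + q, y + q, q, lvl + 1)]
        else:
            out.append((x, y, h, lvl))
    return out

# A sharp thermal boundary layer near y = 0 drives local refinement.
temp = lambda x, y: np.tanh(20.0 * y)
cells = [(0.5, 0.0, 0.5, 0)]            # one root cell centered at (0.5, 0)
for _ in range(4):
    cells = refine(cells, temp)
print(f"{len(cells)} cells after refinement")
```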
Diffraction-geometry refinement in the DIALS framework
Waterman, David G.; Winter, Graeme; Gildea, Richard J.; ...
2016-03-30
Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails.
NASA Astrophysics Data System (ADS)
Geža, V.; Venčels, J.; Zāģeris, Ģ.; Pavlovs, S.
2018-05-01
One of the most promising methods of producing SoG-Si is refinement via the metallurgical route; the most critical part of this route is refinement from boron and phosphorus, and the approach under development addresses this problem. Creating surface waves on the silicon melt's surface is proposed in order to enlarge the melt's area, accelerating the removal of boron via chemical reactions and the evaporation of phosphorus. A two-dimensional numerical model is created which includes coupling of electromagnetic and fluid-dynamic simulations with free-surface dynamics. First results show behaviour similar to experimental results from the literature.
Language Models and the Teaching of English Language to Secondary School Students in Cameroon
ERIC Educational Resources Information Center
Ntongieh, Njwe Amah Eyovi
2016-01-01
This paper investigates Language models with an emphasis on an appraisal of the Competence Based Language Teaching Model (CBLT) employed in the teaching and learning of English language in Cameroon. Research endeavours at various levels combined with cumulative deficiencies experienced over the years have propelled educational policy makers to…
Research-Based Program Development: Refining the Service Model for a Geographic Alliance
ERIC Educational Resources Information Center
Rutherford, David J.; Lovorn, Carley
2018-01-01
Research conducted in 2013 identified the perceptions that K-12 teachers and administrators hold with respect to: (1) the perceived needs in education, (2) the professional audiences that are most important to reach, and (3) the service models that are most effective. The specific purpose of the research was to refine and improve the services that…
A method to estimate statistical errors of properties derived from charge-density modelling
Lecomte, Claude
2018-01-01
Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix issued from the least-squares refinement. This ‘SSD methodology’ procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also available now through this procedure. The method is exemplified with the charge density of compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
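The core of the SSD methodology is straightforward to sketch: draw parameter sets from a multivariate normal distribution consistent with the refinement's variance-covariance matrix (e.g., via its Cholesky factor), evaluate the derived property for each draw, and take the sample standard deviation. The parameters, covariance matrix and property function below are placeholders, not values from MoPro:

```python
import numpy as np

# Sketch of the 'SSD methodology': generate randomly deviating parameter sets
# consistent with the least-squares variance-covariance matrix, evaluate the
# derived property for each set, and report the sample standard deviation.
rng = np.random.default_rng(1)

p_hat = np.array([1.00, 0.35, 0.12])           # refined parameters (placeholder)
cov = np.array([[4e-4, 1e-5, 0.0],             # variance-covariance matrix from
                [1e-5, 9e-4, 2e-5],            # the refinement (placeholder)
                [0.0,  2e-5, 1e-4]])

def property_of(p):
    """Any property derived from the model, e.g. density at a critical point.
    A made-up nonlinear function stands in for the real evaluation."""
    return p[0] * np.exp(-p[1]) + p[2] ** 2

L = np.linalg.cholesky(cov)                    # respects parameter correlations
draws = p_hat + rng.standard_normal((10000, 3)) @ L.T
values = np.apply_along_axis(property_of, 1, draws)
print(f"property = {property_of(p_hat):.5f} +/- {values.std(ddof=1):.5f} (SSD)")
```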
2016-09-01
Refinement of Out of Circularity and Thickness Measurements of a Cylinder for Finite Element Analysis. ...significant effect on the collapse strength and must be accurately represented in finite element analysis to obtain accurate results. Often it is necessary ... to interpolate measurements from a relatively coarse grid to a refined finite element model and methods that have wide general acceptance are ...
Multilingual natural language generation as part of a medical terminology server.
Wagner, J C; Solomon, W D; Michel, P A; Juge, C; Baud, R H; Rector, A L; Scherrer, J R
1995-01-01
Re-usable, sharable, and therefore language-independent concept models are of increasing importance in the medical domain. The GALEN project (Generalized Architecture for Languages, Encyclopedias and Nomenclatures in Medicine) aims at developing language-independent concept representation systems as the foundations for the next generation of multilingual coding systems. For use within clinical applications, the content of the model has to be mapped to natural language. A so-called Multilingual Information Module (MM) establishes the link between the language-independent concept model and different natural languages. This text generation software must be versatile enough to cope at the same time with different languages and with different parts of a compositional model. It has to meet, on the one hand, the properties of the language as used in the medical domain and, on the other hand, the specific characteristics of the underlying model and its representation formalism. We propose a semantic-oriented approach to natural language generation that is based on linguistic annotations to a concept model. This approach is realized as an integral part of a Terminology Server, built around the concept model and offering different terminological services for clinical applications.
Linguistics: Modelling the dynamics of language death
NASA Astrophysics Data System (ADS)
Abrams, Daniel M.; Strogatz, Steven H.
2003-08-01
Thousands of the world's languages are vanishing at an alarming rate, with 90% of them expected to disappear with the current generation. Here we develop a simple model of language competition that explains historical data on the decline of Welsh, Scottish Gaelic, Quechua (the most common surviving indigenous language in the Americas) and other endangered languages. A linguistic parameter that quantifies the threat of language extinction can be derived from the model and may be useful in the design and evaluation of language-preservation programmes.
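The Abrams-Strogatz dynamics can be reproduced in a few lines: the fraction x speaking language X evolves as dx/dt = (1-x)P_YX - xP_XY, with transition attractiveness proportional to perceived status times a power of the speaker fraction. The status and volatility values below are illustrative (the volatility exponent is commonly quoted near 1.3):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Abrams-Strogatz model: x is the fraction speaking language X, s its relative
# status, a the volatility exponent. Parameter values here are illustrative.
def abrams_strogatz(t, x, c=1.0, a=1.31, s=0.4):
    x = x[0]
    to_X = c * x**a * s                  # attractiveness of switching to X
    to_Y = c * (1 - x)**a * (1 - s)      # attractiveness of switching to Y
    return [(1 - x) * to_X - x * to_Y]

sol = solve_ivp(abrams_strogatz, (0.0, 100.0), [0.3])
print(f"final fraction speaking X: {sol.y[0, -1]:.3f}")
# With status s < 0.5, X declines toward extinction; the fitted status
# parameter quantifies the threat to an endangered language.
```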
Zhang, Yang
2014-01-01
We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. PMID:23760925
Zhang, Yang
2014-02-01
We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Brown, Nancy Melamed
This qualitative investigation extends the study of teacher learning within a reform-based community-of-practice model of professional development. This long-term, multiple-case study examined three experienced teachers' transformations in thinking about science instruction. Data were collected during the three years of the Guided Inquiry supporting Multiple Literacies research project, designed to develop instructional practices informed by a socio-cultural, inquiry-based orientation. Data sources included transcripts of semi-structured interviews collected at strategic points, the teachers' journals, initial application information, and the teachers' written case studies. Using an interpretive case-study approach, tenets of the teachers' orientations were identified through a recursive process. Results are organized to reflect two principles that were integral to the design of the professional development community. The first principle describes changes in teachers' orientations about the goals and characteristics of science instruction in the elementary grades. The second describes changes in teachers' knowledge about themselves as learners and the influence of this knowledge on their thinking about science instruction and student learning. Illustrative findings indicate that: (a) it is possible for teachers' language regarding conceptions of their practice to change with only superficial change in their orientations, (b) teachers can hold dualistic ways of thinking about their practice, (c) in some cases, teachers use a significant amount of autobiography about their own learning to explain their practice; over time, this was replaced with warrants using the language that developed within the professional development community, and (d) long-term case studies revealed differences in orientations that emerged and were refined over time. These findings provide strong support for communities of practice as a model of professional development and hold implications for advancing teacher learning.
Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed
2012-11-01
New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well known Hansen-Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H2O)3·2H2O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than the refinement from PND data alone.
Refined views of multi-protein complexes in the erythrocyte membrane
Mankelow, TJ; Satchwell, TJ; Burton, NM
2015-01-01
The erythrocyte membrane has been extensively studied, both as a model membrane system and to investigate its role in gas exchange and transport. Much is now known about the protein components of the membrane, how they are organised into large multi-protein complexes and how they interact with each other within these complexes. Many links between the membrane and the cytoskeleton have also been delineated and have been demonstrated to be crucial for maintaining the deformability and integrity of the erythrocyte. In this study we have refined previous, highly speculative molecular models of these complexes by including the available data pertaining to known protein-protein interactions. While the refined models remain highly speculative, they provide an evolving framework for visualisation of these important cellular structures at the atomic level. PMID:22465511
Ramus, Franck; Marshall, Chloe R.; Rosen, Stuart
2013-01-01
An on-going debate surrounds the relationship between specific language impairment and developmental dyslexia, in particular with respect to their phonological abilities. Are these distinct disorders? To what extent do they overlap? Which cognitive and linguistic profiles correspond to specific language impairment, dyslexia and comorbid cases? At least three different models have been proposed: the severity model, the additional deficit model and the component model. We address this issue by comparing children with specific language impairment only, those with dyslexia-only, those with specific language impairment and dyslexia and those with no impairment, using a broad test battery of language skills. We find that specific language impairment and dyslexia do not always co-occur, and that some children with specific language impairment do not have a phonological deficit. Using factor analysis, we find that language abilities across the four groups of children have at least three independent sources of variance: one for non-phonological language skills and two for distinct sets of phonological abilities (which we term phonological skills versus phonological representations). Furthermore, children with specific language impairment and dyslexia show partly distinct profiles of phonological deficit along these two dimensions. We conclude that a multiple-component model of language abilities best explains the relationship between specific language impairment and dyslexia and the different profiles of impairment that are observed. PMID:23413264
The proper treatment of language acquisition and change in a population setting.
Niyogi, Partha; Berwick, Robert C
2009-06-23
Language acquisition maps linguistic experience, primary linguistic data (PLD), onto linguistic knowledge, a grammar. Classically, computational models of language acquisition assume a single target grammar and one PLD source, the central question being whether the target grammar can be acquired from the PLD. However, real-world learners confront populations with variation, i.e., multiple target grammars and PLDs. Removing this idealization has inspired a new class of population-based language acquisition models. This paper contrasts 2 such models. In the first, iterated learning (IL), each learner receives PLD from one target grammar but different learners can have different targets. In the second, social learning (SL), each learner receives PLD from possibly multiple targets, e.g., from 2 parents. We demonstrate that these 2 models have radically different evolutionary consequences. The IL model is dynamically deficient in 2 key respects. First, the IL model admits only linear dynamics and so cannot describe phase transitions, attested rapid changes in languages over time. Second, the IL model cannot properly describe the stability of languages over time. In contrast, the SL model leads to nonlinear dynamics, bifurcations, and possibly multiple equilibria and so suffices to model both the case of stable language populations, mixtures of more than 1 language, as well as rapid language change. The 2 models also make distinct, empirically testable predictions about language change. Using historical data, we show that the SL model more faithfully replicates the dynamics of the evolution of Middle English.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all at once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
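A hedged sketch of the sequential-refinement loop on a toy response function follows: estimate first-order variance-based sensitivity indices, refine only the single most influential epistemic variable (here, by shrinking its interval), and repeat. The model, bounds and refinement rule are invented; the paper uses Bayesian inference against subsystem data rather than interval shrinking:

```python
import numpy as np

# Toy version of sequential epistemic refinement: at each step, estimate
# first-order sensitivity indices S1 and refine only the top variable.
rng = np.random.default_rng(2)
N = 20000

def g(x):   # made-up system response standing in for the GTM model
    return x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.5 * x[:, 2] * x[:, 3]

ranges = np.array([[0.0, 2.0], [0.0, 1.0], [0.0, 3.0], [0.0, 3.0]])  # epistemic bounds

def first_order_indices(ranges):
    """Crude S1 estimate: Var(E[Y|Xi]) / Var(Y) via binned conditioning."""
    X = rng.uniform(ranges[:, 0], ranges[:, 1], size=(N, 4))
    Y = g(X)
    S = np.empty(4)
    for i in range(4):
        edges = np.quantile(X[:, i], np.linspace(0, 1, 21))
        idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, 19)
        cond_means = np.array([Y[idx == b].mean() for b in range(20)])
        S[i] = cond_means.var() / Y.var()
    return S

for step in range(4):
    S = first_order_indices(ranges)
    i = int(np.argmax(S))                       # most influential variable
    mid, w = ranges[i].mean(), np.ptp(ranges[i])
    ranges[i] = [mid - 0.1 * w, mid + 0.1 * w]  # 'refine': shrink its interval
    print(f"step {step}: refined variable {i}, S1 = {np.round(S, 2)}")
```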
SFESA: a web server for pairwise alignment refinement by secondary structure shifts.
Tong, Jing; Pei, Jimin; Grishin, Nick V
2015-09-03
Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
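The block-shift idea is easy to sketch: for each secondary-structure block, score alignment variants generated by small local shifts and keep the best. The toy identity scorer below stands in for SFESA's combined profile-profile and template-derived contact-energy score; the function names and sequences are illustrative:

```python
# Minimal sketch of the block-shift refinement idea (scoring scheme simplified
# and assumed): for each secondary-structure block, try small alignment shifts
# and keep the variant with the best combined score.
def refine_block(q_seq, t_seq, start, length, score_fn, max_shift=2):
    """Evaluate local shifts of a query block against the template block
    and return (shift, score) for the highest-scoring variant."""
    best = (0, score_fn(q_seq[start:start + length],
                        t_seq[start:start + length]))
    for shift in range(-max_shift, max_shift + 1):
        s = start + shift
        if shift == 0 or s < 0 or s + length > len(q_seq):
            continue
        sc = score_fn(q_seq[s:s + length], t_seq[start:start + length])
        if sc > best[1]:
            best = (shift, sc)
    return best

# Toy scorer: identity fraction stands in for the real combined scoring.
identity = lambda a, b: sum(x == y for x, y in zip(a, b)) / max(len(a), 1)
t = "MKTAYIAKQRQISFVKSH"
q = "MKTXAYIAKQRQISFVKS"   # one-residue insertion misaligns the block
print(refine_block(q, t, start=5, length=8, score_fn=identity))  # -> (1, 1.0)
```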
ERIC Educational Resources Information Center
Marshall, Julie; Goldbart, Juliet; Phillips, Julie
2007-01-01
Background: Parental and speech and language therapist (SLT) explanatory models may affect engagement with speech and language therapy, but there has been dearth of research in this area. This study investigated parents' and SLTs' views about language development, delay and intervention in pre-school children with language delay. Aims: The aims…
ERIC Educational Resources Information Center
Dixon, L. Quentin; Wu, Shuang
2014-01-01
Purpose: This paper examined the application of the input-interaction-output model in English-as-Foreign-Language (EFL) learning environments with four specific questions: (1) How do the three components function in the model? (2) Does interaction in the foreign language classroom seem to be effective for foreign language acquisition? (3) What…
Plasma Vehicle Charging Analysis for Orion Flight Test 1
NASA Technical Reports Server (NTRS)
Lallement, L.; McDonald, T.; Norgard, J.; Scully, B.
2014-01-01
In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the thermal protection system, but would definitely be required for future GEO, trans-lunar, and extra-lunar missions...
Plasma Vehicle Charging Analysis for Orion Flight Test 1
NASA Technical Reports Server (NTRS)
Scully, B.; Norgard, J.
2015-01-01
In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the Thermal Protection System (TPS), but would definitely be required for future GEO, trans-lunar, and extra-lunar missions.
The Bilingual Language Interaction Network for Comprehension of Speech
Marian, Viorica
2013-01-01
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
Zheng, Zhong-liang; Zuo, Zhen-yu; Liu, Zhi-gang; Tsai, Keng-chang; Liu, Ai-fu; Zou, Guo-lin
2005-01-01
A three-dimensional structural model of nattokinase (NK) from Bacillus natto was constructed by homology modeling. High-resolution X-ray structures of Subtilisin BPN' (SB), Subtilisin Carlsberg (SC), Subtilisin E (SE) and Subtilisin Savinase (SS), four proteins with sequential, structural and functional homology, were used as templates. Initial models of NK were built with MODELLER and analyzed with the PROCHECK programs. The best-quality model was chosen for further refinement by constrained molecular dynamics simulations, and the overall quality of the refined model was evaluated. The refined model NKC1 was analyzed by different protein analysis programs, including PROCHECK for the evaluation of Ramachandran plot quality, PROSA for testing interaction energies and WHATIF for the calculation of packing quality. This structure was found to be satisfactory and also stable at room temperature, as demonstrated by a 300 ps unconstrained molecular dynamics (MD) simulation. Further docking analysis suggested a new nucleophilic catalytic mechanism for NK, induced by attack of the hydroxyl groups abundant in the catalytic environment and by the positioning of S221.
Refining the treatment of membrane proteins by coarse-grained models.
Vorobyov, Igor; Kim, Ilsoo; Chu, Zhen T; Warshel, Arieh
2016-01-01
Obtaining a quantitative description of membrane protein stability is crucial for understanding many biological processes, yet advances in this direction have remained a major challenge for both experimental studies and molecular modeling. One possible direction is the use of coarse-grained models, but such models must be carefully calibrated and validated. Here we use recent progress in benchmark studies on the energetics of amino acid residue and peptide membrane insertion and membrane protein stability to refine our previously developed coarse-grained model (Vicatos et al., Proteins 2014;82:1168). Our refined model parameters were fitted and/or tested to reproduce the water/membrane partitioning energetics of amino acid side chains and a couple of model peptides. The new model provides reasonable agreement with experiment for the absolute folding free energies of several β-barrel membrane proteins, as well as for the effects of point mutations on the relative stability of one of those proteins, OmpLA. The consideration and ranking of different rotameric states for a mutated residue was found to be essential to achieve satisfactory agreement with the reference data. © 2015 Wiley Periodicals, Inc.
On the impact of a refined stochastic model for airborne LiDAR measurements
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig
2016-09-01
Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.
Refining Students' Explanations of an Unfamiliar Physical Phenomenon-Microscopic Friction
NASA Astrophysics Data System (ADS)
Corpuz, Edgar De Guzman; Rebello, N. Sanjay
2017-08-01
The first phase of this multiphase study involves modeling college students' thinking about friction at the microscopic level. Diagnostic interviews were conducted with 11 students with different levels of physics background. A phenomenographic approach to data analysis was used to generate categories of responses, which subsequently were used to generate a model of explanation. Most of the students interviewed consistently used mechanical interactions in explaining microscopic friction. According to these students, friction is due to the interlocking or rubbing of atoms. Our data suggest that students' explanations of microscopic friction are predominantly influenced by their macroscopic experiences. In the second phase of the research, a teaching experiment was conducted with 18 college students to investigate how students' explanations of microscopic friction can be refined by a series of model-building activities. Data were analyzed using Redish's two-level transfer framework. Our results show that through sequences of hands-on and minds-on activities, including cognitive dissonance and resolution, it is possible to facilitate the refinement of students' explanations of microscopic friction. The activities seemed productive in helping students activate associations that refine their ideas about microscopic friction.
Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps
Singharoy, Abhishek; Teo, Ivan; McGreevy, Ryan; Stone, John E; Zhao, Jianhua; Schulten, Klaus
2016-01-01
Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolutions, ending with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å, demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services. DOI: http://dx.doi.org/10.7554/eLife.16105.001 PMID:27383269
Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel
2018-04-01
Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, even when they share an overall similar fold or feature areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution of new structures are described.
Seamless Language Learning: Second Language Learning with Social Media
ERIC Educational Resources Information Center
Wong, Lung-Hsiang; Chai, Ching Sing; Aw, Guat Poh
2017-01-01
This conceptual paper describes a language learning model that applies social media to foster contextualized and connected language learning in communities. The model emphasizes weaving together different forms of language learning activities that take place in different learning contexts to achieve seamless language learning. It promotes social…
Refined carbohydrate intake in relation to non-verbal intelligence among Tehrani schoolchildren.
Abargouei, Amin Salehi; Kalantari, Naser; Omidvar, Nasrin; Rashidkhani, Bahram; Rad, Anahita Houshiar; Ebrahimi, Azizeh Afkham; Khosravi-Boroujeni, Hossein; Esmaillzadeh, Ahmad
2012-10-01
Nutrition has long been considered one of the most important environmental factors affecting human intelligence. Although carbohydrates are the most widely studied nutrient for their possible effects on cognition, limited data are available linking usual refined carbohydrate intake and intelligence. The present study was conducted to examine the relationship between long-term refined carbohydrate intake and non-verbal intelligence among schoolchildren. Cross-sectional study. Tehran, Iran. In this cross-sectional study, 245 students aged 6-7 years were selected from 129 elementary schools in two western regions of Tehran. Anthropometric measurements were carried out. Non-verbal intelligence and refined carbohydrate consumption were determined using Raven's Standard Progressive Matrices test and a modified sixty-seven-item FFQ, respectively. Data about potential confounding variables were collected. Linear regression analysis was applied to examine the relationship between non-verbal intelligence scores and refined carbohydrate consumption. Individuals in the top tertile of refined carbohydrate intake had lower mean non-verbal intelligence scores in the crude model (P = 0.038). This association remained significant after controlling for age, gender, birth date, birth order and breast-feeding pattern (P = 0.045). However, further adjustments for mother's age, mother's education, father's education, parental occupation and BMI made the association statistically non-significant. We found a significant inverse association between refined carbohydrate consumption and non-verbal intelligence scores in regression models (β = -11.359, P < 0.001). This relationship remained significant in multivariate analysis after controlling for potential confounders (β = -8.495, P = 0.038). The study provides evidence indicating an inverse relationship between refined carbohydrate consumption and non-verbal intelligence among Tehrani children aged 6-7 years. Prospective studies are needed to confirm our findings.
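The crude and adjusted models described above correspond to ordinary least-squares regressions with incrementally added covariates. A hedged sketch using statsmodels follows; the file and variable names are illustrative, since the study's actual variable coding is not given.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per child; column names below are placeholders.
    df = pd.read_csv("tehran_children.csv")

    crude = smf.ols("raven_score ~ refined_carb_g", data=df).fit()
    adjusted = smf.ols(
        "raven_score ~ refined_carb_g + age + C(gender) + birth_order"
        " + mother_education + father_education + bmi",
        data=df,
    ).fit()
    # Effect estimate and p-value for refined carbohydrate intake
    print(adjusted.params["refined_carb_g"],
          adjusted.pvalues["refined_carb_g"])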
NASA Astrophysics Data System (ADS)
Torkelson, G. Q.; Stoll, R., II
2017-12-01
Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
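To make the interplay of vertical stretching and grid aspect ratio concrete, the following sketch builds a geometrically stretched vertical grid and monitors the per-cell aspect ratio; the parameter values are illustrative only, not those of the study.

    import numpy as np

    def stretched_vertical_grid(nz, lz, r=1.05):
        """Vertical levels whose spacing grows geometrically by factor r."""
        dz0 = lz * (r - 1.0) / (r**nz - 1.0)    # first (near-wall) cell height
        dz = dz0 * r**np.arange(nz)
        return np.concatenate(([0.0], np.cumsum(dz)))

    z = stretched_vertical_grid(nz=64, lz=1000.0)   # 1 km deep boundary layer
    dx = 25.0                                       # uniform horizontal spacing (m)
    aspect = dx / np.diff(z)                        # per-cell grid aspect ratio
    print(aspect.max())   # large values near the surface flag potential trouble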
Jorgensen, Selena; Thorlby, Ruth; Weinick, Robin M; Ayanian, John Z
2010-12-31
A Massachusetts regulation implemented in 2007 has required all acute care hospitals to report patients' race, ethnicity and preferred language using standardized methodology based on self-reported information from patients. This study assessed implementation of the regulation and its impact on the use of race and ethnicity data in performance monitoring and quality improvement within hospitals. Thematic analysis of semi-structured interviews with executives from a representative sample of 28 Massachusetts hospitals in 2009. The number of hospitals using race, ethnicity and language data internally beyond refining interpreter services increased substantially from 11 to 21 after the regulation. Thirteen of these hospitals were utilizing patient race and ethnicity data to identify disparities in quality performance measures for a variety of clinical processes and outcomes, while 16 had developed patient services and community outreach programs based on findings from these data. Commonly reported barriers to data utilization include small numbers within categories, insufficient resources, information system requirements, and lack of direction from the state. The responses of Massachusetts hospitals to this new state regulation indicate that requiring the collection of race, ethnicity and language data can be an effective method to promote performance monitoring and quality improvement, thereby setting the stage for federal standards and incentive programs to eliminate racial and ethnic disparities in the quality of health care.
Exploring the influence of cultural familiarity and expertise on neurological responses to music.
Demorest, Steven M; Morrison, Steven J
2003-11-01
Contemporary music education in many countries has begun to incorporate not only the dominant music of the culture, but also a variety of music from around the world. Although the desirability of such a broadened curriculum is virtually unquestioned, the specific function of these musical encounters and their potential role in children's cognitive development remain unclear. We do not know if studying a variety of world music traditions involves the acquisition of new skills or an extension and refinement of traditional skills long addressed by music teachers. Is a student's familiarity with a variety of musical traditions a manifestation of a single overarching "musicianship" or is knowledge of these various musical styles more similar to a collection of discrete skills much like learning a second language? Research on the comprehension of spoken language has disclosed a neurologically distinct response among subjects listening to their native language rather than an unfamiliar language. In a recent study comparing Western subjects' responses to music of their native culture and music of an unfamiliar culture, we found that subjects' activation did not differ on the basis of the cultural familiarity of the music, but on the basis of musical expertise. We discuss possible interpretations of these findings in relation to the concept of musical universals, cross-cultural stimulus characteristics, cross-cultural judgment tasks, and the influence of musical expertise. We conclude with suggestions for future research.
Reusable and Extensible High Level Data Distributions
NASA Technical Reports Server (NTRS)
Diaconescu, Roxana E.; Chamberlain, Bradford; James, Mark L.; Zima, Hans P.
2005-01-01
This paper presents a reusable design of a data distribution framework for data parallel high performance applications. We are implementing the design in the context of the Chapel high productivity programming language. Distributions in Chapel are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy can be chosen by a user. At the same time, high productivity concerns require that the user is shielded from error-prone, tedious details such as communication and synchronization. We propose an approach to distributions that enables the user to refine a language-provided distribution type and adjust it to optimize the performance of the application. Additionally, we conceal from the user low-level communication and synchronization details to increase productivity. To emphasize the generality of our distribution machinery, we present its abstract design in the form of a design pattern, which is independent of a concrete implementation. To illustrate the applicability of our distribution framework design, we outline the implementation of data distributions in terms of the Chapel language.
NASA Astrophysics Data System (ADS)
Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.
2012-12-01
This study compares two grid refinement approaches using global variable resolution model and nesting for high-resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.
More than Words: Towards a Development-Based Approach to Language Revitalization
ERIC Educational Resources Information Center
Henderson, Brent; Rohloff, Peter; Henderson, Robert
2014-01-01
Existing models for language revitalization focus almost exclusively on language learning and use. While recognizing the value of these models, we argue that their effective application is largely limited to situations in which languages have low numbers of speakers. For languages that are rapidly undergoing language shift, but which still…
NASA Astrophysics Data System (ADS)
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
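As a minimal point of reference for the kind of discretization such a solver performs, here is a Python analogue of a uniform-mesh finite-difference Poisson solve on a rectangle with Dirichlet walls; unlike the Matlab package it supports no mesh refinement or Neumann conditions, and the constant source term is ours.

    import numpy as np
    from scipy.sparse import lil_matrix, csr_matrix
    from scipy.sparse.linalg import spsolve

    # 5-point Laplacian for -Laplace(u) = f on the unit square,
    # homogeneous Dirichlet boundary conditions on all walls.
    n = 50                              # interior grid points per direction
    h = 1.0 / (n + 1)
    f = np.ones(n * n)                  # constant source term

    A = lil_matrix((n * n, n * n))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0

    u = spsolve(csr_matrix(A) / h**2, f).reshape(n, n)
    print(u.max())                      # peak of the solution bump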
Requirements for Medical Modeling Languages
van der Maas, Arnoud A.F.; Ter Hofstede, Arthur H.M.; Ten Hoopen, A. Johannes
2001-01-01
Objective: The development of tailor-made domain-specific modeling languages is sometimes desirable in medical informatics. Naturally, the development of such languages should be guided. The purpose of this article is to introduce a set of requirements for such languages and show their application in analyzing and comparing existing modeling languages. Design: The requirements arise from the practical experience of the authors and others in the development of modeling languages in both general informatics and medical informatics. The requirements initially emerged from the analysis of information modeling techniques. The requirements are designed to be orthogonal, i.e., one requirement can be violated without violation of the others. Results: The proposed requirements for any modeling language are that it be “formal” with regard to syntax and semantics, “conceptual,” “expressive,” “comprehensible,” “suitable,” and “executable.” The requirements are illustrated using both the medical logic modules of the Arden Syntax as a running example and selected examples from other modeling languages. Conclusion: Activity diagrams of the Unified Modeling Language, task structures for work flows, and Petri nets are discussed with regard to the list of requirements, and various tradeoffs are thus made explicit. It is concluded that this set of requirements has the potential to play a vital role in both the evaluation of existing domain-specific languages and the development of new ones. PMID:11230383
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what's possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodruff, David; Hackebeil, Gabe; Laird, Carl Damon
Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This capability is commonly associated with algebraic modeling languages (AMLs), which support the description and analysis of mathematical models with a high-level language. Although most AMLs are implemented in custom modeling languages, Pyomo's modeling objects are embedded within Python, a full-featured high-level programming language that contains a rich set of supporting libraries.
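A minimal Pyomo model, to make the embedded-AML style concrete; the toy objective and constraint are ours, and solving requires a locally installed solver such as GLPK.

    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, SolverFactory, minimize)

    model = ConcreteModel()
    model.x = Var(within=NonNegativeReals)
    model.y = Var(within=NonNegativeReals)
    # Modeling objects are ordinary Python attributes of the model.
    model.cost = Objective(expr=2 * model.x + 3 * model.y, sense=minimize)
    model.demand = Constraint(expr=model.x + model.y >= 10)

    SolverFactory("glpk").solve(model)   # assumes GLPK is installed
    print(model.x.value, model.y.value)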
Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization
NASA Astrophysics Data System (ADS)
Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad
2017-02-01
Malay Twitter messages deviate markedly from the standard language. Malay Tweets are currently in wide use by Twitter users, especially in the Malay Archipelago. Thus, it is important to build a normalization system that can translate Malay Tweet language into standard Malay. Research in natural language processing has mainly focused on normalizing English Twitter messages, while few studies have addressed the normalization of Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. This approach normalizes noisy Malay Twitter messages, such as colloquial language, novel words, and interjections, into standard Malay. This research uses a language model and an N-gram model.
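A toy sketch of the hybrid idea, with dictionary candidates rescored by a bigram language model; the lexicon entries and probabilities are made up for illustration and are not the paper's actual resources.

    from itertools import product

    # Toy lookup dictionary: noisy Malay tokens -> candidate standard forms.
    lexicon = {"x": ["tidak"], "sy": ["saya"], "mkn": ["makan", "makin"]}

    # Toy bigram probabilities P(w2 | w1); a real system would train
    # these on a standard-Malay corpus.
    bigram = {("saya", "makan"): 0.4, ("saya", "makin"): 0.05,
              ("<s>", "saya"): 0.3}

    def normalize(tokens):
        """Pick the candidate sequence with the highest bigram LM score."""
        candidates = [lexicon.get(t, [t]) for t in tokens]
        best, best_p = None, 0.0
        for seq in product(*candidates):
            p, prev = 1.0, "<s>"
            for w in seq:
                p *= bigram.get((prev, w), 1e-6)   # unseen-bigram floor
                prev = w
            if p > best_p:
                best, best_p = seq, p
        return list(best)

    print(normalize(["sy", "mkn"]))   # -> ['saya', 'makan']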
Language acquisition is model-based rather than model-free.
Wang, Felix Hao; Mintz, Toben H
2016-01-01
Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.
A simple branching model that reproduces language family and language population distributions
NASA Astrophysics Data System (ADS)
Schwämmle, Veit; de Oliveira, Paulo Murilo Castro
2009-07-01
Human history leaves fingerprints in human languages. Little is known about language evolution, and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data on real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data by additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used to define the distance between two languages, similarly to the linguistics tradition since Swadesh.
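A hedged reimplementation sketch of such a branching process, with language change independent of population size and languages distinguished by a small finite feature set; the rates, feature count and cap below are illustrative, not the paper's calibrated values.

    import random

    N_FEATURES = 8          # small set of binary features defining a language
    P_CHANGE = 0.01         # per-step change probability, size-independent
    P_BRANCH = 0.002        # per-step probability that a lineage splits

    def distance(a, b):
        """Number of differing features; zero means the same language."""
        return sum(x != y for x, y in zip(a, b))

    def simulate(steps=10000):
        languages = [[0] * N_FEATURES]       # one ancestral language
        for _ in range(steps):
            for lang in languages:
                if random.random() < P_CHANGE:
                    i = random.randrange(N_FEATURES)
                    lang[i] ^= 1             # flip one feature
            if random.random() < P_BRANCH and len(languages) < 5000:
                languages.append(list(random.choice(languages)))  # branching
        return languages

    langs = simulate()
    print(len({tuple(l) for l in langs}), len(langs))   # distinct vs. total
    print(distance(langs[0], langs[-1]))                # feature distance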
DiffPy-CMI: Python libraries for Complex Modeling Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billinge, Simon; Juhas, Pavol; Farrow, Christopher
2014-02-01
Software to manipulate and describe crystal and molecular structures and set up structural refinements from multiple experimental inputs. Calculation and simulation of structure-derived physical quantities. Library for creating customized refinements of atomic structures from available experimental and theoretical inputs.
TumorML: Concept and requirements of an in silico cancer modelling markup language.
Johnson, David; Cooper, Jonathan; McKeever, Steve
2011-01-01
This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, the conceptual design, and a description of what existing markup languages will be used to compose the language specification.
A pattern-based analysis of clinical computer-interpretable guideline modeling languages.
Mulyar, Nataliya; van der Aalst, Wil M P; Peleg, Mor
2007-01-01
Languages used to specify computer-interpretable guidelines (CIGs) differ in their approaches to addressing particular modeling challenges. The main goals of this article are: (1) to examine the expressive power of CIG modeling languages, and (2) to define the differences, from the control-flow perspective, between process languages in workflow management systems and modeling languages used to design clinical guidelines. The pattern-based analysis was applied to the guideline modeling languages Asbru, EON, GLIF, and PROforma. We focused on control-flow and left other perspectives out of consideration. We evaluated the selected CIG modeling languages and identified their degree of support of 43 control-flow patterns. We used a set of explicitly defined evaluation criteria to determine whether each pattern is supported directly, indirectly, or not at all. PROforma offers direct support for 22 of 43 patterns, Asbru 20, GLIF 17, and EON 11. All four directly support basic control-flow patterns, cancellation patterns, and some advanced branching and synchronization patterns. None support multiple-instance patterns. They offer varying levels of support for synchronizing merge patterns and state-based patterns. Some support a few scenarios not covered by the 43 control-flow patterns. CIG modeling languages are remarkably close to traditional workflow languages from the control-flow perspective, but cover many fewer workflow patterns. CIG languages offer some flexibility that supports modeling of complex decisions and provide ways of modeling some decisions not covered by workflow management systems. Workflow management systems may be suitable for clinical guideline applications.
Self-organizing map models of language acquisition
Li, Ping; Zhao, Xiaowei
2013-01-01
Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061
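For concreteness, here is a minimal online Kohonen self-organizing map trainer of the kind underlying such models; it is a generic textbook sketch with simplified linear decay schedules, not the authors' specific architecture.

    import numpy as np

    def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
        """Minimal online Kohonen self-organizing map."""
        rng = np.random.default_rng(0)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                      indexing="ij"), axis=-1)
        n_steps, t = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                lr = lr0 * (1 - t / n_steps)             # decaying learning rate
                sigma = sigma0 * (1 - t / n_steps) + 1e-3  # shrinking neighborhood
                # Best-matching unit: node whose weights are closest to x
                bmu = np.unravel_index(
                    np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
                d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                nbh = np.exp(-d2 / (2 * sigma ** 2))[..., None]
                weights += lr * nbh * (x - weights)      # pull neighborhood toward x
                t += 1
        return weights

    # Example: map 500 random 3-D points onto a 10x10 grid.
    data = np.random.default_rng(1).random((500, 3))
    w = train_som(data)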
Language Learning Strategies and Its Training Model
ERIC Educational Resources Information Center
Liu, Jing
2010-01-01
This paper summarizes and reviews the literature regarding language learning strategies and its training model, pointing out the significance of language learning strategies to EFL learners and that an applicable and effective language learning strategies training model, which is beneficial both to EFL learners and instructors, is badly needed.
PREFMD: a web server for protein structure refinement via molecular dynamics simulations.
Heo, Lim; Feig, Michael
2018-03-15
Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Martinec, Zdeněk; Fullea, Javier
2015-03-01
We aim to interpret the vertical gravity and vertical gravity gradient of the GOCE-GRACE combined gravity model over the southeastern part of the Congo basin to refine the published model of sedimentary rock cover. We use the GOCO03S gravity model and evaluate its spherical harmonic representation at or near the Earth's surface. In this case, the gradiometry signals are enhanced as compared to the original measured GOCE gradients at satellite height and better emphasize the spatial pattern of sedimentary geology. To avoid aliasing, the omission error of the modelled gravity induced by the sedimentary rocks is adjusted to that of the GOCO03S gravity model. The mass-density Green's functions derived for the a priori structure of the sediments show a slightly greater sensitivity to the GOCO03S vertical gravity gradient than to the vertical gravity. Hence, the refinement of the sedimentary model is carried out for the vertical gravity gradient over the basin, such that a few anomalous values of the GOCO03S-derived vertical gravity gradient are adjusted by refining the model. We apply the 5-parameter Helmert's transformation, defined by 2 translations, 1 rotation and 2 scale parameters that are searched for by the steepest descent method. The refined sedimentary model is only slightly changed with respect to the original map, but it significantly improves the fit of the vertical gravity and vertical gravity gradient over the basin. However, there are still spatial features in the gravity and gradiometric data that remain unfitted by the refined model. These may be due to lateral density variation that is not contained in the model, a density contrast at the Moho discontinuity, lithospheric density stratifications or mantle convection. In a second step, the refined sedimentary model is used to find the vertical density stratification of sedimentary rocks. Although the gravity data can be interpreted by a constant sedimentary density, such a model does not correspond to the gravitational compaction of sedimentary rocks. Therefore, the density model is extended by including a linear increase in density with depth. Subsequent L2 and L∞ norm minimization procedures are applied to find the density parameters by adjusting both the vertical gravity and the vertical gravity gradient. We found that including the vertical gravity gradient in the interpretation of the GOCO03S-derived data reduces the non-uniqueness of the inverse gradiometric problem for density determination. The density structure of the sedimentary formations that provide the optimum predictions of the GOCO03S-derived gravity and vertical gradient of gravity consists of a surface density contrast with respect to surrounding rocks of 0.24-0.28 g/cm3 and its decrease with depth of 0.05-0.25 g/cm3 per 10 km. Moreover, the case where the sedimentary rocks are gravitationally completely compacted in the deepest parts of the basin is supported by L∞ norm minimization. However, this minimization also allows a remaining density contrast at the deepest parts of the sedimentary basin of about 0.1 g/cm3.
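The 5-parameter Helmert-type transformation mentioned above (2 translations, 1 rotation, 2 scale parameters) can be written compactly. The following sketch shows one plausible 2-D parameterization; the paper's exact formulation is not reproduced here, so the ordering of rotation and scaling is an assumption.

    import numpy as np

    def helmert5(points, tx, ty, theta, sx, sy):
        """5-parameter Helmert-type transform of 2-D map coordinates:
        2 translations (tx, ty), 1 rotation (theta), 2 scales (sx, sy).
        Illustrative form; the composition order is an assumption."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        S = np.diag([sx, sy])
        return (R @ S @ points.T).T + np.array([tx, ty])

The five parameters would then be adjusted, e.g. by steepest descent on the misfit between modelled and observed gravity gradients, as the abstract describes.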
Li, Muqun; Carrell, David; Aberdeen, John; Hirschman, Lynette; Kirby, Jacqueline; Li, Bo; Vorobeychik, Yevgeniy; Malin, Bradley A
2016-06-01
Electronic medical records (EMRs) are increasingly repurposed for activities beyond clinical care, such as to support translational research and public policy analysis. To mitigate privacy risks, healthcare organizations (HCOs) aim to remove potentially identifying patient information. A substantial quantity of EMR data is in natural language form and there are concerns that automated tools for detecting identifiers are imperfect and leak information that can be exploited by ill-intentioned data recipients. Thus, HCOs have been encouraged to invest as much effort as possible to find and detect potential identifiers, but such a strategy assumes the recipients are sufficiently incentivized and capable of exploiting leaked identifiers. In practice, such an assumption may not hold true and HCOs may overinvest in de-identification technology. The goal of this study is to design a natural language de-identification framework, rooted in game theory, which enables an HCO to optimize their investments given the expected capabilities of an adversarial recipient. We introduce a Stackelberg game to balance risk and utility in natural language de-identification. This game represents a cost-benefit model that enables an HCO with a fixed budget to minimize their investment in the de-identification process. We evaluate this model by assessing the overall payoff to the HCO and the adversary using 2100 clinical notes from Vanderbilt University Medical Center. We simulate several policy alternatives using a range of parameters, including the cost of training a de-identification model and the loss in data utility due to the removal of terms that are not identifiers. In addition, we compare policy options where, when an attacker is fined for misuse, a monetary penalty is paid to the publishing HCO as opposed to a third party (e.g., a federal regulator). Our results show that when an HCO is forced to exhaust a limited budget (set to $2000 in the study), the precision and recall of the de-identification of the HCO are 0.86 and 0.8, respectively. A game-based approach enables a more refined cost-benefit tradeoff, improving both privacy and utility for the HCO. For example, our investigation shows that it is possible for an HCO to release the data without spending all their budget on de-identification and still deter the attacker, with a precision of 0.77 and a recall of 0.61 for the de-identification. There also exist scenarios in which the model indicates an HCO should not release any data because the risk is too great. In addition, we find that the practice of paying fines back to an HCO (an artifact of suing for breach of contract), as opposed to a third party such as a federal regulator, can induce an elevated level of data sharing risk, where the HCO is incentivized to bait the attacker to elicit compensation. A game-theoretic framework can thus lead HCOs to optimal decisions about investments in natural language de-identification before sharing EMR data. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Guapacha Chamorro, Maria Eugenia; Benavidez Paz, Luis Humberto
2017-01-01
This paper reports an action-research study on language learning strategies in tertiary education at a Colombian university. The study aimed at improving the English language performance and language learning strategies use of 33 first-year pre-service language teachers by combining elements from two models: the cognitive academic language…
The Layer-Oriented Approach to Declarative Languages for Biological Modeling
Raikov, Ivan; De Schutter, Erik
2012-01-01
We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
Refined structure of dimeric diphtheria toxin at 2.0 Å resolution.
Bennett, M. J.; Choe, S.; Eisenberg, D.
1994-01-01
The refined structure of dimeric diphtheria toxin (DT) at 2.0 Å resolution, based on 37,727 unique reflections (F > 1σ(F)), yields a final R factor of 19.5% with a model obeying standard geometry. The refined model consists of 523 amino acid residues, 1 molecule of the bound dinucleotide inhibitor adenylyl 3'-5' uridine 3' monophosphate (ApUp), and 405 well-ordered water molecules. The 2.0-Å refined model reveals that the binding motif for ApUp includes residues in the catalytic and receptor-binding domains and is different from the Rossmann dinucleotide-binding fold. ApUp is bound in part by a long loop (residues 34-52) that crosses the active site. Several residues in the active site were previously identified as NAD-binding residues. Glu 148, previously identified as playing a catalytic role in ADP-ribosylation of elongation factor 2 by DT, is about 5 Å from uracil in ApUp. The trigger for insertion of the transmembrane domain of DT into the endosomal membrane at low pH may involve 3 intradomain and 4 interdomain salt bridges that will be weakened at low pH by protonation of their acidic residues. The refined model also reveals that each molecule in dimeric DT has an "open" structure unlike most globular proteins, which we call an open monomer. Two open monomers interact by "domain swapping" to form a compact, globular dimeric DT structure. The possibility that the open monomer resembles a membrane insertion intermediate is discussed. PMID:7833807
Student Modeling and Ab Initio Language Learning.
ERIC Educational Resources Information Center
Heift, Trude; Schulze, Mathias
2003-01-01
Provides examples of student modeling techniques that have been employed in computer-assisted language learning over the past decade. Describes two systems for learning German: "German Tutor" and "Geroline." Shows how a student model can support computerized adaptive language testing for diagnostic purposes in a Web-based language learning…
Chiral pathways in DNA dinucleotides using gradient optimized refinement along metastable borders
NASA Astrophysics Data System (ADS)
Romano, Pablo; Guenza, Marina
We present a study of DNA breathing fluctuations using Markov state models (MSMs) with our novel refinement procedure. MSMs have become a favored method for building kinetic models; however, their accuracy has depended on using a large number of microstates, making the method costly. We present a method that optimizes macrostates by refining their borders with respect to the gradient along the free energy surface. Because the borders between macrostates carry the largest discretization errors, this method corrects for errors produced by limited microstate sampling. Using our refined MSM methods, we investigate DNA breathing fluctuations, the thermally induced conformational changes in native B-form DNA. We ran several microsecond-long MD simulations of DNA dinucleotides of varying sequences, to include sequence and polarity effects, and analyzed them with our refined MSMs to investigate conformational pathways inherent in the unstacking of DNA bases. Our kinetic analysis has shown preferential chirality in unstacking pathways that may be critical to how proteins interact with single-stranded regions of DNA. These breathing dynamics can help elucidate the connection between conformational changes and key mechanisms within protein-DNA recognition. NSF Chemistry Division (Theoretical Chemistry), the Division of Physics (Condensed Matter: Material Theory), XSEDE.
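For background, a minimal MSM estimator from a discretized trajectory is sketched below; this is generic textbook machinery, not the authors' gradient-based border refinement, and the toy trajectory is ours.

    import numpy as np

    def estimate_msm(dtraj, n_states, lag=10):
        """Row-normalized MSM transition matrix from a discrete trajectory."""
        C = np.zeros((n_states, n_states))
        for a, b in zip(dtraj[:-lag], dtraj[lag:]):
            C[a, b] += 1.0                  # transition counts at the lag time
        C += 1e-8                           # guard against empty rows
        return C / C.sum(axis=1, keepdims=True)

    # Toy 3-state trajectory; implied timescales follow from the
    # eigenvalues of T as t_i = -lag / ln(lambda_i) for lambda_i < 1.
    rng = np.random.default_rng(0)
    T = estimate_msm(rng.integers(0, 3, size=5000), n_states=3, lag=10)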
NASA Astrophysics Data System (ADS)
Apel, M.; Eiken, J.; Hecht, U.
2014-02-01
This paper aims at briefly reviewing phase field models applied to the simulation of heterogeneous nucleation and subsequent growth, with special emphasis on grain refinement by inoculation. The spherical cap and free growth model (e.g. A.L. Greer, et al., Acta Mater. 48, 2823 (2000)) has proven its applicability for different metallic systems, e.g. Al or Mg based alloys, by computing the grain refinement effect achieved by inoculation of the melt with inert seeding particles. However, recent experiments with peritectic Ti-Al-B alloys revealed that the grain refinement by TiB2 is less effective than predicted by the model. Phase field simulations can be applied to validate the approximations of the spherical cap and free growth model, e.g. by computing explicitly the latent heat release associated with different nucleation and growth scenarios. Here, simulation results for point-shaped nucleation, as well as for partially and completely wetted plate-like seed particles will be discussed with respect to recalescence and impact on grain refinement. It will be shown that particularly for large seeding particles (up to 30 μm), the free growth morphology clearly deviates from the assumed spherical cap and the initial growth - until the free growth barrier is reached - significantly contributes to the latent heat release and determines the recalescence temperature.
Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio
2012-01-01
Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. Crystal structure of human dopamine D3 (hD3) receptor has been recently solved. Based on the hD3 receptor crystal structure we generated dopamine D2 and D3 receptor models and refined them with molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structure of hD3 and hD2L receptors was differentiated by means of MD simulations and D3 selective ligands were discriminated, in terms of binding energy, by docking calculation. Robust correlation of computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199
ERIC Educational Resources Information Center
Hongzhi, Long
2017-01-01
Dual-language education models are theoretical and practical systems formed through the process of dual-language education and centered on study and teaching. The language environment is the basis for developing and reforming dual-language education models. The author takes Xiahe County in Gannan Tibetan Autonomous Prefecture as an example and…
Construction of language models for a handwritten mail reading system
NASA Astrophysics Data System (ADS)
Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle
2012-01-01
This paper presents a system for the recognition of unconstrained handwritten mails. The main part of this system is an HMM recognizer which uses trigraphs to model contextual information. This recognition system does not require any segmentation into words or characters and works directly at the line level. To take linguistic information into account and enhance performance, a language model is introduced. This language model is based on bigrams and built from training document transcriptions only. Different experiments with various vocabulary sizes and language models have been conducted. Word Error Rate and Perplexity values are compared to show the benefit of specific language models fitted to the handwritten mail recognition task.
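Since the paper compares language models by perplexity, here is a small reference implementation of corpus perplexity under a bigram model; the unseen-bigram floor is a crude stand-in for the proper smoothing a real evaluation would use, and all names are ours.

    import math

    def perplexity(sentences, bigram_prob, floor=1e-6):
        """Corpus perplexity under a bigram model.

        sentences: list of token lists; bigram_prob: dict mapping
        (prev_word, word) -> probability.
        """
        log_p, n_words = 0.0, 0
        for sent in sentences:
            prev = "<s>"
            for w in sent + ["</s>"]:
                log_p += math.log(bigram_prob.get((prev, w), floor))
                prev = w
                n_words += 1
        return math.exp(-log_p / n_words)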
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input, output and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
DPW-VI Results Using FUN3D with Focus on k-kL-MEAH2015 (k-kL) Turbulence Model
NASA Technical Reports Server (NTRS)
Abdol-Hamid, K. S.; Carlson, Jan-Renee; Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Park, Michael A.
2017-01-01
The Common Research Model wing-body configuration is investigated with the k-kL-MEAH2015 turbulence model implemented in FUN3D. This includes results presented at the Sixth Drag Prediction Workshop and additional results generated after the workshop with a nonlinear Quadratic Constitutive Relation (QCR) variant of the same turbulence model. The workshop provided grids are used, and a uniform grid refinement study is performed at the design condition. A large variation between results with and without a reconstruction limiter is exhibited on "medium" grid sizes, indicating that the medium grid size is too coarse for drawing conclusions in comparison with experiment. This variation is reduced with grid refinement. At a fixed angle of attack near design conditions, the QCR variant yielded decreased lift and drag compared with the linear eddy-viscosity model by an amount that was approximately constant with grid refinement. The k-kL-MEAH2015 turbulence model produced wing root junction flow behavior consistent with wind tunnel observations.
Extremely high data-rate, reliable network systems research
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Maly, Kurt J.; Mukkamala, R.; Murray, Nicholas D.; Overstreet, C. Michael
1990-01-01
Significant progress was made over the year in the four focus areas of this research group: gigabit protocols, extensions of metropolitan protocols, parallel protocols, and distributed simulations. Two activities, a network management tool and the Carrier Sensed Multiple Access Collision Detection (CSMA/CD) protocol, have developed to the point that a patent is being applied for in the next year; a tool set for distributed simulation using the language SIMSCRIPT also has commercial potential and is to be further refined. The year's results for each of these areas are summarized and next year's activities are described.
ART/Ada design project, phase 1. Task 2 report: Detailed design
NASA Technical Reports Server (NTRS)
Allen, Bradley P.
1988-01-01
Various issues are studied in the context of the design of an Ada based expert system building tool. Using an existing successful design as a starting point, the impact is analyzed of the Ada language and Ada development methodologies on that design, the Ada system is redesigned, and its performance is analyzed using both complexity-theoretic and empirical techniques. The algorithms specified in the overall design are refined, resolving and documenting any open design issues, identifying each system module, documenting the internal architecture and control logic, and describing the primary data structures involved in the module.
Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation
NASA Technical Reports Server (NTRS)
Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat
2011-01-01
The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) Web Services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.
Oral motor deficits in speech-impaired children with autism
Belmonte, Matthew K.; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha
2013-01-01
Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual. PMID:23847480
Language and Cognition Interaction Neural Mechanisms
Perlovsky, Leonid
2011-01-01
How do language and cognition interact in thinking? Is language just used for communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to the concrete circumstances of life. Language is acquired from surrounding language “ready-made” and therefore can be acquired early in life. This early acquisition of language in childhood encompasses the entire hierarchy from sounds to words, to phrases, and to the highest concepts existing in culture. Cognition is developed from experience. Yet cognition cannot be acquired from experience alone; language is a necessary intermediary, a “teacher.” A mathematical model is developed; it overcomes previous difficulties and leads to a computational theory. This model is consistent with Arbib's “language prewired brain” built on top of the mirror neuron system. It models recent neuroimaging data about cognition that have gone unaddressed by other theories. A number of properties of language and cognition that previously seemed mysterious are explained, including the influence of language grammar on cultural evolution, which may explain specifics of English and Arabic cultures. PMID:21876687
English Teachers' Language Awareness: Away with the Monolingual Bias?
ERIC Educational Resources Information Center
Otwinowska, Agnieszka
2017-01-01
The training of language teachers still follows traditional models of teachers' competences and awareness, focusing solely on the target language. Such models are incompatible with multilingual pedagogy, whereby languages are not taught in isolation, and learners' background languages are activated to enhance the process. When teaching…
Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint
Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...
Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement
NASA Astrophysics Data System (ADS)
Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano
2018-03-01
The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average tridimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that a RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible simple extension to more complex and extended systems including metal-organic complexes, biomolecules, or nanocrystalline systems.
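The RMC loop itself is compact: perturb one atom, recompute the simulated signal, and accept or reject against a chi-squared agreement with the data. The minimal one-dimensional Python sketch below uses a toy interatomic-distance "signal" and illustrative parameters, not multiple-edge XAS data or the authors' constrained replica scheme.

```python
import math
import random

def chi2(model, target, sigma=0.01):
    """Goodness of fit between model-derived and experimental signals."""
    return sum((m - t) ** 2 for m, t in zip(model, target)) / sigma ** 2

def rmc_refine(positions, signal_of, target, steps=20000, max_move=0.05, seed=0):
    """Reverse Monte Carlo: random single-atom moves, accepted with a
    Metropolis criterion on the chi^2 agreement with the target signal."""
    rng = random.Random(seed)
    cost = chi2(signal_of(positions), target)
    for _ in range(steps):
        i = rng.randrange(len(positions))
        saved = positions[i]
        positions[i] = saved + rng.uniform(-max_move, max_move)
        new_cost = chi2(signal_of(positions), target)
        if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / 2.0):
            cost = new_cost          # accept the move
        else:
            positions[i] = saved     # reject: restore the previous coordinate
    return positions, cost

# Toy 1-D example: the 'signal' is the two shortest interatomic distances of a
# triatomic chain; the target corresponds to two bonds of length 1.0.
signal = lambda xs: sorted(abs(a - b) for i, a in enumerate(xs) for b in xs[i + 1:])[:2]
print(rmc_refine([0.0, 0.8, 2.1], signal, [1.0, 1.0]))
```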
Formation and mechanism of nanocrystalline AZ91 powders during HDDR processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yafen; Fan, Jianfeng, E-mail: fanjianfeng@tyu
2017-03-15
Grain sizes of AZ91 alloy powders were markedly refined to about 15 nm from 100 to 160 μm by an optimized hydrogenation-disproportionation-desorption-recombination (HDDR) process. The effects of temperature, hydrogen pressure and processing time on the phase and microstructure evolution of AZ91 alloy powders during the HDDR process were investigated systematically by X-ray diffraction, optical microscopy, scanning electron microscopy and transmission electron microscopy. The optimal HDDR process for preparing nanocrystalline Mg alloy powders is hydriding at 350 °C under 4 MPa hydrogen pressure for 12 h and dehydriding at 350 °C for 3 h in vacuum. A modified unreacted core model was introduced to describe the mechanism of grain refinement during the HDDR process. - Highlights: • Grain size of the AZ91 alloy powders was significantly refined from 100 μm to 15 nm. • The optimal HDDR technology for nano Mg alloy powders is obtained. • A modified unreacted core model of the grain refinement mechanism was proposed.
Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C
System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
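The kind of disaggregation investigated here can be illustrated compactly: an aggregate stock drained at a constant rate versus agents carrying heterogeneous individual exit probabilities. The Python sketch below uses a hypothetical workforce model with illustrative parameters, not the authors' STEM model.

```python
import random

def sd_workforce(n0=1000.0, rate=0.05, years=20):
    """Aggregate system-dynamics view: a single stock drained at a constant rate."""
    n, path = n0, []
    for _ in range(years):
        n -= rate * n
        path.append(n)
    return path

def abm_workforce(n0=1000, mean_rate=0.05, years=20, seed=1):
    """Agent-based disaggregation: each worker gets an individual exit
    probability, so heterogeneity (not just the mean) shapes the trajectory."""
    rng = random.Random(seed)
    agents = [min(1.0, rng.expovariate(1.0 / mean_rate)) for _ in range(n0)]
    path = []
    for _ in range(years):
        agents = [p for p in agents if rng.random() > p]   # each agent may leave
        path.append(len(agents))
    return path

# High-rate agents leave early, so the heterogeneous population declines
# more slowly at late times than the aggregate model predicts.
print(sd_workforce()[-1], abm_workforce()[-1])
```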
Refinement, Validation and Benchmarking of a Model for E-Government Service Quality
NASA Astrophysics Data System (ADS)
Magoutas, Babis; Mentzas, Gregoris
This paper presents the refinement and validation of a model for Quality of e-Government Services (QeGS). We build upon our previous work, in which a conceptual model was identified, and focus on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which benchmarked very positively against similar models found in the literature, can be used to measure QeGS in a reliable and valid manner. This forms the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.
Language competition in a population of migrating agents.
Lipowska, Dorota; Lipowski, Adam
2017-05-01
Influencing various aspects of human activity, migration is associated also with language formation. To examine the mutual interaction of these processes, we study a Naming Game with migrating agents. The dynamics of the model leads to formation of low-mobility clusters, which turns out to break the symmetry of the model: although the Naming Game remains symmetric, low-mobility languages are favored. High-mobility languages are gradually eliminated from the system, and the dynamics of language formation considerably slows down. Our model is too simple to explain in detail language competition of migrating human communities, but it certainly shows that languages of settlers are favored over nomadic ones.
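A minimal Naming Game with migrating agents fits in a few lines. The sketch below is a loose re-creation under simplified assumptions (uniform random relocation, per-agent mobility drawn uniformly at random), not the authors' exact dynamics.

```python
import random

def naming_game_with_migration(n_agents=200, n_sites=10, steps=20000, seed=0):
    """Minimal Naming Game with migrating agents: agents carry word inventories
    and an individual mobility; speaker/hearer pairs must share a site."""
    rng = random.Random(seed)
    # Each agent: [inventory (set of words), site, mobility]
    agents = [[{f"w{i}"}, rng.randrange(n_sites), rng.random() * 0.2]
              for i in range(n_agents)]
    for _ in range(steps):
        a = rng.choice(agents)
        if rng.random() < a[2]:                    # migrate to a random site
            a[1] = rng.randrange(n_sites)
        peers = [b for b in agents if b is not a and b[1] == a[1]]
        if not peers:
            continue
        b = rng.choice(peers)
        word = rng.choice(sorted(a[0]))            # speaker utters a word
        if word in b[0]:
            a[0], b[0] = {word}, {word}            # success: both collapse to it
        else:
            b[0].add(word)                         # failure: hearer learns it
    return len({w for a in agents for w in a[0]})  # surviving distinct words

print(naming_game_with_migration())
```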
Language shift, bilingualism and the future of Britain's Celtic languages.
Kandler, Anne; Unger, Roman; Steele, James
2010-12-12
'Language shift' is the process whereby members of a community in which more than one language is spoken abandon their original vernacular language in favour of another. The historical shifts to English by Celtic language speakers of Britain and Ireland are particularly well-studied examples for which good census data exist for the most recent 100-120 years in many areas where Celtic languages were once the prevailing vernaculars. We model the dynamics of language shift as a competition process in which the numbers of speakers of each language (both monolingual and bilingual) vary as a function both of internal recruitment (as the net outcome of birth, death, immigration and emigration rates of native speakers), and of gains and losses owing to language shift. We examine two models: a basic model in which bilingualism is simply the transitional state for households moving between alternative monolingual states, and a diglossia model in which there is an additional demand for the endangered language as the preferred medium of communication in some restricted sociolinguistic domain, superimposed on the basic shift dynamics. Fitting our models to census data, we successfully reproduce the demographic trajectories of both languages over the past century. We estimate the rates of recruitment of new Scottish Gaelic speakers that would be required each year (for instance, through school education) to counteract the 'natural wastage' as households with one or more Gaelic speakers fail to transmit the language to the next generation informally, for different rates of loss during informal intergenerational transmission.
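The basic model described, two monolingual compartments linked by a bilingual transitional state, can be sketched as a small ODE system. The Python sketch below is a construction in the spirit of that basic model, with illustrative status and frequency-dependence parameters; it is not the paper's census-fitted equations.

```python
def language_shift(x=0.5, b=0.1, y=0.4, s=0.3, a=1.3, c=1.0, t_end=50.0, dt=0.01):
    """Euler integration of a three-compartment shift model: monolinguals of
    the endangered language (x) pass through bilingualism (b) en route to the
    dominant language (y); s is the endangered language's relative status and
    a the frequency-dependence exponent (hypothetical values)."""
    for _ in range(int(t_end / dt)):
        to_b   = c * (1 - s) * (y ** a) * x   # X -> B: pressure from the dominant language
        to_y   = c * (1 - s) * (y ** a) * b   # B -> Y: households stop transmitting X
        back_b = c * s * (x ** a) * y         # Y -> B: recruitment into the endangered language
        back_x = c * s * (x ** a) * b         # B -> X
        x += dt * (back_x - to_b)
        b += dt * (to_b + back_b - to_y - back_x)
        y += dt * (to_y - back_b)
    return x, b, y

print(language_shift())   # with s < 0.5 the endangered language declines
```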
Zadeh, Zohreh Yaghoub; Im-Bolter, Nancie; Cohen, Nancy J
2007-04-01
The present study integrates findings from three lines of research on the association of social cognition and externalizing psychopathology, language and externalizing psychopathology, and social cognition and language functioning using Structural Equation Modeling (SEM). To date these associations have been examined in pairs. A sample of 354 clinic-referred children (aged 7 to 14 years) recruited from a children's mental health centre were tested on measures of language, social cognition, working memory, and child psychopathology. We compared a hypothesized model presenting language functioning as a mediator of the association between social cognition and externalizing psychopathology to a model presenting the independent contribution of language and social cognition to externalizing psychopathology. As hypothesized, we found that the mediation model fits the data better than the alternative model. Our findings have implications for developing and modifying intervention techniques for children with dual language and externalizing psychopathology.
Sordo, Margarita; Boxwala, Aziz A; Ogunyemi, Omolola; Greenes, Robert A
2004-01-01
A major obstacle to sharing computable clinical knowledge is the lack of a common language for specifying expressions and criteria. Such a language could be used to specify decision criteria, formulae, and constraints on data and action. Although the Arden Syntax addresses this problem for clinical rules, its generalization to HL7's object-oriented data model is limited. The GELLO Expression Language is an object-oriented language used for expressing logical conditions and computations in the GLIF3 (GuideLine Interchange Format, v. 3) guideline modeling language. It has been further developed under the auspices of the HL7 Clinical Decision Support Technical Committee as a proposed HL7 standard. GELLO is based on the Object Constraint Language (OCL), because it is vendor-independent, object-oriented, and side-effect-free. GELLO expects an object-oriented data model. Although choice of model is arbitrary, standardization is facilitated by ensuring that the data model is compatible with the HL7 Reference Information Model (RIM).
An amodal shared resource model of language-mediated visual attention
Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
2013-01-01
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967
Scheduler for monitoring objects orbiting earth using satellite-based telescopes
Olivier, Scot S; Pertica, Alexander J; Riot, Vincent J; De Vries, Willem H; Bauman, Brian J; Nikolaev, Sergei; Henderson, John R; Phillion, Donald W
2015-04-28
An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.
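The final step, refining the initial orbital model from the observation-derived target locations, is at heart a least-squares update. The toy Python sketch below fits the along-track angle of a circular orbit by linear least squares; the model, noise level, and numbers are illustrative assumptions, not the patented system's estimator.

```python
import numpy as np

def refine_ephemeris(t_obs, theta_obs):
    """Refine a toy orbital model from observations: fit the along-track angle
    theta(t) = theta0 + n*t (circular orbit, n = mean motion) by least squares,
    standing in for the ground-based controller's model update."""
    A = np.column_stack([np.ones_like(t_obs), t_obs])
    (theta0, n), *_ = np.linalg.lstsq(A, theta_obs, rcond=None)
    return theta0, n

rng = np.random.default_rng(0)
t = np.linspace(0.0, 6000.0, 40)                 # observation times (s)
true_theta0, true_n = 0.5, 2 * np.pi / 5400.0    # ~90-minute orbit
obs = true_theta0 + true_n * t + rng.normal(0.0, 1e-3, t.size)
print(refine_ephemeris(t, obs))                  # ~ (0.5, 1.16e-3)
```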
NASA Technical Reports Server (NTRS)
Arnold, William R.
2015-01-01
Since last year, a number of expanded capabilities have been added to the modeler. To support integration with thermal modeling, the program can now produce simplified thermal models with the same geometric parameters as the more detailed dynamic and even more refined stress models. The local mesh refinement and mesh improvement tools have been expanded and made more user friendly. The goal is to provide a means of evaluating both monolithic and segmented mirrors to the same level of fidelity and loading conditions with reasonable man-power effort. The paper will demonstrate most of these new capabilities.
Structural analysis of glycoproteins: building N-linked glycans with Coot.
Emsley, Paul; Crispin, Max
2018-04-01
Coot is a graphics application that is used to build or manipulate macromolecular models; its particular forte is manipulation of the model at the residue level. The model-building tools of Coot have been combined and extended to assist or automate the building of N-linked glycans. The model is built by the addition of monosaccharides, placed by variation of internal coordinates. The subsequent model is refined by real-space refinement, which is stabilized with modified and additional restraints. It is hoped that these enhanced building tools will help to reduce building errors of N-linked glycans and improve our knowledge of the structures of glycoproteins.
ERIC Educational Resources Information Center
Phillips, Lawrence; Pearl, Lisa
2015-01-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…
ERIC Educational Resources Information Center
Brysbaert, Marc; Duyck, Wouter
2010-01-01
The Revised Hierarchical Model (RHM) of bilingual language processing dominates current thinking on bilingual language processing. Recently, basic tenets of the model have been called into question. First, there is little evidence for separate lexicons. Second, there is little evidence for language selective access. Third, the inclusion of…
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
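The flag-and-refine step at the core of such procedures is simple to sketch. The Python example below splits 1-D cells whose solution jump exceeds a tolerance; the jump is a stand-in indicator, whereas the paper uses a Richardson-extrapolation error estimate and additionally moves the base mesh.

```python
def refine_mesh(x, u, tol=0.1):
    """One flag-and-refine pass for a 1-D mesh: cells whose jump in the
    solution exceeds tol are split in half (a sketch of local refinement,
    not the paper's full moving-mesh algorithm)."""
    new_x = [x[0]]
    for xl, xr, ul, ur in zip(x, x[1:], u, u[1:]):
        if abs(ur - ul) > tol:                 # refinement indicator: solution jump
            new_x.append(0.5 * (xl + xr))      # insert midpoint -> two finer cells
        new_x.append(xr)
    return new_x

# A step-like profile triggers refinement only near the steep front.
xs = [i / 10.0 for i in range(11)]
us = [0.0 if xi < 0.5 else 1.0 for xi in xs]
print(refine_mesh(xs, us))                     # extra node appears near x = 0.5
```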
Assessing food allergy risks from residual peanut protein in highly refined vegetable oil.
Blom, W Marty; Kruizinga, Astrid G; Rubingh, Carina M; Remington, Ben C; Crevel, René W R; Houben, Geert F
2017-08-01
Refined vegetable oils including refined peanut oil are widely used in foods. Due to shared production processes, refined non-peanut vegetable oils can contain residual peanut proteins. We estimated the predicted number of allergic reactions to residual peanut proteins using probabilistic risk assessment applied to several scenarios involving food products made with vegetable oils. Variables considered were: a) the estimated production scale of refined peanut oil, b) estimated cross-contact between refined vegetable oils during production, c) the proportion of fat in representative food products and d) the peanut protein concentration in refined peanut oil. For all products examined the predicted risk of objective allergic reactions in peanut-allergic users of the food products was extremely low. The number of predicted reactions ranged depending on the model from a high of 3 per 1000 eating occasions (Weibull) to no reactions (LogNormal). Significantly, all reactions were predicted for allergen intakes well below the amounts reported for the most sensitive individual described in the clinical literature. We conclude that the health risk from cross-contact between vegetable oils and refined peanut oil is negligible. None of the food products would warrant precautionary labelling for peanut according to the VITAL ® programme of the Allergen Bureau. Copyright © 2017 Elsevier Ltd. All rights reserved.
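Probabilistic risk assessments of this type propagate intake and threshold distributions by Monte Carlo. The Python sketch below uses entirely illustrative parameter values (serving size, oil fraction, residual protein level, threshold distribution), not the paper's data or its Weibull/LogNormal dose-response fits.

```python
import numpy as np

def predicted_reaction_rate(n=200_000, seed=0):
    """Monte Carlo sketch of a probabilistic allergy risk assessment: sample
    peanut-protein intake per eating occasion and compare it with individual
    reaction thresholds drawn from a log-normal dose-response distribution.
    All parameter values are hypothetical."""
    rng = np.random.default_rng(seed)
    serving_g   = rng.normal(30.0, 10.0, n).clip(min=1.0)   # food eaten (g)
    oil_frac    = 0.2                                       # fraction of serving that is oil
    ppm_protein = rng.lognormal(np.log(0.5), 1.0, n)        # residual protein in oil (mg/kg)
    intake_mg   = serving_g * oil_frac * ppm_protein / 1000.0   # g oil -> kg oil
    thresholds  = rng.lognormal(np.log(100.0), 1.5, n)      # eliciting dose (mg protein)
    return (intake_mg > thresholds).mean()

print(f"predicted reactions per eating occasion: {predicted_reaction_rate():.2e}")
```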
Theory of a refined earth model
NASA Technical Reports Server (NTRS)
Krause, H. G. L.
1968-01-01
Refined equations are derived relating the variations of the earth's gravity and radius as functions of longitude and latitude. In particular, they relate the oblateness coefficients of the odd harmonics and the difference of the polar radii (respectively, ellipticities and polar gravity accelerations) in the Northern and Southern Hemispheres.
Refining King and Baxter Magolda's Model of Intercultural Maturity
ERIC Educational Resources Information Center
Perez, Rosemary J.; Shim, Woojeong; King, Patricia M.; Baxter Magolda, Marcia B.
2015-01-01
This study examined 110 intercultural experiences from 82 students attending six colleges and universities to explore how students' interpretations of their intercultural experiences reflected their developmental capacities for intercultural maturity. Our analysis of students' experiences confirmed as well as refined and expanded King and Baxter…
A concept analysis of moral resilience.
Young, Peter D; Rushton, Cynda Hylton
Nurses experience moral distress, which has led to emotional distress, frustration, anger, and nurse attrition. Overcoming moral distress has become a significant focus in nursing research. The continued focus on moral distress has not produced sustainable solutions within the nursing profession. Since positive language may alter the outcomes of morally distressing situations, we look to better understand one such positive phrase, moral resilience. We explored moral resilience through a literature search using 11 databases to identify instances of the phrase. Occurrences of moral resilience were then divided into three distinct categories: antecedents, attributes, and consequences, and following this, major themes within each category were identified. There is a dearth of scholarship on moral resilience, and additionally, there is currently no unifying definition. Despite this, our analysis offers promising direction in refining the concept. This concept analysis reveals differences in how moral resilience is understood. More conceptual work is needed to refine the definition of moral resilience and understand how the concept is useful in mitigating the negative consequences of moral distress and other types of moral adversity. Copyright © 2017 Elsevier Inc. All rights reserved.
Entanglement classification with algebraic geometry
NASA Astrophysics Data System (ADS)
Sanz, M.; Braak, D.; Solano, E.; Egusquiza, I. L.
2017-05-01
We approach multipartite entanglement classification in the symmetric subspace in terms of algebraic geometry, its natural language. We show that the class of symmetric separable states has the structure of a Veronese variety and that its k-secant varieties are SLOCC invariants. Thus SLOCC classes gather naturally into families. This classification presents useful properties such as a linear growth of the number of families with the number of particles, and nesting, i.e. upward consistency of the classification. We attach physical meaning to this classification through the required interaction length of parent Hamiltonians. We show that the states W_N and GHZ_N are in the same secant family and that, effectively, the former can be obtained in a limit from the latter. This limit is understood in terms of tangents, leading to a refinement of the previous families. We compute explicitly the classification of symmetric states with N ≤ 4 qubits in terms of both secant families and its refinement using tangents. This paves the way to further use of projective varieties in algebraic geometry to solve open problems in entanglement theory.
Cognitive aging and hearing acuity: modeling spoken language comprehension.
Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda
2015-01-01
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.
He, Angela Xiaoxue; Arunachalam, Sudha
2017-07-01
How do children acquire the meanings of words? Many word learning mechanisms have been proposed to guide learners through this challenging task. Despite the availability of rich information in the learner's linguistic and extralinguistic input, the word-learning task is insurmountable without such mechanisms for filtering through and utilizing that information. Different kinds of words, such as nouns denoting object concepts and verbs denoting event concepts, require to some extent different kinds of information and, therefore, access to different kinds of mechanisms. We review some of these mechanisms to examine the relationship between the input that is available to learners and learners' intake of that input-that is, the organized, interpreted, and stored representations they form. We discuss how learners segment individual words from the speech stream and identify their grammatical categories, how they identify the concepts denoted by these words, and how they refine their initial representations of word meanings. WIREs Cogn Sci 2017, 8:e1435. doi: 10.1002/wcs.1435 This article is categorized under: Linguistics > Language Acquisition Psychology > Language. © 2017 Wiley Periodicals, Inc.
Zimmerman, Sheryl; Love, Karen; Cohen, Lauren W; Pinkowitz, Jackie; Nyrop, Kirsten A
2014-01-01
As a result of the Centers for Medicare & Medicaid Services (CMS) interest in creating a unifying definition of "community living" for its Medicaid Home and Community Based Services and Support (HCBS) programs, it needed clarifying descriptors of person-centered (PC) practices in assisted living to distinguish them from institutional ones. Additionally, CMS's proposed language defining "community living" had the unintended potential to exclude many assisted living communities and disadvantage residents who receive Medicaid. This manuscript describes the consensus process through which clarifying language for "community living" and a framework for HCBS PC domains, attributes, and indicators specific to assisted living were developed. It examines the validity of those domains based on literature review, surveys, and stakeholder focus groups, and identifies nine domains and 43 indicators that provide a foundation for defining and measuring PC practice in assisted living. Ongoing efforts using community-based participatory research methods are further refining and testing PC indicators for assisted living to advance knowledge, operational policies, practices, and quality outcomes.
Modeling as an Anchoring Scientific Practice for Explaining Friction Phenomena
NASA Astrophysics Data System (ADS)
Neilson, Drew; Campbell, Todd
2017-12-01
Through examining the day-to-day work of scientists, researchers in science studies have revealed how models are a central sense-making practice of scientists as they construct and critique explanations about how the universe works. Additionally, they allow predictions to be made using the tenets of the model. Given this, alongside research suggesting that engaging students in developing and using models can have a positive effect on learning in science classrooms, the recent national standards documents in science education have identified developing and using models as an important practice students should engage in as they apply and refine their ideas with peers and teachers in explaining phenomena or solving problems in classrooms. This article details how students can be engaged in developing and using models to help them make sense of friction phenomena in a high school conceptual physics classroom in ways that align with visions for teaching and learning outlined in the Next Generation Science Standards. This particular unit has been refined over several years to build on what was initially an inquiry-based unit we have described previously. In this latest iteration of the friction unit, students developed and refined models through engaging in small group and whole class discussions and investigations.
Kim, Young-Suk Grace; Schatschneider, Christopher
2016-01-01
We investigated direct and indirect effects of component skills on writing (DIEW) using data from 193 children in Grade 1. In this model, working memory was hypothesized to be a foundational cognitive ability for language and cognitive skills as well as transcription skills, which, in turn, contribute to writing. Foundational oral language skills (vocabulary and grammatical knowledge) and higher-order cognitive skills (inference and theory of mind) were hypothesized to be component skills of text generation (i.e., discourse-level oral language). Results from structural equation modeling largely supported a complete mediation model among four variations of the DIEW model. Discourse-level oral language, spelling, and handwriting fluency completely mediated the relations of higher-order cognitive skills, foundational oral language, and working memory to writing. Moreover, language and cognitive skills had both direct and indirect relations to discourse-level oral language. Total effects, including direct and indirect effects, were substantial for discourse-level oral language (.46), working memory (.43), and spelling (.37), followed by vocabulary (.19), handwriting (.17), theory of mind (.12), inference (.10), and grammatical knowledge (.10). The model explained approximately 67% of variance in writing quality. These results indicate that multiple language and cognitive skills make direct and indirect contributions, and it is important to consider both direct and indirect pathways of influences when considering skills that are important to writing. PMID:28260812
Why language really is not a communication system: a cognitive view of language evolution
Reboul, Anne C.
2015-01-01
While most evolutionary scenarios for language see it as a communication system with consequences on the language-ready brain, there are major difficulties for such a view. First, language has a core combination of features—semanticity, discrete infinity, and decoupling—that makes it unique among communication systems and that raise deep problems for the view that it evolved for communication. Second, extant models of communication systems—the code model of communication (Millikan, 2005) and the ostensive model of communication (Scott-Phillips, 2015) cannot account for language evolution. I propose an alternative view, according to which language first evolved as a cognitive tool, following Fodor’s (1975, 2008) Language of Thought Hypothesis, and was then exapted (externalized) for communication. On this view, a language-ready brain is a brain profoundly reorganized in terms of connectivity, allowing the human conceptual system to emerge, triggering the emergence of syntax. Language as used in communication inherited its core combination of features from the Language of Thought. PMID:26441802
Rosetta Structure Prediction as a Tool for Solving Difficult Molecular Replacement Problems.
DiMaio, Frank
2017-01-01
Molecular replacement (MR), a method for solving the crystallographic phase problem using phases derived from a model of the target structure, has proven extremely valuable, accounting for the vast majority of structures solved by X-ray crystallography. However, when the resolution of data is low, or the starting model is very dissimilar to the target protein, solving structures via molecular replacement may be very challenging. In recent years, protein structure prediction methodology has emerged as a powerful tool in model building and model refinement for difficult molecular replacement problems. This chapter describes some of the tools available in Rosetta for model building and model refinement specifically geared toward difficult molecular replacement cases.
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
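The ghost-node idea can be pictured in one dimension: heads at the refined-grid interface are interpolated from the surrounding coarse-grid heads. Below is a minimal numpy sketch using linear interpolation; the actual MODFLOW-LGR scheme is multidimensional and iterates between the parent and child grids until heads and flows converge.

```python
import numpy as np

def ghost_node_heads(coarse_heads, ratio=3):
    """Interpolate coarse-grid heads onto the nodes of a locally refined grid
    (the 1-D analogue of ghost-node head interpolation at an LGR interface)."""
    x_coarse = np.arange(len(coarse_heads), dtype=float)
    n_fine = (len(coarse_heads) - 1) * ratio + 1
    x_fine = np.linspace(0.0, len(coarse_heads) - 1.0, n_fine)
    return np.interp(x_fine, x_coarse, coarse_heads)

# Hypothetical heads on three coarse cells, refined 3:1
print(ghost_node_heads(np.array([10.0, 9.4, 9.1])))
```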
Modelling dynamics in protein crystal structures by ensemble refinement
Burnley, B Tom; Afonine, Pavel V; Adams, Paul D; Gros, Piet
2012-01-01
Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships. DOI: http://dx.doi.org/10.7554/eLife.00311.001 PMID:23251785
Extension of Alvis compiler front-end
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr, E-mail: mwypych@agh.edu.pl, E-mail: mszpyrka@agh.edu.pl, E-mail: ptm@agh.edu.pl
2015-12-31
Alvis is a formal modelling language that enables verification of distributed concurrent systems. An Alvis model's semantics finds expression in an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters' types and operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis Compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.
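An LTS is generated by exhaustively executing model statements from the initial state. The Python sketch below shows the generic breadth-first construction for a toy step function; it illustrates the idea of LTS generation, not the Alvis compiler's Haskell middle stage.

```python
from collections import deque

def build_lts(initial, step):
    """Breadth-first generation of a labelled transition system (LTS):
    'step' maps a state to its outgoing (label, next_state) pairs."""
    states, edges, queue = {initial}, [], deque([initial])
    while queue:
        s = queue.popleft()
        for label, t in step(s):
            edges.append((s, label, t))     # record the labelled transition
            if t not in states:
                states.add(t)
                queue.append(t)
    return states, edges

# Toy model: a counter with 'inc' and 'reset' statements, capped at 2.
step = lambda n: ([("inc", n + 1)] if n < 2 else []) + ([("reset", 0)] if n else [])
print(build_lts(0, step))
```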
Students' Attitudes and Motivation towards Technology in a Turkish Language Classroom
ERIC Educational Resources Information Center
Chryso, Pelekani
2016-01-01
The purpose of this study is to investigate adult learners' approaches towards Turkish Language (TL) and examine learners' outlooks towards the use of digital technologies for learning. It will also evaluate the impact of the Language Lab's model on learners' language achievement. Language Lab model is a system that is used for learning languages…
ERIC Educational Resources Information Center
Murphy, Audrey Figueroa
2014-01-01
The effects of "transitional-bilingual" and "dual-language" educational models on proficiency in students' home language (Spanish) were examined in a study of English language learners in the first and second grades in a large urban elementary school. In each grade, students were taught with either a transitional-bilingual…
ERIC Educational Resources Information Center
Buggey, Tom
In this investigation, a case study approach was used with two preschool children with language delays to determine whether videotaped self-modeling (VSM) intervention would influence their expressive language development. Language samples of both children were videotaped and then edited to leave only the best examples of the target language…
ERIC Educational Resources Information Center
Elosua, Paula; Egaña, Maria
2017-01-01
One of the focuses of language revitalisation policies is to incorporate minority languages into education. Evaluation of new language-of-instruction models is usually based on the increase of minority language speakers. However, it is also important from an educational perspective to study the possible relationship between performance and…
ERIC Educational Resources Information Center
Medwetsky, Larry
2011-01-01
Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…
ERIC Educational Resources Information Center
Kim, Kyung; Clariana, Roy B.
2015-01-01
In order to further validate and extend the application of recent knowledge structure (KS) measures to second language settings, this investigation explores how second language (L2, English) situation models are influenced by first language (L1, Korean) translation tasks. Fifty Korean low proficient English language learners were asked to read an…
A Statistical-Physics Approach to Language Acquisition and Language Change
NASA Astrophysics Data System (ADS)
Cassandro, Marzio; Collet, Pierre; Galves, Antonio; Galves, Charlotte
1999-02-01
The aim of this paper is to explain why Statistical Physics can help understanding two related linguistic questions. The first question is how to model first language acquisition by a child. The second question is how language change proceeds in time. Our approach is based on a Gibbsian model for the interface between syntax and prosody. We also present a simulated annealing model of language acquisition, which extends the Triggering Learning Algorithm recently introduced in the linguistic literature.
Khoury, George A; Smadbeck, James; Kieslich, Chris A; Koskosidis, Alexandra J; Guzman, Yannis A; Tamamis, Phanourios; Floudas, Christodoulos A
2017-06-01
Protein structure refinement is the challenging problem of operating on any protein structure prediction to improve its accuracy with respect to the native structure in a blind fashion. Although many approaches have been developed and tested during the last four CASP experiments, a majority of the methods continue to degrade models rather than improve them. Princeton_TIGRESS (Khoury et al., Proteins 2014;82:794-814) was developed previously and utilizes separate sampling and selection stages involving Monte Carlo and molecular dynamics simulations and classification using an SVM predictor. The initial implementation was shown to consistently refine protein structures 76% of the time in our own internal benchmarking on CASP 7-10 targets. In this work, we improved the sampling and selection stages and tested the method in blind predictions during CASP11. We added a decomposition of physics-based and hybrid energy functions, as well as a coordinate-free representation of the protein structure through distance-binning Cα-Cα distances to capture fine-grained movements. We performed parameter estimation to optimize the adjustable SVM parameters to maximize precision while balancing sensitivity and specificity across all cross-validated data sets, finding enrichment in our ability to select models from the populations of similar decoys generated for targets in CASPs 7-10. The MD stage was enhanced such that larger structures could be further refined. Among refinement methods that are currently implemented as web-servers, Princeton_TIGRESS 2.0 demonstrated the most consistent and most substantial net refinement in blind predictions during CASP11. The enhanced refinement protocol Princeton_TIGRESS 2.0 is freely available as a web server at http://atlas.engr.tamu.edu/refinement/. Proteins 2017; 85:1078-1098. © 2017 Wiley Periodicals, Inc.
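The selection stage described, an SVM classifier choosing among sampled decoys, can be sketched generically with scikit-learn. The features and labels below are synthetic stand-ins (hypothetical energy and distance descriptors), not the trained Princeton_TIGRESS predictor or its feature set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic decoy features, e.g. [energy change, RMSD to start, contact score]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = decoy improves the model (toy rule)

# Scale features, then train an RBF-kernel SVM to flag improving decoys.
clf = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```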
A Dialogic Inquiry Approach to Working with Teachers in Developing Classroom Dialogue
ERIC Educational Resources Information Center
Hennessy, Sara; Mercer, Neil; Warwick, Paul
2011-01-01
Background/Context: This article describes how we refined an innovative methodology for equitable collaboration between university researchers and classroom practitioners building and refining theory together. The work builds on other coinquiry models in which complementary professional expertise is respected and deliberately exploited in order to…
This paper examines the use of Moderate Resolution Imaging Spectroradiometer (MODIS) observed active fire data (pixel counts) to refine the National Emissions Inventory (NEI) fire emission estimates for major wildfire events. This study was motivated by the extremely limited info...
ERIC Educational Resources Information Center
Schwartz, Mila
2014-01-01
The aim of this exploratory study was to examine the role of the "First Language First" model for preschool bilingual education in the development of vocabulary depth. The languages studied were Russian (L1) and Hebrew (L2) among bilingual children aged 4-5 years in Israel. According to this model, the children's first language of…
Adaptive mesh refinement and front-tracking for shear bands in an antiplane shear model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garaizar, F.X.; Trangenstein, J.
1998-09-01
In this paper the authors describe a numerical algorithm for the study of shear-band formation and growth in a two-dimensional antiplane shear of granular materials. The algorithm combines front-tracking techniques and adaptive mesh refinement. Tracking provides a more careful evolution of the band when coupled with special techniques to advance the ends of the shear band in the presence of a loss of hyperbolicity. The adaptive mesh refinement allows the computational effort to be concentrated in important areas of the deformation, such as the shear band and the elastic relief wave. The main challenges are the problems related to shear bands that extend across several grid patches and the effects that a nonhyperbolic growth rate of the shear bands has on the refinement process. They give examples of the success of the algorithm for various levels of refinement.
Implementation of a parallel protein structure alignment service on cloud.
Hung, Che-Lun; Lin, Yaw-Ling
2013-01-01
Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.
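The MapReduce decomposition is the key scaling idea: pairwise alignments are independent map tasks, and the reduce step keeps the best result per structure. The Python sketch below substitutes a multiprocessing pool for Hadoop and a toy scoring function for the real alignment and refinement algorithms.

```python
from multiprocessing import Pool

def align(pair):
    """Map step: align one target structure against one database structure.
    The score here is a hypothetical stand-in for the real alignment and
    refinement algorithms run inside Hadoop mappers."""
    target, candidate = pair
    score = -abs(len(target) - len(candidate))   # toy similarity score
    return candidate, score

def reduce_best(results):
    """Reduce step: keep the best-scoring alignment per candidate structure."""
    best = {}
    for candidate, score in results:
        if candidate not in best or score > best[candidate]:
            best[candidate] = score
    return best

if __name__ == "__main__":
    pairs = [("TARGETSEQ", c) for c in ("STRUCTA", "STRUCTUREB", "LONGSTRUCTC")]
    with Pool(2) as pool:                        # stand-in for Hadoop's parallel mappers
        print(reduce_best(pool.map(align, pairs)))
```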
Meshfree truncated hierarchical refinement for isogeometric analysis
NASA Astrophysics Data System (ADS)
Atri, H. R.; Shojaee, S.
2018-05-01
In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can easily be defined, providing an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district (CRD), and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.
Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
2015-08-21
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
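As a concrete illustration of programmatic SBML construction, the end product of the model-generation process described above, the following python-libsbml sketch builds a bare model with a compartment and a few species. It is not the iBioSim generator, and the identifiers are hypothetical:

```python
import libsbml

doc = libsbml.SBMLDocument(3, 1)      # SBML Level 3, Version 1
model = doc.createModel()
model.setId("AND_sensor")             # hypothetical design name

comp = model.createCompartment()
comp.setId("cell")
comp.setSize(1.0)
comp.setConstant(True)

for sid in ("input1", "input2", "output"):   # hypothetical species
    sp = model.createSpecies()
    sp.setId(sid)
    sp.setCompartment("cell")
    sp.setInitialAmount(0.0)
    sp.setHasOnlySubstanceUnits(True)
    sp.setBoundaryCondition(False)
    sp.setConstant(False)

print(libsbml.writeSBMLToString(doc))
```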
Reeve, Bryce B.; Mitchell, Sandra A.; Clauser, Steven B.; Minasian, Lori M.; Dueck, Amylou C.; Mendoza, Tito R.; Hay, Jennifer; Atkinson, Thomas M.; Abernethy, Amy P.; Bruner, Deborah W.; Cleeland, Charles S.; Sloan, Jeff A.; Chilukuri, Ram; Baumgartner, Paul; Denicoff, Andrea; St. Germain, Diane; O’Mara, Ann M.; Chen, Alice; Kelaghan, Joseph; Bennett, Antonia V.; Sit, Laura; Rogak, Lauren; Barz, Allison; Paul, Diane B.; Schrag, Deborah
2014-01-01
The standard approach for documenting symptomatic adverse events (AEs) in cancer clinical trials involves investigator reporting using the National Cancer Institute’s (NCI’s) Common Terminology Criteria for Adverse Events (CTCAE). Because this approach underdetects symptomatic AEs, the NCI issued two contracts to create a patient-reported outcome (PRO) measurement system as a companion to the CTCAE, called the PRO-CTCAE. This Commentary describes development of the PRO-CTCAE by a group of multidisciplinary investigators and patient representatives and provides an overview of qualitative and quantitative studies of its measurement properties. A systematic evaluation of all 790 AEs listed in the CTCAE identified 78 appropriate for patient self-reporting. For each of these, a PRO-CTCAE plain language term in English and one to three items characterizing the frequency, severity, and/or activity interference of the AE were created, rendering a library of 124 PRO-CTCAE items. These items were refined in a cognitive interviewing study among patients on active cancer treatment with diverse educational, racial, and geographic backgrounds. Favorable measurement properties of the items, including construct validity, reliability, responsiveness, and between-mode equivalence, were determined prospectively in a demographically diverse population of patients receiving treatments for many different tumor types. A software platform was built to administer PRO-CTCAE items to clinical trial participants via the internet or telephone interactive voice response and was refined through usability testing. Work is ongoing to translate the PRO-CTCAE into multiple languages and to determine the optimal approach for integrating the PRO-CTCAE into clinical trial workflow and AE analyses. It is envisioned that the PRO-CTCAE will enhance the precision and patient-centeredness of adverse event reporting in cancer clinical research. PMID:25265940
Developing and Pilot Testing a Spanish Translation of CollaboRATE for Use in the United States.
Forcino, Rachel C; Bustamante, Nitzy; Thompson, Rachel; Percac-Lima, Sanja; Elwyn, Glyn; Pérez-Arechaederra, Diana; Barr, Paul J
2016-01-01
Given the need for access to patient-facing materials in multiple languages, this study aimed to develop and pilot test an accurate and understandable translation of CollaboRATE, a three-item patient-reported measure of shared decision-making, for Spanish-speaking patients in the United States (US). We followed the Translate, Review, Adjudicate, Pre-test, Document (TRAPD) survey translation protocol. Cognitive interviews were conducted with Spanish-speaking adults within an urban Massachusetts internal medicine clinic. For the pilot test, all patients with weekday appointments between May 1 and May 29, 2015 were invited to complete CollaboRATE in either English or Spanish upon exit. We calculated the proportion of respondents giving the best score possible on CollaboRATE and compared scores across key patient subgroups. Four rounds of cognitive interviews with 26 people were completed between January and April 2015. Extensive, iterative refinements to survey items between interview rounds led to final items that were generally understood by participants with diverse educational backgrounds. Pilot data collection achieved an overall response rate of 73 percent, with 606 (49%) patients completing Spanish CollaboRATE questionnaires and 624 (51%) patients completing English CollaboRATE questionnaires. The proportion of respondents giving the best score possible on CollaboRATE was the same (86%) for both the English and Spanish versions of the instrument. Our translation method, guided by emerging best practices in survey and health measurement translation, encompassed multiple levels of review. By conducting four rounds of cognitive interviews with iterative item refinement between each round, we arrived at a Spanish language version of CollaboRATE that was understandable to a majority of cognitive interview participants and was completed by more than 600 pilot questionnaire respondents.
i3Drefine software for protein 3D structure refinement and its assessment in CASP10.
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its introduction in the 8th CASP experiment. The 9th and recently concluded 10th CASP experiments saw consistent growth in the number of refinement targets and participating groups. Yet protein structure refinement still remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative, highly convergent energy minimization algorithm with powerful all-atom composite physics- and knowledge-based force fields and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the targets issued in the CASP10 refinement category, comparing it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 to exhibit consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.
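The overall shape of an iterative, convergence-checked refinement loop like the one described can be sketched as follows. This is a generic gradient-descent stand-in under a toy energy function, not i3Drefine's force fields or its HB-network optimization:

```python
import numpy as np

def refine_structure(coords, energy, grad, n_iter=200, tol=1e-6):
    """Iterative energy minimization with an early-exit convergence test."""
    step = 1e-3
    e_prev = energy(coords)
    for _ in range(n_iter):
        coords = coords - step * grad(coords)  # descend the energy surface
        e_now = energy(coords)
        if abs(e_prev - e_now) < tol:          # converged: stop refining
            break
        e_prev = e_now
    return coords

# toy quadratic "force field" for demonstration only
energy = lambda x: float(np.sum(x ** 2))
grad = lambda x: 2.0 * x
refined = refine_structure(np.random.randn(10, 3), energy, grad)
```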
Automatic Debugging Support for UML Designs
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Conflicts detected by our algorithm are fed back to the user and form the basis for deduction-based debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.
2014-05-01
solver to treat the spray process. An Adaptive Mesh Refinement (AMR) and fixed embedding technique is employed to capture the gas-liquid interface with high fidelity while keeping the cell...in single and multi-hole nozzle configurations. The models were added to the present CONVERGE liquid fuel database and validated extensively.
BPS States, Crystals, and Matrices
Sułkowski, Piotr
2011-01-01
We review free fermion, melting crystal, and matrix model representations of wall-crossing phenomena on local, toric Calabi-Yau manifolds. We consider both unrefined and refined BPS counting of closed BPS states involving D2- and D0-branes bound to a D6-brane, as well as open BPS states involving open D2-branes ending on an additional D4-brane. Appropriate limits of these constructions provide, among others, matrix model representations of refined and unrefined topological string amplitudes.
Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...
2017-06-21
Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.
Ferguson, Jared O.; Jablonowski, Christiane; Johansen, Hans; ...
2016-11-09
Adaptive mesh refinement (AMR) is a technique that has been featured only sporadically in the atmospheric science literature. This study aims to demonstrate the utility of AMR for simulating atmospheric flows. Several test cases are implemented in a 2D shallow-water model on the sphere using the Chombo-AMR dynamical core. This high-order finite-volume model implements adaptive refinement in both space and time on a cubed-sphere grid using a mapped-multiblock mesh technique. The tests consist of the passive advection of a tracer around moving vortices, a steady-state geostrophic flow, an unsteady solid-body rotation, a gravity wave impinging on a mountain, and the interaction of binary vortices. Both static and dynamic refinements are analyzed to determine the strengths and weaknesses of AMR in both complex flows with small-scale features and large-scale smooth flows. The different test cases required different AMR criteria, such as vorticity- or height-gradient-based thresholds, in order to achieve the best accuracy for cost. The simulations show that the model can accurately resolve key local features without requiring global high-resolution grids. The adaptive grids are able to track features of interest reliably without inducing noise or visible distortions at the coarse-fine interfaces. Furthermore, the AMR grids keep any degradation of the large-scale smooth flows to a minimum.
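A height-gradient refinement criterion of the kind mentioned above reduces to flagging cells whose local gradient magnitude exceeds a threshold. The sketch below is a plain 2D stencil for illustration, not the Chombo-AMR tagging code:

```python
import numpy as np

def flag_cells(h, dx, threshold):
    """Flag cells for refinement where |grad h| exceeds a threshold."""
    gx, gy = np.gradient(h, dx)          # finite-difference gradient
    return np.hypot(gx, gy) > threshold  # boolean refinement tags

# toy height field: a smooth bump on a 64x64 grid
h = np.fromfunction(
    lambda i, j: np.exp(-((i - 32) ** 2 + (j - 32) ** 2) / 50.0), (64, 64))
tags = flag_cells(h, dx=1.0, threshold=0.01)
print(int(tags.sum()), "of", tags.size, "cells flagged")
```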
Language-learning disabilities: Paradigms for the nineties.
Wiig, E H
1991-01-01
We are beginning a decade during which many traditional paradigms in education, special education, and speech-language pathology will undergo change. Among the paradigms considered promising for speech-language pathology in the schools are collaborative language intervention and strategy training for language and communication. This presentation introduces management models for developing a collaborative language intervention process, among them the Deming Management Method for Total Quality (TQ) (Deming 1986). Implementation models for language assessment and IEP planning, as well as multicultural issues, are also introduced (Damico and Nye 1990; Secord and Wiig in press). While attention to the processes involved in developing and implementing collaborative language intervention is paramount, content should not be neglected. To this end, strategy training for language and communication is introduced as a viable paradigm. Macro- and micro-level process models for strategy training are featured and general issues are discussed (Ellis, Deshler, and Schumaker 1989; Swanson 1989; Wiig 1989).
Storytelling, behavior planning, and language evolution in context
McBride, Glen
2014-01-01
An attempt is made to specify the structure of the hominin bands that took the first steps toward language. Storytelling could evolve without need for language yet be strongly subject to natural selection, and could provide a major feedback process in evolving language. A storytelling model is examined, including its effects on the evolution of consciousness and the possible timing of language evolution. Behavior planning is presented as a model of language evolution from storytelling. The behavior programming mechanism, operating in both directions, provides a model of creating and understanding both behavior and language. Culture began with societies, then family evolution and family life in troops, but storytelling created a culture of experiences, a final step in the long process of producing experienced adults by natural selection. Most language evolution occurred in conversations, where evolving non-verbal feedback ensured mutual agreement on understanding. Natural language evolved in conversations with feedback providing understanding of changes. PMID:25360123
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, saving energy, and reducing emissions in its operation. In this correspondence, an optimal operation scheme for the HC refining system is presented using nonlinear multiobjective model predictive control strategies that target a set-point tracking objective for pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, input and output data collected at different times are employed to construct the subprocess models of the state process model for the HC refining system; the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structure is determined by the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes the set-point tracking objective for pulp quality and SE consumption is proposed, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking, economic, and SE consumption objectives, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods give the HC refining system better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic and SE consumption objectives, they significantly reduce energy consumption.
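The Wiener-type structure referred to above, a linear dynamic block followed by a static output nonlinearity, can be written in a few lines. The matrices and the tanh nonlinearity below are toy placeholders, not the identified HC refining model:

```python
import numpy as np

def wiener_predict(u, A, B, C, f):
    """Wiener-type model: linear state-space dynamics, then a static
    output nonlinearity f (toy parameters for illustration only)."""
    x = np.zeros(A.shape[0])
    outputs = []
    for u_k in u:
        x = A @ x + B @ u_k        # linear dynamic block
        outputs.append(f(C @ x))   # static nonlinearity on the output
    return np.array(outputs)

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # placeholder dynamics
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
y = wiener_predict(np.ones((50, 1)), A, B, C, f=np.tanh)
```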
Understanding and Forecasting Ethnolinguistic Vitality
ERIC Educational Resources Information Center
Karan, Mark E.
2011-01-01
Forecasting of ethnolinguistic vitality can only be done within a well-functioning descriptive and explanatory model of the dynamics of language stability and shift. It is proposed that the Perceived Benefit Model of Language Shift, used with a taxonomy of language shift motivations, provides that model. The model, based on individual language…
Experimental Evaluation of a Planning Language Suitable for Formal Verification
NASA Technical Reports Server (NTRS)
Butler, Rick W.; Munoz, Cesar A.; Siminiceanu, Radu I.
2008-01-01
The marriage of model checking and planning faces two seemingly diverging alternatives: the need for a planning language expressive enough to capture the complexity of real-life applications, as opposed to a language simple, yet robust enough to be amenable to exhaustive verification and validation techniques. In an attempt to reconcile these differences, we have designed an abstract plan description language, ANMLite, inspired by the Action Notation Modeling Language (ANML) [17]. We present the basic concepts of the ANMLite language as well as an automatic translator from ANMLite to the model checker SAL (Symbolic Analysis Laboratory) [7]. We discuss various aspects of specifying a plan in terms of constraints and explore the implications of choosing a robust logic behind the specification of constraints, rather than simply proposing a new planning language. Additionally, we provide an initial assessment of the efficiency of model checking as a means of searching for solutions to planning problems. To this end, we design a basic test benchmark and study the scalability of the generated SAL models in terms of plan complexity.
Vocal Development as a Guide to Modeling the Evolution of Language.
Oller, D Kimbrough; Griebel, Ulrike; Warlaumont, Anne S
2016-04-01
Modeling of evolution and development of language has principally utilized mature units of spoken language, phonemes and words, as both targets and inputs. This approach cannot address the earliest phases of development because young infants are unable to produce such language features. We argue that units of early vocal development (protophones and their primitive illocutionary/perlocutionary forces) should be targeted in evolutionary modeling because they suggest likely units of hominin vocalization/communication shortly after the split from the chimpanzee/bonobo lineage, and because early development of spontaneous vocal capability is a logically necessary step toward vocal language, a root capability without which other crucial steps toward vocal language capability are impossible. Modeling of language evolution/development must account for dynamic change in early communicative units of form/function across time. We argue for interactive contributions of sender/infants and receiver/caregivers in a feedback loop involving both development and evolution and propose to begin computational modeling at the hominin break from the primate communicative background. Copyright © 2016 Cognitive Science Society, Inc.
ERIC Educational Resources Information Center
Rivers, Damian J.
2012-01-01
Adopting mixed methods of data collection and analysis, the current study models the "perceived value of compulsory English language education" in a sample of 138 undergraduate non-language majors of Japanese nationality at a national university in Japan. During the orientation period of a compulsory 15-week English language programme,…
ERIC Educational Resources Information Center
Goodrich, J. Marc; Lonigan, Christopher J.
2017-01-01
According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying…
Modeling methodology for a CMOS-MEMS electrostatic comb
NASA Astrophysics Data System (ADS)
Iyer, Sitaraman V.; Lakdawala, Hasnain; Mukherjee, Tamal; Fedder, Gary K.
2002-04-01
A methodology for combined modeling of capacitance and force in a multi-layer electrostatic comb is demonstrated in this paper. Conformal mapping-based analytical methods are limited to 2D symmetric cross-sections and cannot account for charge concentration effects at corners. Vertex capacitance can be more than 30% of the total capacitance in a single-layer 2 micrometer thick comb with 10 micrometer overlap. Furthermore, analytical equations are strictly valid only for perfectly symmetrical finger positions. Fringing and corner effects are likely to be more significant in a multi-layered CMOS-MEMS comb because of the presence of more edges and vertices. Vertical curling of CMOS-MEMS comb fingers may also lead to reduced capacitance and vertical forces. Gyroscopes are particularly sensitive to such undesirable forces, which therefore need to be well quantified. In order to address the above issues, a hybrid approach of superposing linear regression models over a set of core analytical models is implemented. Design of experiments is used to obtain data for capacitance and force using a commercial 3D boundary-element solver. Since accurate force values require significantly higher mesh refinement than accurate capacitance, we use numerical derivatives of capacitance values to compute the forces. The model is formulated such that the capacitance and force models use the same regression coefficients. The comb model thus obtained fits the numerical capacitance data to within +/- 3% and force to within +/- 10%. The model is experimentally verified by measuring capacitance change in a specially designed test structure. The capacitance model matches measurements to within 10%. The comb model is implemented in an Analog Hardware Description Language (AHDL) for use in behavioral simulation of manufacturing variations in a CMOS-MEMS gyroscope.
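The force computation described above follows from the standard electrostatic relation F = 0.5 V^2 dC/dx, with the derivative taken numerically from the fitted capacitance model. A minimal sketch, assuming a hypothetical parallel-plate-like capacitance fit in place of the regression-corrected model:

```python
def electrostatic_force(cap_model, x, V, dx=1e-9):
    """F = 0.5 * V**2 * dC/dx via a central difference of the fitted
    capacitance model (cap_model stands in for the regression fit)."""
    dCdx = (cap_model(x + dx) - cap_model(x - dx)) / (2.0 * dx)
    return 0.5 * V ** 2 * dCdx

# toy parallel-plate-like fit: C = eps0 * width * overlap / gap
EPS0 = 8.854e-12
cap_model = lambda overlap: EPS0 * 2e-6 * overlap / 2e-6
print(electrostatic_force(cap_model, x=10e-6, V=5.0))  # force in newtons
```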
Expanding the Extent of a UMLS Semantic Type via Group Neighborhood Auditing
Chen, Yan; Gu, Huanying; Perl, Yehoshua; Halper, Michael; Xu, Junchuan
2009-01-01
Objective: Each Unified Medical Language System (UMLS) concept is assigned one or more semantic types (STs). A dynamic methodology for aiding an auditor in finding concepts that are missing the assignment of a given ST, S, is presented. Design: The first part of the methodology exploits the previously introduced Refined Semantic Network and accompanying refined semantic types (RSTs) to help narrow the search space for offending concepts. The auditing is focused in a neighborhood surrounding the extent of an RST, T (of S), called an envelope, consisting of parents and children of concepts in the extent. The audit moves outward as long as missing assignments are discovered. In the second part, concepts not reached previously are processed and reassigned T as needed during the processing of S's other RSTs. The set of such concepts is expanded in a similar way to that in the first part. Measurements: The number of errors discovered is reported. To measure the methodology's efficiency, "error hit rates" (i.e., errors found in concepts examined) are computed. Results: The methodology was applied to three STs: Experimental Model of Disease (EMD), Environmental Effect of Humans, and Governmental or Regulatory Activity. The EMD experienced the most drastic change. For its RST "EMD ∩ Neoplastic Process" (RST "EMD") with only 33 (31) original concepts, 915 (134) concepts were found by the first (second) part to be missing the EMD assignment. Changes to the other two STs were smaller. Conclusion: The results show that the proposed auditing methodology can help to effectively and efficiently identify concepts lacking the assignment of a particular semantic type. PMID:19567802
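One way to read the envelope-expansion procedure is as a breadth-first search outward from the extent, continuing along a branch only while missing assignments are found. The sketch below is an illustrative reading with hypothetical concept identifiers, not the authors' implementation:

```python
from collections import deque

def audit_envelope(extent, parents, children, has_type):
    """Expand outward from the extent of a refined semantic type,
    flagging neighbors that lack the assignment; a branch is followed
    further only when a miss is found there."""
    missing, frontier, seen = set(), deque(extent), set(extent)
    while frontier:
        concept = frontier.popleft()
        for nb in parents.get(concept, []) + children.get(concept, []):
            if nb in seen:
                continue
            seen.add(nb)
            if not has_type(nb):       # candidate missing assignment
                missing.add(nb)
                frontier.append(nb)    # keep moving outward from a miss
    return missing

# toy hierarchy with hypothetical concept IDs
parents = {"c2": ["c1"]}
children = {"c1": ["c2"], "c2": ["c3"]}
print(audit_envelope({"c1"}, parents, children, has_type={"c1"}.__contains__))
```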
Verifying and Validating Proposed Models for FSW Process Optimization
NASA Technical Reports Server (NTRS)
Schneider, Judith
2008-01-01
This slide presentation reviews Friction Stir Welding (FSW) and attempts to model the process in order to optimize and improve it. Studies are ongoing to validate and refine the model of metal flow in the FSW process. Slides show the conventional FSW process, a couple of weld tool designs, and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) microstructure features, (2) flow streamlines, (3) steady-state nature, and (4) grain refinement mechanisms.
Analysis of multicrystal pump–probe data sets. I. Expressions for the RATIO model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Bertrand; Coppens, Philip
2014-08-30
The RATIO method in time-resolved crystallography [Coppens et al. (2009). J. Synchrotron Rad. 16, 226–230] was developed for use with Laue pump–probe diffraction data to avoid complex corrections due to the wavelength dependence of the intensities. The application of the RATIO method in processing/analysis prior to structure refinement requires an appropriate ratio model for modeling the light response. The assessment of the accuracy of pump–probe time-resolved structure refinements based on the observed ratios was discussed in a previous paper. In the current paper, a detailed ratio model is discussed, taking into account both geometric and thermal light-induced changes.
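The core quantity of the RATIO method, as its name suggests, is the quotient of pump-ON and pump-OFF intensities for each reflection, in which wavelength-dependent factors cancel. A compact statement in standard notation (not reproduced from the paper):

```latex
R(hkl) = \frac{I_{\mathrm{ON}}(hkl)}{I_{\mathrm{OFF}}(hkl)},
\qquad
\eta(hkl) = R(hkl) - 1,
```

where eta is the fractional light-induced intensity change that the ratio model must reproduce.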
Mulhearn, Tyler J; Watts, Logan L; Todd, E Michelle; Medeiros, Kelsey E; Connelly, Shane; Mumford, Michael D
2017-01-01
Although recent evidence suggests ethics education can be effective, the nature of specific training programs, and their effectiveness, varies considerably. Building on a recent path modeling effort, the present study developed and validated a predictive modeling tool for responsible conduct of research education. The predictive modeling tool allows users to enter ratings in relation to a given ethics training program and receive instantaneous evaluative information for course refinement. Validation work suggests the tool's predicted outcomes correlate strongly (r = 0.46) with objective course outcomes. Implications for training program development and refinement are discussed.
Jacobson, Robert B.; Parsley, Michael J.; Annis, Mandy L.; Colvin, Michael E.; Welker, Timothy L.; James, Daniel A.
2015-01-01
This report documents the process of developing and refining conceptual ecological models (CEMs) for linking river management to pallid sturgeon (Scaphirhynchus albus) population dynamics in the Missouri River. The refined CEMs are being used in the Missouri River Pallid Sturgeon Effects Analysis to organize, document, and formalize an understanding of pallid sturgeon population responses to past and future management alternatives. The general form of the CEMs, represented by a population-level model and component life-stage models, was determined in workshops held in the summer of 2013. Subsequently, the Missouri River Pallid Sturgeon Effects Analysis team designed a general hierarchical structure for the component models, refined the graphical structure, and reconciled variation among the components and between models developed for the upper river (Upper Missouri & Yellowstone Rivers) and the lower river (Missouri River downstream from Gavins Point Dam). Importance scores attributed to the relations between primary biotic characteristics and survival were used to define a candidate set of working dominant hypotheses about pallid sturgeon population dynamics. These CEMs are intended to guide research and adaptive-management actions to benefit pallid sturgeon populations in the Missouri River.
Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D
2011-11-01
There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.
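The congruency-based refinement can be pictured as a search over the bounded parameter vector that minimizes a cross-scale discrepancy measure. The sketch below uses a simple mutation-only genetic algorithm and a toy discrepancy function; it is not the authors' algorithm or their parameter set:

```python
import random

def congruency_ga(bounds, discrepancy, pop=40, gens=60):
    """GA refinement of bounded parameters to minimize the discrepancy
    between continuum and agent-based outputs (mutation-only sketch)."""
    draw = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    population = [draw() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=discrepancy)        # best (lowest) first
        elite = population[: pop // 4]          # keep the top quarter
        population = elite + [
            [min(max(g + random.gauss(0, 0.05 * (hi - lo)), lo), hi)
             for g, (lo, hi) in zip(random.choice(elite), bounds)]
            for _ in range(pop - len(elite))    # mutated offspring
        ]
    return min(population, key=discrepancy)

# toy discrepancy: distance of 16 bounded parameters from an arbitrary target
best = congruency_ga([(0.0, 1.0)] * 16,
                     lambda p: sum((x - 0.5) ** 2 for x in p))
```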
Acculturative family distancing (AFD) and depression in Chinese American families.
Hwang, Wei-Chin; Wood, Jeffrey J; Fujimoto, Ken
2010-10-01
Knowledge of acculturative processes and their impact on immigrant families remains quite limited. Acculturative family distancing (AFD) is the distancing that occurs between immigrant parents and their children and is caused by breakdowns in communication and cultural value differences. It is a more proximal and problem-focused formulation of the acculturation gap and is hypothesized to increase depression via family conflict. Data were collected from 105 Chinese American high school students and their mothers. Rasch modeling was used to refine the AFD measure, and structural equation modeling was used to determine the effects of AFD on youth and maternal depression. Findings indicate that greater AFD was associated with higher depressive symptoms and risk for clinical depression. Family conflict partially mediated this relation for youths, whereas for mothers, AFD directly increased risk for depression. Greater mother-child heritage enculturation discrepancies were associated with greater mother and child AFD. Mainstream acculturation discrepancies and language gaps between mothers and youths were not significantly associated with any of the primary outcome variables. Results highlight the need for better understanding of how AFD and other acculturation-gap phenomena affect immigrant mental health. They also underscore the need for prevention and intervention programs that target communication difficulties and intergenerational cultural value differences. Copyright 2010 APA, all rights reserved.
Musa-Aziz, Raif; Boron, Walter F.
2014-01-01
Exposing an oocyte to CO2/HCO3− causes intracellular pH (pHi) to decline and extracellular-surface pH (pHS) to rise to a peak and decay. The two companion papers showed that oocytes injected with cytosolic carbonic anhydrase II (CA II) or expressing surface CA IV exhibit increased maximal rate of pHi change (dpHi/dt)max, increased maximal pHS changes (ΔpHS), and decreased time constants for pHi decline and pHS decay. Here we investigate these results using refinements of an earlier mathematical model of CO2 influx into a spherical cell. Refinements include 1) reduced cytosolic water content, 2) reduced cytosolic diffusion constants, 3) refined CA II activity, 4) layer of intracellular vesicles, 5) reduced membrane CO2 permeability, 6) microvilli, 7) refined CA IV activity, 8) a vitelline membrane, and 9) a new simulation protocol for delivering and removing the bulk extracellular CO2/HCO3− solution. We show how these features affect the simulated pHi and pHS transients and use the refined model with the experimental data for 1.5% CO2/10 mM HCO3− (pHo = 7.5) to find parameter values that approximate ΔpHS, the time to peak pHS, the time delay to the start of the pHi change, (dpHi/dt)max, and the change in steady-state pHi. We validate the revised model against data collected as we vary levels of CO2/HCO3− or of extracellular HEPES buffer. The model confirms the hypothesis that CA II and CA IV enhance transmembrane CO2 fluxes by maximizing CO2 gradients across the plasma membrane, and it predicts that the pH effects of simultaneously implementing intracellular and extracellular-surface CA are supra-additive. PMID:24965589
Colorado Model Content Standards: Foreign Language.
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver.
The model course content standards for foreign language instruction in Colorado's public schools, K-12, provide guidelines, not curriculum, for school districts to design language programs. An introductory section presents some basic considerations in program design. The two general standards for foreign language performance are that: (1) students…
Language Model Applications to Spelling with Brain-Computer Interfaces
Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.
2014-01-01
Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges when implementing language models in BCI-based communication systems when implemented in conjunction with other AAL technologies. PMID:24675760
Mehl, Steffen W.; Hill, Mary C.
2013-01-01
This report documents the addition of ghost node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (a child model) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions, at the cost of increased computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
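The iterative coupling can be sketched as alternating parent and child solves until the interface solution changes by less than a user-defined tolerance. The toy relaxation solvers below are placeholders; this is schematic of the ghost-node idea rather than the LGR2 code:

```python
import numpy as np

def coupled_solve(solve_parent, solve_child, h0, tol=1e-6, max_it=50):
    """Iterate between a coarse parent solve and a refined child solve
    until heads along the shared interface stop changing."""
    h_parent = np.asarray(h0, dtype=float)
    h_child = None
    for _ in range(max_it):
        h_child = solve_child(h_parent)   # child BCs taken from parent heads
        h_new = solve_parent(h_child)     # parent updated with child fluxes
        if np.max(np.abs(h_new - h_parent)) < tol:   # user-set closure
            return h_new, h_child
        h_parent = h_new
    return h_parent, h_child

# toy solvers that relax toward a common fixed point, for illustration only
solve_child = lambda hp: 0.5 * (hp + 1.0)
solve_parent = lambda hc: 0.5 * (hc + 1.0)
hp, hc = coupled_solve(solve_parent, solve_child, h0=np.zeros(4))
```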
Mesh Convergence Requirements for Composite Damage Models
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
2016-01-01
The ability of the finite element method to accurately represent the response of objects with intricate geometry and loading makes it an extremely versatile technique for structural analysis. Finite element analysis is routinely used in industry to calculate deflections, stress concentrations, natural frequencies, buckling loads, and much more. The method works by discretizing complex problems into smaller, simpler approximations that are valid over small uniform domains. For common analyses, the maximum size of the elements that can be used can often be determined by experience. However, to verify the quality of a solution, analyses with several levels of mesh refinement should be performed to ensure that the solution has converged. In recent years, the finite element method has been used to calculate the resistance of structures, and in particular that of composite structures. A number of techniques such as cohesive zone modeling, the virtual crack closure technique, and continuum damage modeling have emerged that can be used to predict cracking, delaminations, fiber failure, and other composite damage modes that lead to structural collapse. However, damage models present mesh refinement requirements that are not well understood. In this presentation, we examine different mesh refinement issues related to the representation of damage in composite materials. Damage process zone sizes and their corresponding mesh requirements will be discussed. The difficulties of modeling discontinuities and the associated need for regularization techniques will be illustrated, and some unexpected element size constraints will be presented. Finally, some of the difficulties in constructing models of composite structures capable of predicting transverse matrix cracking will be discussed. It will be shown that predicting the initiation and propagation of transverse matrix cracks, their density, and their saturation may require models that are significantly more refined than those that have been contemplated in the past.
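The mesh requirement for cohesive zone models mentioned above is commonly estimated from the Hillerborg-type length of the damage process zone. A standard form (not taken from this presentation), where E is Young's modulus, G_c the fracture toughness, tau^0 the interface strength, M a model-dependent constant of order one, and N_e the number of elements desired across the zone:

```latex
l_{cz} = M\,\frac{E\,G_c}{(\tau^{0})^{2}},
\qquad
l_e \le \frac{l_{cz}}{N_e}.
```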
The Collaborative Seismic Earth Model Project
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.
2017-12-01
We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the manpower of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge on 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as the initial model, these refinements are contributed by external collaborators and used to advance the CSEM to the next state. This mode of operation allows the CSEM to [1] harness the distributed manpower and computing power of the community, [2] make consistent use of prior knowledge, and [3] combine different tomographic techniques, as needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements, computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
High-quality control methods are essential for the implementation of aircraft autopilot systems. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is obtained automatically. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform-refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum-time problem are tested as illustrations, with the uniform-refinement CVP method adopted as the baseline for comparison. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
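At its core, CVP replaces the continuous control with piecewise-constant values on a time grid, so refinement amounts to choosing where grid points cluster. A minimal sketch with a hypothetical non-uniform grid (the HHT-based refinement itself is not reproduced):

```python
import numpy as np

def piecewise_control(t, grid, params):
    """Evaluate a piecewise-constant control on a (possibly non-uniform)
    time grid: params[k] applies on [grid[k], grid[k+1])."""
    idx = np.clip(np.searchsorted(grid, t, side="right") - 1,
                  0, len(params) - 1)
    return params[idx]

grid = np.array([0.0, 1.0, 1.5, 1.75, 4.0])   # refined where dynamics are fast
params = np.array([0.0, 0.8, 0.3, -0.2])      # one control value per interval
u = piecewise_control(np.linspace(0, 4, 9), grid, params)
```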
LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD
DOE Office of Scientific and Technical Information (OSTI.GOV)
VERSPOOR, KARIN; LIN, SHOU-DE
An N-gram language model aims at capturing statistical syntactic word-order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, which limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
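One simple way to fold semantics into the n-gram schema is to count n-grams over (word, sense) pairs rather than bare words, so that context selects the most probable sense. The sketch below illustrates that general idea with a toy sense-tagged corpus; it is not the paper's formulation:

```python
from collections import Counter, defaultdict

def train_sense_ngrams(tagged_corpus, n=2):
    """Count n-grams over (word, sense) pairs instead of bare words."""
    counts = defaultdict(Counter)
    for sent in tagged_corpus:                    # sent: list of (word, sense)
        tokens = [("<s>", None)] * (n - 1) + sent
        for i in range(n - 1, len(tokens)):
            context = tuple(tokens[i - n + 1 : i])
            counts[context][tokens[i]] += 1
    return counts

def best_sense(counts, context, word):
    # Pick the sense of `word` seen most often after `context`.
    cands = [(c, s) for (w, s), c in counts[tuple(context)].items()
             if w == word]
    return max(cands)[1] if cands else None

corpus = [[("river", "water"), ("bank", "shore")],
          [("savings", "money"), ("bank", "finance")]]
model = train_sense_ngrams(corpus)
print(best_sense(model, [("river", "water")], "bank"))  # -> "shore"
```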
Coarse Grained Model for Biological Simulations: Recent Refinements and Validation
Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh
2014-01-01
Exploring the free energy landscape of proteins and modeling the corresponding functional aspects present a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse-grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers a significant advantage, as it has been refined with a focus on its electrostatic free energy. Here we review the current state of our model, describing recent refinements, extensions, and validation studies while focusing on key applications. These include studies of protein stability, extension of the model to include membranes, electrolytes, and electrodes, as well as studies of voltage-activated proteins, protein insertion through the translocon, the action of molecular motors, and even the coupling of the stalled ribosome and the translocon. Our examples illustrate the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins and large macromolecular complexes. PMID:25050439